In this podcast, George Di Martino, TransRe’s Chief Information Officer, discusses technology and reporting systems in the (re)insurance industry.
Can’t listen now? Read the transcript below (edited for brevity and clarity).
What were expert systems and what promise did they hold?
In the early ’90s, expert systems were seen as the next wave in computing and in how to develop applications. I was at Dun & Bradstreet and was lucky enough to work on one of these projects. At the time, the consultancy Coopers & Lybrand was also brought in to help, since they had prior experience with expert systems. We built a credit rating system for apparel businesses because the process was entirely manual: a person went through the financials, reviewed the backgrounds of companies and applied some kind of mental algorithm to assign a credit rating to these businesses.
We built it as an expert system, and the system incorporated the knowledge of what people were doing in their day-to-day jobs; we wanted the computer to be able to do the same work. It was initially written in Lisp, a programming language used early on in AI and expert systems, and it was eventually rewritten in a language called ART. The knowledge base comprised all the questions the analysts would go through, and the computer generated the same rating a person would. The system worked well, and it rated thousands of companies within seconds. It was a big success for Dun & Bradstreet.
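To make that concrete, here is a minimal sketch of a rule-based rating in the spirit of what is described: each rule encodes a question an analyst would ask. The rules, thresholds and field names are entirely hypothetical illustrations, not the actual Dun & Bradstreet knowledge base.

```python
# Hypothetical sketch of a rule-based credit rating in the expert-system
# style: each rule encodes a question a human analyst would ask.
def credit_rating(financials: dict) -> str:
    """Apply hand-coded analyst rules to produce a letter rating."""
    score = 0
    if financials["years_in_business"] >= 10:
        score += 2   # established firms are lower risk
    if financials["current_ratio"] >= 1.5:
        score += 2   # can cover short-term obligations
    if financials["late_payments_last_year"] == 0:
        score += 1   # clean payment history
    if financials["debt_to_equity"] > 2.0:
        score -= 2   # heavily leveraged

    # Map the accumulated score to a letter rating.
    if score >= 4:
        return "A"
    if score >= 2:
        return "B"
    return "C"

print(credit_rating({
    "years_in_business": 12,
    "current_ratio": 1.8,
    "late_payments_last_year": 0,
    "debt_to_equity": 1.1,
}))  # -> A
```

Because every rule is explicit, the reasoning behind any rating can be read straight out of the code, a property George contrasts with machine learning later in the conversation.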
So that sounds a lot like AI and machine learning. I’m sure you have active initiatives at TransRe to use this kind of technology. In what way did it not work? Was there a way in which it didn’t fulfill its promise, or is it still in use today?
I really don’t know if it’s still used today, this was 35 years ago, but I do know it was probably still in use 20 years ago. I assume they’ve since moved on, but I really don’t know. We do have an Applied Data Group at TransRe that is looking at machine learning, and the technology has evolved since the late ’80s and early ’90s. We are looking at deploying that type of machine learning technology against the different problems that we now face. We don’t have anything concrete developed at this point, but it’s something that we definitely want to utilize in the future.
What were the characteristics of the system that made it successful?
I think it matched the human logic for assigning a rating about 96% of the time. I don’t want to say it eliminated a lot of jobs. The group was small, around 30 people, and they were reassigned so that they would focus on different types of analysis beyond apparel businesses alone. The system did do a good job replicating the human thought process of coming up with a rating, so that’s why they went with it.
How long were you at Dun & Bradstreet and when did you leave?
I started working on this project in 1992, 1993, and it went into production in late 1994. It was the hottest technology at that time, and it was actually fun. I love technology, and using it to solve a business problem is what I love doing. AI and expert systems were really my entry into reinsurance. I was recruited from Dun & Bradstreet in 1995 to Skandia America Group, which was my first reinsurance job. I was hired to look at expert systems and bring them into the reinsurance arena. While there we built an underwriting pricing system, a similar concept to what we did at Dun & Bradstreet. We worked with underwriters and actuaries and built a property and casualty pricing system using exposure and experience rating. That is how I moved into the reinsurance industry 30 years ago.
There are people in the insurance industry who think there’s a huge opportunity to do what you did 30 years ago. How do you reconcile that impression?
We used a technology called ART, which is a big knowledge tree, a decision-making system. You’re encoding knowledge into this large tree, and it comes up with results based on what you put in. You can take that and develop it in any language; it’s just that the tools you use make it easier. Neural networks were known back then, and again, I haven’t followed AI and this technology recently, but neural networks have evolved tremendously over the years. I could have done this in any language, but it would probably have taken more time. Today there are specific tools, from vendors such as Google and Microsoft, that are available to people who use machine learning.

The thing with machine learning is that you’re basically pumping in millions of records of information and it’s coming up with an algorithm to produce the results. In my opinion, it’s hit or miss, because you don’t know how it’s going to come up with that answer. The knowledge-based systems used 30, 35 years ago took the human thought process and the pieces of information that were used to make a decision; a person does that over and over again with various inputs. You can replicate that pretty readily if a person is making a concrete decision based on facts. If a person is making a judgment call, that’s tough to do. It’s a gut feeling, and how do you program a gut feeling? I don’t think you really can. You could try to do it with machine learning, but I think the machine is going to come up with some patterns that match and spit out results based on those patterns.
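To illustrate the contrast, here is a minimal sketch of the machine-learning side, assuming scikit-learn is available and using a few synthetic records: the model infers its own decision logic from historical data rather than following rules a person wrote down.

```python
# Synthetic sketch: the model learns its own splits from records
# instead of applying hand-written rules.
from sklearn.tree import DecisionTreeClassifier

# Each record: [years_in_business, current_ratio, debt_to_equity]
records = [
    [12, 1.8, 1.1],
    [2,  0.9, 3.5],
    [8,  1.4, 1.9],
    [1,  0.7, 4.0],
]
ratings = ["A", "C", "B", "C"]  # labels humans assigned historically

model = DecisionTreeClassifier().fit(records, ratings)
print(model.predict([[10, 1.6, 1.3]]))
```

A small tree like this one can still be inspected, but with millions of records and a more complex model, the “why” behind a particular answer is much harder to trace than an explicit rule base, which is the hit-or-miss quality described above.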
I personally categorize AI as all software making decisions, but what do you think? Do you think they are the same, or should we think of them differently?
I think they are all some type of AI. Any program you write can be considered AI, right? You’re giving it some information and it’s coming up with a result for you. It’s all AI; what differs is the underlying technology that you use. With a logic-type expert system, I think you could easily follow and replicate the inputs and outputs, whereas with some type of neural network or machine learning algorithm, it’s probably more abstract. I’m sure a lot of them can be followed straight through, but some might not be, and you’re not sure why the machine came up with that answer when a human might come up with a different solution.
How do you know who is right?
At the end of the day, you’re going to compare, and you’ll know who’s right by running tests.
What do you think are important projects to work on at TransRe? Can you tell us more about the TIRS system and how that works?
TIRS is our global reinsurance system. It’s an integrated treaty and facultative, assumed and ceded reinsurance system. We handle treaty and facultative business, and within the last three years we added a primary module to support our Fairco operations. We input policy-level details into the system, and we use that to manage our assumed business and our retro business. It has underwriting, accounting, claims and actuarial modules, and the system calculates IBNR. We don’t do pricing. It’s really an operational application for managing a reinsurance company’s business worldwide. We’ve actually sold the system to other reinsurance companies, and we have five active customers right now outside of TransRe, which I think is pretty cool. It’s not very often that you have a system that’s designed and built in house and you are able to sell it on the market among other systems. Another reinsurance company has to accept the fact that they’re being supported by one of their competitors. That’s a tough pill to swallow for a lot of people.
Have you ever felt a conflict in adapting your business model to cater to the demands of your customers as opposed to the demands of your internal requirements?
It does come up. TIRS has been actively used for 22 years at TransRe. It’s evolved from version one, and we’re currently on version 14; we just relabeled the version to 2020, giving it a year association instead of just a number. We’ve been enhancing and developing TIRS over the last 20 years, and there have been few cases where another customer wanted something added. We do all business lines worldwide, so requests have been minimal; anything that anybody has asked for is probably already built in. I haven’t had any requests for new enhancements to the system in over 10 years.
TransRe has a lot of edits and business rules built into the system, but a lot of that gets bypassed by our customers. We don’t require them to input all the fields. There’s a minimum they are required to input in order to keep track of the treaty and be able to book accounts and claims; otherwise the integrity of the system wouldn’t hold. It makes their operations a little bit easier for processing. There are a lot of business rules that our employees go through when they activate a treaty or book an account that the customers don’t see, and we have to follow our underwriting policies that are built into TIRS. We don’t impose our policies on other customers.
What’s an example of such a policy?
The process of inputting a treaty includes approximately 50 mandatory fields, out of 250 in the underwriting system, that employees need to input before they can activate a treaty. For a customer, that’s probably down to 15 or 16 fields. It’s great if they capture other fields, because that will improve reporting, but we don’t force them to input the rest.
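As a rough sketch of how profile-dependent mandatory fields might be enforced (the field names and the truncated lists are hypothetical):

```python
# Hypothetical sketch: internal users must complete far more fields
# than licensee customers before a treaty can be activated.
REQUIRED_FIELDS = {
    "transre_internal": {"treaty_id", "cedent", "inception_date",
                         "line_of_business", "currency", "limit",
                         "premium", "brokerage", "underwriter"},  # ...~50 in all
    "licensee":         {"treaty_id", "cedent", "inception_date",
                         "line_of_business", "currency"},         # ...~15 in all
}

def missing_fields(treaty: dict, profile: str) -> set:
    """Return required fields the user has not yet filled in."""
    filled = {k for k, v in treaty.items() if v}
    return REQUIRED_FIELDS[profile] - filled

draft = {"treaty_id": "T-001", "cedent": "Acme Insurance", "currency": "USD"}
print(missing_fields(draft, "licensee"))
# inception_date and line_of_business are still missing,
# so the treaty cannot be activated yet.
```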
Can you see the differences in behavior between their users and yours?
Well, we support them, so whenever there’s a problem they will call us and we’ll help resolve it. It’s usually some type of technical issue. If they have business questions, we’ll walk them through and say, “This is the way TransRe does it, so it’s truly up to you if you want to take those specific fields and use them. You have the exact same system that we use, but we don’t want to impose our policies on you.”
Have you learned anything interesting or insightful from these users that you have been able to implement at TransRe?
No, not really. They’re basically writing reinsurance, and we are probably on the same treaties as they are. A lot of these customers have other systems. TransRe has extensive history in terms of our data and our information. A lot of these companies are startups, and I prefer to install TIRS at startups just because there’s no retrofit of historical data, which could be a total nightmare. Integrating two different systems is a big job, and it’s just not something that we’re willing to undertake. We have a sophisticated actuarial module, but a lot of the clients don’t have the loss data to be able to utilize it. They’re putting in loss ratio overrides, and that works for them. It’s very simplistic in terms of calculating IBNR, whereas at TransRe we have locked-in factors and we use the Bornhuetter-Ferguson approach. It’s built into the system and we are able to use that algorithm to calculate IBNR.
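For context, the Bornhuetter-Ferguson method blends actual reported losses with an a priori expectation of ultimate losses. A minimal sketch with purely illustrative numbers (the locked-in factors TransRe actually uses are not shown here):

```python
# Bornhuetter-Ferguson sketch with illustrative numbers. A real system
# would use locked-in development factors per line and accident year.
def bf_ibnr(premium: float, expected_loss_ratio: float,
            cumulative_dev_factor: float) -> float:
    """IBNR = expected losses x expected unreported fraction."""
    pct_unreported = 1.0 - 1.0 / cumulative_dev_factor
    return premium * expected_loss_ratio * pct_unreported

premium  = 1_000_000
elr      = 0.65      # a priori expected loss ratio
cdf      = 2.0       # reported losses expected to double by maturity
reported = 280_000

ibnr = bf_ibnr(premium, elr, cdf)
print(f"IBNR: {ibnr:,.0f}")                 # 325,000
print(f"Ultimate: {reported + ibnr:,.0f}")  # 605,000
```

The appeal of the method for immature years is visible here: the IBNR estimate does not swing with early reported losses, because it leans on the expected loss ratio until the experience matures.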
If you’re able to license TIRS to other people, does that mean your secret sauce isn’t in the system? Maybe the secret sauce is in the data?
That’s correct, the secret sauce is not in the system. It really is the data; data is knowledge, and as far as I’m concerned, data is king. TIRS makes a company smarter, and at TransRe we prefer a smarter competitor. When they are able to review and know their results so they don’t underprice business, to us that’s a win-win. TIRS definitely eliminates the manual processing of teams of accountants and claims examiners. Well, you’re always going to need claims examiners, depending on your volume of claims, but you will be able to process those claims much more efficiently in TIRS.
TIRS does have electronic interfaces with three major brokers: Aon, Guy Carpenter and Willis. They are able to send us electronic accounting and claim messages that load into TIRS, and a good percentage of those messages get processed and booked automatically without human intervention. Business automation is pretty key. The other component of TIRS is that you input your accounts, claims and treaties and you’re able to get a P&L in seconds. I can run a profit and loss report on a particular treaty, a line of business, or a whole company in a matter of minutes, and see which lines are profitable or not profitable. It’s very powerful that you can look at the results instantaneously.
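A toy sketch of that kind of instant roll-up, over a hypothetical flat record layout rather than TIRS’s actual schema:

```python
# Toy P&L roll-up by treaty over hypothetical booked records.
from collections import defaultdict

bookings = [
    # (treaty_id, line_of_business, premium, paid_losses, commission)
    ("T-001", "Property", 500_000, 120_000, 75_000),
    ("T-001", "Property", 250_000,  40_000, 37_500),
    ("T-002", "Casualty", 800_000, 650_000, 96_000),
]

pnl = defaultdict(float)
for treaty, lob, premium, losses, commission in bookings:
    pnl[treaty] += premium - losses - commission

for treaty, result in sorted(pnl.items()):
    print(f"{treaty}: {result:+,.0f}")
# T-001: +477,500   T-002: +54,000
```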
How about data quality? Do you have any kind of embedded assessments of the quality of the claims data or is there some automated way of determining whether something makes sense?
There are a lot of fields, and a lot of validations occur. All the financials are validated because we run our financials out of TIRS; they go into our general ledger and we produce our results with them. There are also a lot of free-text fields where people input names and can mistype something. TIRS is not going to pick that up, it’s a system, but there are a lot of validations between the underwriting and the claims module. When I input a new claim, I have to pick the treaty and the section, and the lines of business are automatically prefilled to show me only the lines that are written on that treaty. The data’s extremely clean. For reports, I can slice and dice the data by line of business, underwriter, country, territory, really anything you have captured in TIRS. You do run into issues with the free-text fields, for example if somebody wrote a narrative description of the claim; there you really can’t do any structured reporting. The database is a relational database, which is extremely structured, but we have a lot of unstructured data associated with treaties and other entities.
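A minimal sketch of that cross-module check, with hypothetical treaty data: a new claim may only reference lines of business actually written on the selected treaty.

```python
# Hypothetical sketch of the underwriting-to-claims validation:
# a claim may only use lines written on the chosen treaty.
TREATY_LINES = {
    "T-001": {"Property", "Engineering"},
    "T-002": {"Casualty"},
}

def validate_claim_line(treaty_id: str, line_of_business: str) -> None:
    allowed = TREATY_LINES[treaty_id]
    if line_of_business not in allowed:
        raise ValueError(
            f"{line_of_business!r} is not written on {treaty_id}; "
            f"choose from {sorted(allowed)}")

validate_claim_line("T-001", "Property")   # passes silently
validate_claim_line("T-001", "Casualty")   # raises ValueError
```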
We use something called a Document Management System (DMS), and it acts as an electronic folder for a claim, an account, a treaty or a certificate, where underwriters, technical assistants, really anybody, can archive or store unstructured data (a PDF file, an Excel spreadsheet, a Word document). Any type of electronic file can be stored, and you can associate it with an object in TIRS. Reporting on unstructured data is very complex. This is where AI would be tremendous. If it could go through and read all the DMS documents (over 2.8 million) and make some sense of the losses in a specific class or line, that would be pretty cool technology.
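As a toy illustration of that idea, and nothing like a production approach: even a naive keyword pass could bucket documents by line of business, while a real solution over millions of files would need proper natural-language models.

```python
# Toy keyword classifier for unstructured documents; the keyword
# lists are hypothetical and a real system would use NLP/ML.
LINE_KEYWORDS = {
    "Property": ["fire", "flood", "windstorm", "earthquake"],
    "Casualty": ["liability", "bodily injury", "negligence"],
}

def candidate_lines(document_text: str) -> list:
    """Return lines of business a document appears to mention."""
    text = document_text.lower()
    return [line for line, words in LINE_KEYWORDS.items()
            if any(w in text for w in words)]

print(candidate_lines("Windstorm damage to insured warehouse."))
# -> ['Property']
```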
Your historical claims database goes back decades and is rich with data on cat losses, liability classes, all that kind of stuff. I’m wondering if there’s any treatment for that in the system itself, to think about the claims data and whether there are anomalies compared to this or that?
Look, our data is definitely there, and we can track catastrophe losses. Every claim associated with a cat is flagged to that cat, so we can run reports specifically for that potential catastrophe. Our actuarial department runs development triangles on all lines of business to come up with loss picks for those lines. Our advantage is that we have all this data. I think our actuaries are able to be more accurate than other companies’ because of the ease of getting the data out of the system and producing pricing algorithms that come up with better prices for renewals.
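For readers unfamiliar with development triangles, here is a minimal sketch with made-up figures: cumulative losses are arranged by accident year and development age, age-to-age factors are derived from the columns, and an immature year is projected to ultimate.

```python
# Made-up cumulative losses by accident year and development age
# (12/24/36 months); recent years have fewer observed ages.
triangle = {
    2017: [100, 150, 180],
    2018: [110, 170],        # 36-month value not yet known
    2019: [120],             # only 12 months of development so far
}

def age_to_age(tri: dict, age: int) -> float:
    """Volume-weighted growth factor from one age to the next."""
    num = sum(v[age + 1] for v in tri.values() if len(v) > age + 1)
    den = sum(v[age]     for v in tri.values() if len(v) > age + 1)
    return num / den

f12_24 = age_to_age(triangle, 0)   # (150+170)/(100+110) = 1.524
f24_36 = age_to_age(triangle, 1)   # 180/150 = 1.200

# Chain-ladder projection of the immature 2019 year to ultimate.
ultimate_2019 = 120 * f12_24 * f24_36
print(f"2019 projected ultimate: {ultimate_2019:.0f}")  # ~219
```

Loss picks for pricing then come from judgments layered on top of factors like these, which is where decades of clean data pay off.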
What is coming next for TIRS? What are some things that you’re working on that you find really exciting or interesting?
We are constantly enhancing TIRS based on business requests, anything from cash allocations to underwriting and reports. The next four to five years is probably when the next big change will happen to TIRS: we will redevelop the underlying architecture that TIRS is built on. TIRS is currently built on late-’90s, client-server technology, and it has served us extremely well, but so much has changed over the last 20 years, especially with the Internet. We are going to be transforming TIRS to a web-based architecture using microservices, Kubernetes, the latest technology available. We do have a big head start because we know what we have to build. It’s not going to be an easy task; it will be monumental. TIRS has about 2 million lines of code that have to be redeveloped. I think a lot of it can be ported, because we have a tool that allows us to port much of it, but it’s not going to be seamless. It will be a big project, but I’m extremely excited about it. This makes me want to get up and do it, because it’s great technology. I love learning things, and it’s going to be a huge success for the company.
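To give a flavor of the target style, here is a toy single-endpoint service of the kind a microservice decomposition implies; the endpoint, port and payload are hypothetical, and it uses only the Python standard library rather than whatever stack the rewrite ultimately chooses.

```python
# Toy treaty-lookup microservice: one narrow HTTP endpoint,
# e.g. GET /treaties/T-001 returns a JSON document.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

TREATIES = {"T-001": {"cedent": "Acme Insurance", "status": "active"}}

class TreatyService(BaseHTTPRequestHandler):
    def do_GET(self):
        treaty = TREATIES.get(self.path.rsplit("/", 1)[-1])
        body = json.dumps(treaty or {"error": "not found"}).encode()
        self.send_response(200 if treaty else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), TreatyService).serve_forever()
```

Many small services like this, each owning one slice of the domain, are what replace a single client-server executable in this kind of migration.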
What’s the biggest thing that this upgrade is going to do?
The key thing is the underlying technology. TIRS is written in a language called PowerBuilder. PowerBuilder is still supported, and they have a new release coming out, but finding talented people able to develop in PowerBuilder is becoming more of a problem. I think it will be exponentially more of a problem 10 years from now. I’m trying to future-proof this system. You want to hire young, motivated people who want to work on new technology; they don’t want to work on technology that was developed 20 years ago. I think it would be really tough to hire people to maintain and continually develop this application going forward if we don’t make this move. That’s really the biggest factor. I think this technology will still work 10 years from now, but the real question is whether we will have the people to support it.
My guest today is George Di Martino. George, thank you so much.