In this podcast Otakar Hubschmann, Head of Applied Data at TransRe, discusses machine learning and the future of technology in the (re)insurance industry.
Can’t listen now? Read the transcript (unedited version)
My guest today is Otakar Hubschmann. Otakar runs the applied data group at TransRe, a global reinsurance company. The group seeks to monetize information through data science and machine learning. Otakar’s background is in portfolio management and trading at international banks and hedge funds. Otakar, welcome to the show.
Thank you. Great to be here, David.
So, first question. It seems to me that there are two strategies in AI and machine learning. In my experience, the first is to automate things at scale, things humans can already do. I think of image recognition as an important example: machines allow us to look at video and analyze and understand it without a person actually watching, which is valuable. The second is to teach us things by processing more data than humans could, and this can be done at scale as well. For instance, with recommendation engines, the AI can observe lots of things and infer meaning where a human would not be able to. But I think these are distinct sources of value. So first, you can agree or disagree with that premise, and second, where does AI add value in reinsurance?
Well, good, big, broad question to start. AI broadly speaking, or machine learning, which is a subset of AI, is broken up into supervised and unsupervised learning. The thing you’re talking about, when you give a computer pictures of dogs or cats, is called supervised learning: it learns from examples that are shown and labeled. We actually do this whenever you do one of those CAPTCHAs, you know, where to prove you’re not a robot you’re actually helping some AI, some ML algorithm somewhere, better label pictures for use in something, for the Borg. The other one is unsupervised, and that’s more like you throw a bunch of data at it, you don’t say anything about what the data represents, and you use the machine learning algorithms to cluster the data and try to separate it into different groups. Then you look back on the data and say, oh, these groups look similar for these reasons and these groups are separated for these reasons. So those, broadly speaking, are the two things. But yeah, I think the main advantages of machine learning in any industry, and certainly the potential applications in insurance and reinsurance, are to make decisions at scale and to better optimize decisions that would take much longer funneled through a human. At the heart of any machine learning algorithm is some function, usually a loss function, that you’re trying to optimize. You’re trying to improve upon an error rate. The idea that the machine ‘learns’ is that you show it more data and the optimization becomes better and better. The hope is that if the algorithm can generalize information about a dataset, then you can provide a new dataset in the real world, what’s called an ‘out-of-sample’ dataset, and the model will be able to generalize on that data too. So broadly speaking, that’s sort of it. Scale is a huge thing: better decision making at much larger scale, and optimization of a number of different things.
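To make that distinction concrete, here is a minimal Python sketch on toy data, assuming scikit-learn is available; it is an illustration of the two paradigms, not anything from TransRe’s pipeline.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data standing in for labeled examples (features X, labels y).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Supervised: learn from shown-and-labeled examples, then check how well
# the model generalizes to data it has never seen ("out-of-sample").
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("out-of-sample accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels at all; the algorithm just separates the data
# into groups, and a human looks back to interpret why the groups differ.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [(clusters == k).sum() for k in (0, 1)])
```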
Reinsurance, though, here’s the thing I see about that business, right? It’s full of super, super highly capable experts looking at masses of data and making decisions on that data. Where does the machine squeeze into that process to add something to it?
Yeah. So that brings up a good point, something I always address whenever I give my ‘Skynet is not going to take over’ talk to underwriters or whatever line of business. The idea that you could have a machine, or a set of algorithms, come in and take over from someone who’s been in the industry, with learned domain expertise and a set of heuristics proven over time, that just is not going to happen, maybe ever, but certainly not for many, many years. So the way I’ve always thought about it: Garry Kasparov, the famous chess grandmaster, played Deep Blue, an IBM computer running a big, deep search algorithm, and lost. After that experience, he always said he thought the most optimal setup would be neither chess master nor machine alone, but chess master aided by machine, sort of a bionic middle place. I’ve always thought of it the same way, back to your point, which I thought was a good broad one: you have the scaling and optimization algorithms and process, and sitting on top of that you have the domain expert, the actuary, the underwriter, the finance person, the risk manager, whatever the line of business. Machine learning enables those individuals to take their domain expertise and apply it at a larger scale, so they can make more decisions, better decisions, much faster than they could previously. It isn’t a matter of business being taken away from the human; it’s enabling the human to scale up their business.
Okay. So this is more the expansion, scaling side of things then, rather than teaching people? Or is it more teaching people? What do you think?
I think it’s initially probably more scaling, but the feedback you get from the algorithms matters too. I always think about it as: different features or attributes of a cert, policy, or treaty equal either claims or no claims, whether you have severe claims, a number of frequent claims, whatever. Over time you get to see what attributes make up good pieces of business and what attributes make up bad business. So I think initially it’s probably the scaling thing, but the idea is very powerful that you could take, again, the inborn, learned heuristics that, say, underwriters have when they’re trading and underwriting, and couple that with new insights gleaned from the machine learning algorithm telling you: you know what, maybe you haven’t seen this before, but this particular combination of business yields an outsized good return or an outsized bad return. Then they can begin to fold those into their set of more qualitative heuristics.
Do they buy it? You know what I mean? I know a lot of underwriters, as do you. Some of them are pretty old school and, I imagine, will be a little bit, let’s call them Luddites. There’s a bit of a negative connotation to that, but I could see them resisting and saying, yeah, that’s wrong.
Yeah. So, I’ve been at TransRe for four years now, and what I found is probably what you would expect: a pretty broad spectrum. There are those that buy in right away because they see the value; then, in the middle ground, those that buy in, but only if the machine learning results or the algorithmic results tell them the story they want to hear; and then on the other side, like you said, it’s sort of, ‘Hey, listen, I’ve been doing this for however many decades. There’s nothing a machine could tell me that I wouldn’t already know.’ I respect that.
My background coming from trading, I know that, again, I keep calling it the heuristic piece, but the rules-of-thumb piece is super important. That gut sense I do believe is something, but it can be better optimized, so the idea is that you would make better decisions using your gut: algorithmically aided gut decisions that avoid more mistakes. To me, intuitively, that would be a pretty cool thing. But to your point, there’s definitely a spectrum, and I think insurance is in a phase right now, probably spurred on by the pandemic, where people realize this is happening, it’s coming down the line. I used to jokingly say ‘winter is coming’ in terms of the necessity. It’s not going to be a choice. At a certain point the question will be: have you been using data-enabled machine learning tools, and have you tried to mine and monetize your data? That will be happening in the background, and there will come a point where people wake up and say, oh no, all these other companies were doing that and they have such a head start on us. Now that everyone is saying it’s important and we’re beginning to algorithmically automate the more homogeneous data sets, there will be a point, like I said, and it will be weirdly very gradual and then it will become binary. I sound like I’m talking about the Singularity, but I’m saying one day people will wake up and say either, man, I’m really glad we started doing that five years ago, or, oh no, we didn’t do that five years ago and now we have a lot of catching up to do.
Two thoughts come to mind. One is the way that actually manifests itself: suddenly your results are a little worse and you’re having trouble figuring out why other people are doing better. You just didn’t pick the right deals and didn’t get big enough lines on the right ones. Reinsurers actually don’t have that many decision points, particularly in an open, subscription market. You can get some private deals, I suppose, but for the most part it’s: do you go overweight or underweight on this or that treaty?
Right. I think that initially, from what I’ve seen, it will be spurred on by people worried about the fear of missing out, the dreaded FOMO. They’re going to hear about it, and I already sort of see it with third-party companies trying to come in and sell us stuff, whether it’s data ingestion tools or algorithmic trading of underwriting lines, stuff like that. I’ve seen in my short time in insurance that companies are spurred to action in this area because they see other companies doing it, and they’re worried they’ll have to present to management with nothing to show, and management is going to say, ‘Hey, I heard they’re starting to do this thing at another company; why aren’t we doing it?’ There’s such a huge spectrum of how that plays out, it’s very hard to say, but like I said, what I’ve seen so far is that it’s spurred on by the fear of missing out and then it’s sort of catch-up. And we have some of that going on in the pandemic with some of the automation things within the industry, I think.
Interesting. It sort of speaks to the delay between any strategy that gets implemented and how long before you really see real results from it. If you’re a mature company like TransRe and you have a portfolio of a bunch of deals, how hard are you really going to yank the wheel, no matter what happens, right? No matter what new information you get, how hard can you really yank the wheel? There’s always this unknown-unknowns thing to it. You’re sitting around the underwriting committee saying, let’s just get off all these deals and get on all these ones, and it’s like, that’s just not how the world works, so you have to make decisions on kind of secondary measures of quality. Which, I remember when I was a reinsurance broker, was kind of amazing: at all levels there’s a bit of a herd mentality, where somebody publicly says, hey, we did this as an organization, and now your CEO has to answer to shareholders about why you have not done such a thing, because you never know whether it’s going to work for years, in either direction.
It’s bizarre, but that’s a good way to describe it. I think it is herd mentality, because I’ve seen some Insurtechs in the industry that have been very well funded, and then when they come in and talk to us, almost all the time they have an idea that is at the very nascent stages of anything, and yet they’ve been funded. I think that is due in part, again, to the fear-of-missing-out thing. So they’ll come in, and the Wall Street Journal article says they have the algorithm to change the industry, but then when you ask them, they say, ‘Well, you know what, to be honest, between us, it’s not really an algorithm so much as some rules-based things, but we’re going to work on the algorithm, and guess what, we just got $300 million, so that’s actually not something we have to worry about.’ So there’s that, which I think is inherent in a lot of startups anyway: I don’t want to say fake it till you make it, but you don’t have the product you’re claiming until you have the funding to build the product you’re claiming.
I think, to your earlier point, there does need to be some, maybe not a direct bifurcation of ‘this is the business we underwrite with all our usual qualitative things, and this is the business we do algorithmically.’ I don’t know if it’s even going to be that harsh. Like you said, there needs to be some gradual spin-off towards the ‘road less taken.’ So this would be one scenario: you incubate. You look at the same deals and you have the underwriter do their same processes, but at the same time you have the algorithm, or whatever you want to call it, it sounds very sentient, but it’s just the secret-sauce recipe, working on those same deals, unbeknownst to the underwriter. Then you present the findings after, whatever, a year, 18 months. That’s the more quantitative thing: this is how we do it currently, this is how we could be doing it in the future, and are there any things to glean from the way the machine did it on a strictly quantitative basis versus our qualitative basis?
That’s the way I sort of see it playing out. I mean, to your point, I certainly wouldn’t want to sit in front of our board and say, ‘Hey, good news guys, or bad news guys, we turned the key and the building exploded.’ It does need to be gradual, and there does need to be a base established so you can do comparative testing versus the way things are currently done. That’s the only way, I believe, that you get true sustainable buy-in: doing the two things concurrently and then comparing how they do against each other. That’s one of the good things about doing, say, underwriting in an algorithmic way: once the model is up and running, you can take that version of the model, I used to do this in trading all the time, and tweak it five, 10, 15, 20 different ways, and they all spin out and incubate at the same time, all running in the background, sort of like paper trading, right, not spending real money. Then you can take the one that does the best, or you can do some optimization across all of them and have them be an ensemble, where basically all the models vote. There are a number of different ways to do it. Again, it’s not like you’re hiring more people; one model or 20 models is the same price.
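As a hedged illustration of that spin-out-and-incubate idea, here is a short Python sketch on toy data, with scikit-learn models standing in for whatever the real ones would be: several tweaked variants of one base model “paper trade” a held-out book, and a simple majority vote combines them.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Toy "book" of deals; in practice this would be real treaty/claims history.
X, y = make_classification(n_samples=1000, n_features=12, flip_y=0.1, random_state=1)
X_tr, X_pp, y_tr, y_pp = train_test_split(X, y, random_state=1)  # pp = paper portfolio

# Spin the base model out into variants with slightly different knob settings.
variants = [
    GradientBoostingClassifier(learning_rate=lr, max_depth=d, random_state=1).fit(X_tr, y_tr)
    for lr in (0.05, 0.1) for d in (2, 3, 4)
]

# Each variant incubates on the held-out data, like paper trading: no real money.
print("variant scores:", [round(m.score(X_pp, y_pp), 3) for m in variants])

# Or combine them into an ensemble where all the models vote.
votes = np.mean([m.predict(X_pp) for m in variants], axis=0) >= 0.5
print("ensemble accuracy:", round((votes == y_pp).mean(), 3))
```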
Yeah. There’s an image in my mind of model multiplicity, I guess. Right now, at a typical reinsurance company, you have this underwriting committee, whatever it is. You’ll have five or six super experienced professionals in the hierarchy, and they sit down around a table, virtually or physically, and they just go through the deals. They say, well, this one, that one; give me the pitch, Jimmy, this is your deal, and Susan, that’s your deal. Jimmy’s first, Susan’s next, and they just hash it out at the table. They talk about whether they want to do it or not, or how much they want to do, and they come up with an outcome. So that’s six or seven models working there, in people’s minds, right? Now maybe you just add another one, or maybe you add 50, or, I don’t know, 2 billion, or whatever number of models you can put in that chair, the AI chair. Is that a proper vision for how this could work?
I think that’s a good way to articulate it, and the idea is that, say, we have one nice base model. To contextualize a little: when we talk about a model, we take historical data, the two parts of the equation. It’s the cert, treaty, policy information and the claims information, and we’re basically trying to divine what pieces on the left-hand side of the equation equal the pieces on the right-hand side. We throw a number of algorithms at it, then we optimize the knobs and buttons on those algorithms. We try to make sure it’s not overfitting, and overfitting is a big problem in machine learning. What that means is the algorithm remembers and learns the dataset it trains on so well that when it sees a new dataset, it’s very brittle and it breaks; it doesn’t model well on a new dataset. We need the algorithm to be able to generalize.
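Here is a tiny sketch of the brittleness he’s describing, on noisy toy data, assuming scikit-learn: an unconstrained tree memorizes its training set and degrades out-of-sample, while a constrained one generalizes. The features are stand-ins, not real treaty attributes.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy toy data (flip_y injects label noise, so memorization can't help).
X, y = make_classification(n_samples=400, n_features=10, flip_y=0.2, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

deep = DecisionTreeClassifier(random_state=2).fit(X_tr, y_tr)                  # memorizes
shallow = DecisionTreeClassifier(max_depth=3, random_state=2).fit(X_tr, y_tr)  # generalizes

for name, m in (("deep", deep), ("shallow", shallow)):
    print(f"{name}: train={m.score(X_tr, y_tr):.2f}, test={m.score(X_te, y_te):.2f}")
```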
So the way I see it is you get your good, number one, secret-sauce algorithm, and then you can spin that out to 15 or 20 variants that all have, like I said, slightly different tweaks of the knobs, slightly different emphasis or weightings of different attributes. Then, collectively, what those models sitting in the AI chair can do is accentuate different aspects of the underwriting process, the things we think are important, whether it’s emphasis on premium relative to the limit, or where you’re attaching, or what the sector is. Collectively, those 20 models may all broadly work and return roughly the same results, but we’ll get an interesting tail from each of them where we can see outliers, which would help us, broadly speaking, make better decisions going forward.
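One way to read that “interesting tail,” sketched in a similar toy setup: look at where the variant models disagree most, and flag those deals for a human. This is an assumption about how the spread might be used, not a description of TransRe’s process.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy deals again; each variant weights the attributes a little differently.
X, y = make_classification(n_samples=600, n_features=12, flip_y=0.1, random_state=3)
variants = [
    GradientBoostingClassifier(max_depth=d, random_state=3).fit(X, y)
    for d in (2, 3, 4, 5)
]

# Probability-of-claims per deal, per model: shape (n_models, n_deals).
probs = np.stack([m.predict_proba(X)[:, 1] for m in variants])
disagreement = probs.std(axis=0)          # spread across the model variants
flagged = np.argsort(disagreement)[-5:]   # the deals the models argue about most
print("deals worth a human look:", flagged)
```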
Let’s come back to chess for a second, and I don’t want to overdo the chess metaphor because that’s a closed system with certain rules, right? There’s a finite number of decisions you can make in chess. It’s a big number, but it’s finite, so you could in theory memorize the chess training set and solve the entire game. You made a distinction earlier between supervised and unsupervised. So, Garry Kasparov and the supervised learning of Deep Blue: they fed it a whole bunch of stuff people had done in the past and gave it examples of good games. Then along comes AlphaZero, which simply gets the rules of chess and plays itself for a couple of days or something, millions of games. Now it’s like an alien intelligence, and the thing is teaching humans how to play chess in ways they can’t imagine. Even before then chess had exceeded human capability, but now there’s this whole world, and it’s inspiring. I don’t know if you’ve seen this stuff, but it’s kind of neat to see how it just plays the game in a different way. It shows maybe there’s some path dependence to how humans have learned the game, and there could be multiple realities we could have learned. So here, like I said, is an alien intelligence from another universe. Could you imagine an algorithm learning reinsurance in this kind of unsupervised way? Let’s break this model of chess versus insurance. Can it be done, something more ambitious than our chair with 20 models in it?
Yeah, sure. I think what you’re talking about is AlphaGo Zero and the multiple variations of the Alpha models, what I believe Google DeepMind is doing, where each of the algorithms aimed at games gets increasingly better at learning. For some of them, I believe, they don’t even give them the rules of the games. The example in this case, I believe, was in Go, where the model started beating the Korean champions. There was a point in one of the games at which the computer model made what the Go experts considered an achingly beautiful move. I believe this speaks to your idea: is there a point, the idea of general AI, when computers can pass the Turing test and do things that aren’t so rote, things that become very nuanced, like, in the case of Go, what many of the experts and masters considered a beautiful move, sort of out of nowhere?
My short answer would be ‘I don’t know,’ but one of the things I worry about, sitting in my seat, my job, my role at TransRe, is that somewhere, in their garage or at a huge company, somebody completely unrelated to anything having to do with insurance or reinsurance is playing around with something they can transfer to the insurance and reinsurance industry, and they basically say, well, we figured it out, everybody. Then all of a sudden there’s a sea change in the industry. So I definitely think there’s a chance of that happening, whether it’s one of these huge companies like Google or Amazon, or somebody super smart tinkering in their basement or garage with a certain type of transfer machine learning or whatever. To my mind, the idea that we could do, call it beautiful, artistic underwriting, that’s something I’m excited about more than fearful of. But like I said, from a job-security perspective, I’m probably sort of fearful that somebody is working on something completely orthogonal to insurance that all of a sudden works extremely well in underwriting.
I’m probably a little less worried than you. I think so much of the value of really high-quality insurance and reinsurance underwriters, a little more so in reinsurance, is their knowledge of the market, their ability to figure out who’s going to swindle them, and their understanding of their own weaknesses as humans making decisions. The underwriting committee is there to smooth out the cognitive biases of individuals and bring in greater knowledge, and maybe greater relationships, to know who the bad guys are and who the good guys are. Think about the structure of reinsurance deals, and even the culture of talk in the reinsurance industry: it’s all designed to stamp out moral hazard.
It’s all designed to say: we have two giant financial institutions, and the most sophisticated thing they really want to get into is the quota share. These are very smart people who should be able to think of something a little more complex than ‘half of it is mine and half of it is yours.’ The problem is their fear of moral hazard is so strong that that’s where they’re most comfortable financially, and it’s so easy for somebody to game other people using a different method. So to me, the revolutionary technology you’re describing would need to find a way to tell somebody when they’re getting swindled, because I see that as the secret sauce of underwriting. Unless the machine can somehow unearth some kind of trust-o-meter, like, you should trust this MGA startup selling long-haul trucking. That to me would be pretty amazing, but I have a hard time envisioning computers, machines, being better at figuring out good and bad people. What do you think about that?
Yeah, I think that is a good point, the ‘am I getting swindled’ meter. What you’re speaking to is that machines, right now, don’t do well with human things, with the behavioral aspects. When it comes to the final decision on a deal, between somebody who’s been in the industry 20 or 30 years and something a computer is spitting out, I certainly would go with the human. However, things work and the status quo works until the status quo suddenly does not work. I saw a number of examples in trading and on Wall Street when I was working there; one was how sacrosanct the idea of a certain level of commission rates for doing trades was.
You even see that now. Before the company Robinhood, the idea that you would not charge for executing trades was anathema to the entire industry; people had basically predicated their entire careers on that rule. I remember when I first got in, people would say to me that reinsurance was a gentleman’s game, and I always thought, man, what a slippery slope that is. Everybody agrees on the rules until somebody doesn’t agree on the rules and finds a better way to do it, and then it’s a race to the bottom. What I mean in this particular context is Robinhood says there are going to be zero commissions for trading. I saw that same type of thing before: I’m old enough to remember when you could only charge in specific increments for trades. I believe the lowest was one sixteenth, which we called the teenie. Then things went digital and you could charge whatever you wanted, out to a number of decimal places, as low as you liked; prior to that, the idea was crazy. The idea that you can execute a trade essentially for free, although those companies are receiving money from other companies for the order flow they deliver, again, that had been anathema to the industry. But then the paradigm changed, and the gentleman’s game in that respect completely broke. I think insurance and reinsurance are ripe for that type of thing, and it’s fine until it isn’t fine.
You’re saying they get paid for the order flow. I only kind of imperfectly understand that business. You mean that the monetization strategy changed for those companies, those intermediaries?
Yeah, okay, to the extent that I hopefully understand it: a company like Robinhood will receive retail orders, and then a company like Citadel, or one of these other quantitative trading shops, will pay Robinhood to see that order flow so they can get a better sense of the book, the book being the bids and asks for a particular stock, the book size, where things are trending. So Citadel or the quantitative hedge funds are paying for information to help improve their picture of the market and to help improve their liquidity. Those are the types of things I think about: what are the analogs in insurance and reinsurance? One of the things, and I hope it’s not too controversial, is that I was always so interested in, curious about, why the industry was so dependent on intermediaries and brokerage houses. I had seen that dwindle and diminish on Wall Street, where I used to be covered by five or seven people at a firm giving me information about a deal, if I was doing, say, M&A trading, or whatever the vertical was. Then I saw it go to three people, and then down to one person. There’s so much fat in the distribution system here, so many seeming inefficiencies, that it’s ripe to be disrupted. And you have an interesting situation, because the people that are going to be disrupted are the actual lobby fighting against the disruption: the underwriters, the brokers, whoever is transacting. Collusion is the wrong word, but there is a huge impetus and incentive to keep things status quo, and that plays into the reticence around adoption. Certainly, for the stuff in my job, the lobby is the actual people you’re trying to help with the tools you’re building.
Yeah, that’s interesting. Where do you get ideas for what you do? What’s the roadmap? How does that come about? Do you look outside the industry? Where does the plan come from for TransRe? Where do you get inspiration?
Well, I’ve always been a big proponent of the orthogonal application of things I’ve learned from other areas, whether it’s science, physics, business, whatever. Certainly I come to things with a trader’s mindset, having traded my whole career before TransRe. So I think, very directly, about how you monetize information. There is, generally speaking, always going to be incomplete information, so how do you make that step to monetizing the incomplete picture? The inspiration for me is still what worked when I was trading: the idea that you could take a thought of yours, where you thought there was a potential arbitrage opportunity, or an inequality between two data sets, or something somebody else hadn’t found, where the alpha or the edge hadn’t been whittled away.
So when you took that and made it into a model, and then saw it make money, it’s a really alchemic thing. Somebody had said that what my group did was sort of alchemy. It’s funny, because alchemy is a little bit of a disparagement, but not when you remember that alchemy was the beginning of chemistry and the sciences. And to somebody that doesn’t understand the innards of the black box, which I totally believe should not be obfuscated, it should be seen by everybody, the idea that you can take an idea, put it into a model, the model finds the inefficiencies and acts on that idea, and you can make money from it, that is fantastic.
I go back to, I really like Richard Feynman, the Nobel physicist, and he always said the pleasure of finding things out was what drove him. I feel very much the same way about a new idea for anything. I remember I got the idea for one of my best-performing trading strategies from talking to somebody at a Mötley Crüe concert; the idea just popped into my head. Being able to apply information from all aspects of your life, seeking out interesting people to talk to, reading widely and broadly, trying to find conversations that tweak your brain, scratch your brain in an interesting way, all of that can be applied to what I believe my job is, which is monetizing information in different ways. I feel like I went a little bit on a tangent there, but it comes down to the game, which to me will be infinitely interesting: taking ideas, thoughts, information, and trying to make money from it.
Coming back to the original question, maybe we can close on this thought, or explore this thought. I’ve sort of become convinced, and you can tell me if I’m right on this or not, that it’s actually the exploratory, educational side of algorithmic analysis, AI, machine learning, that is perhaps more interesting at the moment, right? What can we learn about the world that this algorithm can teach us? And you can disagree with that, of course, but how do you communicate it? How do you teach somebody with such a thing? Because the algorithm in that chair is going to say, you should turn this deal down, and the underwriter says, well, why? What do you know? How do you answer that question? How do you design the output to be interpreted?
I think you don’t pretend that your information, or the information you’ve seen from your model, is any better than theirs. Basically, we build a model and the model will spit out what are called ‘feature importances.’ It will say, hey, you should be looking at this particular sector, this level of rate, this level of limit and attachment, things like that. So I will go back to the underwriter and say, okay, for this deal or for your portfolio, it’s told me that these things were important; these were the attributes where the decision tree algorithm broke off at a node and went one way or another, so the decisions were predicated on those specific attributes. And I will say, first of all, does this intuitively make sense to you? And if not, why? I will basically ask them to do what is in effect a human CAPTCHA, where I say, I need you to help me, for lack of a better term, label your portfolio. So you have to say to me: this would be considered a strange piece of business, except in the notes you’ll notice I have these three or four things. I take those three or four things and they become new attributes to make the model better. So you take the generalized model and then you try to be very deliberate about including all the heuristic tweaks that would seem incongruent with the model but actually make sense in real life, because you’re taking into account the qualitative expertise of the underwriter, or whoever the professional is.
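Here is a hedged sketch of that loop, with invented column names standing in for real treaty attributes: fit a tree-based model, surface what drove its splits, and take the ranking back to the underwriter as a question rather than an answer.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical mini-portfolio: attributes on the left, claims outcome on the right.
df = pd.DataFrame({
    "premium":     [120, 80, 200, 60, 150, 90, 130, 70],
    "limit":       [1000, 500, 2000, 400, 1200, 700, 900, 600],
    "attachment":  [250, 100, 500, 50, 300, 150, 200, 80],
    "sector_code": [1, 2, 1, 3, 2, 3, 1, 2],
    "had_claims":  [0, 1, 0, 1, 0, 1, 0, 1],
})
X, y = df.drop(columns="had_claims"), df["had_claims"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 'Feature importances': which attributes the trees split on most.
for name, imp in sorted(zip(X.columns, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>12}: {imp:.2f}")  # then ask: does this ranking intuitively make sense?
```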
That’s interesting. So in a sense, you’re forcing them to stand by their guidance to you, and saying, well, over here on this model, on this deal over there, you trained it with this information and it’s just really giving that back to you, but differently.
Exactly. For instance, we built a model trained on a huge history of our facultative business, and it will take certs and say, based on previous business, you are this percentage likely to write this again. So when I show that to the underwriters, the idea is that they get, say, 20 or 30 certs a day they need to potentially look at. We’ll provide them with the top N, saying you’re most likely to write these, and so on all the way down, so again, it helps scale and optimize the decision making. And when I show these to the underwriters and they say, you know, this is wrong, it says I would write this 95% of the time and we didn’t write this deal, I say, okay, well, why didn’t you write the deal? And they’ll say, again, like I mentioned before: well, this looks like a deal we’d write all the time, except it has this specific little feature to it. Then we’ll add the feature into the model. So, with an underwriter who is open to the process, it’s really fun and terrific, because they can see the value they add in terms of making the model better to help them. They get feedback from decisions they’ve made in the past, because you’re training on what are admittedly biased decisions. By that I mean everyone has biases in their decision making; you have trained the model on the biased decisions of the underwriters. So when you feed it back to them, like you said, it’s: well, you made this decision, which helped shape the model’s decisions, so now, why did you change your decision? Then, in sort of a Bayesian sense, Bayesian statistics, you just update your decision making, like in real life, right? You update your probabilities and your decisions given new information. That’s what we constantly do with the models, to make them generalize better to the real world, because you don’t want a hemophiliac model where, if you cut it in the real world, it bleeds out.
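A minimal sketch of that facultative triage, with made-up fields and a stand-in model: score each incoming cert with the probability the desk would write it, then serve up the top N first. When an underwriter disputes a score, the reason becomes a new feature.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical history of past certs with the underwriters' (biased) decisions.
history = pd.DataFrame({
    "premium": rng.uniform(10, 200, 300),
    "limit": rng.uniform(100, 2000, 300),
})
history["was_written"] = rng.integers(0, 2, 300)  # stand-in labels

model = LogisticRegression().fit(history[["premium", "limit"]], history["was_written"])

# Today's 30 incoming certs, ranked by the probability the desk writes them.
today = pd.DataFrame({"premium": rng.uniform(10, 200, 30),
                      "limit": rng.uniform(100, 2000, 30)})
today["p_write"] = model.predict_proba(today[["premium", "limit"]])[:, 1]
print(today.sort_values("p_write", ascending=False).head())  # top N to look at first
```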
So, last question, I think: how do you measure progress, for your efforts and for the models? Or is it different?
Yeah, that’s a great question. Coming from an industry where my progress, my cumulative progress, could literally be measured with a single black or red number at the end of every day, because I was running a portfolio and I had P&L every single day, it’s been an interesting switch to not having a P&L for my group. I remember reading in one of Nassim Taleb’s books, maybe it’s The Black Swan, where he talks about how, if there had been a person who created the door latch that would have prevented the 9/11 hijackings, and that door latch had been created before 9/11, you would maybe never have heard of that person. So it’s very hard, in some instances, to prove certain things out. Progress to me is probably measured by how adopted the tools become, how open I see people getting, the collective attitude of people towards the tools. Hopefully down the line, and not too far away, there’s going to be an actual measurement, because we’re going to have automated ML models in production where I can actually see, after some time, how they’re doing.
Looking to the future here, what is your priority? What’s the thing you’re most excited about building? You mentioned a monitoring system; what are some other cool things on the frontier?
Well, you know, all of this sort of abstracts away one of the huge issues in the industry, which is that data quality is, generally speaking, terrible. A ton of time is spent trying to get the data across all lines standardized: the way it’s brought into the company through TAs or whatever, how it’s ingested, how it’s cleaned, how we do the name matching so we can follow an insured all the way through the process. So there’s still work to be done, and we’ve made a lot of progress, but there’s certainly still work to be done to build a very strong foundational data base, not in the database sense, but an actual base data set, to be able to do the sexier algorithmic things on top of it. What I see down the line is, again, if you have a spectrum from homogeneous data to heterogeneous data across lines of business, we begin to monetize the more homogeneous data sets and lines of business first.
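For a flavor of the name-matching piece, here is a toy sketch using only the Python standard library; the names are invented, and a production system would use something sturdier (phonetic keys, trained entity-resolution models).

```python
import difflib

# A tiny canonical list of insureds; real ones would come from the base data set.
CANONICAL = ["Acme Trucking LLC", "Beta Manufacturing Co", "Gamma Logistics Inc"]

def normalize(name: str) -> str:
    # Cheap cleanup before fuzzy matching: lowercase, strip punctuation, squash spaces.
    return " ".join("".join(c for c in name.lower() if c.isalnum() or c.isspace()).split())

def match_insured(raw: str, cutoff: float = 0.8):
    """Return the canonical insured closest to `raw`, or None if nothing is close."""
    keys = {normalize(c): c for c in CANONICAL}
    hits = difflib.get_close_matches(normalize(raw), list(keys), n=1, cutoff=cutoff)
    return keys[hits[0]] if hits else None

print(match_insured("ACME Trucking, L.L.C."))  # -> Acme Trucking LLC
print(match_insured("Unknown Widgets Ltd"))    # -> None
```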
At the same time, I believe there are probably going to be very interesting opportunities in the data sets that are harder to model, because there will be more of an arbitrage in the more heterogeneous, bespoke data sets. So down the line, for me, it would definitely be, broadly speaking, automation of a number of the services, algorithmic underwriting of certain lines, and just a very speedy, scalable, at-your-fingertips way to look at your portfolio and make better decisions, more decisions, in less time.