AI in Finance - Bill Waid - 10.23.23

5:19PM Oct 24, 2023

Speakers: Matthew DeMello, Bill Waid

Keywords: ai, data, models, management, generative, decision, episode, business, infrastructure, secrecy, fico, transparency, provide, technology, outcome, process, bill, moat, copyright law, strategies

Welcome, everyone, to the AI in Financial Services Podcast. I'm Matthew DeMello, senior editor here at Emerj Artificial Intelligence Research. Returning to the program is Bill Waid, chief product and technology officer at FICO. Bill has been with FICO for over two decades developing decision management systems, and prior to his time at FICO, Bill worked for different software platforms in sales and systems engineering. He returns to the program today to talk about the challenges in winning executive buy-in for data governance projects, how this can help FinServ teams level up the infrastructure necessary for transformative AI use cases, and the role decision management plays throughout the process. Without further ado, here's our conversation.

Bill, thanks so much for being back on the show.

My pleasure, Matthew. Looking forward to it.

Absolutely. Just in terms of your background, we've talked a lot about where you're coming from, from a data perspective at FICO. A huge part of data governance is getting management on board, winning executive buy-in. What do you see as the best strategies for winning executive buy-in for the infrastructure investments necessary to help FinServ leaders level up their operations with AI capabilities?

Yeah, I think inherently most organizations know that they need a strategy to leverage data, and that they need to apply some form of machine learning and AI technology to get the most out of that data to drive business outcomes. I think that's pretty well understood. The biggest challenge I see is the hesitation around how to link that effort and expense, both from an infrastructure and a people perspective, as an investment and assure that you're going to get an outcome. That connection is probably the weakest of all the technology offerings out there today, because the business needs to be front and center in whatever decisions and processes are put in place.

Absolutely. And I know return on investment is the name of the game whenever you're talking to management. Culturally, in AI and data conversations, the point of friction ends up being that if you're going to really invest in these systems, you need a great deal of patience, a very long wick on that stick of dynamite, and an open mind about what return on investment really is. It might not come in hard dollar signs, which is a tremendously difficult thing for, especially, Harvard Business graduates to absorb. What do you find is the best way to communicate that sense of return on investment, if it's not in dollar amounts, or where it's going to lead to business capabilities that will move management in the direction of putting their name on this project?

Yeah, I think one of the fundamental things is providing transparency and evidence. And evidence is the hardest thing to actually get, especially at the outset, because it's a bit of a trust exercise: okay, I'm going to do this, and I'm going to get great things on the other side. But if the business does not have transparency into what those benefits are going to be, even if they're the best machine learning models or AI technology in the world, they won't want to adopt them. One of the real-world examples I've run into that sort of punctuates this: I was once talking to the CEO of an auto finance company, and they had to update their behavior and risk models. Those behavior and risk models had been sitting in use for years, and the team came up with really good analytics off a new set of performance data. But no one could tell the CEO what those new models were going to do to his pricing model, and they were core to and integrated into it. This is a very simple example of the transparency I mentioned. When your data scientists are building great machine learning models or analytics off the back of the data, you need to provide that connectivity back to the business outcome in a way that provides confidence that those models are actually going to help. Even if they're well-formed models that provide insight in dimensions previously unknown, until the business can get their head around what that's going to do to the outcome, it's difficult for them. And that's why data scientists, data engineers, business marketers, all the functions of an organization, need to connect in that process. And there needs to be some mechanism to measure that. The tools that actually provide that measurement are the ones that get traction, because they build that confidence, and then the adoption takes hold.

I want to come back to that answer you just gave and the example you cited in a moment. But I think, for the sake of our audience and carrying over from our last episode, it'll be really helpful to illustrate a little bit more of your background so everybody knows where the answers are coming from. You specialize in decision management systems, which we brought up a little in the first episode. Tell us about decision management as a discipline, what that means, and what separates it from conventional approaches to AI adoption.

Yeah, actually, I have to admit my age here and confess that I've been in this space since the 1980s. Ironically, a lot of this was born out of the birthplace of AI and machine learning at that same time. And, alarmingly, a lot of the techniques and algorithms that were formulated, even before the '80s, remain the same. What's changed is the compute, the access to data, and the ability to bring them to bear. Decision management, at that same time, was really the process of bringing the human and the AI together into an automated way of making decisions, while providing transparency and a human-guided outcome. So what the space is all about is enabling the business to actually take control of the decisions and leverage the machines for what they're great at, which is processing the data, but still make the human judgment call in the end.

Yes. And just going back to that first answer you gave, it seems as though, in the example you were providing about building that confidence, the decision management process is where you need to prioritize evidence and transparency. Do I have that right?

You have it right, exactly. Any machine learning model that's processing data and making some determination out of it has to be actioned somehow, and that action needs to be understood in all of its dimensions. That's where decision management makes the bridge, because the integration and actual use of that output becomes a human-controlled decision. And if you can measure that, as a good set of tooling should let you, then you have confidence, because it's not only what did happen, it's also what's going to happen. You have this ability to simulate, or otherwise look into the future, and say: what if I do make this change? What if I put a new model in place? What if I change my strategy? What if I change my pricing? What could happen, based upon my past performance?
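To make that what-if simulation concrete, here is a minimal Python sketch of the pattern Bill is describing: a model produces a score, a human-authored strategy turns the score into an action, and the same strategy can be replayed against historical records to estimate what a change would do before it goes live. Everything here, the Strategy fields, the record layout, the numbers, is a hypothetical illustration rather than FICO's tooling, and the replay glosses over real-world issues such as never observing outcomes for historically declined accounts.

```python
from dataclasses import dataclass

@dataclass
class Strategy:
    """Human-authored decision policy: the business owns these knobs."""
    approve_threshold: float  # minimum model score to approve
    base_rate: float          # pricing applied to approved accounts

def decide(strategy, score):
    """Turn a model score into an action under a given strategy."""
    if score >= strategy.approve_threshold:
        return ("approve", strategy.base_rate)
    return ("decline", None)

def replay(strategy, history):
    """Replay past records under a candidate strategy to estimate
    what would have happened: approvals, declines, and profit."""
    summary = {"approved": 0, "declined": 0, "profit": 0.0}
    for record in history:
        action, _rate = decide(strategy, record["score"])
        if action == "approve":
            summary["approved"] += 1
            summary["profit"] += record["observed_profit"]
        else:
            summary["declined"] += 1
    return summary

# Hypothetical historical records: the score each account received
# and the profit (or loss) actually observed afterward.
history = [
    {"score": 0.81, "observed_profit": 120.0},
    {"score": 0.65, "observed_profit": -340.0},
    {"score": 0.72, "observed_profit": 65.0},
]
current = Strategy(approve_threshold=0.60, base_rate=0.08)
proposed = Strategy(approve_threshold=0.70, base_rate=0.08)
print(replay(current, history))   # what did happen
print(replay(proposed, history))  # what could happen
```

The separation of concerns is the point: the model scores, the strategy (which the business can read and change) decides, and the replay supplies the evidence that builds confidence before anything ships.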

Right. And management is going to be looking for, even if things don't turn out exactly as you said, an explanation for the discrepancy as well. That's a big element of transparency too, I take it?

Absolutely. In some cases, that's the first place to start: why did it...

Because you're never gonna get it right.

Exactly.

No one ever gets it perfect on the first rodeo, and at least management has that much understanding. It also sounds, going back to the first episode where you mentioned that data governance is three elements, people, process, and technology, like decision management is really focused on those first two before you even get to the technology: who's involved, how are they using that data, and how do you ensure the processes, before you even put the data into inputs for a model or anything else, have true meaning for the core business goals that management is already on board with? Does that make sense?

It does. And in fact, that's one of the benefits of a decision management approach: as you declare whatever policies, strategies, and decisions you want to make, it actually acts as the connectivity between those and the data, and the analytics. So it's inherent. It knows exactly what data is influencing what, and even the values of that data, down to an individual transaction level or at the portfolio level. It keeps track of all of that.
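As a rough illustration of that connectivity, a decision-management layer typically records, for each decision, exactly which input values, model version, and strategy produced it, so outcomes can be rolled up from the individual transaction to the portfolio. The sketch below is a hypothetical structure for such a record, not FICO's actual schema; every field name is invented for illustration.

```python
import json
from datetime import datetime, timezone

def record_decision(txn_id, inputs, model_version, strategy_id, action):
    """Capture the lineage of one decision: what data it saw, which
    model and strategy acted on it, and what came out."""
    return {
        "txn_id": txn_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,               # exact data values used
        "model_version": model_version,
        "strategy_id": strategy_id,
        "action": action,
    }

log = [
    record_decision("txn-001", {"score": 0.81, "income": 54000},
                    "risk-model-v7", "pricing-2023Q4", "approve"),
    record_decision("txn-002", {"score": 0.55, "income": 31000},
                    "risk-model-v7", "pricing-2023Q4", "decline"),
]

# Portfolio-level view: roll transaction-level decisions up by strategy.
by_strategy = {}
for rec in log:
    by_strategy.setdefault(rec["strategy_id"], []).append(rec["action"])
print(json.dumps(by_strategy, indent=2))
```

Because every record ties an action back to the exact inputs and model version behind it, the same log answers both the transaction-level question (why was this one declined?) and the portfolio-level one (what is this strategy doing overall?).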

Right, right, right. And that's where we get into, I think it's finally time: we've talked a lot about people and processes, so maybe it's time to talk about technology and models. We've been dancing around it for at least an episode and a half at this point. But with the advent of new generative AI capabilities and the large language models we've seen sweep through the sectors and the media, I think a lot of FinServ leaders are thinking about the infrastructure necessary to execute on these use cases right now. Just in terms of the future, what data infrastructure do you think will be considered indispensable for FinServ leaders when trying to improve their customer experiences with these tools?

Yeah, I would be remiss if I didn't say up front that the actual use cases where you're going to apply that technology are a key part of the answer to the question. Another key part is that one of the concerns around generative AI is security, and the secrecy you want to keep as part of your secret sauce. As a result, whatever infrastructure you invest in needs to be protected and isolated from the rest of the world, so that your data and your secrets aren't lost to the public. This is one of the concerns with generative AI today, and it does require an investment from each organization to pull that off. The next thing after that: storage is relatively cheap; that's not really the problem. The problem is the compute. And the compute necessary to actually implement generative AI is predicated on the AI being incremental. It needs to grow over time. It can't take the more traditional approach; it's more like an extension of adaptive or self-learning models. If you can get that combination right, your infrastructure is not as big of a problem as it would be if you take a more brute-force approach.

Right. And what would be that brute-force approach?

Traditionally, and I harken back to the initial forays into data science and investing in analytics and machine learning, the approach was: I'm going to build a huge data lake, and I'm going to crunch that data. The world of AI has evolved. You want intelligent AI elements that are always on, always aware, and always becoming smarter. And if you take that approach, your infrastructure requirements go way down, because you're not processing, say, five years of historical data in a tranche; you're actually accumulating that knowledge within the generative AI.
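One way to picture the contrast Bill draws: an incremental model folds in each new batch of data as it arrives and carries the accumulated knowledge in its weights, so no single run ever touches the full history. Below is a minimal sketch using scikit-learn's partial_fit on synthetic data; it stands in for whatever adaptive or self-learning technique an organization actually deploys, and every variable here is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# "Always on" model: weights persist across batches, so no single
# run ever needs to load the whole historical data lake.
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # partial_fit needs all classes declared up front

rng = np.random.default_rng(0)
for day in range(30):  # e.g., one small batch of transactions per day
    X_batch = rng.normal(size=(200, 5))
    y_batch = (X_batch[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)  # incremental update

# The brute-force alternative: materialize all the history at once
# and call model.fit(X_all, y_all) over the entire tranche.
X_today = rng.normal(size=(5, 5))
print(model.predict(X_today))
```

The compute profile is the point: each update is sized to the new batch rather than the full history, which is what keeps the infrastructure requirements down.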

Absolutely. And you mentioned secrecy before; this may need to be a completely different episode. I'm not an expert on copyright law, and you could almost argue that a lot of the world of generative AI, and especially large language models, has become possible due to the lack of updates in copyright law, or at least in patents. I know it's a very popular thing to say that nobody has a moat. But at the same time, going into this new world of a landscape of bespoke models, one for every institution, it's going to be incredibly important to keep your model under wraps, to at least have a moat that long. We might have to save that for another episode, where I can do a little more research before asking questions. But I think that secrecy problem is going to be big in the future, especially as we see, there's that Google memo that got very famous and made the rounds: nobody has a moat, not even OpenAI. That's very much true for the foundational models, but seeing how patents, privacy, and secrecy for bespoke models play out is going to be really interesting. Bill, I've had a fantastic time in these last two episodes, really going over the insights from a data perspective with you. Thanks so much for being on the program with us.

My absolute pleasure. I look forward to the next opportunity.

If you enjoyed today's episode with Bill, don't forget to check out episode one. We actually scheduled these episodes back to back, which is a rarity for our platform, between the AI in Business Podcast and its sister program, the AI in Financial Services Podcast, both from Emerj. But we wanted to put these episodes back to back because I think a lot of the challenges we're facing in financial services, and how they look from a data perspective, really hinge on the lessons from decision management as a discipline that Bill offers in these two episodes. So I really encourage you to go back and check that out, especially if you wanted a little more context for the references we made in the beginning to the first episode. On behalf of Daniel and the entire team here at Emerj, thanks so much for joining us today, and we'll catch you next time on the AI in Financial Services Podcast.