Nathalie Post
For our inaugural episode, today, I'm joined by Dennis de Reus, who is Head of Artificial Intelligence at ABN AMRO's Group Innovation.
Hi Dennis, great to have you here on the podcast.
Dennis de Reus
Great to be here.
Nathalie Post
Before we dive deep into the impact of artificial intelligence on the financial industry, can you tell me a little bit about yourself, your background, and basically what got you here?
Dennis de Reus
So my background is that I'm currently leading artificial intelligence for ABN AMRO. I've only been doing that for about three months. Before this, I was with McKinsey, also for several years. And before that, I worked at IP Soft, which is a company that does automation in data centres, which I did not work on, and chatbots, for which I was globally responsible for implementation. So it was a team of about 80 to 90 people, mainly engineers and project leads. So I've done a lot in conversational AI. But within the bank now, my responsibility is much broader, and we're doing topics that vary from, let's say, NLP, to things with images, to things with data ranges and just large data sets, if we look at, for example, fraud monitoring. So yeah, a bit of everything. I have no formal background in AI; I studied chemical engineering.
Nathalie Post
So how did you get into this space initially?
Dennis de Reus
Yeah, step by step, I would say. So I went from chemical engineering into McKinsey Digital and started working on IT architecture topics. I've always been active with IT and technology, and loved working on those types of topics. So when the opportunity came up to switch to IP Soft and work on the implementation of their chatbots, I took it. I still remember seeing this demo, inputting, let's say, a page or two of text into this bot, then asking a random question about that text and getting the right answer, and thinking: how would I programme this? So joining them was a great opportunity to learn more about that. And when I moved back into McKinsey, it became a very natural fit to work more on what we would call there advanced analytics. Within analytics, I've done things like pricing optimizations with machine learning, operational optimizations for industrial parties, and working on the chatbot for a telecom player, so I've been getting more and more in depth there. And I think that's the combination that eventually gave me the opportunity to join ABN as well, which I think is one of the most exciting places to work in AI. We have such an enormous amount of data, together with a technology platform that is getting better every day, and a group of people that really wants to do interesting and good things with this type of data. And finally, the challenge here, versus traditional tech companies, is that we have to be very responsible in what we do with this data. More than when showing somebody advertisements: if we're going to make models that say something about somebody's mortgage or mortgage application, we have to be very sure that we do the right thing. Yeah, that's a nice extra dimension to the work here at the bank, I think.
Nathalie Post
Yeah. So how do you deal with that responsibility?
Dennis de Reus
I mean, there's a personal and a company aspect to it, so I'll start with the second. The company aspect is very much that we have rules and guidelines on what data we can use for what. Every data set has an internal owner, so if I want to use a specific data set, I have to go to that owner and explain: hey, I'm looking to use this data for this application, I'm going to need it for this many months, these are the outcomes we're trying to achieve, and this is how we look at things like privacy in the course of this project. On top of that, there's a checklist for any privacy-sensitive information: how do we handle that? And within the team, we have a privacy officer, so we've got the first-and-a-half line of defence: you want people that work directly with the team, to help on the questions somebody would have, like, hey, can I use this data or not in this specific context? So from an internal regulations perspective, at least, we're quite strict on what you can and can't do, when you have access and when you don't, and how long you have access for. Sometimes that's difficult as well: the central team wipes all the data from our environment every 90 days, but we build things in a very reproducible way, so that they could wipe it every weekend. I think that's an interesting part of working in a bank. Then for me personally: whenever you start a project, there's a moment where you think, okay, what am I actually optimising for? And within the bank, at least with the people I've worked with directly, there's this sense of: okay, we want to do good and right by our customers.
So one of the projects we're doing is that we're looking at debt relief for people that get into a situation where they cannot pay their debts, perhaps because they've become ill, or they might have had an accident, or something else could have happened. And I think for these people, it's very important that they get help on these types of issues as soon as possible. But it's also a very sensitive type of case. So we're using AI, which we're building right now, to automatically classify those requests and create the case, so that the employees can actually focus more on handling the case than on the basic administrative work around the emails. The next step for us would be to say: hey, which cases seem very likely to indeed get the debt relief? Those we can pre-sort using AI, saying: look, the expectation is for us to allow this request to proceed. But the ones where we say, well, this is something that we would likely reject, we would actually not pre-sort; we would give them to the humans in any case. Because if you do not help someone with debt relief, that's a very big decision, and it impacts lives quite substantially. So the personal aspect is that when you set up a project like this, from a technical perspective it's very easy to say: oh, this path is going to be the same for approved and denied. But from a social perspective, we take a different approach: look, in the cases where the user benefits, we can be more lenient on the decisions we make, and we might give those to somebody that just confirms, yes, this looks okay. Versus the ones we're likely to reject: let's not reject those automatically, let's give them to a human in the team that will do the full case evaluation as it happens today. So basically, whatever the outcome of the model is, people are better off than they were before.
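The asymmetric routing Dennis describes, fast-tracking likely approvals while always sending likely rejections to a human, can be sketched in a few lines. The function name, the threshold, and the probability input are all illustrative assumptions, not the bank's actual system:

```python
# Hypothetical sketch of the asymmetric routing policy: likely approvals
# can be fast-tracked past a light human check, while likely rejections
# always get the full human case evaluation. Threshold is illustrative.

def route_debt_relief_request(approval_probability: float,
                              fast_track_threshold: float = 0.9) -> str:
    """Decide how a classified debt-relief request should be handled."""
    if approval_probability >= fast_track_threshold:
        # Model is confident the request should be approved: a light
        # human confirmation ("yes, this looks okay") is enough.
        return "fast-track review"
    # Everything else, including likely rejections, gets the full
    # human case evaluation exactly as it happens today.
    return "full human evaluation"

print(route_debt_relief_request(0.95))  # fast-track review
print(route_debt_relief_request(0.30))  # full human evaluation
```

Whatever the model outputs, no request is ever auto-rejected; the model can only make the favourable path faster.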
Nathalie Post
Yes. Very important. So how are these use cases established within the bank? Where are they coming from? How do you initiate them?
Dennis de Reus
This is an industry-wide problem, yeah, and one of the challenges I see as well. So the bank is a huge place, right? There are all these different departments. So one part of it is drinking coffee with people, so to say, and making sure that you talk to people that might have challenges where we feel as a team, or I would think: hey, here is a place where AI could really make a difference. And very often, you'll find that people are looking to work on some of these things. At the same time, I do think this is an issue in general, because many people that are not in a team that works with AI are not familiar with what you can and cannot do with AI. So they might say: oh, we have this perfect AI case, and it turns out to be extremely difficult. And then there are these cases where someone thinks it should be impossible, and it turns out to be relatively easy. And that's just because people in the business might not know how far we've progressed with natural language processing, or image recognition, or various of these different things. So for them, this concept of, oh, you can read emails and put them in the right box, that must be very difficult; and it turns out that actually, give or take a few challenges, it's very doable. And that's the case we started with. So I think you still have to go out and find these cases, and the other part is you have to go out and educate people on what we can and cannot do, so that when they think of cases, they can come back to us. That could be my team here with the AI Lab; it could also be different teams that already do data analytics within the different business lines.
Nathalie Post
Yeah. So how do you deal with that education part? As in, how do you educate people with no formal background in technology on the possibilities of AI?
Dennis de Reus
We do something very nice there. So again, there's the bigger bank framework, with lots of trainings and introduction trainings, such as, for example, big data for managers and big data for executives, to get a general feeling for what you can do with data, and AI as part of that. What we are building now is something where we are going to do a dry run with legal in a bit, to help them build their own model. It starts very simple: if I need to recognise digits, let's say we could take the MNIST data set. They could write a rule that says: well, you know, if the colours are mostly within this shape that kind of looks like a seven, that's probably a seven. Then you have them try this and say: okay, how well does your rule score? And I'm sure they're going to get to 50 or 60% correct. Then there's the aspect of saying: well, we could go a bit further. You could say: if it has a horizontal line in the top half, and a diagonal line, then it's probably a seven. There are going to be lots of ones in there as well, but that's an interesting aspect, and perhaps they can score a bit better. And then we can say: look, basically we could have the computer optimise this automatically, and have them do that themselves. So I'm going to try and put lawyers in a situation where, for two hours, they need to do something important. Yeah, mostly they will just be pressing Enter, but they'll understand: hey, this works, and this is how it works. And that takes away some of these misconceptions. One of those misconceptions, for example, comes up when we build a model: I've had many people ask, like, can we now run this next to our normal production system for the next six months, to prove that the solution works? And the answer is, of course we can.
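As a rough illustration of the workshop exercise, here is a toy version of such a hand-written rule, on invented 5x5 black-and-white grids rather than real MNIST images; the grids and the rule itself are made up for the example:

```python
# Toy version of the workshop exercise: a human-written rule mirroring
# "a horizontal line in the top half plus a diagonal line is probably
# a seven". The 5x5 grids below are invented stand-ins for MNIST.

SEVEN = [
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
]

ONE = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

def hand_rule(grid) -> str:
    """Hand-written rule: full top row plus an anti-diagonal => seven."""
    has_top_bar = all(grid[0])
    has_diagonal = all(grid[r][4 - r] for r in range(5))
    return "seven" if has_top_bar and has_diagonal else "one"

print(hand_rule(SEVEN))  # seven
print(hand_rule(ONE))    # one
```

The next step in the exercise is exactly the one Dennis describes: instead of tweaking these rules by hand, let the computer search for the thresholds and features automatically.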
But what if I train it on the data from a year ago, and virtually run it next to the system over the last six months? Then we could have this decision in the next few hours; you could know whether it's working or not working. For us, these are very intuitive things; if you're not in the data science or AI team, they're very non-intuitive things, especially if your responsibility is to make sure that processes are followed in a compliant way, or that risk is managed in a certain way for certain loans. So we want to make people see that. I think there's this opportunity, and people shy away from showing people how it works; they make these generalisations, like, oh, there are these toolboxes, and this is how they work with AI. And that means we don't give people the opportunity to actually understand and learn how it works. So we try to get much closer. I'm never going to expect a lawyer to programme Python, but for them to actually see this and play around with it a little bit makes a big difference in terms of how they will start to reason about these technologies.
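That backtesting idea, train on year-old data and replay recent history instead of running live for six months, can be sketched as follows; the model and cases here are placeholders, not a real banking process:

```python
# Minimal sketch of offline backtesting: replay historical cases
# through a candidate model and compare against the known outcomes,
# instead of running the model in parallel with production for months.

def backtest(model, historical_cases):
    """Fraction of replayed cases where the model matched the outcome."""
    correct = sum(
        model(case["features"]) == case["actual_outcome"]
        for case in historical_cases
    )
    return correct / len(historical_cases)

# Placeholder model: approve whenever income covers the requested amount.
toy_model = lambda f: "approve" if f["income"] >= f["amount"] else "reject"

# Placeholder history: last six months of cases with known outcomes.
cases = [
    {"features": {"income": 50, "amount": 30}, "actual_outcome": "approve"},
    {"features": {"income": 20, "amount": 40}, "actual_outcome": "reject"},
    {"features": {"income": 25, "amount": 20}, "actual_outcome": "reject"},
]

print(backtest(toy_model, cases))  # agreement rate on the replayed cases
```

The whole evaluation runs in seconds over any stretch of history, which is what turns a six-month pilot question into an afternoon's analysis.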
Nathalie Post
Yeah, yeah. Because you mentioned that black box for a second there. How do you deal with that black box thinking, and, yeah, explaining the output, basically?
Dennis de Reus
Yeah. So I think it depends on the specific case. But in general, within a bank or within financial services, having an increased amount of interpretability, or explainability, for a model becomes important. If you are making decisions, and either showing them to an employee who will make the final decision, or at some point in the future perhaps directly to a customer, then being able to explain, hey, we approved this because of these factors, is very important. Both for explaining it to a customer, and for saying to an employee: well, we have concerns about these three factors, but if you can look at those and you agree, then you can approve, and otherwise you should probably reject the application. So how do we work this out as a bank? Step one is to make sure we understand this better. That's why we're running research where we're looking at: hey, can we build models with that level of explainability, or interpretability? Which is only one aspect of a number of things that we as a bank should work on. We're looking at cases like: let's say your loan was rejected, but if your income had been this much higher, or if the house you're trying to finance had been this much cheaper, or if certain indicators had a different value, then it could still be approved. Then you could even show that to the employee, who can say: well, look, let's work on how we can make this financing work for you, given those constraints. And that's something that's fairly doable. But disappointingly, there is not that much research on these topics. We've become very advanced in making algorithms do all kinds of things, but explaining what happens has not always gotten that same level of priority.
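A minimal sketch of that counterfactual style of explanation, under the assumption of a toy affordability rule rather than a real credit model (the rule, threshold, and step size are all invented for illustration):

```python
# Counterfactual-explanation sketch: given a rejected loan, search for
# the smallest income increase that would flip the decision. The
# decision rule is a toy stand-in, not a real credit model.

def decide(income: float, house_price: float) -> str:
    """Toy rule: approve when the price is at most 4.5x income."""
    return "approve" if house_price <= 4.5 * income else "reject"

def income_counterfactual(income, house_price, step=1_000, max_extra=200_000):
    """Smallest extra income (in steps) that turns a rejection around."""
    extra = 0
    while decide(income + extra, house_price) == "reject":
        extra += step
        if extra > max_extra:
            return None  # no achievable counterfactual in this range
    return extra

# "Your loan was rejected, but with this much more income it could
# still be approved."
print(income_counterfactual(40_000, 200_000))  # 5000
```

The same search could instead vary the house price or other indicators, which is exactly the "given those constraints" conversation the employee can then have with the customer.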
I think as we see AI moving into much more real-life processes, beyond, let's say, advertising and some basic text-mining applications, it's going to become a bigger and bigger topic.
Nathalie Post
Yeah. Because where do you expect that research to take place? Is that more the academic side, or government, or business, or all of them?
Dennis de Reus
I think it's going to be mostly on the academic side, probably. Also because we are still at a very fundamental stage of: how do we do these things? And I think once you get more and more of an academic body of work that shows, hey, these are different approaches, this is what seems to work and what doesn't, then, as with other technologies, you'll see that business will start to pick it up and say: hey, we now need a certain level of interpretability; how do we get that interpretability from the current models we have? They might look in the scientific literature and say, oh, there are these two or three things, and what has been shown there will then translate into business. And business might also be the spark that ignites the research on the academic side. At the same time, I think many companies don't have the capacity, and are not large enough to say: let's put 10 or 20 people full time on interpretability. Unless, of course, you're one of the global tech giants; for a large Dutch bank, that's a huge commitment already. So we set up partnerships where we can work with universities. But I think there's definitely a real place for academia. And I think the role of the government is going to be to stimulate that to some extent. To stimulate could go two ways: it could be, let's say, to fund the work and make academics interested in these topics. Or to stimulate could be setting certain expectations, which is also happening, for how companies can and will use AI. And if one of those expectations is that things need to be explainable to degree X, that means we're kind of forced to work on this type of research as well, which I think is a good thing.
Nathalie Post
Yeah, no, absolutely. And how do you see explainability in relation to ethics? Because AI ethics is a huge topic; do you see them as directly related, or as separate topics, in your perspective?
Dennis de Reus
We look at explainability almost as a subset of ethical AI. I think bias is another one: something that explainability helps you to pick up on, but it's definitely its own topic. It's a very hard topic, too: it's easy to remove a column that says gender, but it's much harder to take that completely out of the equation, because a model is very likely to deduce something like that from other parameters. And a third one we work on within the team is the stability and soundness of predictions: if I tweak my inputs a little bit, do I get a very different answer? Because even though my model might perform really well on average, there might be people that are just slightly different, and get hurt by something like that. So we also work with models where we say: rather than running a model once, let's run it 100 or 1,000 times, but with slightly different dropouts, and then see: hey, does this prediction actually have a good mean and a low variability? At that point, we believe that what the model is predicting is probably a stable prediction. Versus, if that variability is very high, we might come to the conclusion that, regardless of how high or low the score is, the variability is so large that the model is probably in a space where predictions are moving in various directions, and we're less sure. What's super interesting is that we've tried this out on some fraud data, and you can see that fraudulent transactions have much higher variability than non-fraudulent transactions, on average. This is tricky as well: you can't say that everything with high variability is fraud, and vice versa. But it's an interesting indicator, and we'll see how that goes. So yeah, how does it all fit within ethical AI? Good question. I think we need to solve a number of these topics.
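The stability check Dennis describes can be sketched like this: run the "model" many times with slightly different random dropouts and look at the mean and spread of the prediction. The scorer below is a toy, not a real fraud model, and the dropout rate and run count are illustrative:

```python
# Stability-of-prediction sketch: score the same input many times with
# random dropout and inspect the mean and standard deviation. A high
# spread signals the model is in a region where we should be less sure.

import random
import statistics

def predict_with_dropout(features, rng, dropout=0.2):
    """Toy scorer: average of the features, each randomly dropped out."""
    kept = [x for x in features if rng.random() > dropout]
    return sum(kept) / len(kept) if kept else 0.0

def stability(features, runs=1000, seed=42):
    """Mean and standard deviation of the score over many dropout runs."""
    rng = random.Random(seed)
    scores = [predict_with_dropout(features, rng) for _ in range(runs)]
    return statistics.mean(scores), statistics.stdev(scores)

# Uniform features give a stable score; one extreme feature makes the
# score swing depending on whether that feature happens to be dropped.
stable_mean, stable_sd = stability([0.5, 0.5, 0.5, 0.5])
shaky_mean, shaky_sd = stability([0.1, 0.1, 0.1, 5.0])
print(stable_sd < shaky_sd)  # True: the second prediction is less stable
```

In a real setting the dropout would sit inside the network (Monte Carlo dropout), but the reading is the same: low variance, trust the score; high variance, treat it with caution regardless of how high or low it is.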
Especially if you look at the responsible position that, for example, banks have. To a big part, it will come down to the people that are working with this, the choices they make, and the limits we set ourselves as a result. We could, of course, do amazing things if we said: well, let's not work on interpretability, and let's not work on bias, et cetera. But then I don't think it's future-proof, in the sense of what we want to achieve with AI in the end. So in the context of ethics, it's something that is not just a technical solution. It's much more: how do we train the people that work with it? How do we make sure that our models and our outcomes meet certain criteria around fairness, around bias in general, around interpretability, around the soundness or stability of the predictions? And that's going to be much bigger than just plugging in a technical solution.
Nathalie Post
Yeah. Because how do you build those capabilities within teams?
Dennis de Reus
Yeah, I have a strong opinion on this. I think traditional industries are very much focused on saying: oh, we need data scientists. So you get fairly deep and sometimes narrow profiles, where people are very good at modelling and working towards the best fit. Successful teams, I think, have at least some fraction of fairly broad generalists: people that are comfortable modelling, but also comfortable working with data, and also comfortable interacting with the business and understanding, in my case, for example, how a bank makes money, how our business works, and what our mission is towards our customers. So that this kind of example of, oh, it should just be four degrees higher, doesn't happen, and we're much more in sync with the business. And then I think the normal norms and values of the company come back. If, in the normal way of doing business, we try to be ethical in how we work with our people and with our customers, then by extension, in such a team, we would try to do so on the AI side as well, and put in place guidelines that make that possible. I think if you work in technical isolation, it's very easy to miss that. Because there's this disconnect between what ethics means in this business, and pure data science that you could run on a cluster somewhere, or on your laptop, with a prediction that doesn't feel awfully wrong but does not connect to what you want to achieve as a company.
Nathalie Post
And where do you find those people then? Because it's quite a broad profile that you're looking for. I know that you're building a team right now, and expanding fairly quickly.
Dennis de Reus
Well, if anyone is listening that has a background like that, you know, able to model but also eager to interact with the business: it doesn't need to be banking, per se. But I think these types of broad profiles come with broad experiences. To give a snapshot of our team: we have somebody that's been at the bank for 25, 30 years, and in the last few years said, look, I want to do data science, and has been reschooling himself. I think that's very interesting: somebody that brings real power to the team in terms of knowing everything around the bank, combined with a certain level of data science skills. We have somebody starting in the team that has a background in building his own startup: somebody that did some machine learning and AI, but was also very much operational, making sure that certain things got delivered to certain customers, et cetera. So definitely that entrepreneurial vibe, and therefore a sense of what it means to run a business, with a background both on the academic side and, for the last few years, in artificial intelligence. And then we have someone on the team that worked as a strategy consultant for a few years, then went into a much more academic type of applied research, and then went into a startup where he was the lead on all the machine learning. And we were able to convince him to join here. These types of profiles are difficult to get straight out of university, because of that broad level of experience. But that, I think, makes for very powerful people. They are not going to be the deepest experts on specific topics: if I find someone who has spent the last eight or ten years of his life on just image recognition, and only applied to houses, then that person is going to be much better at doing image recognition on houses than anyone in my team.
But my team tends to be happy working on the things that are going to make a difference for the business within the next few months, and hopefully afterwards as well. I think that's a trade-off, and that's where I stand now with the team. The vision further towards the future is that within the business lines, close to where the action is happening, you'll see more of these generalist profiles grow. And central teams, which is my team, might sometime down the road get more into specialisation. Because once the teams in the various business lines are good enough to run everything end-to-end, as long as it's not something very, very niche, then when it does get very niche, they need a place to go for expertise. That could either be external, or it could be a central centre of excellence.
Nathalie Post
Yeah. Because, I mean, I think hiring is such a big challenge within the AI space in general. There are so many different profiles out there, and such a mismatch between supply and demand; that's really a big challenge. But what are the other challenges that you see within the industry when it comes to AI?
Dennis de Reus
I think a number of things. One for me is the focus that has traditionally been on modelling, where there's this implicit idea that modelling is the hard part of implementing AI in a company. All in all, I think it's probably less than a quarter of the effort. So this concept of: hey, do we have the right data? Is that data available, is it clean? Do I know how to work with it and interpret it? That is also very important, and it's something that goes like: oh yeah, we'll get the data, and then we'll go and do the modelling. And then eight weeks later, when you finally have the data, it turns out the modelling is only two weeks of work. The other part is the end-to-end perspective: now I have a working model that makes useful predictions; how, as a business, will I use this, let's say, every day or in every process? What is the infrastructure that actually runs and hosts my model in a way that tomorrow, and the week after, and the week after that, it still works? And all of that combined is still only the technical aspect. How do we make sure that the predictions are right, that it's valuable for the business, and that it's something the customer experiences in a positive way? That whole end-to-end chain adds an immense amount of work, and it's much more than modelling. This is more from my McKinsey experience, but if I look at many of the data science teams, the central teams that we've worked with, you can typically see that they started off thinking: oh, modelling is the hardest part. And then they slowly expanded their scope, and very often the data and data-cleaning part became more and more important. I think also a lot of models have never made it to value because of the second part, about going into production. There was, for example, an interesting article from booking.com.
With six lessons learned from, I think, 150 models in production. Some of those relate to modelling, but a bunch of them are really around: okay, how do you properly design your experiments? How do you measure and value what your models are doing? How do you make this distinction between model performance and business performance? The basic assumption is often that these two are the same, but they are not. I could have a model that doesn't perform as well, but that, by influencing time to market, or by being much easier and faster to deploy, makes a huge difference in terms of how much value I can get from it. And also to have AI, or machine learning in their case, as an ingredient, or, as they name it, a Swiss Army knife, in product development. I think this comes back again to having more people that know what AI is and how you work with it. Because designing a new product and bolting AI on at the end is a very different thing from saying: hey, we have this thing called AI in our toolbox; if I design a product, how can I, from day one, use it to make a better product? So interestingly, even for them, the lessons learned relate a lot to how you make business impact and business value, and go beyond just modelling performance. It's interesting to see how Booking, one of the leading companies in that sense, points to this same need to move beyond just modelling.
Nathalie Post
Yeah, yeah, you kind of touched on the whole perception of AI as well. What are your thoughts on that? Because there are a lot of different interpretations of what AI can and cannot do, and a lot of buzz.
Dennis de Reus
There's definitely this aspect of people coming in saying the robots will take over, right? So we have this process now, and there are all these different steps, and we're doing all these difficult things, and there's a whole team of people working on it; and there will be this artificial intelligence that will do this whole thing end to end, and all these people will no longer have work, right? This kind of doom vision of AI taking over all the available jobs, with one person in the end that is responsible for all the AIs, and he's the only one that still deserves an income. The reality is very, very far from that. We've come to the point with AI that we're very good at very specific, narrow things. If you look at this case we're doing with the automatic classification of emails, there's this 20- or 30-step process, and there are a few steps that we think we can do faster and better and cleaner with AI than the current process. So those are the ones we decided to support with AI. But all the other steps, the steps with personal contact, or the steps with very impactful, life-changing decisions for people where AI could help and support: there, it's very much humans at the steering wheel, and that will stay so for a long time. But for people not familiar with artificial intelligence, there's this misconception, something I see when I talk to my parents: they see all AI as the same. So the AI that classifies these emails is the same type of AI that would drive a car. And that's not the case, right? This AI, quote unquote, can read a text and classify it as one of 20 types of email. Period. It cannot drive a car. It also cannot read other types of emails. This is what it does, and it does only that. So, again, I think it comes back to the education piece. There's a lot we can do, and there's a lot and a lot of untapped potential.
And at the same time, I think, yeah, using the words artificial intelligence has led to this perception that it's this very broad type of intelligence. And that's not where we are; it's something we have to grow into. I think we're working, more or less across the board, on four larger themes of AI work. One part is really around personalization. A very good example is on the job board that we have on our website. Ana is a chatbot that you can see when you are logged in, and you can ask questions around things that you can see on that specific page. We're doing hundreds of chats a day now, I think, and that's growing, because the team is learning: we can see the questions that Ana does not answer, and then we see, hey, there's a category of questions here that we seem to get a lot, so let's actually put those answers in as well. So that one's getting broader. And it works so well that now, internally, for our IT desk, we're using the same bot, but with a different training set, of course. As we go through these four themes, on this first one around personalization, and especially chat, an interesting observation is that chatbots lead to more interactions, and more personal interactions, with your customers than the typical FAQ page. But also, as I found out back when I was working at IP Soft, and later at McKinsey, when you put in a chatbot and it takes 50,000 conversations in a month, that does not mean it reduces the number of calls by 50,000 a month; perhaps by 5,000. And that, I think, shows the ability of these types of tools to have a much more personal interaction with customers, and customers, to some extent, seem to want to use this.
So the number of contact points and the engagement seems to go up, versus just having the ability to do self-service through an FAQ or a phone call. At the same time, where I personally stand: if I'm going to interact with a company, I still much prefer human interaction over bot interaction today. But sometimes bot interactions can be very fast and efficient, and therefore still be a very good experience. I think the second part of what we're working on with AI at the bank is really around our ability to make better decisions. We have a lot of processes where somebody has to look at a large amount of data and then make a decision. This could, for example, be a mortgage application. So you have to look at income statements, at the house, at the information the notary provides, et cetera, et cetera. There's a lot of information, and if you give that same information to somebody else, they will follow the same instructions, and there's still going to be a margin where I might have approved this and somebody else might not have approved it. So with AI, we are much more in a position to help these people go through this much larger data set and distil the things that they really need to look at, and therefore make more consistent and better decisions. The third part, and I think this is something we'll see in almost every industry that adopts AI, is that AI is about efficiency. The case where we classify emails is an example of this: we have a process that we think we can automate with AI tooling that we could not automate before. And so that takes administrative work away, mostly. For some time, I don't think it will go very deep into decision-making; I think the decision features are really more about advising people what to do. Yeah. And I think the fourth part, which could be a combination of those above, is making the bank safer.
And that ranges from using machine learning for anti-money-laundering or fraud detection, to using machine learning to identify phishing emails, or spear-phishing emails. Spear phishing is really tricky because it targets a specific person. But we can actually show that our models are sometimes better than the typical products on the market for this. And then within that same bucket, you see this interesting part where somebody in a basement can train a model that could transform your voice into the voice of our CEO. This is very tricky. For the bank itself it's perhaps less of a risk, because we have very structured processes around this. But imagine a company with 200 people: you're in charge of accounting, and you get a phone call from the CEO. He's on a different phone number, but it's definitely his voice. And then he asks you to wire some money. So this concept of fraud has existed for a long time over email and text messages, but now you can do it by transforming your voice. And I think it was a Google blog, if I remember correctly, from just a week or so ago, that showed that with fairly limited data, literally a few minutes of spoken text, you could do these types of transformations. And people said, wow, this is really cool, because I could leave a message for someone on behalf of somebody who has passed away, as a good emotional memory, for his mother's birthday, I think, after his father had passed away from illness. At the same time, somebody will abuse this and do this type of fraud. And so one way to protect yourself against it is to train people, and another way is to say, hey, can we detect these types of things automatically? So I would see this as one of the topics within the safer bank going forward. And you can see this even in video, right?
I mean, this whole concept of putting somebody's face on somebody else's face in a video, it's kind of scary how fast that has evolved. Somebody invented this and now it's everywhere. And you have to change your processes: basically, the person actually needs to sign with his card reader, because we can't go on the voice alone. Or alternatively, we need to have something that detects that a voice has been tampered with. There are interesting things happening there that will become more important, as both sides of the security community start to use this more and more.
Nathalie Post
Yeah, no, exactly. And I think also a lot of organisations are popping up that are detecting deepfakes, so it's a bit meta.
Dennis de Reus
Actually, that's an interesting topic in itself. I think the industry has been growing very fast, and now we have this thing where a few people say, oh, we can build a deepfake detector, and suddenly that's a company. And so the number of AI companies with fewer than ten people is exploding, at least in my perception. And a lot of them are selling a tool, right: we have one or two neural nets, or some model that we've trained, that does A, B or C, and that's what we sell. And I think long term, a lot of these tools can be replicated, especially if it comes down to having or not having access to certain types of data. So I'm curious to see what happens to those companies, whether they get bought into larger companies, or whether they grow a portfolio and at some point become more products than tools. It's interesting to see how this market develops, with a lot of small companies, both consultancies and firms that say, we provide this specific tool to do X. And it's challenging for us as a bank, because the amount of governance we have around running and introducing a new application is quite serious, also, of course, because of data access, et cetera, et cetera. So if we introduce one of those tools: if we go for a whole suite, we have to do a certain assessment, and if we take one tool, we have to go through a similar type of assessment. So it's not easy to work as a corporate with these small, very niche, single-purpose applications, even though they could be very powerful by themselves.
Nathalie Post
So how does that consideration take place, then? Because do you actually work with some of those smaller companies and smaller tools? Or is this really on an exceptional basis?
Dennis de Reus
I think at various levels. So we do proofs of concept, and those could be with smaller or larger companies. But our chatbot, for example, is with one of the largest providers in the world. At the same time, if we are very serious as a customer in using a tool, we also invest in those companies from time to time. So we have our digital investment fund, and for most of the larger, let's say, data or AI or machine learning toolings that we work with, we also, for example, have some stake in them.
Nathalie Post
Okay. Cool. I think we touched upon a lot of different topics.
Dennis de Reus
I think we did.
Nathalie Post
Yeah, definitely. So maybe, for listeners to summarise it briefly, what are the key takeaways?
Dennis de Reus
I think looking towards the future, one key takeaway is that we're doing amazing stuff at ABN AMRO. But looking more forward, I think there are a number of trends in AI, and what I would expect, and what I think is interesting to see, is, one, this broadening of skill sets you see in teams: from very technical to much more business-savvy, much more across the chain, bringing in specialists where needed. Then I think there's going to be an expansion of this technical view of the work towards a perspective that also includes, let's say, ethics and bias, and interpretability. Then there's a trend that's already happening, and it will get stronger, of taking this end-to-end perspective on getting the data right: making sure that you have the right kind of access layer for data, the ability to have clean and annotated data, data ownership and governance, the whole set, as well as good platforms for modelling, for model development, but also for hosting models in production and then evaluating and monitoring whether they work properly. And that includes this whole end-to-end view towards business value: how do we make sure that what we build is actually valuable and actually gets used by the business on a day-to-day basis? I think at some point in the future we will start evaluating data teams on business performance, rather than on whether you brought something into production or not. I think a fourth one is this aspect of data becoming more important. So even if everybody has the right tools and the right governance and systems, the companies that have the right data and use it in a more integrated fashion will be able to distinguish themselves, either in performance or in how they're perceived by the market.
That could even go beyond the limits of a single company. I could totally see banks working together in non-competitive areas to share data, or at least to jointly train models, for example with fancy ways of not sharing data but still having that type of benefit, for example to prevent fraud. Because if we work together to prevent fraud, then everybody except the criminals is better off. So that's typically one of those cases where it actually helps to be able to pool more of that type of insight together. Yeah, and then finally, I think we'll see, as we use this type of technology more and more in the business, that non-techie people will start to get a better feeling for what this is and how they can use it. And you'll see that successful people are able to really make AI an integral part of their business. So I think we'll see on the leadership side more and more understanding and more and more feeling for AI and where we can use it, and therefore I expect the number of use cases to really grow as well, and hopefully in a more integrated fashion than this project-by-project basis. So I have very high hopes, but then I'm in a chair that is supposed to have very high hopes.
Nathalie Post
Yeah, thank you. This summary was perfect. And thank you so much for your time.
Dennis de Reus
Happy to help.
Nathalie Post
Great.