
Artificial intelligence and the human dimension

Artificial Intelligence (AI) is on the rise. The introduction of ChatGPT caused an unprecedented disruption and brought AI closer to many people than ever before. Can we speak of a revolution? And what does AI mean for financial services? Data Consultants and AI experts Skip van der Meer and Haris Kodžaga see opportunities. A lot of opportunities. But they warn about the ‘black box problem’, among other things.

AI is not new and has been in use for a long time. Take linear regression, one of the most widely used algorithms in machine learning: it shows how strong the relationship is between two or more variables, and you can then use those relationships to make predictions. Insurance is a classic example: young men statistically have a higher risk of car damage and therefore pay a higher premium for their car insurance. Based on historical data, insurers’ algorithms have learned that age and gender are important variables for the premium amount.
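
To make that concrete, here is a minimal sketch of such a model in Python. The claim figures and features are made up for illustration; they are not real insurer data.

```python
# Minimal sketch: a linear regression relating driver age and gender to
# claim cost, on made-up data. All values are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Fictional historical records: [age, is_male] -> annual claim cost in euros
X = np.array([[19, 1], [23, 1], [31, 0], [45, 1], [52, 0], [60, 0]])
y = np.array([1800, 1500, 700, 600, 450, 400])

model = LinearRegression().fit(X, y)

# The learned coefficients show how strongly each variable relates to cost
print(dict(zip(["age", "is_male"], model.coef_)))

# Predict the expected claim cost for a 21-year-old man, which an insurer
# could translate into a premium
print(model.predict(np.array([[21, 1]])))
```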

Exponential growth

The next level in AI is deep learning. A deep learning model can find new relationships on its own and then derive predictive patterns from them. What makes this special is that these relationships and patterns need not have been devised by a human first. In other words, deep learning models can reason in ways that humans do not. However, it is not very clear how exactly these models do this. Because almost all scientific research is published openly online, one breakthrough builds on the next; deep learning benefits from the same dynamic, resulting in exponential growth in AI. The effects of that growth are becoming ever more visible and the steps ever larger. In that respect, there really is a revolution under way, and it will have a great deal of impact yet.
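
As a rough illustration of the difference, the sketch below lets a linear model and a small neural network (standing in here for deep learning) compete on a synthetic nonlinear pattern that neither was told about. All data and settings are invented for the example.

```python
# Sketch: a linear model vs. a small neural network on a relationship
# no one hand-coded (here, a synthetic nonlinear pattern).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 500)  # hidden nonlinear pattern

linear = LinearRegression().fit(X, y)
network = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                       random_state=0).fit(X, y)

# The linear model cannot represent the pattern; the network discovers it
print("linear R^2: ", linear.score(X, y))
print("network R^2:", network.score(X, y))
```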

Better customer experience

In the short term, AI will improve the customer experience via chatbots. Chatbots often still make for a frustrating experience: the system can handle the most frequently asked questions, but as soon as a question becomes more specific, it quickly gets stuck. Deep learning can significantly improve this. Suppose you could ask the chatbot on a pension portal to explain certain pension terms; that would contribute positively to the customer experience. There is, for example, an AI application that makes government letters easier to read for people with low literacy. You could also apply that capability to mortgage and insurance documents, which offers opportunities for the sector as well.
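
A hypothetical version of such a pension-term assistant could look like the sketch below, using the OpenAI Python SDK. The model name, prompt, and function are our own assumptions, not an existing portal feature.

```python
# Hypothetical sketch of a pension-portal assistant. Model name, system
# prompt, and grounding strategy are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_pension_term(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You explain Dutch pension terms in plain language "
                        "(B1 reading level). If you are unsure, say so."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(explain_pension_term("What does 'franchise' mean in my pension?"))
```

In practice, such an assistant would also need grounding in the portal’s own documents and a fallback to a human adviser for anything it cannot answer reliably.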

AI as the ultimate tool for the credit underwriter

If you go one step further, an AI system can also validate and accept credit applications together with the accompanying documents. With Optical Character Recognition (OCR), you can already use pattern recognition to extract data from standardized documents such as a passport or an employer’s statement and then store that data separately. The use of source data goes a step further still and will soon make analyzing documents redundant altogether. If this eventually also becomes possible for more complex documents, such as rental and purchase agreements, a credit underwriter will have considerably less administrative work and algorithms can play an important role in the acceptance process.
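
The sketch below shows that OCR step in miniature, using the open-source pytesseract library on a scanned employer’s statement. The file name and field patterns are illustrative only; real documents would need far more robust extraction.

```python
# Sketch: OCR plus simple pattern recognition on a scanned employer's
# statement. File name and field patterns are illustrative.
import re
import pytesseract
from PIL import Image

text = pytesseract.image_to_string(Image.open("employer_statement.png"),
                                   lang="nld")

# Pull a few structured fields out of the recognized text
patterns = {
    "gross_annual_salary": re.search(r"bruto jaarsalaris\D*([\d.,]+)", text, re.I),
    "start_date": re.search(r"in dienst sinds\s*([\d-]+)", text, re.I),
}
extracted = {k: (m.group(1) if m else None) for k, m in patterns.items()}
print(extracted)  # stored separately, as described above
```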

A mortgage within a week

In the more distant future, you can expect AI to automate the mortgage application process further. The AI assistant retrieves the necessary data and any documents, the AI underwriter assesses the file, and the AI notary handles the digital execution of the deed. The further-developed chatbot answers simple questions, and the mortgage advisor ultimately checks everything. As a consumer, you could buy a house within a week, without bureaucratic red tape. These are all possible applications of AI. The only question is: when? Many parties have stepped on the AI accelerator, that much is clear. At the same time, we also know that the financial sector is cautious about adopting new technology. There is therefore a big difference between what is technically possible and when market parties will implement it as the leading process. The role of challengers will be important here.
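
Purely as a thought experiment, that staged workflow might be wired together as in the sketch below. Every function here is hypothetical; in reality, each stage would be a separate, supervised system.

```python
# Illustrative sketch of the staged mortgage workflow described above.
# All functions are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class MortgageFile:
    applicant: str
    documents: dict = field(default_factory=dict)
    assessment: str | None = None
    approved_by_advisor: bool = False

def ai_assistant(file: MortgageFile) -> MortgageFile:
    file.documents["income"] = "retrieved via source data"  # placeholder
    return file

def ai_underwriter(file: MortgageFile) -> MortgageFile:
    file.assessment = "fits lending criteria"  # placeholder decision
    return file

def human_advisor_check(file: MortgageFile) -> MortgageFile:
    file.approved_by_advisor = True  # the human advisor stays responsible
    return file

file = human_advisor_check(ai_underwriter(ai_assistant(MortgageFile("J. Jansen"))))
print(file)
```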

The race

OpenAI and Google are in a race to lead the field of AI. That race also has a democratizing side, because it gives more and more people access to knowledge. But what do you do with dangerous knowledge, in response to questions like ‘how do I make a bomb?’ How do you steer the development of such a machine in the right direction? How do you give such a system a moral compass, and is that even possible?

The fact that deep learning can make connections by itself is fantastic, but that is precisely where the difficulty of this form lies. The explainability of the algorithm is a complex matter; it is a kind of black box. Where does the system get its knowledge from? What is the output based on? The larger the model becomes and the more data there is to draw from, the harder this is to trace. And how do you prevent the system from reasoning on the basis of selectivity? Suppose there is a neighborhood where many young people live who cause damage with their cars. How do you then avoid the risk that the entire neighborhood is linked to bad driving behavior, and the good drivers who live there become the victims? These are complicated matters that must be considered constantly throughout these developments.
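
The neighborhood example can be made tangible with the toy simulation below: a model that is given only the neighborhood as a feature prices careful drivers from the high-claim neighborhood on its group average rather than on their own behavior. All numbers are invented.

```python
# Sketch of the selectivity risk described above, on made-up data: when
# 'neighborhood' is the only feature, careful drivers from a high-claim
# neighborhood inherit its group risk.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000
in_hotspot = rng.integers(0, 2, n)          # 1 = the high-claim neighborhood
careless = rng.random(n) < np.where(in_hotspot, 0.6, 0.2)
claims = careless * 1500 + rng.normal(0, 50, n)

model = LinearRegression().fit(in_hotspot.reshape(-1, 1), claims)

# A careful driver with no expected claims is still priced on the neighborhood
print("predicted cost, hotspot resident:", model.predict([[1]])[0])
print("predicted cost, elsewhere:       ", model.predict([[0]])[0])
```

On this toy data, a careful hotspot resident is quoted the neighborhood’s group average, roughly three times the cost quoted elsewhere, which is exactly the selectivity problem described above.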

Always an answer

Another point is that AI systems always produce an answer, even when it is not correct. In that case, you get the most plausible-sounding answer, but you do not know whether it is also the right one. In this way, a false answer can be delivered very convincingly. A good example is the following. We asked ChatGPT what Key Control Dashboard is, a platform from Yellowtail Conclusion. ChatGPT had no good answer to that. But when we commissioned an article about Key Control Dashboard and cybersecurity, a whole article rolled out nonetheless. This kind of output is also called ‘hallucinating’: a term used when an AI gives a confident response based on what it knows (i.e. what it has been trained on) that is not entirely in line with reality.

Your own responsibility

If you use an AI application for professional purposes, you must therefore always stay alert. Ultimately, you are responsible for the AI output that you use. In our view, it is better for the time being to let human knowledge lead and to use an AI application as an assistant. The one does not replace the other; the two complement each other. Think of an AI application that records a mortgage advice meeting and produces a report of it, including findings and recommendations. That is useful. But the mortgage advisor is and remains ultimately responsible for that report, also under the obligations and duty of care of the Wft (the Dutch Financial Supervision Act).

The human aspect

With AI, you can’t do without a human touch. You have to constantly check: does it work? Do we think the output is good enough or do we still need to tinker with it? We also feel that responsibility at Yellowtail Conclusion. We are an innovative company and we do a lot of research into how we can make the financial lives of consumers easier with AI. With more insight and fair opportunities for everyone – and without the risks of the black box problem. For us as Data Consultants, that is exciting and beautiful at the same time. One thing is certain: AI will bring a lot to Yellowtail Conclusion and the consumer, precisely because we keep the human aspect in mind.

Skip van der Meer and Haris Kodžaga


