AI Act and risk aversion
The aim of the AI Act is to promote the development and adoption of safe and trustworthy AI systems on the European market. The AI Act also aims to protect the fundamental rights of EU citizens and to stimulate innovation in the field of AI in Europe. I notice that many companies are uncertain about what the AI Act means for them. This also applies to pension funds. The Pension Federation has advised funds to draw up their own AI policy stating what a fund does and does not want to use AI for. The policy proposals that pension funds are now coming up with are often quite restrictive – much more restrictive than the AI Act requires. And that is a shame, because this risk aversion means that funds are missing out on significant efficiency gains.
Path of least resistance
A number of funds state in their policy that direct interaction with the customer via AI, for example via an AI-driven chatbot, is not permitted. The risk that a participant receives incorrect information is, they believe, too great. But that participant still needs information and wants answers to questions about their pension. So what will that participant do? Choose the path of least resistance and turn to ChatGPT, for example, for answers. But the chance of receiving incorrect information there is much greater. In my opinion, a fund is better off choosing a route it controls itself, such as a secure AI-driven chatbot. The AI Act simply permits this “limited risk” application, as long as you are transparent with participants that they are interacting with an AI chatbot. Having your own chatbot also gives you an advantage as a pension fund: you learn what questions your participants have, and you can respond to that smartly in your communication. You miss that opportunity if you drive the questioners to another platform.
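For illustration, here is a minimal Python sketch of what that transparency obligation can look like in practice. The names used here (PensionChatbot, answer_from_knowledge_base) are invented for this example and are not an actual pension-fund or vendor API; the point is simply that the participant is told up front that they are talking to an AI system, and that answers come from a source the fund controls.

```python
# Minimal sketch of the AI Act transparency obligation for a participant-facing
# chatbot: disclose up front that the participant is talking to an AI system.
# All names here are illustrative, not a real API.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant of your pension fund. "
    "For personal advice, please contact one of our staff members."
)

def answer_from_knowledge_base(question: str) -> str:
    # Placeholder for an answer grounded in the fund's own, verified pension
    # documentation (the "route you control yourself").
    return "Your projected retirement date is shown in your annual pension statement."

class PensionChatbot:
    def __init__(self) -> None:
        self.disclosed = False

    def reply(self, question: str) -> str:
        answer = answer_from_knowledge_base(question)
        if not self.disclosed:
            # Prepend the AI disclosure the first time the participant interacts.
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n\n{answer}"
        return answer

if __name__ == "__main__":
    bot = PensionChatbot()
    print(bot.reply("When can I retire?"))
```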
From cloud fear to AI fear
In many ways, the AI Act builds on the Digital Operational Resilience Act (DORA), the European regulation aimed at making the financial sector more resilient to cyber threats. Many funds already have their risk management in order; AI is only a small step further. Nevertheless, many pension funds are very afraid of the risks of AI. That reminds me of the “cloud fear” of a few years ago. People thought that data storage in the cloud was much less secure than in their own data centers, a fear fueled in particular by the American Patriot Act, which is at odds with the European General Data Protection Regulation (GDPR). However, Microsoft, as a major cloud provider, has security levels that an individual organization can never achieve on its own. These security measures are well aligned with the requirements that DORA sets for risk management. That is why organizations ultimately switched to the cloud en masse. Nevertheless, there are parties within the financial sector that, out of risk aversion, still work exclusively with their own data centers. In my opinion, that only provides a false sense of security. For AI, the contractual safeguards established for the cloud are already in place: your organization’s data remains yours and is not used to train models, for example. It really is safer than is often thought, certainly if it is applied securely, as with our tools – for example, by using Microsoft’s European data centers and the business version of their services.
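As an illustration of what “European data centers and the business version of their services” can mean in code: a sketch, assuming an Azure OpenAI resource provisioned in an EU region. The endpoint and deployment names below are placeholders, and the actual data-residency and no-training guarantees come from the Azure service terms and the region the resource is created in, not from the code itself.

```python
# Sketch: an Azure OpenAI client pinned to a resource in a European region.
# Endpoint and deployment names are placeholders for this illustration.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    # Resource provisioned in an EU region (e.g. "West Europe"); prompts and
    # completions are processed by that resource.
    azure_endpoint="https://my-fund-westeurope.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-fund-deployment",  # the name of your own deployment, not a public model
    messages=[{"role": "user", "content": "Summarize this pension regulation."}],
)
print(response.choices[0].message.content)
```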
Don’t let your policy work against you
Being too strict about the use of AI applications can also make it harder for pension funds to comply with other legislation. For example, the Digital Accessibility Act is coming, with requirements based on the WCAG guidelines. The aim of this law is to enable consumers with a disability to use products and services independently, including digital financial services. A web text must therefore be perceivable for blind and partially sighted people, for example by having it read aloud. With AI, this read-aloud functionality can be implemented well and affordably, as the sketch below illustrates. But if your AI policy as a pension fund states that direct interaction with participants via AI is not permitted, you will have to choose a different – much more expensive – solution for this target group. And don’t forget: overpriced services are also a risk. Blocking innovation that could have yielded real benefits costs money, and that is ultimately to the detriment of the participants.
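As a minimal illustration of how small the technical step can be: a sketch using pyttsx3, a simple offline text-to-speech library, chosen here purely for illustration. A production read-aloud feature would more likely use a neural cloud voice, but the integration pattern stays equally small.

```python
# Minimal sketch: reading a web text aloud for blind and partially sighted
# participants. pyttsx3 is a simple offline text-to-speech library used here
# only to show how little code the read-aloud idea requires.
import pyttsx3  # pip install pyttsx3

def read_aloud(web_text: str) -> None:
    engine = pyttsx3.init()
    engine.setProperty("rate", 160)  # a calm reading speed in words per minute
    engine.say(web_text)
    engine.runAndWait()  # blocks until the text has been spoken

if __name__ == "__main__":
    read_aloud("Your pension accrual this year is 1.3 percent of your pensionable salary.")
```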
Continue exploring possibilities
There is no such thing as zero risk. It is about the balance between innovation, risk management and compliance with legislation. It is good to keep exploring how you as a fund can use AI to improve your services and increase your efficiency. At Yellowtail Conclusion, we have been designing, building and managing groundbreaking digital and data-driven software solutions for the financial sector for more than 20 years, including with AI. For example, we offer our customers chatbot technology and a service such as VisionAI. VisionAI extracts data from documents in all kinds of formats and can immediately validate that data against business rules; a simplified illustration of this extract-and-check pattern follows below. The result: a faster, higher-quality process at lower costs. We only bring AI tools to market once we have sufficient risk management around them; for our text-to-speech tool, for example, that is not yet the case. Based on our knowledge of the financial sector and of technology, we always look at what is possible, including when it comes to AI. That is the strength of Yellowtail Conclusion. The fact that we have implemented SOC 2 provides additional assurance that we manage our risks properly, which also matters under the AI Act. That is why I invite pension funds not to interpret the AI Act too strictly in advance, but to look primarily at the possibilities that AI offers. With an eye for risk management and the framework of the law, of course, just as we do.
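To make that extract-and-check pattern concrete, here is a simplified, hypothetical sketch. This is not the actual VisionAI API: the field names, rules and extraction stub are invented purely to show the idea of extracting data from a document and validating it immediately against business rules.

```python
# Hypothetical illustration of the extract-and-check pattern: pull structured
# fields out of a document, then validate them against business rules before
# they enter the administration. Not the real VisionAI API.
from dataclasses import dataclass

@dataclass
class Finding:
    field: str
    message: str

def extract_fields(document_text: str) -> dict:
    # Stand-in for the AI extraction step; here we simply return a fixed
    # example record as if it were read from a scanned employment statement.
    return {"gross_annual_salary": 48_000, "part_time_factor": 1.2}

def check_business_rules(record: dict) -> list[Finding]:
    findings = []
    if not (0 < record["part_time_factor"] <= 1.0):
        findings.append(Finding("part_time_factor", "must lie between 0 and 1"))
    if record["gross_annual_salary"] < 0:
        findings.append(Finding("gross_annual_salary", "cannot be negative"))
    return findings

if __name__ == "__main__":
    record = extract_fields("...scanned employment statement...")
    for f in check_business_rules(record):
        print(f"Rejected: {f.field} {f.message}")
```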
Do you want to know how AI can ensure greater efficiency, higher quality of service and lower costs at your pension fund? Please feel free to contact us.