The Application of AI in the Explain Process
When applying AI to the explain process, a number of points need to be taken into account. These concern the past, the present and the future:
- Past: the customer’s past financial behaviour
- Present: the current financial situation (such as income and assets)
- Future: forecasting
These are the components in which AI can be applied and can provide benefits. The power of AI lies mainly in analysing the past and predicting the future; it lies less in determining the current situation, which is more trivial.
When AI is applied to the past, the customer’s past financial behaviour can be analysed. For example, by categorising transaction data, the customer’s spending patterns can be examined. These categories can be tailored to the specific situation of a customer to substantiate why a loan is justified, for example when the customer’s actual expenditure is lower than the average NIBUD expenditure.
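As an illustration, a minimal sketch of this idea in Python. The category keywords, the NIBUD reference amounts and the helper names are all invented for the example; a real implementation would use a trained classifier and the official NIBUD figures.

```python
from collections import defaultdict

# Hypothetical keyword-based categoriser; a real system would use a trained classifier.
CATEGORY_KEYWORDS = {
    "groceries": ["supermarket", "albert heijn", "jumbo"],
    "housing": ["rent", "mortgage"],
    "leisure": ["cinema", "restaurant"],
}

# Hypothetical monthly reference amounts; use the official NIBUD figures in practice.
NIBUD_REFERENCE = {"groceries": 400.0, "housing": 900.0, "leisure": 150.0}

def categorise(description: str) -> str:
    """Assign a transaction to a spending category via keyword matching."""
    desc = description.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in desc for kw in keywords):
            return category
    return "other"

def spending_vs_nibud(transactions):
    """Sum spending per category and compare it to the NIBUD reference."""
    totals = defaultdict(float)
    for description, amount in transactions:
        totals[categorise(description)] += amount
    return {
        cat: {"actual": totals.get(cat, 0.0), "reference": ref,
              "below_reference": totals.get(cat, 0.0) < ref}
        for cat, ref in NIBUD_REFERENCE.items()
    }

# Example: actual grocery spend below the reference can substantiate an explain.
report = spending_vs_nibud([("Albert Heijn store", 320.0), ("Rent May", 850.0)])
print(report["groceries"]["below_reference"])  # True
```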
In forecasting, certain characteristics of payment behaviour can be included. In general, this involves, for example:
- Taking inflation into account
- Taking income growth into account
- Taking the type of work and its level into account
- Determining a responsible monthly payment based on income, expenses and payment morale
This technique allows you to create a quantitatively substantiated argument for an explain file, without much human intervention.
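A minimal sketch of such a forecast, under loudly stated assumptions: the growth rates, the 35% affordability ceiling and the function name are illustrative choices, not actual lending standards.

```python
def forecast_responsible_payment(
    net_income: float,                # current net monthly income
    expenses: float,                  # current monthly expenses
    inflation: float = 0.02,          # assumed yearly inflation (illustrative)
    income_growth: float = 0.03,      # assumed yearly income growth, e.g. by job type/level
    years: int = 5,
    affordability_cap: float = 0.35,  # illustrative ceiling: max 35% of net income
) -> float:
    """Return the lowest responsible monthly payment over the forecast horizon.

    Each year income grows and expenses rise with inflation; the responsible
    payment is the smaller of (income - expenses) and the affordability cap,
    taken over the worst year so the loan stays bearable throughout.
    """
    worst = float("inf")
    for year in range(years):
        income_y = net_income * (1 + income_growth) ** year
        expenses_y = expenses * (1 + inflation) ** year
        candidate = min(income_y - expenses_y, affordability_cap * income_y)
        worst = min(worst, candidate)
    return max(worst, 0.0)

print(round(forecast_responsible_payment(3200.0, 2100.0), 2))  # e.g. 1100.0
```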
With the advent of powerful Large Language Models (LLMs), such as GPT-4, an advisor can automate more and more work. Such models could, for example, also generate suitable draft advice for a client. However, this should only be seen as a starting point that still requires human intervention: the output of these models is not inherently verified. The reason lies in what is known as the black box problem: the lack of clarity about how an AI model reasons and arrives at an answer, which casts doubt on the direct correctness of that answer. Within the framework of the explain process, this problem can be addressed by having an advisor check the results of the model, so that the four-eyes principle is applied to the model’s output.
Although the concerns are understandable, AI is not a technology that should replace people; rather, it is a powerful tool that makes human work easier. Because AI does not replace human work but only eases it, the user, such as a mortgage advisor, retains primary responsibility for how the output of any model, including an AI model, is used.
When assessing someone’s creditworthiness with AI, a human (in this case a mortgage advisor) must perform quality control on the output of the model.
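As a sketch of how that quality gate could be enforced in software: `generate_draft_advice` is a hypothetical stand-in for any LLM call; the point is that a draft can never leave the system without an advisor’s sign-off.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved_by: str | None = None  # advisor who signed off, if any

def generate_draft_advice(customer_file: dict) -> Draft:
    """Hypothetical stand-in for an LLM call that drafts the advice."""
    return Draft(text=f"Draft advice for {customer_file['name']} ...")

def release_advice(draft: Draft) -> str:
    """Only reviewed drafts may be released (four-eyes on the model output)."""
    if draft.approved_by is None:
        raise PermissionError("LLM draft requires advisor review before release")
    return draft.text

draft = generate_draft_advice({"name": "J. Jansen"})
draft.approved_by = "advisor_007"  # set only after an actual human check
print(release_advice(draft))
```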
By applying AI, the explain process could look like this, step by step (a simplified code sketch follows the list):
- Customer comes to the advisor, indicates their wishes, and the advisor requests the documents needed to assess creditworthiness
- Advisor concludes that there is a shortfall based on the set lending standards
- Advisor considers the loan responsible and decides to write an explain
- Advisor asks the customer to share data (Yes? Continue with the AI assessment; No? Current manual process)
- AI analyses the customer file and gives an assessment of the creditworthiness (the probability that the loan will be repaid)
- Mortgage advisor performs quality control on the output of the model
- Mortgage advisor writes the explain using AI output
- Four-eyes principle by Midoffice employee
- Loan approval
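Below is a highly simplified sketch of this flow in Python. Every function is a hypothetical placeholder for the corresponding step in the list; thresholds and return values are invented.

```python
def ai_creditworthiness_assessment(file: dict) -> float:
    """Stand-in for the AI model: an estimated repayment probability."""
    return 0.93  # illustrative output

def advisor_quality_control(probability: float) -> bool:
    """Simplified stand-in for the advisor's check on the model output."""
    return 0.0 <= probability <= 1.0

def run_explain_process(file: dict, consents: bool) -> str:
    """Sketch of the AI-assisted route, with the manual fallback from the list."""
    if not consents:
        return "manual explain process"  # customer declined data sharing
    probability = ai_creditworthiness_assessment(file)
    if not advisor_quality_control(probability):
        return "escalate: model output failed quality control"
    explain = f"Explain: estimated repayment probability {probability:.0%}"
    midoffice_approves = True            # four-eyes review (simplified)
    return "loan approved" if midoffice_approves else explain + " needs rework"

print(run_explain_process({"name": "J. Jansen"}, consents=True))  # loan approved
```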
What is the influence of AI on the error sensitivity, labor intensity and duration of the explain process?
Digitization, even apart from AI, makes a process consistent, which is an advantage. The explain process could also be automated further by means of a rule-based model, in which case the established rules are always followed. This ensures that human errors no longer occur within the automated part of the process. It is human to deviate from the standard in certain situations; automating/digitizing the process eliminates exactly that.
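A minimal sketch of such a rule-based step. The two rules shown are illustrative, not actual lending standards; the point is that the same rules are applied the same way to every file.

```python
# Illustrative rules only; real lending standards would be defined by policy.
RULES = [
    ("income_verified", lambda f: f["income_verified"]),
    ("ltv_within_limit", lambda f: f["loan"] / f["property_value"] <= 1.0),
]

def apply_rules(file: dict) -> list[str]:
    """Apply every rule the same way every time; returns the failed rule names."""
    return [name for name, rule in RULES if not rule(file)]

failed = apply_rules({"income_verified": True, "loan": 280_000, "property_value": 300_000})
print(failed or "all rules passed")
```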
If the model does make errors, these can be detected, for example through a human check by an advisor or underwriter. The model is also continuously improved, so that it makes fewer and fewer errors over time.
Computers are inherently faster at performing calculations and processing large amounts of data. It is therefore obvious that the labor intensity and duration of the process decrease when part of the explain process is automated. Labor intensity then shifts to the maintenance and optimization of the model/system.
In general, digitizing the explain process with a model yields considerable gains in error sensitivity, labor intensity and the time needed to process an application.
The assessment that an AI model can make of someone’s creditworthiness takes at most a few minutes. The time depends mainly on the complexity of the model, but also on the available computing power.
What are the opportunities of applying AI to the explain process?
In addition to the opportunities already discussed, such as standardization, reduction of human error and increased efficiency, there are other potential benefits of applying AI to the explain process.
AI also creates opportunities within the mortgage market. Yellowtail Conclusion has long been committed to tenants paying high rents and to self-employed persons: people who are in principle perfectly capable of carrying a mortgage but do not qualify for one because they fall outside the existing frameworks. The application of AI in the explain process offers these people a fair chance of obtaining credit; it can ensure equal opportunities and a fairer assessment of someone’s creditworthiness. The number of self-employed persons in the Netherlands has increased significantly in recent years, while the standards have not changed, so these applicants have to fall back on an explain process. If that process can be automated up to a certain point, there is certainly a gain to be had.
What are the risks of applying AI to the explain process?
Negligence is a clear risk. In the explain process, multiple people could adopt the output of the AI model without critically examining it and testing its quality, for example because advisors come to feel that the output is always correct and therefore see no need to check for errors. As a result, customers could receive an irresponsible loan, or a fraudulent file could be accepted.
Furthermore, there is a risk that the explain process is automated too rigidly. Important context in a customer file might then not be included, and the personal nature of the explain process could be lost.
In a model that simply applies the ‘three strikes, you’re out’ principle, there is a risk that a customer is rejected on the basis of, for example, gambling behaviour, a period of overdraft and interaction with a debt collection agency. Despite these data points, the customer may in reality have a solid financial position and be perfectly able to bear the responsibility for a loan. The model, however, ignores crucial context, such as the ‘gambling’ being nothing more than reluctant participation in the postcode lottery, or the financial difficulties stemming from a one-off personal crisis. These contextual elements underline the importance of human intervention, as that is the only way to do justice to the full situation in such cases.
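A small sketch of the difference: a naive three-strikes rule versus the same rule with a human context override. The signal names and the cleared flags are illustrative.

```python
# Illustrative risk signals as they might appear in transaction data.
signals = {"gambling": True, "overdraft": True, "debt_collection": True}

def three_strikes(signals: dict) -> bool:
    """Naive rule: three flags means rejection, regardless of context."""
    return sum(signals.values()) >= 3

def with_human_context(signals: dict, context_cleared: set) -> bool:
    """Same rule, but an advisor can clear flags after reviewing the context
    (e.g. 'gambling' turns out to be the postcode lottery)."""
    remaining = {k: v for k, v in signals.items() if k not in context_cleared}
    return sum(remaining.values()) >= 3

print(three_strikes(signals))                                  # True: rejected
print(with_human_context(signals, {"gambling", "overdraft"}))  # False: not rejected
```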
When an AI model indicates a significant chance that a customer will not repay the loan, the advisor has the opportunity to investigate the basis of this assessment, for example by asking the customer for additional context. If the advisor still considers the application justified in light of this context, they can integrate these considerations into the explain and deviate from the AI assessment. This emphasises the importance of human judgement and of insight into the specific situation, which the AI model may have overlooked.
Risks related to transparency can be addressed by first validating the standards via a panel of experts, such as mortgage underwriters, who assess under which circumstances a file is or is not approved. The resulting policy is then abstracted into a set of rules, which the experts check again. With their approval, clear rules can be established for implementing AI in the explain process. By following these rules, the outcome of an AI model can be made understandable and transparent.
This rule-based model can be part of a collection of models that function as a whole, also called an ‘ensemble’ of models. In this case, for example, a deep learning model can be combined with a heuristic (rule-based) model, which offers numerous possibilities.
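A minimal sketch of such an ensemble, combining an (invented) ML score with a transparent rule-based score that can also act as a hard veto. Weights and thresholds are illustrative.

```python
def ml_score(file: dict) -> float:
    """Stand-in for a trained model's repayment probability (illustrative)."""
    return 0.88

def rule_based_score(file: dict) -> float:
    """Transparent, expert-validated rules mapped to a score (illustrative)."""
    score = 1.0
    if file.get("overdraft_months", 0) > 3:
        score -= 0.4
    if not file.get("income_verified", False):
        score -= 0.5
    return max(score, 0.0)

def ensemble_decision(file: dict, threshold: float = 0.7) -> bool:
    """Combine both models; the rules both contribute and act as a hard veto."""
    if rule_based_score(file) == 0.0:
        return False  # rule-based veto keeps the outcome explainable
    combined = 0.6 * ml_score(file) + 0.4 * rule_based_score(file)
    return combined >= threshold

print(ensemble_decision({"overdraft_months": 1, "income_verified": True}))  # True
```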
Data representativeness and quality
Data representativeness poses a specific risk when using AI in the explain process: the data used to train the model must be sufficiently representative and of good quality, otherwise the model can produce incorrect and/or biased results. Here the ‘human in the loop’ in the explain process and the feedback loop on the model remain essential, as they allow possible bias, such as discrimination, to be detected and corrected. Such bias can arise from training data that is not sufficiently representative.
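As an illustration, a minimal representativeness check that compares each group’s share in the training data with its share in the target population; all figures are invented.

```python
def representativeness_gap(training_counts: dict, population_shares: dict) -> dict:
    """Compare each group's share in the training data with its population share."""
    total = sum(training_counts.values())
    return {
        group: training_counts.get(group, 0) / total - share
        for group, share in population_shares.items()
    }

# Invented example: self-employed applicants are underrepresented in training data.
gaps = representativeness_gap(
    training_counts={"employed": 9000, "self_employed": 500},
    population_shares={"employed": 0.80, "self_employed": 0.20},
)
for group, gap in gaps.items():
    flag = "UNDERREPRESENTED" if gap < -0.05 else "ok"
    print(f"{group}: gap {gap:+.2f} ({flag})")
```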
To what extent is AI suitable for application to the explain process?
AI is very suitable for application to the explain process. The technology has been at a good enough level for years to be applied here; the limiting factor is not the algorithms or the architecture, but mainly the willingness to take the step and the availability of data. The data in particular is still an obstacle.
With the introduction of the PSD2 regulations, the collection and sharing of data has been simplified, but starting up the collection of bank transaction data remains a challenge. Financial institutions, such as banks and credit providers, in particular have a strong position in this. When a consumer wants to take out a loan, the credit provider can request the necessary data and use the customer file for analyses. This data can then be used to improve models. However, due to the initial lack of this data, start-up problems arise. This means that the desired system is not immediately available, but that a build-up phase is required.
During this build-up phase, you go through the following steps:
- Collect data
- Annotate data
- Implement feedback loop
- Build model until desired model is achieved (i.e. optimize)
These steps mark the initial phase where the system is optimized and tuned to the specific requirements, with the ultimate goal of achieving the desired model.
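A skeleton of that build-up loop, purely to show the shape of the iteration; the data intake, the labelling step and the toy ‘accuracy’ curve are placeholders, not a real training pipeline.

```python
# Placeholder build-up loop: collect, annotate, train, evaluate, feed back.
def collect_data(n: int) -> list[dict]:
    return [{"id": i} for i in range(n)]             # stand-in for PSD2/file intake

def annotate(batch: list[dict]) -> list[dict]:
    return [dict(x, label="repaid") for x in batch]  # stand-in for human labelling

def train_and_evaluate(dataset: list[dict]) -> float:
    return min(0.5 + 0.05 * len(dataset) / 100, 0.95)  # toy 'accuracy' curve

dataset: list[dict] = []
accuracy, target = 0.0, 0.9
while accuracy < target:                    # iterate until the desired model is reached
    dataset += annotate(collect_data(100))
    accuracy = train_and_evaluate(dataset)  # feedback loop informs the next round
print(f"build-up finished with accuracy {accuracy:.2f} on {len(dataset)} samples")
```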
Additional Considerations for Using AI
Customers are willing to share data if the benefit to them is compelling enough.
One reason why companies are hesitant to implement AI could be that they are not ready for this step. This could be partly due to uncertainty about what is and is not allowed when using AI, especially in relation to collecting more data for lending and the associated risk.
As more data becomes available, the need for thorough due diligence also increases: having data means being obliged to check it and act on it. As an advisor, you are obliged to notice unmentioned details and verify them with the applicant; a monthly contribution to the church of 200 euros, for example, could be such a detail, although its effect on the lending capacity depends on the nature of the commitment. The use of AI, known for its superior information processing, means that the standard for recognizing these details may be raised. The AFM’s “Visie Roboadvies” qualifies this assumption in section 4.1, however, by stating that an algorithm must perform at least at the level of a human.
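A sketch of how such unmentioned recurring commitments could be surfaced automatically; the grouping rule and the church example are illustrative.

```python
from collections import defaultdict

def recurring_commitments(transactions, min_months: int = 3):
    """Flag counterparties that receive a similar amount in several different months."""
    seen = defaultdict(set)   # counterparty -> months in which it was paid
    amounts = {}
    for month, counterparty, amount in transactions:
        seen[counterparty].add(month)
        amounts[counterparty] = amount
    return {cp: amounts[cp] for cp, months in seen.items() if len(months) >= min_months}

txs = [(m, "Church contribution", 200.0) for m in ("jan", "feb", "mar")]
txs += [("jan", "Bookshop", 35.0)]
flagged = recurring_commitments(txs)
print(flagged)  # {'Church contribution': 200.0} -> verify with the applicant
```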
Furthermore, the increased dependence on AI entails more risks in the area of data security and data privacy. This includes questions about which data should or should not be requested, and how far one can go in doing so.
Ultimately, economic developments could make companies more cautious, with an emphasis on conservative action rather than investing in innovation such as the implementation of AI. Would you like to know more about this? Please feel free to contact us.