Explainability of AI Models as the Prevailing Theme at the AI in Industry and Finance Conference
The AI community is shifting its focus from improving the accuracy of algorithms to the explainability of AI models. The shift is driven by the growing impact AI has on businesses and everyday life. Interpretability of AI algorithms was one of the main topics at the 3rd COST Conference on Mathematics for Industry in Switzerland, which took place on 6 September 2018 in Winterthur. We presented our most recent work on recommender systems for financial advice in the Finance track of the conference.
Opacity, Neutrality, Stupidity
In his keynote, Prof. Dr. Marcello Pelillo formulated these as the three challenges of AI. Many AI algorithms, especially deep neural networks, come in the form of opaque black boxes: it is impossible to explain in human language how their outputs depend on their inputs. Neutrality of algorithms is of utmost importance when they drive life-affecting decisions, such as the length of prison sentences. Yet algorithms are only as unbiased as their input data, and that data is generated by biased humans; identifying and eliminating this bias is crucial to obtaining fair results. Finally, the stupidity of AI is the inability of an algorithm to deal with seemingly insignificant changes in the input data, such as noise added to pictures. The keynote speaker pointed out that humans and computers live in different similarity spaces: what we view as similar can look very different to a computer.
Accuracy vs Explainability of AI Models
Interpretability of AI algorithms often comes at the expense of accuracy. The most transparent algorithms, such as linear regression, are often the least accurate, while highly accurate deep neural networks are a classic example of black boxes.
One way to deal with this trade-off is to feed simpler, explainable algorithms with better data. Dr. Yannik Misteli gave an insightful talk about applying decision trees to the allocation of clients to advisory service models in private banking. Although decision trees are less accurate than more sophisticated models, their simplicity and explainability are appealing. Moreover, the logic encoded in a decision tree tends to agree with intuition, which makes it easier to sell to senior management. It turns out that the prediction accuracy of decision trees can be greatly improved by pre-filtering the input data for noisy labels, as sketched below.
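Here is a minimal sketch of that idea, assuming a cross-validation-based filter for noisy labels; the data, the auxiliary model and the filtering rule are illustrative assumptions, not the method presented in the talk.

```python
# Illustrative sketch: train a shallow, explainable decision tree after removing
# samples whose labels look noisy. The filtering rule (disagreement with
# out-of-fold predictions of an auxiliary model) is an assumption for this demo.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, flip_y=0.15, random_state=0)

# Flag probable label noise: samples where the auxiliary model's out-of-fold
# prediction disagrees with the recorded label.
oof_pred = cross_val_predict(RandomForestClassifier(random_state=0), X, y, cv=5)
clean = oof_pred == y

# Fit the transparent model on the filtered data and print its human-readable rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[clean], y[clean])
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(X.shape[1])]))
```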
An alternative way to approach the explainability of AI models is to employ post-hoc explanations. Dr. Thomas Buettner discussed a method for pulling explanations out of non-explainable models, referred to as LIME: local interpretable model-agnostic explanations. The approach consists of approximating a complex predictor with simpler local predictors that are easy to explain. For example, one could approximate a complex non-linear separator with a linear one in the close neighbourhood of a data point.
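As a rough illustration of the local-surrogate idea behind LIME, the sketch below fits a black-box classifier and explains one of its predictions with a proximity-weighted linear model. The perturbation scheme, kernel and parameters are our own simplifications, not the speaker's implementation or the official LIME library.

```python
# Minimal sketch of a LIME-style local surrogate: a black-box classifier is
# approximated around a single instance by a linear model fitted on perturbed
# samples, weighted by their proximity to that instance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                                       # instance to explain
rng = np.random.default_rng(0)
perturbed = x0 + rng.normal(scale=0.3, size=(500, X.shape[1]))  # local samples
probs = black_box.predict_proba(perturbed)[:, 1]                # black-box outputs

# Weight perturbed samples by proximity to x0 (a simple RBF kernel).
weights = np.exp(-np.linalg.norm(perturbed - x0, axis=1) ** 2 / 0.5)

# The coefficients of the local linear surrogate serve as the explanation.
surrogate = Ridge(alpha=1.0).fit(perturbed, probs, sample_weight=weights)
print(dict(zip([f"feature_{i}" for i in range(X.shape[1])], surrogate.coef_.round(3))))
```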
Explainability of Recommender Systems for Banking
Explainability of AI models becomes especially important when the decisions made by an algorithm can have serious consequences. One such case is financial advice. If a client accepts a recommendation to invest in a financial product, she can potentially suffer substantial losses. The client’s decision on whether to follow the advice can therefore hinge on the credibility of the explanation behind it. We presented our work on explainable recommender systems for banking in the afternoon Finance track. In our talk, we considered two use cases: retail banking and private banking.
The goal of a retail banking recommender is to make customized proposals of products such as credit cards and mortgages. We found the k-nearest-neighbours approach based on user features to be the right fit for this use case. The most common user features in a client’s neighbourhood serve as explanations for recommendations. For instance, we recommend a youth savings account to a client who belongs to a neighbourhood where this product is popular and where one of the common features is ‘Age group = 18 – 24 years’.
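A toy sketch of this approach is shown below; the client features, products and thresholds are hypothetical and merely illustrate how neighbourhood statistics can double as explanations.

```python
# Hedged sketch of a feature-based k-NN recommender with explanations.
# The data, feature names and thresholds are hypothetical.
from sklearn.neighbors import NearestNeighbors
import pandas as pd

# Toy client profiles: binary user features and product holdings.
clients = pd.DataFrame({
    "age_18_24":       [1, 1, 0, 1, 0, 1],
    "has_salary_acct": [1, 1, 1, 0, 1, 1],
    "owns_home":       [0, 0, 1, 0, 1, 0],
    "youth_savings":   [1, 1, 0, 1, 0, 0],   # product holdings
    "mortgage":        [0, 0, 1, 0, 1, 0],
})
features = ["age_18_24", "has_salary_acct", "owns_home"]
products = ["youth_savings", "mortgage"]

# Find the client's nearest neighbours in feature space.
knn = NearestNeighbors(n_neighbors=3).fit(clients[features].values)
target = 5                                                    # client to advise
_, idx = knn.kneighbors(clients.loc[[target], features].values)
neighbours = clients.iloc[idx[0]]

# Recommend the product most popular in the neighbourhood and explain it with
# the features the neighbourhood has in common.
recommendation = neighbours[products].mean().idxmax()
common_features = neighbours[features].mean()
print(f"Recommend '{recommendation}' because neighbours share: "
      f"{common_features[common_features > 0.5].index.tolist()}")
```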
In the private banking use case, a recommender serves to provide personalized proposals of financial instruments, such as stocks and bonds. We first applied state-of-the-art matrix-factorisation-based collaborative filtering (CF) here with good results. However, explanations remained the missing piece. In an attempt to come up with post-hoc explanations, we approximated the algorithm locally; while this provided credible explanations, it requires an additional computational step. Luckily, we came upon a CF algorithm that is both explainable and accurate. It allocates users and items into co-clusters: communities of clients consuming subsets of products. Explanations for recommendations are then provided by co-cluster membership. “You bought Nestle and Coca Cola shares in the past. Other clients who invested in those also bought Danone shares” is an example explanation.
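The sketch below illustrates the co-clustering idea on a toy interaction matrix, using scikit-learn's SpectralCoclustering as a stand-in for our algorithm; the data and parameters are hypothetical.

```python
# Illustrative co-clustering recommender (a sketch of the idea, not our exact
# algorithm). Users and items are grouped into co-clusters on a binary
# interaction matrix; items popular inside a user's co-cluster that the user
# does not yet hold become recommendations, and co-cluster membership is the
# explanation.
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Toy block-structured user-item matrix: three communities of clients, each
# buying a distinct subset of products, plus a little random noise.
rng = np.random.default_rng(0)
blocks = np.kron(np.eye(3, dtype=int), np.ones((7, 4), dtype=int))  # 21 users x 12 items
interactions = np.clip(blocks + (rng.random(blocks.shape) > 0.9).astype(int), 0, 1)
interactions[0, 3] = 0          # pretend client 0 has not yet bought product 3

model = SpectralCoclustering(n_clusters=3, random_state=0).fit(interactions)

user = 0
cluster = model.row_labels_[user]
cluster_users = np.where(model.row_labels_ == cluster)[0]
cluster_items = np.where(model.column_labels_ == cluster)[0]

# Rank the co-cluster's products by popularity and keep those the user lacks.
popularity = interactions[np.ix_(cluster_users, cluster_items)].sum(axis=0)
candidates = [int(i) for i in cluster_items[np.argsort(-popularity)]
              if interactions[user, i] == 0]
print(f"Client {user} belongs to co-cluster {cluster}; recommend products "
      f"{candidates} because other clients in this co-cluster hold them.")
```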
Interpretability of AI models has certainly become a hot topic in the AI community. When designing a model that can have a significant impact on people and businesses, focusing on the accuracy of predictions is not enough. Transparency, explainability and fairness of model results are just as important.