Kuiper, Ouren; van den Berg, Martin; van der Burgt, Joost; Leijnen, Stefan
Explainable artificial intelligence (xAI) is seen as a solution to making AI systems less of a "black box". Explainability is essential to ensuring transparency, fairness, and accountability, which are especially important in the financial sector. The aim of this study was a preliminary investigation of the perspectives of supervisory authorities and regulated entities regarding the application of xAI in the financial sector. Three use cases (consumer credit, credit risk, and anti-money laundering) were examined using semi-structured interviews at three banks and two supervisory authorities in the Netherlands. We found that, for the investigated use cases, a disparity exists between supervisory authorities and banks regarding the desired scope of explainability of AI systems. We argue that the financial sector could benefit from a clear differentiation between technical explainability requirements of the AI model and explainability requirements of the broader AI system in relation to applicable laws and regulations.
How do we balance the potential benefits of deep learning with the need for explainability? People distrust artificial intelligence, and in some ways this makes sense. In the push to build the best-performing AI models, many organizations have prioritized complexity over explainability and trust. As the world becomes more dependent on algorithms for making a wide range of decisions, technology and business leaders will be tasked with explaining how a model arrived at its outcome. Transparency is essential for building trust and driving AI adoption.
We've all heard it before: "Win or go home." Whether in business or on the playing field, the pressure to win is intense. And in today's financial services industry, the winner can literally take all. As banks struggle to adapt amid digital disruption, executives find themselves under pressure to use artificial intelligence (AI) or machine learning (ML) models to drive their digital transformation initiatives. The industry's use of computational finance models to make decisions is nothing new.
Artificial intelligence (AI) has become increasingly pervasive and is experiencing widespread adoption across industries. Faced with increasing competitive pressures and observing the AI success stories of their peers, more and more organizations are adopting AI in various facets of their business. Machine learning (ML) models, the key component driving AI systems, are becoming increasingly powerful, matching or exceeding human performance on a growing range of tasks. However, this increased performance has been accompanied by an increase in model complexity, turning AI systems into black boxes whose decisions can be hard for humans to understand. Employing black-box models can have severe ramifications, as the decisions made by these systems not only influence business outcomes but can also affect many lives.
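None of the excerpts above describe a specific technique, but the "black box" problem they raise can be made concrete with a small, purely illustrative sketch: a black-box classifier is trained on synthetic, hypothetical credit-like data and then explained post hoc with permutation importance from scikit-learn. The feature names, data, and thresholds below are invented for illustration only and do not come from any of the cited works.

```python
# Illustrative sketch (assumed, not from the cited sources): explaining a
# black-box model with a model-agnostic, post-hoc method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical credit-like features: income, debt-to-income ratio, late payments.
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # income
    rng.uniform(0.0, 1.0, n),        # debt-to-income ratio
    rng.poisson(1.0, n),             # number of late payments
])
# By construction, default risk depends mainly on debt ratio and late payments.
y = ((X[:, 1] > 0.6) | (X[:, 2] >= 3)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does test accuracy drop when one feature
# is shuffled? A simple global explanation of which inputs drive decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt_ratio", "late_payments"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

In this toy setup the importance scores recover the constructed signal (debt ratio and late payments dominate), which is the kind of model-level explanation that the distinction between "model explainability" and "system explainability" in the first abstract refers to.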