Investor View: Explainable AI

#artificialintelligence

What is driving the demand, how incumbents are responding, and how startups are already tackling explainability 2.0. Explainable AI helps a user understand the machine's decision-making process. Rather than surveying specific methods of explainable AI (e.g., LIME, SHAP), this article offers some dimensions for framing the concept. What explainable AI means depends on the user, the object being explained, and the underlying data. The field is so broad and fast-moving that any in-depth discussion of explainable AI benefits from a mental framework of how it fits these dimensions. Most examples in this article are products built for business decision makers analyzing tabular data.
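To make "understanding the machine's decision-making process" concrete, here is a minimal sketch of a local explanation with SHAP on a toy tabular classifier. The dataset, model choice, and feature names are invented stand-ins for illustration, not taken from the article.

```python
# Hypothetical sketch: explaining one prediction of a tabular model with SHAP.
# Assumes the `shap` and `scikit-learn` packages; the data is synthetic.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy tabular dataset standing in for business data (e.g., customer records).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes per-feature contributions (SHAP values) per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

# Each value answers: how much did this feature push this one decision up or down?
for i, v in enumerate(shap_values[0]):
    print(f"feature_{i}: {v:+.3f}")
```

The output is a signed contribution per feature for a single decision, which is exactly the kind of artifact explainability products surface to a business decision maker.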


Exploring Explainable AI in the Financial Sector: Perspectives of Banks and Supervisory Authorities

arXiv.org Artificial Intelligence

Explainable artificial intelligence (xAI) is seen as a solution to making AI systems less of a "black box". It is essential to ensuring transparency, fairness, and accountability, which are particularly important in the financial sector. The aim of this study was a preliminary investigation of the perspectives of supervisory authorities and regulated entities regarding the application of xAI in the financial sector. Three use cases (consumer credit, credit risk, and anti-money laundering) were examined through semi-structured interviews at three banks and two supervisory authorities in the Netherlands. We found that, for the investigated use cases, a disparity exists between supervisory authorities and banks regarding the desired scope of explainability of AI systems. We argue that the financial sector could benefit from a clear differentiation between the technical explainability requirements of the AI model and the explainability requirements of the broader AI system in relation to applicable laws and regulations.


Unleashing the power of machine learning models in banking through explainable artificial intelligence (XAI)

#artificialintelligence

The "black-box" conundrum is one of the biggest roadblocks preventing banks from executing their artificial intelligence (AI) strategies. It's easy to see why: Picture a large bank known for its technology prowess designing a new neural network model that predicts creditworthiness among the underserved community more accurately than any other algorithm in the marketplace. This model processes dozens of variables as inputs, including never-before-used alternative data. The developers are thrilled, senior management is happy that they can expand their services to the underserved market, and business executives believe they now have a competitive differentiator. But there is one pesky problem: The developers who built the model cannot explain how it arrives at the credit outcomes, let alone identify which factors had the biggest influence on them.


Artificial Intelligence in Finance: Quo Vadis?

#artificialintelligence

The global financial sector is undergoing a period of significant change and disruption. Advances in technology are enabling businesses to fundamentally rethink the way in which they generate value and interact with their environment. This disruption goes by the umbrella term Fintech, which denotes all technologically enabled financial innovation that results in new business models, applications, processes, products, and services. At the centre of this disruption are developments in information and internet technology, which have fostered new web-based services that affect every facet of today's economic and financial activity (Bank for International Settlements, 2020). This creates enormous quantities of data.


Supporting Responsible Use of AI and Equitable Outcomes in Financial Services

#artificialintelligence

At the AI Academic Symposium hosted by the Board of Governors of the Federal Reserve System, Washington, D.C. (Virtual Event)

Today's symposium on the use of artificial intelligence (AI) in financial services is part of the Federal Reserve's broader effort to understand AI's application to financial services, assess methods for managing risks arising from this technology, and determine where banking regulators can support responsible use of AI and equitable outcomes by improving supervisory clarity.[1] The potential scope of AI applications is wide-ranging. For instance, researchers are turning to AI to help analyze climate change, one of the central challenges of our time. With nonlinearities and tipping points, climate change is highly complex, and quantification for risk assessments requires the analysis of vast amounts of data, a task for which the AI field of machine learning is particularly well suited.[2] The journal Nature recently reported the development of an AI network that could "vastly accelerate efforts to understand the building blocks of cells and enable quicker and more advanced drug discovery" by accurately predicting a protein's 3-D shape from its amino acid sequence.[3] In November 2018, I shared some early observations on the use of AI in financial services.[4]