Unleashing the power of machine learning models in banking through explainable artificial intelligence (XAI)

#artificialintelligence

The "black-box" conundrum is one of the biggest roadblocks preventing banks from executing their artificial intelligence (AI) strategies. It's easy to see why: Picture a large bank known for its technology prowess designing a new neural network model that predicts creditworthiness among the underserved community more accurately than any other algorithm in the marketplace. This model processes dozens of variables as inputs, including never-before-used alternative data. The developers are thrilled, senior management is happy that they can expand their services to the underserved market, and business executives believe they now have a competitive differentiator. But there is one pesky problem: The developers who built the model cannot explain how it arrives at the credit outcomes, let alone identify which factors had the biggest influence on them.


Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies

arXiv.org Artificial Intelligence

As AI systems demonstrate increasingly strong predictive performance, their adoption has grown in numerous domains. However, in high-stakes domains such as criminal justice and healthcare, full automation is often not desirable due to safety, ethical, and legal concerns, yet fully manual approaches can be inaccurate and time-consuming. As a result, there is growing interest in the research community in augmenting human decision making with AI assistance. Besides developing AI technologies for this purpose, the emerging field of human-AI decision making must embrace empirical approaches to form a foundational understanding of how humans interact and work with AI to make decisions. To invite and help structure research efforts towards a science of understanding and improving human-AI decision making, we survey the recent literature of empirical human-subject studies on this topic. We summarize the study design choices made in over 100 papers in three important aspects: (1) decision tasks, (2) AI models and AI assistance elements, and (3) evaluation metrics. For each aspect, we summarize current trends, discuss gaps in current practices of the field, and make a list of recommendations for future research. Our survey highlights the need to develop common frameworks to account for the design and research spaces of human-AI decision making, so that researchers can make rigorous choices in study design, and the research community can build on each other's work and produce generalizable scientific knowledge. We also hope this survey will serve as a bridge for the HCI and AI communities to work together to mutually shape the empirical science and computational technologies for human-AI decision making.


Towards Explainable Artificial Intelligence in Banking and Financial Services

arXiv.org Artificial Intelligence

Artificial intelligence (AI) enables machines to learn from human experience, adjust to new inputs, and perform human-like tasks. AI is progressing rapidly and is transforming the way businesses operate, from process automation to cognitive augmentation of tasks and intelligent process/data analytics. However, the main challenge for human users is to understand and appropriately trust the results of AI algorithms and methods. In this paper, to address this challenge, we study and analyze recent work on Explainable Artificial Intelligence (XAI) methods and tools. We introduce a novel XAI process, which facilitates producing explainable models while maintaining a high level of learning performance. We present an interactive evidence-based approach to assist human users in comprehending and trusting the results and output created by AI-enabled algorithms. We adopt a typical scenario in the banking domain for analyzing customer transactions. We develop a digital dashboard to facilitate interacting with the algorithm results and discuss how the proposed XAI method can significantly improve the confidence of data scientists in understanding the results of AI-enabled algorithms.
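
The paper's own XAI process and dashboard are not reproduced here; as a rough illustration of the kind of transaction-analysis pipeline such a dashboard could sit on, the following hypothetical sketch scores synthetic customer transactions with an isolation forest and surfaces simple per-feature evidence for each alert. The feature names, data, and thresholds are invented for the example.

```python
# Hypothetical sketch of a transaction-analysis pipeline: score customer
# transactions for anomaly, then surface simple per-feature evidence for each
# flagged transaction. Not the paper's method; data and features are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
features = ["amount", "hour_of_day", "merchant_risk", "days_since_last_txn"]
X = rng.normal(size=(5000, len(features)))                 # synthetic transactions
X[:25] += rng.normal(loc=4.0, size=(25, len(features)))    # injected outliers

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)                        # lower = more anomalous

# Simple "evidence": per-feature z-scores for the most anomalous transactions,
# the kind of signal a dashboard could display next to each alert.
mu, sigma = X.mean(axis=0), X.std(axis=0)
for i in np.argsort(scores)[:3]:
    z = (X[i] - mu) / sigma
    top = np.argsort(-np.abs(z))[:2]
    evidence = ", ".join(f"{features[j]} z={z[j]:+.1f}" for j in top)
    print(f"txn {i}: score={scores[i]:.3f}, {evidence}")
```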


A Practical Tutorial on Explainable AI Techniques

arXiv.org Artificial Intelligence

Recent years have been characterized by an upsurge of opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although they have great generalization and prediction skills, their inner workings do not allow detailed explanations of their behaviour to be obtained. As opaque machine learning models are increasingly employed to make important predictions in critical environments, the danger is that decisions are created and used without being justifiable or legitimate. Therefore, there is general agreement on the importance of endowing machine learning models with explainability: EXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency and fairness. This tutorial is meant to be the go-to handbook for any audience with a computer science background aiming to get intuitive insight into machine learning models, accompanied by straightforward, fast, and intuitive out-of-the-box explanations. We believe that these methods provide a valuable contribution for readers applying XAI techniques to their own day-to-day models, datasets and use cases. A flowchart acts as a map for the reader and should help them find the ideal method to use according to their type of data. For each proposed method, the reader will find a description, an example of use, and a Python notebook that they can easily modify in order to apply it to their own application case.
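
The tutorial itself ships Python notebooks for each method; purely as a flavour of that workflow, the sketch below applies one widely used post-hoc technique (SHAP's TreeExplainer) to a toy gradient-boosted model on tabular data. The dataset and model are placeholders and are not taken from the tutorial.

```python
# Minimal sketch of a post-hoc XAI workflow on tabular data using SHAP's
# TreeExplainer. Dataset and model are toy placeholders, not the tutorial's
# own notebooks.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = xgb.XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])   # per-sample, per-feature attributions

# Global view: rank features by mean absolute SHAP value.
importance = np.abs(shap_values).mean(axis=0)
for i in np.argsort(-importance):
    print(f"feature_{i}: mean |SHAP| = {importance[i]:.4f}")

# Local view: contributions for a single prediction.
print("single-instance attributions:", np.round(shap_values[0], 3))
```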


Trustworthy AI: From Principles to Practices

arXiv.org Artificial Intelligence

Fast-developing artificial intelligence (AI) technology has enabled various applied systems deployed in the real world, impacting people's everyday lives. However, many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, and so on, which not only degrades user experience but also erodes society's trust in all AI systems. In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems. We first introduce the theoretical framework of important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, alignment with human values, and accountability. We then survey leading approaches to these aspects in industry. To unify the currently fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition through model development and deployment to continuous monitoring and governance. In this framework, we offer concrete action items for practitioners and societal stakeholders (e.g., researchers and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges in the future development of trustworthy AI systems, including the need for a paradigm shift towards comprehensively trustworthy AI systems.
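
As one illustration of the kind of concrete action item a lifecycle approach might include in its monitoring stage, here is a hypothetical sketch of a simple fairness check (demographic parity difference) over model decisions. The metric choice, data, and alert threshold are illustrative and are not drawn from the review.

```python
# Hypothetical sketch of a simple post-deployment fairness check
# (demographic parity difference), the kind of concrete monitoring step a
# trustworthy-AI lifecycle might include. Data and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=10_000)   # model decisions (1 = approve)
group = rng.integers(0, 2, size=10_000)    # protected attribute (0/1)

rate_g0 = y_pred[group == 0].mean()
rate_g1 = y_pred[group == 1].mean()
dp_diff = abs(rate_g0 - rate_g1)           # demographic parity difference

print(f"approval rate g0={rate_g0:.3f}, g1={rate_g1:.3f}, gap={dp_diff:.3f}")
if dp_diff > 0.05:                         # illustrative alert threshold
    print("ALERT: disparity exceeds monitoring threshold; trigger review")
```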


Explainable AI (XAI) for PHM of Industrial Asset: A State-of-The-Art, PRISMA-Compliant Systematic Review

arXiv.org Artificial Intelligence

A state-of-the-art systematic review of XAI applied to Prognostics and Health Management (PHM) of industrial assets is presented. The work provides an overview of the general trend of XAI in PHM, addresses the question of accuracy versus explainability, and investigates the extent of the human role, explainability evaluation, and uncertainty management in PHM XAI. English-language research articles linked to PHM XAI, published from 2015 to 2021, were selected from the IEEE Xplore, ScienceDirect, SpringerLink, ACM Digital Library and Scopus databases using PRISMA guidelines. Data were extracted from 35 selected articles and examined using MS Excel. Several findings were synthesized. Firstly, while the discipline is still young, the analysis indicates growing acceptance of XAI in the PHM domain. Secondly, XAI functions as a double-edged sword: it is assimilated both as a tool for executing PHM tasks and as a means of explanation, in particular in diagnostics and anomaly detection; there is thus a need for XAI in PHM. Thirdly, the review shows that PHM XAI papers generally report good or excellent results, suggesting that PHM performance is unaffected by XAI. Fourthly, the human role, explainability metrics and uncertainty management are areas requiring further attention from the PHM community; adequate explainability metrics that cater to PHM needs are urgently required. Finally, most case studies featured in the accepted articles are based on real data, indicating that available AI and XAI approaches are equipped to solve complex real-world challenges, increasing confidence in AI model adoption in industry. This work is funded by the Universiti Teknologi Petronas Foundation.


Accelerating Entrepreneurial Decision-Making Through Hybrid Intelligence

arXiv.org Artificial Intelligence

List of abbreviations: AI - Artificial Intelligence; AGI - Artificial General Intelligence; ANN - Artificial Neural Network; ANOVA - Analysis of Variance; ANT - Actor Network Theory; API - Application Programming Interface; APX - Amsterdam Power Exchange; AVE - Average Variance Extracted; BU - Business Unit; CART - Classification and Regression Tree; CBMV - Crowd-based Business Model Validation; CR - Composite Reliability; CT - Computed Tomography; CVC - Corporate Venture Capital; DR - Design Requirement; DP - Design Principle; DSR - Design Science Research; DSS - Decision Support System; EEX - European Energy Exchange; FsQCA - Fuzzy-Set Qualitative Comparative Analysis; GUI - Graphical User Interface; HI-DSS - Hybrid Intelligence Decision Support System; HIT - Human Intelligence Task; IoT - Internet of Things; IS - Information System; IT - Information Technology; MCC - Matthews Correlation Coefficient; ML - Machine Learning; OCT - Opportunity Creation Theory; OGEMA 2.0 - Open Gateway Energy Management 2.0; OS - Operating System; R&D - Research & Development; RE - Renewable Energies; RQ - Research Question; SVM - Support Vector Machine; SSD - Solid-State Drive; SDK - Software Development Kit; TCP/IP - Transmission Control Protocol/Internet Protocol; TCT - Transaction Cost Theory; UI - User Interface; VaR - Value at Risk; VC - Venture Capital; VPP - Virtual Power Plant


Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence

arXiv.org Artificial Intelligence

The societal and ethical implications of the use of opaque artificial intelligence systems for consequential decisions, such as welfare allocation and criminal justice, have generated a lively debate among multiple stakeholder groups, including computer scientists, ethicists, social scientists, policy makers, and end users. However, the lack of a common language or a multi-dimensional framework to appropriately bridge the technical, epistemic, and normative aspects of this debate prevents the discussion from being as productive as it could be. Drawing on the philosophical literature on the nature and value of explanations, this paper offers a multi-faceted framework that brings more conceptual precision to the present debate by (1) identifying the types of explanations that are most pertinent to artificial intelligence predictions, (2) recognizing the relevance and importance of social and ethical values for the evaluation of these explanations, and (3) demonstrating the importance of these explanations for incorporating a diversified approach to improving the design of truthful algorithmic ecosystems. The proposed philosophical framework thus lays the groundwork for establishing a pertinent connection between the technical and ethical aspects of artificial intelligence systems.


Explainable Artificial Intelligence Approaches: A Survey

arXiv.org Artificial Intelligence

The lack of explainability of decisions from Artificial Intelligence (AI) based "black box" systems/models, despite their superiority in many real-world applications, is a key stumbling block for adopting AI in high-stakes applications across different domains and industries. While many popular Explainable Artificial Intelligence (XAI) methods or approaches are available to facilitate a human-friendly explanation of the decision, each has its own merits and demerits, with a plethora of open challenges. We demonstrate popular XAI methods on a common case study/task (i.e., credit default prediction), analyze their competitive advantages from multiple perspectives (e.g., local, global), provide meaningful insight on quantifying explainability, and recommend paths towards responsible or human-centered AI using XAI as a medium. Practitioners can use this work as a catalog to understand, compare, and correlate the competitive advantages of popular XAI methods. In addition, this survey elicits future research directions towards responsible or human-centric AI systems, which is crucial for adopting AI in high-stakes applications.
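
To make the local-versus-global distinction concrete, the sketch below contrasts a global feature-importance view with a local LIME explanation for a single prediction on a toy credit-default classifier. The data, model, and feature names are placeholders, not the survey's actual case study.

```python
# Hedged sketch of a *local* explanation for a credit-default classifier using
# LIME, alongside a *global* view from model feature importances. Data, model,
# and feature names are placeholders, not the survey's case study.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = [f"f{i}" for i in range(10)]
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Global perspective: impurity-based feature importances over the whole model.
print("global importances:", np.round(model.feature_importances_, 3))

# Local perspective: why did the model score *this* applicant as it did?
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["no_default", "default"],
                                 mode="classification")
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print("local explanation:", exp.as_list())
```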


Explainable AI for Interpretable Credit Scoring

arXiv.org Artificial Intelligence

With the ever-growing achievements in Artificial Intelligence (AI) and the recent boost in enthusiasm for Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. Credit scoring helps financial experts make better decisions regarding whether or not to accept a loan application, such that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the 'right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. An interesting concept that has been recently introduced is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides different types of explanations. Evaluation through the use of functionally-grounded, application-grounded and human-grounded analysis shows that the explanations provided are simple and consistent, and satisfy the six predetermined hypotheses testing for correctness, effectiveness, easy understanding, detail sufficiency and trustworthiness. Credit scoring models are decision models that help lenders decide whether or not to accept a loan application based on the model's expectation of the applicant being capable or not of repaying the financial obligations [1]. Such models are beneficial since they reduce the time needed for the loan approval process, allow loan officers to concentrate on only a percentage of the applications, lead to cost savings, reduce human subjectivity and decrease default risk [2]. There has been a lot of research on this problem, with various Machine Learning (ML) and Artificial Intelligence (AI) techniques proposed. Such techniques might be exceptional in predictive power but are also known as black-box methods since they provide no explanations behind their decisions, making humans unable to interpret them [3]. Therefore, it is highly unlikely that any financial expert is ready to trust the predictions of a model without any sort of justification [4]. With regard to credit scoring, lenders need to understand the model's predictions to ensure that decisions are made for the correct reasons.
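
As a rough sense of the modelling side only, here is a minimal sketch of an XGBoost classifier configured for imbalanced credit data (class weighting via scale_pos_weight). The synthetic data and hyperparameters are placeholders and do not reproduce the paper's HELOC or Lending Club setup, nor its explanation framework.

```python
# Minimal sketch of an XGBoost credit-scoring setup on imbalanced data.
# Synthetic data and hyperparameters are placeholders, not the paper's
# HELOC / Lending Club configuration.
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced toy data: roughly 10% defaults.
X, y = make_classification(n_samples=20_000, n_features=20, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Upweight the minority (default) class to counter the imbalance.
pos_weight = (y_tr == 0).sum() / (y_tr == 1).sum()
clf = xgb.XGBClassifier(n_estimators=400, max_depth=5, learning_rate=0.05,
                        scale_pos_weight=pos_weight, eval_metric="auc")
clf.fit(X_tr, y_tr)

print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```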