
Desiderata for Explainable AI in statistical production systems of the European Central Bank (Artificial Intelligence)

Explainable AI constitutes a fundamental step towards establishing fairness and addressing bias in algorithmic decision-making. Despite the large body of work on the topic, the benefits of proposed solutions are mostly evaluated from a conceptual or theoretical point of view, and their usefulness for real-world use cases remains uncertain. In this work, we aim to state clear user-centric desiderata for explainable AI, reflecting common explainability needs experienced in statistical production systems of the European Central Bank. We link the desiderata to archetypical user roles and give examples of techniques and methods that can be used to address the users' needs. To this end, we provide two concrete use cases from the domain of statistical data production in central banks: the detection of outliers in the Centralised Securities Database and the data-driven identification of data quality checks for the Supervisory Banking data system.
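The first use case, outlier detection, can be illustrated with a minimal sketch. The rule below (a modified z-score based on the median absolute deviation) is a generic robust outlier test, not the ECB's actual production method; the function name and sample prices are purely illustrative.

```python
# Illustrative sketch: flag values whose modified z-score, based on the
# median and the median absolute deviation (MAD), exceeds a threshold.
# This is a generic robust rule, not the ECB's production check.
from statistics import median

def flag_outliers(values, threshold=3.5):
    """Return indices of values with modified z-score above `threshold`."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # all values (nearly) identical: nothing to flag
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Hypothetical reported prices for one security across sources:
prices = [100.2, 99.8, 100.5, 101.1, 100.0, 250.0, 99.9, 100.3]
print(flag_outliers(prices))  # the 250.0 entry at index 5 is flagged
```

The MAD-based score is preferred here over a plain z-score because a single extreme value inflates the mean and standard deviation, masking the very outlier one is trying to detect.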

Neuro-evolutionary Frameworks for Generalized Learning Agents (Artificial Intelligence)

The recent successes of deep learning and deep reinforcement learning have firmly established them as state-of-the-art artificial learning techniques. However, longstanding drawbacks of these approaches, such as their poor sample efficiency and limited generalization capability, point to a need for re-thinking the way such systems are designed and deployed. In this paper, we emphasize how the use of these learning systems, in conjunction with a specific variation of evolutionary algorithms, could lead to the emergence of unique characteristics such as the automated acquisition of a variety of desirable behaviors and useful sets of behavior priors. This could pave the way for learning to occur in a generalized and continual manner, with minimal interactions with the environment. We discuss the anticipated improvements from such neuro-evolutionary frameworks, along with the associated challenges, as well as their potential for application to a number of research areas.
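The evolutionary component can be sketched as a minimal (1 + λ) evolution strategy over a flat parameter vector standing in for network weights. This is a generic illustration of the idea, under simplifying assumptions; the toy fitness function and all names are invented, not the paper's actual framework.

```python
# Minimal (1 + lambda) evolution strategy: a parent parameter vector
# (a stand-in for neural-network weights) is mutated with Gaussian
# noise, and the fittest candidate survives to the next generation.
import random

def evolve(fitness, dim, pop_size=20, generations=200, sigma=0.1):
    """Hill-climb a flat weight vector by mutation and elitist selection."""
    random.seed(0)
    parent = [random.gauss(0, 1) for _ in range(dim)]
    for _ in range(generations):
        offspring = [[w + random.gauss(0, sigma) for w in parent]
                     for _ in range(pop_size)]
        # Elitism: keep the parent if no offspring improves on it.
        parent = max(offspring + [parent], key=fitness)
    return parent

# Toy task: recover a target vector by maximizing negative squared error.
target = [0.5, -1.0, 2.0]
fit = lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target))
solution = evolve(fit, dim=3)
```

Unlike gradient descent, this loop never queries a gradient, only fitness values, which is what lets neuro-evolutionary methods optimize behaviors for which no differentiable objective exists.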

Towards Global Explanations for Credit Risk Scoring (Machine Learning)

In this paper we propose a method to obtain global explanations for trained black-box classifiers by sampling their decision function to learn alternative interpretable models. The envisaged approach provides a unified solution to approximate non-linear decision boundaries with simpler classifiers while retaining the original classification accuracy. We use a private residential mortgage default dataset as a use case to illustrate the feasibility of this approach, ensuring the decomposability of attributes during pre-processing.
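The core idea can be sketched in a few lines: query a black-box decision function on sampled inputs, then fit a simple interpretable model to reproduce its labels globally. The sketch below uses a one-feature decision stump as the interpretable surrogate and an invented scoring rule as the black box; both are illustrative stand-ins, not the paper's actual models or data.

```python
# Global-surrogate sketch: sample a black-box classifier's decision
# function, then fit an interpretable rule that mimics it globally.
import random

def black_box(x):
    """Stand-in for a trained black-box scorer (illustrative only):
    'default' when a weighted, slightly nonlinear score is high."""
    return 1 if x[0] + 0.3 * x[1] ** 2 > 0.7 else 0

def fit_stump(samples, labels, grid=21):
    """Pick the (feature, threshold) pair on a coarse grid that best
    reproduces the black-box labels. Returns (agreement, feature, t)."""
    best = None
    for f in range(len(samples[0])):
        for i in range(grid):
            t = i / (grid - 1)
            acc = sum((1 if x[f] > t else 0) == y
                      for x, y in zip(samples, labels)) / len(samples)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    return best

random.seed(0)
X = [[random.random(), random.random()] for _ in range(500)]
y = [black_box(x) for x in X]           # sample the decision function
acc, feature, threshold = fit_stump(X, y)
```

The fidelity score `acc` measures how faithfully the simple rule reproduces the black box; a practitioner would report it alongside the surrogate, since a low-fidelity explanation explains nothing.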

Advances and Open Problems in Federated Learning (Machine Learning)

Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
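The orchestration pattern described above can be sketched with a toy federated-averaging round on a one-parameter model: the server broadcasts the global model, each client trains locally on data that never leaves it, and the server averages the returned models weighted by client dataset size. The client data and function names below are invented for illustration; this is a sketch of the FedAvg idea, not a production FL system.

```python
# Toy FedAvg-style round for a one-parameter model y = w * x.
def local_update(w, data, lr=0.1, epochs=5):
    """One client's local training: SGD on squared error, on-device."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def federated_round(w_global, client_datasets):
    """Broadcast w_global, train locally on each client, then return
    the data-size-weighted average of the client models."""
    n_total = sum(len(d) for d in client_datasets)
    updates = [(local_update(w_global, d), len(d)) for d in client_datasets]
    return sum(w * n for w, n in updates) / n_total

# Two clients hold disjoint samples from the same line y = 3x;
# only model parameters, never raw (x, y) pairs, reach the server.
clients = [[(1.0, 3.0), (2.0, 6.0)],
           [(0.5, 1.5), (1.5, 4.5), (2.5, 7.5)]]
w = 0.0
for _ in range(10):
    w = federated_round(w, clients)
```

Note that the server only ever sees the scalar model updates, which is the "keeping the training data decentralized" property the abstract refers to (though, as the survey's open problems discuss, updates alone can still leak information without further protections).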

Secure and Robust Machine Learning for Healthcare: A Survey (Machine Learning)

Recent years have witnessed widespread adoption of machine learning (ML)/deep learning (DL) techniques due to their superior performance in a variety of healthcare applications, ranging from the prediction of cardiac arrest from one-dimensional heart signals to computer-aided diagnosis (CADx) using multi-dimensional medical images. Notwithstanding the impressive performance of ML/DL, there are still lingering doubts regarding the robustness of ML/DL in healthcare settings (which are traditionally considered quite challenging due to the myriad security and privacy issues involved), especially in light of recent results showing that ML/DL models are vulnerable to adversarial attacks. In this paper, we present an overview of various application areas in healthcare that leverage such techniques from a security and privacy point of view, and present the associated challenges. In addition, we present potential methods to ensure secure and privacy-preserving ML for healthcare applications. Finally, we provide insight into the current research challenges and promising directions for future research.
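The adversarial vulnerability the abstract alludes to can be illustrated with the fast gradient sign method (FGSM), one of the standard attacks such surveys discuss, applied to a toy logistic-regression classifier. The weights and input below are invented for illustration and are not drawn from any real diagnostic model.

```python
# FGSM sketch on a toy logistic-regression "classifier": perturb the
# input by eps in the sign of the input-gradient of the loss, flipping
# a confident prediction. Illustrative weights and data only.
import math

def predict(w, x):
    """Probability of the positive class under logistic regression."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-z))

def fgsm(w, x, y, eps):
    """For logistic loss, d(loss)/dx_i = (p - y) * w_i; step each
    input feature by eps in the sign of that gradient."""
    p = predict(w, x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

w = [2.0, -1.5, 0.5]       # an invented "trained" classifier
x = [0.4, -0.2, 0.6]       # input the model classifies as positive
clean = predict(w, x)      # confident positive prediction
adv = fgsm(w, x, y=1, eps=0.5)
attacked = predict(w, adv) # the same model now predicts negative
```

The point of the sketch is that the perturbation is small and structured, not random noise: it follows the model's own gradient, which is why defenses have to reason about worst-case rather than average-case inputs.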