Understanding with toy surrogate models in machine learning
Páez, Andrés
Unlike regular models, very simple models, often referred to as toy models, are not required to be linked to the real world through structural similarity or resemblance relations. They are not meant to be approximations of the target system, and in some cases they are not even required to be representational. In semantic terms, they do not accurately map onto their targets. Despite these limitations, they are still useful for understanding theoretical concepts and possible configurations of the target system. Paradigmatic examples of toy models include Boyle's law and the Ising model in physics, the Lotka-Volterra model in population ecology, and the Schelling model in the social sciences (Weisberg, 2013). In recent years, philosophers of science have become interested in toy models (Grüne-Yanoff, 2009; Luczak, 2017; Reutlinger et al., 2018; Frigg & Nguyen, 2017; Nguyen, 2020). The main purpose of this literature is to explore the nature of these models and examine how they perform their epistemic function. Despite lacking the regular descriptive and predictive features of full-scale scientific models, toy models often offer an elementary understanding of a phenomenon. These authors' definitions of "toy model" differ, as does their assessment of the importance of representation in modelling generally, but they all agree that toy models play an important epistemic role in scientific research, exploration, and pedagogy.
Axe the X in XAI: A Plea for Understandable AI
Páez, Andrés
In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term "explanation" in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors' claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also provide a more general argument as to why the notion of explainability as it is currently used in the XAI literature bears little resemblance to the traditional concept of scientific explanation. It would be more fruitful to use the label "understandable AI" to avoid the confusion that surrounds the goals and purposes of XAI. In the second half of the chapter, I argue for a pragmatic conception of understanding that is better suited to play the central role attributed to explanation in XAI. Following Kuorikoski & Ylikoski (2015), the conditions of satisfaction for understanding an ML system are fleshed out in terms of an agent's success in using the system and in drawing correct inferences from it.
Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions
Longo, Luca, Brcic, Mario, Cabitza, Federico, Choi, Jaesik, Confalonieri, Roberto, Del Ser, Javier, Guidotti, Riccardo, Hayashi, Yoichi, Herrera, Francisco, Holzinger, Andreas, Jiang, Richard, Khosravi, Hassan, Lecue, Freddy, Malgieri, Gianclaudio, Páez, Andrés, Samek, Wojciech, Schneider, Johannes, Speith, Timo, Stumpf, Simone
As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse fields to identify open problems, striving to synchronize research agendas and accelerate XAI in practical applications. By fostering collaborative discussion and interdisciplinary cooperation, we aim to propel XAI forward, contributing to its continued success. Our goal is to put forward a comprehensive proposal for advancing XAI. To achieve this goal, we present a manifesto of 27 open problems organized into nine categories. These challenges encapsulate the complexities and nuances of XAI and offer a road map for future research. For each problem, we provide promising research directions in the hope of harnessing the collective intelligence of interested stakeholders.
Moore's Paradox and the logic of belief
Páez, Andrés
Moore's Paradox is a test case for any formal theory of belief. In Knowledge and Belief, Hintikka developed a multimodal logic for sentences containing the epistemic notions of knowledge and belief. His account purports to offer an explanation of the paradox. In this paper I argue that Hintikka's interpretation of one of the doxastic operators is philosophically problematic and leads to an unnecessarily strong logical system. I offer a weaker alternative that more accurately captures our logical intuitions about the notion of belief without sacrificing the possibility of providing an explanation for problematic cases such as Moore's Paradox.
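For context, a standard doxastic-logic reconstruction of the paradox runs as follows; this is a minimal sketch assuming a KD4-style belief logic with distribution, consistency, and positive introspection, not necessarily the exact axioms Hintikka or the paper adopts. A Moore sentence has the form $p \wedge \neg Bp$ ("p, but I do not believe that p"). Suppose an agent believed it:

$B(p \wedge \neg Bp)$
$\vdash Bp \wedge B\neg Bp$ \quad (distribution of $B$ over conjunction)
$\vdash BBp \wedge B\neg Bp$ \quad (positive introspection: $Bp \to BBp$)
$\vdash \neg B\neg Bp \wedge B\neg Bp$ \quad (consistency: $B\varphi \to \neg B\neg\varphi$)

The result is a contradiction, so although a Moore sentence can be true, it cannot be coherently believed; any weakening of the doxastic axioms must preserve an explanation of this kind.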