Scientific Explanation


Emerging categories in scientific explanations

Magnifico, Giacomo, Barbu, Eduard

arXiv.org Artificial Intelligence

Clear and effective explanations are essential for human understanding and knowledge dissemination. The scope of scientific research aiming to understand the essence of explanations has recently expanded from the social sciences to include the fields of machine learning and artificial intelligence. Important contributions from the social sciences [18, 17, 22, 13, 5, 11] examine critical aspects such as causality (cause-and-effect relationships), contrast (distinctions between differing scenarios), relevance (applicability of explanations), and truth (accuracy and verifiability of explanations). Machine learning and natural language processing, by contrast, focus more on operational definitions and on the construction of datasets, as seen in studies by [21, 23, 6]. Since explanations for machine learning decisions must be both impactful and human-like [10, 3, 20, 12, 4], a major challenge lies in developing explanations that emphasize proximal aspects -- details that are immediately relevant, direct, and related to the user -- over broad algorithmic processes [21].
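As a toy illustration only (not the paper's annotation scheme), the four aspects listed above can be read as a simple annotation structure for explanation texts; the dataclass and example below are assumptions made for concreteness.

```python
# Hypothetical annotation record for the four explanation aspects named in the
# abstract: causality, contrast, relevance, and truth. Illustrative only.
from dataclasses import dataclass

@dataclass
class ExplanationAnnotation:
    text: str
    causal: bool       # states a cause-and-effect relationship
    contrastive: bool  # distinguishes the fact from an alternative scenario
    relevant: bool     # applicable to the question actually asked
    truthful: bool     # accurate and verifiable

example = ExplanationAnnotation(
    text="The bridge failed because resonance amplified its oscillations.",
    causal=True, contrastive=False, relevant=True, truthful=True,
)
print(example)
```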


Fine-tuning ChatGPT for Automatic Scoring of Written Scientific Explanations in Chinese

Yang, Jie, Latif, Ehsan, He, Yuze, Zhai, Xiaoming

arXiv.org Artificial Intelligence

The development of explanations for scientific phenomena is essential in science assessment, but scoring student-written explanations remains challenging and resource-intensive. Large language models (LLMs) have shown promise in addressing this issue, particularly in alphabetic languages like English. However, their applicability to logographic languages is less explored. This study investigates the potential of fine-tuning ChatGPT, a leading LLM, to automatically score scientific explanations written in Chinese. Student responses to seven scientific explanation tasks were collected and automatically scored, with scoring accuracy examined in relation to reasoning complexity using the Kendall correlation. A qualitative analysis explored how linguistic features influenced scoring accuracy. The results show that domain-specific adaptation enables ChatGPT to score Chinese scientific explanations accurately. However, scoring accuracy correlates with reasoning complexity: negatively for lower-level responses and positively for higher-level ones. The model overrates complex reasoning in low-level responses with intricate sentence structures and underrates high-level responses that use concise causal reasoning. These correlations stem from linguistic features: simplicity and clarity enhance accuracy for lower-level responses, while comprehensiveness improves accuracy for higher-level ones. Simpler, shorter responses tend to be scored more accurately at lower levels, whereas longer, information-rich responses are scored more accurately at higher levels. These findings demonstrate the effectiveness of LLMs in automatic scoring within a Chinese context and emphasize the importance of linguistic features and reasoning complexity in fine-tuning scoring models for educational assessments.
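As a rough sketch of the analysis described above (not the study's code or data), per-response scoring accuracy can be correlated with reasoning complexity using Kendall's tau; all names and toy values below are illustrative assumptions.

```python
# Correlate per-response scoring accuracy with reasoning complexity via
# Kendall's tau. Scores and complexity levels are made-up toy data.
from scipy.stats import kendalltau

# Hypothetical rubric scores (0-4) from human raters and the fine-tuned model,
# plus a reasoning-complexity level (1-4) for each student response.
human_scores = [0, 1, 1, 2, 3, 3, 4, 2]
model_scores = [0, 1, 2, 2, 3, 4, 4, 2]
complexity   = [1, 1, 2, 2, 3, 3, 4, 2]

# One simple accuracy proxy: negative absolute disagreement per response.
accuracy = [-abs(h - m) for h, m in zip(human_scores, model_scores)]

tau, p_value = kendalltau(complexity, accuracy)
print(f"Kendall tau (complexity vs. accuracy) = {tau:.3f}, p = {p_value:.3f}")
```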


Understanding XAI Through the Philosopher's Lens: A Historical Perspective

Mattioli, Martina, Cinà, Antonio Emanuele, Pelillo, Marcello

arXiv.org Artificial Intelligence

Although explainable AI (XAI) has recently become a hot topic and several different approaches have been developed, there is still a widespread belief that the field lacks a convincing unifying foundation. Over the past centuries, on the other hand, the very concept of explanation has been the subject of extensive philosophical analysis in an attempt to address the fundamental question of "why" in the context of scientific laws. However, this discussion has rarely been connected with XAI. This paper tries to fill this gap by exploring the concept of explanation in AI through an epistemological lens. By comparing the historical development of the philosophy of science with that of AI, an intriguing picture emerges. Specifically, we show that a gradual progression has independently occurred in both domains from logical-deductive to statistical models of explanation, with both fields experiencing a paradigm shift from deterministic to nondeterministic and probabilistic causality. Interestingly, we also notice that similar concepts have independently emerged in both realms, such as the relation between explanation and understanding and the importance of pragmatic factors. Our study aims to be a first step towards understanding the philosophical underpinnings of the notion of explanation in AI, and we hope that our findings will shed some fresh light on the elusive nature of XAI.


Axe the X in XAI: A Plea for Understandable AI

Páez, Andrés

arXiv.org Artificial Intelligence

In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term "explanation" in explainable AI (XAI) can be resolved by adopting any of four extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors' claim that these accounts can be applied to deep neural networks as they would be to any natural phenomenon is mistaken. I also provide a more general argument as to why the notion of explainability as it is currently used in the XAI literature bears little resemblance to the traditional concept of scientific explanation. It would be more fruitful to use the label "understandable AI" to avoid the confusion that surrounds the goals and purposes of XAI. In the second half of the chapter, I argue for a pragmatic conception of understanding that is better suited to play the central role attributed to explanation in XAI. Following Kuorikoski & Ylikoski (2015), the conditions of satisfaction for understanding an ML system are fleshed out in terms of an agent's success in using the system and in drawing correct inferences from it.


Mapping Knowledge Representations to Concepts: A Review and New Perspectives

Holmberg, Lars, Davidsson, Paul, Linde, Per

arXiv.org Artificial Intelligence

The success of neural networks rests to a large extent on their ability to create internal knowledge representations from real-world high-dimensional data, such as images, sound, or text. Extracting and presenting these representations, in order to explain a neural network's decisions, is an active and multifaceted research field. To gain a deeper understanding of a central aspect of this field, we have performed a targeted review focusing on research that aims to associate internal representations with human-understandable concepts. In doing so, we add a perspective on the existing research by using primarily deductive-nomological explanations as a proposed taxonomy. We find this taxonomy, together with theories of causality, useful for understanding what can and cannot be expected from neural network explanations. The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal of model explainability: is it understanding the ML model itself, or is it producing actionable explanations useful in the deployment domain?
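One common way to associate internal representations with human-understandable concepts is a linear probe. The sketch below is a minimal illustration under assumed synthetic data, not a method from the reviewed papers: it tests whether a hypothetical concept is linearly decodable from stand-in hidden activations.

```python
# Minimal linear-probe sketch: can a human-understandable concept be decoded
# from a network's internal representations? All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for hidden-layer activations: 500 inputs x 64 hidden units.
activations = rng.normal(size=(500, 64))

# Hypothetical binary concept labels (e.g., "image contains stripes"),
# noisily encoded along one direction of the representation space.
concept = (activations @ rng.normal(size=64) + rng.normal(scale=2.0, size=500)) > 0

X_train, X_test, y_train, y_test = train_test_split(
    activations, concept, test_size=0.25, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# High held-out accuracy suggests the concept is (linearly) represented here.
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")
```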


Giant's Causeway was formed in a matter of DAYS - and not over thousands of years, study claims

Daily Mail - Science & tech

Every year, millions of tourists flock to Northern Ireland to visit Giant's Causeway - an unusual formation of around 40,000 hexagonal stone columns descending gently into the sea. Theories on the stones' formation range from their being built by the mythical giant Finn McCool to more scientific explanations. Now, Dr Mike Simms, curator of natural sciences at National Museums NI, has put forward the first new theory since 1940. Dr Simms considered why the extraordinary geological features are found only at sea level. To mark Unesco's International Geodiversity Day today, he has explained why he believes they were formed by an event that took just days - and not thousands of years, as previously thought.


Why are we afraid of sharks? There's a scientific explanation.

National Geographic

Sharks, especially great whites, were catapulted into the public eye with the release of the film Jaws in the summer of 1975. The film is the story of a massive great white that terrorizes a seaside community, and the image of the cover alone--the exposed jaws of a massive shark rising upward in murky water--is enough to inject fear into the hearts of would-be swimmers. Other thrillers have perpetuated the theme of sharks as villains. But where did our fear of sharks come from, and how far back does it go?


A multi-component framework for the analysis and design of explainable artificial intelligence

Atakishiyev, S., Babiker, H., Farruque, N., Goebel, R., Kim, M-Y., Motallebi, M. H., Rabelo, J., Syed, T., Zaïane, O. R.

arXiv.org Artificial Intelligence

The rapid growth of research in explainable artificial intelligence (XAI) follows from two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, has created high expectations for industrial, commercial and social value. Second, concern has emerged for creating trusted AI systems, including the creation of regulatory principles to ensure the transparency and trustworthiness of AI systems. These two threads have created a kind of "perfect storm" of research activity, all eager to create and deliver any set of tools and techniques that address the demand for XAI. As some surveys of current XAI research suggest, a principled framework that respects the literature on explainability in the history of science, and which provides a basis for the development of transparent XAI, has yet to appear. Here we intend to provide a strategic inventory of XAI requirements, demonstrate their connection to the history of XAI ideas, and synthesize those ideas into a simple framework that calibrates five successive levels of XAI.


A general framework for scientifically inspired explanations in AI

Tuckey, David, Russo, Alessandra, Broda, Krysia

arXiv.org Artificial Intelligence

Explainability in AI is gaining attention in the computer science community in response to the increasing success of deep learning and the pressing need to justify how such systems make predictions in life-critical applications. The focus of explainability in AI has predominantly been on trying to gain insights into how machine learning systems function by exploring relationships between input data and predicted outcomes or by extracting simpler interpretable models. Through literature surveys of philosophy and social science, authors have highlighted the sharp difference between these generated explanations and human-made explanations, and have argued that current explanations in AI do not account for the complexity of human interaction and thus fail to pass information effectively to non-expert users. In this paper we instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented. This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations. We illustrate how this framework can be used through two very different examples: an artificial neural network and a Prolog solver, and we provide a possible implementation for both.
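As a toy illustration of explanation "on demand" (not the authors' framework or implementation), the sketch below wraps a simple linear scorer so a user can ask follow-up questions about individual features instead of receiving one monolithic explanation; all names and values are assumptions.

```python
# Toy "information on demand" wrapper: the user queries the contribution of
# one feature at a time. The linear model and features are illustrative only.
import numpy as np

class ExplainableWrapper:
    """Wraps a linear scorer and answers per-feature 'why' queries."""

    def __init__(self, weights, feature_names):
        self.weights = np.asarray(weights, dtype=float)
        self.feature_names = list(feature_names)

    def predict(self, x):
        return float(self.weights @ np.asarray(x, dtype=float))

    def why(self, x, feature):
        """Answer a user's follow-up query about one feature's contribution."""
        i = self.feature_names.index(feature)
        contribution = self.weights[i] * x[i]
        return f"'{feature}' contributed {contribution:+.2f} to the score"

model = ExplainableWrapper([0.8, -1.5, 0.3], ["age", "dosage", "weight"])
x = [0.5, 1.2, 2.0]
print(f"score: {model.predict(x):.2f}")  # overall decision first
print(model.why(x, "dosage"))            # then detail, only when asked
```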


The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

Páez, Andrés

arXiv.org Artificial Intelligence

In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post-hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post-hoc interpretability that seems to be predominant in most recent literature.
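The "interpretative or approximation models" the paper favours can be made concrete with a global surrogate: fit a small, inspectable model to mimic a black box's predictions. The sketch below is an illustration under assumed synthetic data and model choices, not anything from the paper itself.

```python
# Global-surrogate sketch: approximate an opaque model with a shallow tree.
# Note the surrogate is trained on the black box's outputs, not ground truth:
# it approximates the model, not the world. Data and models are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The opaque model whose decisions we want to understand.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The interpretable approximation, kept deliberately shallow.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity to black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```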