Explainable Artificial Intelligence: a Systematic Review

arXiv.org Artificial Intelligence

This has led to the development of a plethora of domain-dependent and context-specific methods for dealing with the interpretation of machine learning (ML) models and the formation of explanations for humans. Unfortunately, this trend is far from over, and the abundance of knowledge in the field remains scattered and in need of organisation. The goal of this article is to systematically review research works in the field of XAI and to attempt to define some boundaries in the field. From several hundred research articles focused on the concept of explainability, about 350 were considered for review using the following search methodology. In a first phase, Google Scholar was queried to find papers related to "explainable artificial intelligence", "explainable machine learning" and "interpretable machine learning". Subsequently, the bibliographic sections of these articles were thoroughly examined to retrieve further relevant scientific studies. The first noticeable thing, as shown in figure 2 (a), is the distribution of the publication dates of the selected research articles: sporadic in the 70s and 80s, receiving preliminary attention in the 90s, showing rising interest in the 2000s and becoming a recognised body of knowledge after 2010. The earliest research concerned the development of an explanation-based system and its integration into a computer program designed to help doctors make diagnoses [3]. Some of the more recent papers focus on the clustering of methods for explainability, motivating the need for organising the XAI literature [4, 5, 6].


Computers and Thought

Classics

E.A. Feigenbaum and J. Feldman (Eds.). Computers and Thought. McGraw-Hill, 1963. This collection includes twenty classic papers by such pioneers as A. M. Turing and Marvin Minsky, who were behind the pivotal advances in artificially simulating human thought processes with computers. All parts are available as downloadable PDF files; most individual chapters are also available separately. The contents are:

COMPUTING MACHINERY AND INTELLIGENCE. A. M. Turing.
CHESS-PLAYING PROGRAMS AND THE PROBLEM OF COMPLEXITY. Allen Newell, J.C. Shaw and H.A. Simon.
SOME STUDIES IN MACHINE LEARNING USING THE GAME OF CHECKERS. A. L. Samuel.
EMPIRICAL EXPLORATIONS WITH THE LOGIC THEORY MACHINE: A CASE STUDY IN HEURISTICS. Allen Newell, J.C. Shaw and H.A. Simon.
REALIZATION OF A GEOMETRY-THEOREM PROVING MACHINE. H. Gelernter.
EMPIRICAL EXPLORATIONS OF THE GEOMETRY-THEOREM PROVING MACHINE. H. Gelernter, J.R. Hansen, and D. W. Loveland.
SUMMARY OF A HEURISTIC LINE BALANCING PROCEDURE. Fred M. Tonge.
A HEURISTIC PROGRAM THAT SOLVES SYMBOLIC INTEGRATION PROBLEMS IN FRESHMAN CALCULUS. James R. Slagle.
BASEBALL: AN AUTOMATIC QUESTION ANSWERER. Bert F. Green Jr., Alice K. Wolf, Carol Chomsky, and Kenneth Laughery.
INFERENTIAL MEMORY AS THE BASIS OF MACHINES WHICH UNDERSTAND NATURAL LANGUAGE. Robert K. Lindsay.
PATTERN RECOGNITION BY MACHINE. Oliver G. Selfridge and Ulric Neisser.
A PATTERN-RECOGNITION PROGRAM THAT GENERATES, EVALUATES, AND ADJUSTS ITS OWN OPERATORS. Leonard Uhr and Charles Vossler.
GPS, A PROGRAM THAT SIMULATES HUMAN THOUGHT. Allen Newell and H.A. Simon.
THE SIMULATION OF VERBAL LEARNING BEHAVIOR. Edward A. Feigenbaum.
PROGRAMMING A MODEL OF HUMAN CONCEPT FORMULATION. Earl B. Hunt and Carl I. Hovland.
SIMULATION OF BEHAVIOR IN THE BINARY CHOICE EXPERIMENT. Julian Feldman.
A MODEL OF THE TRUST INVESTMENT PROCESS. Geoffrey P. E. Clarkson.
A COMPUTER MODEL OF ELEMENTARY SOCIAL BEHAVIOR. John T. Gullahorn and Jeanne E. Gullahorn.
TOWARD INTELLIGENT MACHINES. Paul Armer.
STEPS TOWARD ARTIFICIAL INTELLIGENCE. Marvin Minsky.
A SELECTED DESCRIPTOR-INDEXED BIBLIOGRAPHY TO THE LITERATURE ON ARTIFICIAL INTELLIGENCE. Marvin Minsky.


AI Research Considerations for Human Existential Safety (ARCHES)

arXiv.org Artificial Intelligence

Framed in positive terms, this report examines how technical AI research might be steered in a manner that is more attentive to humanity's long-term prospects for survival as a species. In negative terms, we ask what existential risks humanity might face from AI development in the next century, and by what principles contemporary technical research might be directed to address those risks. A key property of hypothetical AI technologies is introduced, called prepotence, which is useful for delineating a variety of potential existential risks from artificial intelligence, even as AI paradigms might shift. A set of contemporary research directions is then examined for their potential benefit to existential safety. Each research direction is explained with a scenario-driven motivation and examples of existing work from which to build. The research directions present their own risks and benefits to society that could occur at various scales of impact, and in particular are not guaranteed to benefit existential safety if major developments in them are deployed without adequate forethought and oversight. As such, each direction is accompanied by a consideration of potentially negative side effects.


Artificial Intelligence for Social Good: A Survey

arXiv.org Artificial Intelligence

Its impact is drastic and real: YouTube's AI-driven recommendation system will present sports videos for days if one happens to watch a live baseball game on the platform [1]; email writing becomes much faster with machine learning (ML) based auto-completion [2]; and many businesses have adopted natural language processing based chatbots as part of their customer services [3]. AI has also greatly advanced human capabilities in complex decision-making processes, ranging from determining how to allocate security resources to protect airports [4] to games such as poker [5] and Go [6]. All such tangible and stunning progress suggests that an "AI summer" is happening. As some put it, "AI is the new electricity" [7]. Meanwhile, in the past decade, an emerging theme in the AI research community is the so-called "AI for social good" (AI4SG): researchers aim at developing AI methods and tools to address problems at the societal level and improve the wellbeing of society.