Kleyko, Denis, Rachkovskij, Dmitri A., Osipov, Evgeny, Rahimi, Abbas
This is Part II of the two-part comprehensive survey devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and vector distributed representations. Holographic Reduced Representations is an influential HDC/VSA model that is well known in the machine learning domain, and its name is often used to refer to the whole family; for the sake of consistency, however, we use HDC/VSA to refer to the area. Part I of this survey covered foundational aspects of the area, such as the historical context leading to the development of HDC/VSA, key elements of any HDC/VSA model, known HDC/VSA models, and the transformation of input data of various types into high-dimensional vectors suitable for HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in cognitive computing and architectures, as well as directions for future work. Most of the applications lie within the machine learning/artificial intelligence domain; however, we also cover other applications to provide a thorough picture. The survey is written to be useful for both newcomers and practitioners.
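To make the flavor of these key operations concrete, here is a minimal sketch, not taken from the survey itself, of binding and bundling with random bipolar hypervectors in the style of the Multiply-Add-Permute model; the dimensionality, the toy record being encoded, and all names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # HDC/VSA models typically use dimensionalities in the thousands

def random_hv():
    # A random bipolar hypervector in {-1, +1}^D.
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    # Binding via elementwise multiplication (as in the MAP model).
    return a * b

def bundle(*hvs):
    # Bundling via elementwise addition, thresholded back to bipolar values.
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    # Normalized dot product; near 0 for unrelated random hypervectors.
    return float(a @ b) / D

# Encode the toy record {shape: circle, color: red} as one hypervector.
shape, color, circle, red = (random_hv() for _ in range(4))
record = bundle(bind(shape, circle), bind(color, red))

# Bipolar binding is its own inverse, so binding the record with a role
# vector recovers a noisy version of the associated filler.
probe = bind(record, shape)
print(similarity(probe, circle))  # clearly positive: circle is recovered
print(similarity(probe, red))     # close to 0: unrelated filler

The ability to recover fillers from such composed representations, up to noise, is the kind of algebraic property of the key operations that the surveyed applications rely on.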
Critch, Andrew, Krueger, David
Framed in positive terms, this report examines how technical AI research might be steered in a manner that is more attentive to humanity's long-term prospects for survival as a species. In negative terms, we ask what existential risks humanity might face from AI development in the next century, and by what principles contemporary technical research might be directed to address those risks. A key property of hypothetical AI technologies is introduced, called "prepotence", which is useful for delineating a variety of potential existential risks from artificial intelligence, even as AI paradigms might shift. A number of contemporary research directions are then examined for their potential benefit to existential safety. Each research direction is explained with a scenario-driven motivation and examples of existing work from which to build. The research directions present their own risks and benefits to society that could occur at various scales of impact, and in particular are not guaranteed to benefit existential safety if major developments in them are deployed without adequate forethought and oversight. As such, each direction is accompanied by a consideration of potentially negative side effects.
Bushfires pose a significant threat to Australia's regional areas. To minimise risk and increase resilience, communities need robust evacuation strategies that account for people's likely behaviour both before and during a bushfire. Agent-based modelling (ABM) offers a practical way to simulate a range of bushfire evacuation scenarios. However, the ABM should reflect the diversity of possible human responses in a given community. The Belief-Desire-Intention (BDI) cognitive model captures behaviour in a compact representation that is understandable by domain experts. Within a BDI-ABM simulation, individual BDI agents can be assigned profiles that determine their likely behaviour. Over a population of agents their collective behaviour will characterise the community response. These profiles are drawn from existing human behaviour research and consultation with emergency services personnel and capture the expected behaviours of identified groups in the population, both prior to and during an evacuation. A realistic representation of each community can then be formed, and evacuation scenarios within the simulation can be used to explore the possible impact of population structure on outcomes. It is hoped that this will give an improved understanding of the risks associated with evacuation, and lead to tailored evacuation plans for each community to help them prepare for and respond to bushfire.
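As a rough illustration of how such behaviour profiles could drive BDI agents in a simulation, here is a minimal sketch; it is not taken from the paper, and the profile names, thresholds, and the perceive/deliberate/act loop are hypothetical simplifications.

import random
from dataclasses import dataclass

@dataclass
class Profile:
    # Hypothetical behaviour profile drawn from population research.
    name: str
    threat_threshold: float  # perceived threat level that triggers action
    prepares_property: bool  # defend-then-leave vs. leave immediately

PROFILES = [
    Profile("leave_early", threat_threshold=0.2, prepares_property=False),
    Profile("wait_and_see", threat_threshold=0.6, prepares_property=False),
    Profile("stay_and_defend", threat_threshold=0.9, prepares_property=True),
]

class EvacuationAgent:
    # Minimal BDI-style loop: update beliefs, deliberate, act on intention.

    def __init__(self, profile: Profile):
        self.profile = profile
        self.beliefs = {"threat": 0.0}  # updated from the fire simulation
        self.intention = "monitor"

    def perceive(self, threat_level: float):
        self.beliefs["threat"] = threat_level

    def deliberate(self):
        # Desire: stay safe. The adopted intention depends on the profile.
        if self.beliefs["threat"] >= self.profile.threat_threshold:
            self.intention = ("defend_property"
                              if self.profile.prepares_property
                              else "evacuate")

    def act(self) -> str:
        return self.intention

# A community is a population of agents whose profiles are sampled to
# match the demographic mix identified for that community.
community = [EvacuationAgent(random.choice(PROFILES)) for _ in range(100)]
for agent in community:
    agent.perceive(threat_level=0.7)
    agent.deliberate()
print(sum(a.act() == "evacuate" for a in community), "agents evacuating")

Varying the profile mix across runs is one way such a simulation could expose how population structure affects evacuation outcomes.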
Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.
A survey of early work exploring how AI can be used in medicine, with somewhat more technical expositions than in the complementary volume Artificial Intelligence in Medicine. "Each chapter is preceded by a brief introduction that outlines our view of its contribution to the field, the reason it was selected for inclusion in this volume, an overview of its content, and a discussion of how the work evolved after the article appeared and how it relates to other chapters in the book."
JANICE S. AIKINS Dr. Aikins received her Ph.D. in computer science from Stanford University in 1980. She is currently a research computer scientist at IBM's Palo Alto Scientific Center. She specializes in designing systems with an emphasis on the explicit representation of control knowledge in expert systems. ROBERT L. BLUM Dr. Blum received his M.D. from the University of California Medical School at San Francisco in 1973. From 1973 to 1976 he did an internship and residency in the Department of Internal Medicine at the Kaiser Foundation Hospital in Oakland, California, where he was chief resident in 1976.
Mueller, Shane T., Hoffman, Robert R., Clancey, William, Emrey, Abigail, Klein, Gary
This is an integrative review that addresses the question, "What makes for a good explanation?" with reference to AI systems. The pertinent literatures are vast; thus, this review is necessarily selective. That said, most of the key concepts and issues are expressed in this Report. The Report encapsulates the history of computer science efforts to create systems that explain and instruct (intelligent tutoring systems and expert systems). The Report expresses the explainability issues and challenges in modern AI and presents capsule views of the leading psychological theories of explanation. Certain articles stand out by virtue of their particular relevance to XAI, and their methods, results, and key points are highlighted. It is recommended that AI/XAI researchers be encouraged to include in their research reports fuller details on their empirical or experimental methods, in the fashion of experimental psychology research reports: details on Participants, Instructions, Procedures, Tasks, Dependent Variables (operational definitions of the measures and metrics), Independent Variables (conditions), and Control Conditions.
The recent practical successes [26] of Artificial Intelligence (AI) programs of the Reinforcement Learning and Deep Learning varieties in game playing, natural language processing, and image classification are now calling attention to the envisioned pitfalls of their hypothetical extension to wider domains of human behavior. Several voices from industry and academia now routinely raise concerns over the advances [49] of often heavily media-covered representatives of this new generation of programs, such as Deep Blue, Watson, Google Translate, AlphaGo, and AlphaZero. Most of these cutting-edge algorithms fall under the class of supervised learning, a branch of the still-evolving taxonomy of Machine Learning techniques in AI research. In most cases the implementation choice is artificial neural network software, the workhorse of the Connectionism school of thought in both AI and Cognitive Psychology. Confronting the current wave of connectionist architectures, critics usually raise issues of interpretability (Can the remarkable predictive capabilities be trusted in real-life tasks? Are these capabilities transferable to unfamiliar situations or to different tasks altogether? How informative are the results about the real world, and about human cognition?).
Should Artificial Intelligence strive to model and understand human cognitive and perceptual systems? Should it operate at a more abstract mathematical level of characterizing possible intelligent action, independent of human performance? Or should it focus on building working programs that exhibit increasingly expert behavior, irrespective of theoretical or psychological concerns? These questions lie at the heart of most current debate on whether AI is a science, an art, or a new branch of engineering. In fact, some researchers believe it is all three and consequently build systems that perform some interesting task, arguing for the "theoretical significance" and "psychological validity" of the approach. This paper treats the cognitive psychology paradigm as central and suggests that AI research would benefit from closer adherence to the data and methods of psychological research. We welcome contributions in support of other research methodologies in AI, as well as discussions comparing them. (Research for this paper was conducted at the University of Chicago Center for Cognitive Science under a grant.)