 Sun, Ron


Roles of LLMs in the Overall Mental Architecture

arXiv.org Artificial Intelligence

To better understand existing LLMs, we may examine the human mental (cognitive/psychological) architecture and its components and structures. Based on psychological, philosophical, and cognitive science literatures, it is argued that, within the human mental architecture, existing LLMs correspond well to implicit mental processes (intuition, instinct, and so on). However, beyond such implicit processes, explicit processes (with better symbolic capabilities) are also present within the human mental architecture. Various theoretical and empirical issues and questions in this regard are explored. Furthermore, it is argued that existing dual-process computational cognitive architectures (models of the human cognitive/psychological architecture) provide usable frameworks for fundamentally enhancing LLMs by introducing dual processes (both implicit and explicit) and, at the same time, can themselves be enhanced by LLMs. The results are synergistic combinations (in several different senses simultaneously).
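
As an illustration of the dual-process idea described above, here is a minimal Python sketch (not from the paper; the class names and the routing policy are hypothetical assumptions) in which an implicit, intuition-like module answers by default and an explicit, rule-based module takes precedence whenever its symbolic knowledge applies:

```python
# Minimal sketch of a dual-process architecture: an implicit (intuition-like,
# LLM-style) module answers by default, and an explicit (symbolic, rule-based)
# module overrides it when a rule applies. All names here are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Answer:
    text: str
    source: str  # "implicit" or "explicit"

class ImplicitProcess:
    """Fast, associative responder (stands in for an LLM)."""
    def respond(self, query: str) -> str:
        # A real system would call a language model here.
        return f"intuitive guess for: {query}"

class ExplicitProcess:
    """Slow, rule-based responder with symbolic knowledge."""
    def __init__(self) -> None:
        self.rules = {"2 + 2": "4"}  # toy symbolic knowledge base

    def respond(self, query: str) -> Optional[str]:
        return self.rules.get(query)

class DualProcessAgent:
    def __init__(self) -> None:
        self.implicit = ImplicitProcess()
        self.explicit = ExplicitProcess()

    def answer(self, query: str) -> Answer:
        # Explicit processing takes precedence when a rule applies;
        # otherwise fall back to the implicit (intuitive) process.
        symbolic = self.explicit.respond(query)
        if symbolic is not None:
            return Answer(symbolic, "explicit")
        return Answer(self.implicit.respond(query), "implicit")

agent = DualProcessAgent()
print(agent.answer("2 + 2"))                   # handled explicitly
print(agent.answer("is this face familiar?"))  # handled implicitly
```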


Can A Cognitive Architecture Fundamentally Enhance LLMs? Or Vice Versa?

arXiv.org Artificial Intelligence

The paper discusses what is needed to address the limitations of current LLM-centered AI systems. It argues that incorporating insights from human cognition and psychology, as embodied by a computational cognitive architecture, can help develop systems that are more capable, more reliable, and more human-like. It emphasizes the importance of the dual-process architecture and the hybrid neuro-symbolic approach in addressing the limitations of current LLMs. In the opposite direction, the paper also highlights the need for an overhaul of computational cognitive architectures to better reflect advances in AI and computing technology.
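
To make the combination concrete, here is a hedged sketch of one way the two directions could meet, assuming a propose-and-verify loop: a generator standing in for an LLM proposes candidate answers, and a symbolic verifier (standing in for the rule-based side of a cognitive architecture) accepts or rejects them. The function names, the toy verifier, and the retry policy are illustrative assumptions, not the paper's method:

```python
# Sketch of a neuro-symbolic propose-and-verify loop: an LLM-like generator
# proposes answers; a symbolic verifier filters them. Illustrative only.

import re

def llm_propose(question: str, attempt: int) -> str:
    # Placeholder for an LLM call; returns progressively "better" guesses.
    guesses = ["about five", "5"]
    return guesses[min(attempt, len(guesses) - 1)]

def symbolic_verify(answer: str) -> bool:
    # Toy verifier: accept only a bare integer, a stand-in for the kind of
    # hard, rule-based check a cognitive architecture could supply.
    return re.fullmatch(r"-?\d+", answer.strip()) is not None

def answer_with_verification(question: str, max_attempts: int = 3) -> str:
    for attempt in range(max_attempts):
        candidate = llm_propose(question, attempt)
        if symbolic_verify(candidate):
            return candidate
    raise RuntimeError("no verified answer found")

print(answer_with_verification("What is 2 + 3?"))  # -> "5"
```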


David L Waltz, in Memoriam

AI Magazine

David L. Waltz (1943-2012) was director of the Center for Computational Learning Systems. In 1973, Dave Waltz, with Richard P. Gabriel in tow, headed to the University of Illinois. While at Illinois, Dave produced 11 Ph.D. students and many more MS students, mentored junior researchers and postdocs, attracted new AI faculty, and helped create the Beckman Institute for Advanced Science and Technology. He chaired and brought the influential Theoretical Issues in Natural Language Processing conference to Urbana in 1978. During the late 1970s and early 1980s, Waltz's group explored new ideas in natural language processing, cognitive science, qualitative reasoning, and parallel computation in a collaborative environment including researchers in computer science, even though their first attempts to create a multidisciplinary AI degree program failed. In 1984, Marvin Minsky asked Dave to return to Thinking Machines, Inc., an MIT spinoff in Cambridge, with the temptation that the atmosphere would be like the early days of the AI Lab all over again; at the same time he took a part-time tenured position at Brandeis. At Thinking Machines and Brandeis, Dave developed the ideas of massively parallel AI and, with Craig Stanfill, the memory-based reasoning approach to case-based reasoning, paving the way for an engineering-style approach to emergent AI techniques.
(Photo caption: Dave Waltz delivers his AAAI Presidential Address at AAAI-98 in Madison, Wisconsin.)


Reports on the Twenty-First National Conference on Artificial Intelligence (AAAI-06) Workshop Program

AI Magazine

The Workshop Program of the Twenty-First Conference on Artificial Intelligence was held July 16-17, 2006, in Boston, Massachusetts. The program was chaired by Joyce Chai and Keith Decker. The titles of the 17 workshops were AI-Driven Technologies for Service-Oriented Computing; Auction Mechanisms for Robot Coordination; Cognitive Modeling and Agent-Based Social Simulations; Cognitive Robotics; Computational Aesthetics: Artificial Intelligence Approaches to Beauty and Happiness; Educational Data Mining; Evaluation Methods for Machine Learning; Event Extraction and Synthesis; Heuristic Search, Memory-Based Heuristics, and Their Applications; Human Implications of Human-Robot Interaction; Intelligent Techniques in Web Personalization; Learning for Search; Modeling and Retrieval of Context; Modeling Others from Observations; and Statistical and Empirical Approaches for Spoken Dialogue Systems.


The Present and the Future of Hybrid Neural Symbolic Systems Some Reflections from the NIPS Workshop

AI Magazine

In this article, we describe some recent results and trends concerning hybrid neural symbolic systems, based on a recent workshop on the topic. The Neural Information Processing Systems (NIPS) workshop on hybrid neural symbolic integration, organized by Stefan Wermter and Ron Sun, was held on 4-5 December 1998 in Breckenridge, Colorado.


Computational Cognitive Modeling, the Source of Power, and Other Related Issues

AI Magazine

In computational cognitive modeling, we hypothesize internal mental processes of human cognitive activities and express such activities by computer programs. Such computational models often consist of many components and aspects. Claims are often made that certain aspects play a key role in modeling, but such claims are sometimes not well justified or explored. In this article, we first review some fundamental distinctions and issues in computational modeling. We then discuss, in principle, systematic ways of identifying the source of power in models.
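
One systematic way to probe a model's source of power, in the spirit of the article's proposal, is component ablation: remove each component in turn and measure the resulting performance drop. The sketch below is a toy illustration under assumed conditions; the component names, their contributions, and the evaluation function are hypothetical placeholders, not the article's model:

```python
# Illustrative ablation study: remove each component of a (toy) cognitive
# model and measure the performance drop to locate the source of power.

def evaluate(components: set) -> float:
    # Stand-in for running the full model on a benchmark task; here each
    # component simply contributes a fixed amount of performance.
    contribution = {"rule_module": 0.30,
                    "associative_memory": 0.15,
                    "decay_parameter": 0.05}
    return sum(v for k, v in contribution.items() if k in components)

full = {"rule_module", "associative_memory", "decay_parameter"}
baseline = evaluate(full)
for component in sorted(full):
    drop = baseline - evaluate(full - {component})
    print(f"removing {component}: performance drop {drop:.2f}")
```

A component whose removal causes a large drop is a candidate source of power; a component whose removal changes little may be doing less work than claimed.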


Hybrid Connectionist-Symbolic Modules: A Report from the IJCAI-95 Workshop on Connectionist-Symbolic Integration

AI Magazine

The Workshop on Connectionist-Symbolic Integration: From Unified to Hybrid Approaches was held on 19 to 20 August 1995 in Montreal, Canada, in conjunction with the Fourteenth International Joint Conference on Artificial Intelligence. The focus of the workshop was on learning and architectures that feature hybrid representations and support hybrid learning. The general consensus was that hybrid connectionist-symbolic models constitute a promising avenue to the development of more robust, more powerful, and more versatile architectures for both cognitive modeling and intelligent systems.
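
As a toy illustration of connectionist-symbolic integration in the direction the workshop explored, the following sketch trains a single perceptron-style unit on the Boolean AND task and then reads a symbolic if-then rule off its learned weights. The data, learning rate, and extraction threshold are illustrative assumptions, not a method endorsed by the workshop:

```python
# Train a tiny "connectionist" unit on Boolean AND, then extract a symbolic
# rule from its weights: a minimal connectionist-symbolic integration demo.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1

# Classic perceptron learning rule; AND is linearly separable, so it converges.
for _ in range(50):
    for (x1, x2), target in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# Crude rule extraction: inputs with clearly positive weights become the
# conditions of a conjunctive if-then rule.
conditions = [f"x{i + 1}" for i, wi in enumerate(w) if wi > 0.05]
print(f"extracted rule: IF {' AND '.join(conditions)} THEN output = 1")
```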

