Laird, John E.
Acquiring Grounded Representations of Words with Situated Interactive Instruction
Mohan, Shiwali, Mininger, Aaron H., Kirk, James R., Laird, John E.
We present an approach for acquiring grounded representations of words from mixed-initiative, situated interactions with a human instructor. The work focuses on the acquisition of diverse types of knowledge, including perceptual, semantic, and procedural knowledge, along with their grounded meanings. Interactive learning allows the agent to control its learning by requesting instruction about unknown concepts, making learning efficient. Our approach has been instantiated in Soar and evaluated on a table-top robotic arm capable of manipulating small objects.
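A minimal sketch, assuming a toy perceptual representation, of the kind of mixed-initiative loop the abstract describes; every name here (Percept, GroundedLexicon, ask_instructor) is hypothetical rather than taken from the authors' Soar implementation:

```python
# Hypothetical sketch: a mixed-initiative loop in which the agent requests
# instruction about unknown words and stores a grounded mapping from each
# word to the perceptual features of its referent.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Percept:
    color: str
    shape: str

@dataclass
class GroundedLexicon:
    meanings: dict = field(default_factory=dict)  # word -> list of Percepts

    def known(self, word: str) -> bool:
        return word in self.meanings

    def ground(self, word: str, percept: Percept) -> None:
        self.meanings.setdefault(word, []).append(percept)

def process_utterance(words, scene, lexicon, ask_instructor):
    """For each unknown word, take the initiative and ask the instructor."""
    for word in words:
        if not lexicon.known(word):
            # e.g., the instructor indicates the referent of "red" in the scene
            referent = ask_instructor(f"What does '{word}' refer to?", scene)
            lexicon.ground(word, referent)
```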
A Proposal for Extending the Common Model of Cognition to Emotion
Rosenbloom, Paul S., Laird, John E., Lebiere, Christian, Stocco, Andrea, Granger, Richard H., Huyck, Christian
The Common Model of Cognition (Rosenbloom, Lebiere & Laird, 2022), née the Standard Model of the Mind (Laird, Lebiere & Rosenbloom, 2017), is a developing consensus concerning what must be in a cognitive architecture to […] Model and how we arrived at this proposal. The subsequent two sections provide more details on two new modules that are proposed for inclusion into the Common Model, one for emotion and one for metacognitive assessment, and how they interact with the rest of the model.
Eliciting Problem Specifications via Large Language Models
Wray, Robert E., Kirk, James R., Laird, John E.
Cognitive systems generally require a human to translate a problem definition into some specification that the cognitive system can use to attempt to solve the problem or perform the task. In this paper, we illustrate that large language models (LLMs) can be utilized to map a problem class, defined in natural language, into a semi-formal specification that can then be used by an existing reasoning and learning system to solve instances from the problem class. We present the design of an LLM-enabled cognitive task analyst: implemented with LLM agents, this system produces a definition of problem spaces for tasks specified in natural language. LLM prompts are derived from the definition of problem spaces in the AI literature and from general problem-solving strategies (Polya's How to Solve It). A cognitive system can then use the problem-space specification, applying domain-general problem-solving strategies ("weak methods" such as search), to solve multiple instances of problems from the problem class. This result, while preliminary, suggests the potential for speeding cognitive-systems research by disintermediating problem formulation while retaining the core capabilities of cognitive systems, such as robust inference and online learning.
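To make the pipeline concrete, here is a minimal sketch under stated assumptions: a toy semi-formal problem-space specification (initial state, operator generator, operator application, goal test) of the kind an LLM might be prompted to emit, consumed by a domain-general weak method (breadth-first search). The spec encodes 2-disk Tower of Hanoi; the format and all names are hypothetical, not the paper's:

```python
# Hypothetical spec format: a problem space as initial state, move generator,
# move application, and goal test, solved by a generic weak method.
from collections import deque

spec = {
    "initial": ((2, 1), (), ()),            # three pegs; larger disks first
    "is_goal": lambda s: s[2] == (2, 1),
    "moves": lambda s: [
        (i, j) for i in range(3) for j in range(3)
        if i != j and s[i] and (not s[j] or s[i][-1] < s[j][-1])
    ],
    "apply": lambda s, m: tuple(
        p[:-1] if k == m[0] else p + (s[m[0]][-1],) if k == m[1] else p
        for k, p in enumerate(s)
    ),
}

def weak_method_search(spec):
    """Breadth-first search: a domain-general 'weak method' over the space."""
    frontier = deque([(spec["initial"], [])])
    seen = {spec["initial"]}
    while frontier:
        state, path = frontier.popleft()
        if spec["is_goal"](state):
            return path
        for move in spec["moves"](state):
            nxt = spec["apply"](state, move)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [move]))

print(weak_method_search(spec))  # [(0, 1), (0, 2), (1, 2)]
```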
Exploiting Language Models as a Source of Knowledge for Cognitive Agents
Kirk, James R., Wray, Robert E., Laird, John E.
Large language models (LLMs) provide capabilities far beyond sentence completion, including question answering, summarization, and natural-language inference. While many of these capabilities have potential application to cognitive systems, our research exploits language models as a source of task knowledge for cognitive agents, that is, agents realized via a cognitive architecture. We identify challenges and opportunities for using language models as an external knowledge source for cognitive systems, along with possible ways to improve the effectiveness of knowledge extraction by integrating extraction with cognitive-architecture capabilities, illustrating these points with examples from our recent work in this area.
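A minimal sketch of this integration idea, assuming a generic chat-style LLM client passed in as a callable; the agent keeps only responses it can interpret with its own action vocabulary and ground against what it currently perceives. All names are hypothetical:

```python
# Hypothetical names throughout; llm_complete is any text-in/text-out client.
def extract_task_knowledge(llm_complete, question, known_actions, visible_objects):
    """Query an LLM, then keep only responses the agent can interpret and ground."""
    raw = llm_complete(f"Answer with short imperative sentences. {question}")
    usable = []
    for line in raw.splitlines():
        words = line.strip().lower().rstrip(".").split()
        if not words:
            continue
        verb, args = words[0], words[1:]
        # Toy acceptance test: the verb is a known action and every argument
        # names a currently perceived object (real grounding is more involved).
        if verb in known_actions and all(a in visible_objects for a in args):
            usable.append((verb, args))
        # Otherwise discard, re-prompt, or fall back to asking a human.
    return usable
```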
Computational-level Analysis of Constraint Compliance for General Intelligence
Wray, Robert E., Jones, Steven J., Laird, John E.
Human behavior is conditioned by codes and norms that constrain action. Rules, "manners," laws, and moral imperatives are examples of classes of constraints that govern human behavior. These systems of constraints are "messy": individual constraints are often poorly defined, which constraints are relevant in a particular situation may be unknown or ambiguous, constraints interact and conflict with one another, and determining how to act within the bounds of the relevant constraints may be a significant challenge, especially when rapid decisions are needed. Despite such messiness, humans incorporate constraints in their decisions robustly and rapidly. General, artificially intelligent agents must also be able to navigate the messiness of systems of real-world constraints in order to behave predictably and reliably. In this paper, we characterize sources of complexity in constraint processing for general agents and describe a computational-level analysis of such constraint compliance. We identify key algorithmic requirements based on the computational-level analysis and outline an initial, exploratory implementation of a general approach to constraint compliance.
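One hypothetical way to render such an analysis as code, as a minimal sketch rather than the authors' implementation: constraints are relevance-gated predicates over a situation and a candidate action, and disagreements are surfaced as conflicts for further deliberation rather than silently resolved:

```python
# Hypothetical sketch of constraint compliance; all names are invented here.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Constraint:
    name: str
    relevant: Callable[[dict], bool]      # is this constraint in play here?
    permits: Callable[[dict, str], bool]  # does it allow the action?

def evaluate(action: str, situation: dict, constraints: list[Constraint]):
    verdicts = {
        c.name: c.permits(situation, action)
        for c in constraints if c.relevant(situation)
    }
    if not verdicts:
        return "unconstrained", verdicts
    if all(verdicts.values()):
        return "compliant", verdicts
    if not any(verdicts.values()):
        return "forbidden", verdicts
    return "conflict", verdicts  # constraints disagree: needs deliberation

# Toy usage: a "quiet zone" norm conflicts with an "announce hazards" rule.
rules = [
    Constraint("quiet_zone", lambda s: s["zone"] == "library",
               lambda s, a: a != "shout"),
    Constraint("announce_hazard", lambda s: s["hazard"],
               lambda s, a: a == "shout"),
]
print(evaluate("shout", {"zone": "library", "hazard": True}, rules))
```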
Integrating Diverse Knowledge Sources for Online One-shot Learning of Novel Tasks
Kirk, James R., Wray, Robert E., Lindes, Peter, Laird, John E.
Autonomous agents are able to draw on a wide variety of potential sources of task knowledge; however, current approaches invariably focus on only one or two. Here we investigate the challenges and impact of exploiting diverse knowledge sources to learn new tasks online, in one shot, for a simulated office mobile robot. The resulting agent, developed in the Soar cognitive architecture, uses the following sources of domain and task knowledge: interaction with the environment, task execution and search knowledge, human natural-language instruction, and responses retrieved from a large language model (GPT-3). We explore the distinct contributions of these knowledge sources and evaluate the performance of different combinations in terms of learning correct task knowledge and human workload. Results show that an agent's online integration of diverse knowledge sources improves one-shot task learning overall, reducing the human feedback needed for rapid and reliable task learning.
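A minimal sketch of one plausible integration policy, under the assumption (ours, not the paper's) that sources are consulted in order of increasing cost and that each proposal is verified in the environment before being accepted:

```python
# Hypothetical policy: consult cheaper knowledge sources first and verify
# each proposal in the environment before accepting it.
def learn_task(task, sources, try_in_env):
    """sources: ordered (name, propose) pairs, e.g. internal search knowledge,
    then an LLM such as GPT-3, then the human instructor as the costliest."""
    for name, propose in sources:
        proposal = propose(task)              # None if this source has nothing
        if proposal is not None and try_in_env(proposal):
            return proposal, name             # learned in one shot from this source
    raise RuntimeError(f"no source produced a workable method for {task!r}")
```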
Improving Language Model Prompting in Support of Semi-autonomous Task Learning
Kirk, James R., Wray, Robert E., Lindes, Peter, Laird, John E.
Large language models (LLMs) offer a potential source of knowledge for agents that need to acquire new task competencies within a performance environment. We describe efforts toward a novel agent capability that can construct cues (or "prompts") that result in useful LLM responses for an agent learning a new task. Importantly, responses must not only be "reasonable" (a measure used commonly in research on knowledge extraction from LLMs) but also must be specific to the agent's task context and in a form that the agent can interpret given its native language capacities. We summarize a series of empirical investigations of agent prompting strategies and evaluate LLM responses against the goals of targeted and actionable responses for task learning. Our results demonstrate that actionable task knowledge can be obtained from LLMs in support of online agent task learning.
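A minimal sketch of context-grounded prompting with a hypothetical template and parser; the point it illustrates is that a response must be both specific to the task context and in a form the agent's (here deliberately trivial) interpreter accepts:

```python
# Hypothetical template and parser; llm_complete is any text-in/text-out client.
PROMPT = (
    "You are helping a robot in a {location}. It can see: {objects}. "
    "Task: {task}. Respond with ONE step as '<verb> <object>', nothing else."
)

def prompt_for_step(llm_complete, task, location, objects, known_verbs):
    prompt = PROMPT.format(task=task, location=location, objects=", ".join(objects))
    for _ in range(3):                      # re-prompt a few times if needed
        reply = llm_complete(prompt).strip().lower()
        parts = reply.split()
        if len(parts) == 2 and parts[0] in known_verbs and parts[1] in objects:
            return tuple(parts)             # actionable: in context and parseable
        prompt += "\nThat was not in the required form. Try again."
    return None                             # fall back, e.g. ask the human
```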
An Analysis and Comparison of ACT-R and Soar
Laird, John E.
This is a detailed analysis and comparison of the ACT-R and Soar cognitive architectures, including their overall structure, their representations of agent data and metadata, and their associated processing. It focuses on working memory, procedural memory, and long-term declarative memory. I emphasize the commonalities, which are many, but also highlight the differences. I identify the processes and distinct classes of information used by these architectures, including agent data, metadata, and meta-process data, and explore the roles that metadata play in decision making, memory retrievals, and learning.
Reports of the AAAI 2017 Fall Symposium Series
Flenner, Arjuna (NAVAIR China Lake) | Fraune, Marlena R. (Indiana University) | Hiatt, Laura M. (Naval Research Laboratory (NRL)) | Kendall, Tony (Naval Postgraduate School) | Laird, John E. (University of Michigan) | Lebiere, Christian (Carnegie Mellon University) | Rosenbloom, Paul S. (Institute for Creative Technologies, University of Southern California) | Stein, Frank (IBM) | Topp, Elin A. (Lund University) | Unhelkar, Vaibhav V. (Massachusetts Institute of Technology) | Zhao, Ying (Naval Postgraduate School)
The AAAI 2017 Fall Symposium Series was held Thursday through Saturday, November 9–11, at the Westin Arlington Gateway in Arlington, Virginia, adjacent to Washington, DC. The titles of the six symposia were Artificial Intelligence for Human-Robot Interaction; Cognitive Assistance in Government and Public Sector Applications; Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; Human-Agent Groups: Studies, Algorithms and Challenges; Natural Communication for Human-Robot Collaboration; and A Standard Model of the Mind. The highlights of each symposium (except the Natural Communication for Human-Robot Collaboration symposium, whose organizers did not submit a report) are presented in this report.
Interactively Learning a Blend of Goal-Based and Procedural Tasks
Mininger, Aaron (University of Michigan) | Laird, John E. (University of Michigan)
Agents that can learn new tasks through interactive instruction can utilize goal information to search for and learn flexible policies. This approach can be resilient to variations in initial conditions or issues that arise during execution. However, if a task is not easily formulated as achieving a goal or if the agent lacks sufficient domain knowledge for planning, other methods are required. We present a hybrid approach to interactive task learning that can learn both goal-oriented and procedural tasks, and mixtures of the two, from human natural language instruction. We describe this approach, go through two examples of learning tasks, and outline the space of tasks that the system can learn. We show that our approach can learn a variety of goal-oriented and procedural tasks from a single example and is robust to different amounts of domain knowledge.
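A minimal sketch of the hybrid idea with hypothetical names: a task record may carry a goal test, a stored procedure, or both, and execution prefers goal-directed planning, falling back to the procedure when planning knowledge is missing:

```python
# Hypothetical sketch of blending goal-based and procedural task execution.
def execute_task(task, state, plan_to, run_step):
    """task: dict with optional 'goal' (a predicate) and 'steps' (a procedure)."""
    goal, steps = task.get("goal"), task.get("steps")
    if goal and goal(state):
        return state                         # already satisfied, nothing to do
    if goal:
        plan = plan_to(goal, state)          # flexible: replan from any state
        if plan is not None:
            steps = plan
    if steps is None:
        raise ValueError("no goal knowledge and no procedure for this task")
    for step in steps:                       # the plan, or the procedural fallback
        state = run_step(step, state)
    return state
```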