If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
We propose two distinct levels of learning for general autonomous intelligent agents. Level 1 consists of fixed architectural learning mechanisms that are innate and automatic. Level 2 consists of deliberate learning strategies that are controlled by the agent's knowledge. We describe these levels and provide an example of their use in a task-learning agent. We also explore other potential levels and discuss the implications of this view of learning for the design of autonomous agents.
Agents that can learn new tasks through interactive instruction can utilize goal information to search for and learn flexible policies. This approach can be resilient to variations in initial conditions or issues that arise during execution. However, if a task is not easily formulated as achieving a goal or if the agent lacks sufficient domain knowledge for planning, other methods are required. We present a hybrid approach to interactive task learning that can learn both goal-oriented and procedural tasks, and mixtures of the two, from human natural language instruction. We describe this approach, go through two examples of learning tasks, and outline the space of tasks that the system can learn. We show that our approach can learn a variety of goal-oriented and procedural tasks from a single example and is robust to different amounts of domain knowledge.
This paper examines the relationship between modeling human sentence comprehension using cognitive architectures and approaches to linguistic knowledge representation using construction grammars. We review multiple computational models of language understanding that vary in their use of construction grammar and cognitive architectures. We present a case study, Lucia, which uses Embodied Construction Grammar (ECG) within the Soar cognitive architecture to comprehend language used to instruct an embodied agent. We also examine the tradeoffs between alternative approaches to representing and accessing linguistic knowledge within a cognitive architecture and suggest future research.
Long-lived autonomous agents must be able to learn to perform competently in novel environments. One important aspect of competence is the ability to plan, which entails the ability to learn models of the agent's own actions and their effects on the environment. In this paper we describe an approach to learning action models of environments with continuous-valued spatial states and realistic physics consisting of multiple interacting rigid objects. In such environments, we hypothesize that objects exhibit multiple qualitatively distinct behaviors we call modes, conditioned on their spatial relationships to each other. We argue that action models that explicitly represent these modes using a combination of symbolic spatial relationships and continuous metric information learn faster, generalize better, and make more accurate predictions than models that only use metric information. We present a method to learn action models with piecewise linear modes conditioned on a combination of first order Horn clauses that test symbolic spatial predicates and continuous classifiers. We empirically demonstrate that our method learns more accurate and more general models of a physics simulation than a method that learns a single function (locally weighted regression).
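The core idea of a mode-conditioned action model can be sketched in a few lines: a symbolic spatial test selects which linear dynamics apply to the current continuous state. The following is a minimal illustrative sketch, not the paper's actual method; the predicate, state layout, and dynamics matrices are toy assumptions.

```python
import numpy as np

def on_top_of(state):
    # Toy spatial predicate: object B (state[2:4]) sits above object A (state[0:2]).
    # In the paper's setting such tests would come from learned Horn clauses.
    return state[3] > state[1]

class PiecewiseLinearModel:
    """Predicts the next state with a linear map chosen by symbolic mode tests."""

    def __init__(self, modes, default):
        self.modes = modes      # list of (predicate, weight matrix) pairs
        self.default = default  # fallback linear map when no predicate fires

    def predict(self, state):
        for predicate, weights in self.modes:
            if predicate(state):
                return weights @ state
        return self.default @ state

# Hypothetical modes: a stacked object stays put (identity dynamics);
# an unsupported object drifts downward (damped vertical coordinate).
stacked = np.eye(4)
falling = np.eye(4)
falling[3, 3] = 0.9

model = PiecewiseLinearModel([(on_top_of, stacked)], falling)

s = np.array([0.0, 1.0, 0.0, 2.0])  # B above A, so the "stacked" mode fires
print(model.predict(s))
```

Learning such a model then factors into two subproblems: inducing the symbolic mode conditions and fitting a separate linear regressor per mode, which is what gives the representation its generalization advantage over a single global function.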
We present an approach for learning grounded language from mixed-initiative human-robot interaction. Prior work on learning from human instruction has concentrated on acquisition of task-execution knowledge from domain-specific language. In this work, we demonstrate acquisition of linguistic, semantic, perceptual, and procedural knowledge from mixed-initiative, natural language dialog. Our approach has been instantiated in a cognitive architecture, Soar, and has been deployed on a table-top robotic arm capable of picking up small objects. A preliminary analysis verifies the ability of the robot to acquire diverse knowledge from human-robot interaction.
Linguistic communication relies on non-linguistic context to convey meaning. That context might include, for instance, recent or long-term experience, semantic knowledge of the world, or objects and events in the immediate environment. In this paper, we describe embodied agents instantiated in the Soar cognitive architecture that use context derived from their linguistic, perceptual, procedural, and semantic knowledge for comprehending imperative sentences.
This paper discusses the challenge of designing instructable agents that can learn through interaction with a human expert. Learning through instruction is a powerful paradigm for acquiring knowledge because it limits the complexity of the learning task in a variety of ways. To support learning through instruction, the agent must be able to effectively communicate its lack of knowledge to the human, comprehend instructions, and apply them to the ongoing task. We identify some problems of concern when designing instructable agents. We propose an agent design that addresses some of these problems. We instantiate this design in the Soar cognitive architecture and analyze its capabilities on a learning task.
This paper documents a functionality-driven exploration of automatic working-memory management in Soar. We first derive and discuss desiderata that arise from the need to embed a mechanism for managing working memory within a general cognitive architecture that is used to develop real-time agents. We provide details of our mechanism, including the decay model and architecture-independent data structures and algorithms that are computationally efficient. Finally, we present empirical results, which demonstrate both that our mechanism performs with little computational overhead and that it helps maintain the reactivity of a Soar agent contending with long-term, autonomous simulated robotic exploration as it reasons using large amounts of acquired information.
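A common way to realize such a decay model is base-level activation in the ACT-R style, where an element's activation grows with recent and frequent access and decays as a power law, and elements falling below a threshold are removed. The sketch below is one plausible instantiation, not Soar's actual mechanism; the decay rate and threshold values are assumptions chosen for illustration.

```python
import math

def base_level_activation(access_times, now, d=0.5):
    # Activation is the log of summed power-law decayed access recencies.
    # d is an assumed decay rate; each access contributes (now - t)^-d.
    return math.log(sum((now - t) ** -d for t in access_times if t < now))

def should_remove(access_times, now, threshold=-2.0):
    # Elements whose activation falls below the (assumed) threshold
    # are candidates for removal from working memory.
    return base_level_activation(access_times, now) < threshold

# An element touched only at cycles 1 and 2 has decayed away by cycle 1000,
# while a recently touched element remains active.
print(should_remove([1, 2], now=1000))      # True
print(should_remove([998, 999], now=1000))  # False
```

Keeping this computation cheap per decision cycle (e.g., by approximating the sum and scheduling removal checks rather than recomputing every element's activation) is exactly the kind of efficiency concern the desiderata above address.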
In this work, we look at the challenge of learning in an action game, Infinite Mario. Learning to play an action game can be divided into two distinct but related problems: learning an object-related behavior and selecting a primitive action. We propose a framework that allows the use of reinforcement learning for both of these problems. We present promising results in some instances of the game and identify some problems that might affect learning.
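The two-level decomposition can be sketched with a pair of tabular Q-learners: one selects an object-related behavior, and a second selects the primitive action that carries it out. This is a hedged illustration of the decomposition, not the paper's system; the state descriptions, behavior names, and primitive actions are toy assumptions.

```python
import random
from collections import defaultdict

class QLearner:
    """Minimal tabular Q-learner with epsilon-greedy action selection."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> value, default 0
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)])

# Level one: choose an object-related behavior from the scene description.
behavior_learner = QLearner(["approach_coin", "avoid_enemy"])
# Level two: choose a primitive action to execute the chosen behavior.
action_learner = QLearner(["left", "right", "jump"])

state = "enemy_near"
behavior = behavior_learner.choose(state)
action = action_learner.choose((state, behavior))

# After observing a reward, both levels update their own value tables.
behavior_learner.update(state, behavior, reward=1.0, next_state="clear")
action_learner.update((state, behavior), action, reward=1.0,
                      next_state=("clear", behavior))
```

Splitting the problem this way keeps each learner's state-action space small, which is one plausible reason the decomposition helps in an action game with many on-screen objects.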