If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
We review the psychological notion of affordances and examine it anew from a cognitive systems perspective. We distinguish between environmental affordances and their internal representation, choosing to focus on the latter. We consider issues that arise in representing mental affordances, using them to understand and generate plans, and learning them from experience. In each case, we present theoretical claims that, together, form an incipient theory of affordance in cognitive systems. We close by noting related research and proposing directions for future work in this arena.
People regularly use objects in the environment as tools to achieve their goals. In this paper we report extensions to the ICARUS cognitive architecture that let it create and use combinations of objects in this manner. These extensions include the ability to represent virtual objects composed of simpler ones and to reason about their quantitative features. They also include revised modules for planning and execution that operate over this hybrid representation, taking into account both relational structures and numeric attributes. We demonstrate the extended architecture's behavior on a number of tasks that involve tool construction and use, after which we discuss related research and plans for future work.
Langley, Pat (Institute for the Study of Learning and Expertise)
Research on cognitive architectures attempts to develop unified theories of the mind. This paradigm incorporates many ideas from other parts of AI, but it differs enough in its aims and methods that it merits separate treatment. In this paper, we review the notion of cognitive architectures and some recurring themes in their study. Next we examine the substantial progress made by the subfield over the past 40 years, after which we turn to some topics that have received little attention and that pose challenges for the research community.
Inductive process modeling involves the construction of explanatory accounts for multivariate time series. As typically specified, background knowledge is available in the form of generic processes that serve as the building blocks for candidate model structures. In this paper, we present a more flexible approach that, when available processes are insufficient to construct an acceptable model, automatically produces new generic processes that let it complete the task. We describe FPM, a system that implements this idea by composing knowledge about algebraic rate expressions and about conceptual processes like predation and remineralization in ecology. We empirically demonstrate FPM's ability to construct new generic processes when necessary and to transfer them later to new modeling tasks. We also compare its failure-driven approach with a naive scheme that generates all possible processes at the outset. We conclude by discussing prior work on equation discovery and model construction, along with plans for additional research.
This paper presents a novel approach to inductive process modeling, the task of constructing a quantitative account of dynamical behavior from time-series data and background knowledge. We review earlier work on this topic, noting its reliance on methods that evaluate entire model structures and use repeated simulation to estimate parameters, which together make severe computational demands. In response, we present an alternative method for process model induction that assumes each process has a rate, that this rate is determined by an algebraic expression, and that changes due to a process are directly proportional to its rate. We describe RPM, an implemented system that incorporates these ideas, and we report analyses and experiments that suggest it scales well to complex domains and data sets. In closing, we discuss related research and outline ways to extend the framework.
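The core modeling assumption described here can be sketched in a few lines. The toy simulator below is illustrative only: the predator-prey processes, parameter values, and use of Euler integration are all assumptions for the sketch, not details taken from RPM. What it shows is the stated idea itself: each process has a rate given by an algebraic expression over the current state, and every variable a process affects changes in direct proportion to that rate.

```python
# Sketch of a rate-based process model (illustrative; not RPM's actual code).
# Each process is (rate_fn, coeffs): rate_fn computes the rate from the state
# as an algebraic expression, and coeffs says how much each variable changes
# per unit of that rate.

def simulate(state, processes, dt, steps):
    """Forward-simulate the model with simple Euler integration."""
    for _ in range(steps):
        deltas = {var: 0.0 for var in state}
        for rate_fn, coeffs in processes:
            rate = rate_fn(state)              # rate = algebraic expression
            for var, c in coeffs.items():
                deltas[var] += c * rate        # change proportional to rate
        for var in state:
            state[var] += deltas[var] * dt
    return state

# Hypothetical predator-prey processes: growth, predation, and death.
processes = [
    (lambda s: 0.5 * s["prey"],               {"prey": +1.0}),
    (lambda s: 0.02 * s["prey"] * s["pred"],  {"prey": -1.0, "pred": +0.3}),
    (lambda s: 0.3 * s["pred"],               {"pred": -1.0}),
]

state = simulate({"prey": 40.0, "pred": 9.0}, processes, dt=0.01, steps=1000)
```

Because each process contributes only a rate times a coefficient, candidate structures can be scored process by process rather than by repeatedly simulating whole models, which is the source of the scalability the abstract claims.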
In recent work, Langley et al. (2014) introduced UMBRA, a system for plan and dialogue understanding. The program applies a form of abductive inference to generate explanations incrementally from relational descriptions of observed behavior and knowledge in the form of rules. Although UMBRA's creators described the system architecture, knowledge, and inferences, along with experimental studies of its operation, they did not provide a formalization of its structures or processes. In this paper, we analyze both aspects of the architecture in terms of the Situation Calculus, a classical logic for reasoning about dynamical systems, and give a specification of the inference task the system performs. After this, we state some properties of this formalization that are desirable for the task of incremental dialogue understanding. We conclude by discussing related work and describing our plans for additional research.
In this paper, we discuss a computational approach to the cognitive task of social planning. First, we specify a class of planning problems that involve an agent who attempts to achieve its goals by altering other agents' mental states. Next, we describe SFPS, a flexible problem solver that generates social plans of this sort, including ones that involve deception and reasoning about other agents' beliefs. We report the results of experiments on social scenarios that involve different levels of sophistication and that demonstrate both SFPS's capabilities and the sources of its power. Finally, we discuss how our approach to social planning has been informed by earlier work in the area and propose directions for additional research on the topic.
There is general agreement that knowledge plays a key role in intelligent behavior, but most work on this topic has emphasized domain-specific expertise. We argue, in contrast, that cognitive systems also benefit from meta-level knowledge that has a domain-independent character. In this paper, we propose a representational framework that distinguishes between these two forms of content, along with an integrated architecture that supports their use for abductive interpretation and hierarchical skill execution. We demonstrate this framework's viability on high-level aspects of extended dialogue that require reasoning about, and altering, participants' beliefs and goals. Furthermore, we demonstrate its generality by showing that the meta-level knowledge operates with different domain-level content. We conclude by reviewing related work on these topics and discussing promising directions for future research.
In this paper we present a new approach to plan understanding that explains observed actions in terms of domain knowledge. The process operates over hierarchical methods and utilizes an incremental form of data-driven abductive inference. We report experiments on problems from the Monroe corpus that demonstrate a basic ability to construct plausible explanations, graceful degradation of performance with reduction of the fraction of actions observed, and results with incremental processing that are comparable to batch interpretation. We also discuss research on related tasks such as plan recognition and abductive construction of explanations.
Langley, Pat, Sage, Stephanie
In this paper, we examine previous work on the naive Bayesian classifier and review its limitations, which include a sensitivity to correlated features. We respond to this problem by embedding the naive Bayesian induction scheme within an algorithm that carries out a greedy search through the space of features. We hypothesize that this approach will improve asymptotic accuracy in domains that involve correlated features without reducing the rate of learning in ones that do not. We report experimental results on six natural domains, including comparisons with decision-tree induction, that support these hypotheses. In closing, we discuss other approaches to extending naive Bayesian classifiers and outline some directions for future research.
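The selective scheme described here is easy to sketch concretely. The code below is an illustrative reconstruction, not the paper's implementation: the toy data set, Laplace smoothing, and scoring on a single holdout set are all simplifying assumptions. It shows the essential move: a greedy forward search that adds one feature at a time and keeps it only when held-out accuracy improves, so a redundant correlated feature is never added.

```python
import math
from collections import Counter

def nb_predict(train, labels, features, x):
    """Categorical naive Bayes over a feature subset, with Laplace smoothing."""
    best, best_score = None, float("-inf")
    for c in set(labels):
        idx = [i for i, y in enumerate(labels) if y == c]
        score = math.log(len(idx) / len(labels))          # log prior
        for f in features:
            counts = Counter(train[i][f] for i in idx)
            n_vals = len({row[f] for row in train})
            score += math.log((counts[x[f]] + 1) / (len(idx) + n_vals))
        if score > best_score:
            best, best_score = c, score
    return best

def accuracy(train, labels, test, test_labels, features):
    """Held-out accuracy; with no features, fall back to the majority class."""
    if not features:
        preds = [Counter(labels).most_common(1)[0][0]] * len(test)
    else:
        preds = [nb_predict(train, labels, features, x) for x in test]
    return sum(p == y for p, y in zip(preds, test_labels)) / len(test_labels)

def greedy_select(train, labels, test, test_labels, n_features):
    """Forward search: add the best feature while accuracy keeps improving."""
    selected, best_acc = [], accuracy(train, labels, test, test_labels, [])
    while True:
        candidates = [f for f in range(n_features) if f not in selected]
        if not candidates:
            break
        acc, f = max((accuracy(train, labels, test, test_labels,
                               selected + [f]), f) for f in candidates)
        if acc <= best_acc:
            break
        selected.append(f)
        best_acc = acc
    return selected, best_acc

# Toy data: feature 1 duplicates feature 0 (correlated), feature 2 is noise.
train = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1), (0, 0, 0), (1, 1, 1)]
labels = [0, 0, 1, 1, 0, 1]
test, test_labels = [(0, 0, 1), (1, 1, 0)], [0, 1]

selected, best_acc = greedy_select(train, labels, test, test_labels, 3)
```

On this toy data the search stops after selecting a single informative feature, since adding its duplicate cannot raise held-out accuracy, which is exactly how the wrapper avoids the correlated-feature pathology of plain naive Bayes.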