If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The field of adaptive robotics involves simulations and real-world implementations of robots that adapt to their environments. In this article, I introduce adaptive environmentics -- the flip side of adaptive robotics -- in which the environment adapts to the robot. To illustrate the approach, I offer three simple experiments in which a genetic algorithm is used to shape an environment for a simulated Khepera robot. I then discuss at length the potential of adaptive environmentics, also delineating several possible avenues of future research.
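To make the idea concrete, the core loop of such an approach can be sketched as a simple genetic algorithm. This is an illustrative toy only, not the article's actual Khepera simulation: the encoding (a list of obstacle x-positions), the fitness function, and all parameters are assumptions made for the sketch.

```python
# Toy sketch: a genetic algorithm evolves an environment (a list of
# obstacle positions in [0, 1]) to suit a fixed robot controller.
# The fitness function below is a hypothetical stand-in -- it rewards
# environments whose obstacles cluster near an assumed corridor at x = 0.5.
import random

def fitness(env):
    # Higher (closer to 0) is better: mean distance of obstacles from 0.5.
    return -sum(abs(x - 0.5) for x in env) / len(env)

def evolve(pop_size=20, genes=8, generations=50, mutation=0.1):
    # Initial population: random environments.
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g if random.random() > mutation else random.random()
                     for g in child]                # per-gene mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because the top half of each generation survives unchanged, the best environment found never degrades across generations; in the article's setting, the fitness function would instead score how the simulated robot behaves in each candidate environment.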
Choosing an environment is an important decision for agent developers. A key issue in this decision is whether the environment will provide realistic problems for the agent to solve, in the sense that the problems are true to the issues that arise in addressing a particular research question. In addition to realism, other important issues include the tractability of the problems that can be formulated in the environment, how easily agent performance can be measured, and whether the environment can be customized or extended for specific research questions. In the ideal environment, researchers can pose realistic but tractable problems to an agent, measure and evaluate its performance, and iteratively rework the environment to explore increasingly ambitious questions, all at a reasonable cost in time and effort. As might be expected, trade-offs dominate the suitability of an environment; however, we have found that the modern graphical user interface offers a good balance among these trade-offs. This article takes a brief tour of agent research in the user interface, showing how significant questions related to vision, planning, learning, cognition, and communication are currently being addressed.
Many of the intelligent tutoring systems that have been developed during the last 20 years have proven to be quite successful, particularly in the domains of mathematics, science, and technology. They produce significant learning gains beyond classroom environments. They are capable of engaging most students' attention and interest for hours. We have been working on a new generation of intelligent tutoring systems that hold mixed-initiative conversational dialogues with the learner. The tutoring systems present challenging problems and questions to the learner, the learner types in answers in English, and there is a lengthy multiturn dialogue as complete solutions or answers evolve. This article presents the tutoring systems that we have been developing. AutoTutor is a conversational agent, with a talking head, that helps college students learn about computer literacy. Andes, Atlas, and Why2 help adults learn about physics. Instead of being mere information-delivery systems, our systems help students actively construct knowledge through conversations.
The belief that humans will be able to interact with computers in conversational speech has long been a favorite subject in science fiction, reflecting the persistent belief that spoken dialogue would be the most natural and powerful user interface to computers. With recent improvements in computer technology and in speech and language processing, such systems are starting to appear feasible. However, significant technical problems still need to be solved before speech-driven interfaces become truly conversational. This article describes the results of a 10-year effort building robust spoken dialogue systems at the University of Rochester.
In this article, I describe agent-centered search (also called real-time search or local search) and illustrate this planning paradigm with examples. Agent-centered search methods interleave planning and plan execution and restrict planning to the part of the domain around the current state of the agent, for example, the current location of a mobile robot or the current board position of a game. These methods can execute actions in the presence of time constraints and often have a small sum of planning and execution cost, both because they trade off planning and execution cost and because they allow agents to gather information early in nondeterministic domains, which reduces the amount of planning they have to perform for unencountered situations. These advantages become important as more intelligent systems are interfaced with the world and have to operate autonomously in complex environments. Agent-centered search methods have been applied to a variety of domains, including traditional search, STRIPS-type planning, moving-target search, planning with totally and partially observable Markov decision process models, reinforcement learning, constraint satisfaction, and robot navigation. I discuss the design and properties of several agent-centered search methods, focusing on robot exploration and localization.
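The interleaving of local planning and execution can be sketched with LRTA* (Learning Real-Time A*), a canonical agent-centered search method. This sketch is not code from the article: the 10x10 grid world, Manhattan-distance heuristic, and unit action costs are illustrative assumptions, and the domain is assumed to be safely explorable (the goal is reachable from every state the agent can enter).

```python
# Minimal LRTA* sketch: one-step lookahead planning around the agent's
# current state, interleaved with execution and heuristic learning.

def lrta_star(grid, start, goal, max_steps=1000):
    """grid: set of blocked (row, col) cells on an assumed 10x10 world."""
    h = {}  # learned heuristic values, initialized lazily

    def heuristic(s):
        # Default to the admissible Manhattan distance to the goal.
        return h.get(s, abs(s[0] - goal[0]) + abs(s[1] - goal[1]))

    def neighbors(s):
        r, c = s
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (nr, nc) not in grid and 0 <= nr < 10 and 0 <= nc < 10:
                yield (nr, nc)

    path, s = [start], start
    for _ in range(max_steps):
        if s == goal:
            return path
        # Plan only around the current state: choose the neighbor that
        # minimizes edge cost (1) plus its current heuristic estimate.
        best = min(neighbors(s), key=lambda n: 1 + heuristic(n))
        # Learn: raise the current state's heuristic so the agent does
        # not cycle forever in local minima.
        h[s] = max(heuristic(s), 1 + heuristic(best))
        s = best
        path.append(s)
    return None  # step budget exhausted
```

The heuristic update is what lets the agent escape dead ends over repeated visits; deeper (multi-step) lookaheads trade more planning per step for fewer executed actions, which is exactly the planning-versus-execution trade-off discussed above.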
Recent years have witnessed significant progress in intelligent user interfaces. Emerging from the intersection of AI and human-computer interaction, research on intelligent user interfaces is experiencing a renaissance, both in the overall level of activity and in raw research achievements. Research on intelligent user interfaces exploits developments in a broad range of foundational AI work, ranging from knowledge representation and computational linguistics to planning and vision. Because intelligent user interfaces are designed to facilitate problem-solving activities in which reasoning is shared between the user and the machine, they are currently transitioning from the laboratory to applications in the workplace, home, and classroom.
This article gives an overview of current research on animated pedagogical agents at the Center for Advanced Research in Technology for Education (CARTE) at the University of Southern California/Information Sciences Institute. Animated pedagogical agents, nicknamed guidebots, interact with learners to help keep learning activities on track. They combine the pedagogical expertise of intelligent tutoring systems with the interpersonal interaction capabilities of embodied conversational characters. They can support the acquisition of team skills as well as skills performed alone by individuals. At CARTE, we have been developing guidebots that help learners acquire a variety of problem-solving skills in virtual worlds, in multimedia environments, and on the web. We are also developing technologies for creating interactive pedagogical dramas populated with guidebots and other autonomous animated characters.