If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
One particular challenge is to ground human language in a robot's internal representation of the physical world. Although copresent in a shared environment, humans and robots have mismatched capabilities in reasoning, perception, and action. A robot not only needs to incorporate collaborative effort from human partners to better connect human language to its own representation, but also needs to make extra collaborative effort to communicate its representation in language that humans can understand. This article gives a brief introduction to this research effort and discusses several collaborative approaches to grounding language to perception and action.
Policy making is an extremely complex process occurring in changing environments and affecting the three pillars of sustainable development: society, the economy, and the environment. Improving decision making in this context could have a huge beneficial impact on all these aspects. There are a number of Artificial Intelligence techniques that could play an important role in improving the policy making process, such as decision support and optimization techniques, game theory, data and opinion mining, and agent-based simulation. We outline here some potential uses of AI technology as they emerged from the European Union (EU) FP7 project ePolicy: Engineering the Policy Making Life-Cycle, and we identify some potential research challenges.
Lester, James C. (North Carolina State University) | Ha, Eun Y. (North Carolina State University) | Lee, Seung Y. (North Carolina State University) | Mott, Bradford W. (North Carolina State University) | Rowe, Jonathan P. (North Carolina State University) | Sabourin, Jennifer L. (North Carolina State University)
Intelligent game-based learning environments integrate commercial game technologies with AI methods from intelligent tutoring systems and intelligent narrative technologies. This article introduces the CRYSTAL ISLAND intelligent game-based learning environment, which has been under development in the authors' laboratory for the past seven years. After presenting CRYSTAL ISLAND, the principal technical problems of intelligent game-based learning environments are discussed: narrative-centered tutorial planning, student affect recognition, student knowledge modeling, and student goal recognition. Solutions to these problems are illustrated with research conducted with the CRYSTAL ISLAND learning environment.
Barrett, Christopher (Network Dynamics and Sim Science Lab) | Bisset, Keith (Network Dynamics and Sim Science Lab) | Leidig, Jonathan (Network Dynamics and Sim Science Lab) | Marathe, Achla (Network Dynamics and Sim Science Lab) | Marathe, Madhav V. (Network Dynamics and Sim Science Lab)
We discuss an interaction-based approach to study the coevolution between socio-technical networks, individual behaviors, and contagion processes on these networks. Models of individual behaviors are then composed with disease progression models to develop a realistic representation of the complex system in which individual behaviors and the social network adapt to the contagion. These methods are embodied within Simdemics, a general-purpose modeling environment to support pandemic planning and response. New advances in network science, machine learning, high-performance computing, data mining, and behavioral modeling were necessary to develop Simdemics.
The constructionist design methodology (CDM) -- so called because it advocates modular building blocks and incorporation of prior work -- addresses factors that we see as key to future advances in AI, including support for interdisciplinary collaboration, coordination of teams, and large-scale systems integration. We test the methodology by building an interactive multifunctional system with a real-time perception-action loop. The system, whose construction relied entirely on the methodology, consists of an embodied virtual agent that can perceive both real and virtual objects in an augmented-reality room and interact with a user through coordinated gestures and speech. Wireless tracking technologies give the agent awareness of the environment and the user's speech and communicative acts.
We argue that qualitative modeling provides a valuable way for students to learn. Two model-building environments, VMODEL and HOMER/VISIGARP, are presented that support learners in constructing conceptual models of systems and their behavior using qualitative formalisms. Both environments use diagrammatic representations to facilitate knowledge articulation. Preliminary evaluations in educational settings provide support for the hypothesis that qualitative modeling tools can be valuable aids for learning.
Most human-computer interfaces can be classified according to two dominant metaphors: (1) agent and (2) environment. In the environment metaphor, a model of the task domain is presented for the user to interact with directly. Norman's 1984 model of HCI is introduced as a reference to organize and evaluate research in human-agent interaction (HAI). A wide variety of heterogeneous research involving HAI is shown to reflect automation of one of the stages of action or evaluation within Norman's model.
The YODA Robot Project at the University of Southern California/Information Sciences Institute consists of a group of young researchers who share a passion for autonomous systems that can bootstrap their knowledge from real environments by exploration, experimentation, learning, and discovery. Our participation in the Fifth Annual AAAI Mobile Robot Competition and Exhibition, held as part of the Thirteenth National Conference on Artificial Intelligence, served as the first milestone in advancing us toward this goal. YODA's software architecture is a hierarchy of abstraction layers, ranging from a set of behaviors at the bottom layer to a dynamic, mission-oriented planner at the top. This abstraction architecture has proven robust in dynamic and noisy environments, as shown by YODA's performance at the robot competition.
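The layered abstraction architecture described above (reactive behaviors at the bottom, a mission-oriented planner at the top) can be sketched as follows. This is a minimal illustration of the general pattern, not YODA's actual code; all class names, method names, and sensor keys are assumptions introduced here for clarity.

```python
class Behavior:
    """Bottom layer: a reactive behavior maps sensor readings to an action."""
    def __init__(self, name, trigger, action):
        self.name, self.trigger, self.action = name, trigger, action

    def propose(self, sensors):
        # Fire only when this behavior's trigger condition holds.
        return self.action if self.trigger(sensors) else None

class Planner:
    """Top layer: arbitrates among behaviors in priority order."""
    def __init__(self, behaviors):
        self.behaviors = behaviors  # listed highest-priority first

    def step(self, sensors):
        # The first triggered behavior wins; otherwise the robot idles.
        for b in self.behaviors:
            action = b.propose(sensors)
            if action is not None:
                return action
        return "idle"

planner = Planner([
    Behavior("avoid", lambda s: s["obstacle"], "turn_away"),
    Behavior("seek", lambda s: s["goal_visible"], "move_to_goal"),
])
print(planner.step({"obstacle": True, "goal_visible": True}))   # turn_away
print(planner.step({"obstacle": False, "goal_visible": True}))  # move_to_goal
```

Priority-ordered arbitration of this kind is one common way a planner can sit above a behavior layer while keeping the lower layer reactive and robust to noisy sensing.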