If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Nardi, Daniele (Sapienza University of Rome) | Noda, Itsuki (National Institute of Advanced Industrial Science and Technology) | Ribeiro, Fernando (University of Minho) | Stone, Peter (University of Texas at Austin) | Stryk, Oskar von (Technische Universität Darmstadt) | Veloso, Manuela (Carnegie Mellon University)
RoboCup was created in 1996 by a group of Japanese, American, and European artificial intelligence and robotics researchers with a formidable, visionary long-term challenge: by 2050, a team of robot soccer players will beat the human World Cup champion team. In this article, we focus on RoboCup robot soccer and present its five current leagues, which address complementary scientific challenges through different robot and physical setups. Full details on the status of the RoboCup soccer leagues, including league history, past results, upcoming competitions, and detailed rules and specifications, are available from the league homepages and wikis.
MacAlpine, Patrick (University of Texas at Austin) | Barrett, Samuel (University of Texas at Austin) | Urieli, Daniel (University of Texas at Austin) | Vu, Victor (University of Texas at Austin) | Stone, Peter (University of Texas at Austin)
This paper presents the design and learning architecture for an omnidirectional walk used by a humanoid robot soccer agent acting in the RoboCup 3D simulation environment. The walk, which was originally designed for and tested on an actual Nao robot before being employed in the 2011 RoboCup 3D simulation competition, was the crucial component in the UT Austin Villa team winning the competition in 2011. To the best of our knowledge, this is the first time that robot behavior has been conceived and constructed on a real robot for the end purpose of being used in simulation. The walk is based on a double linear inverted pendulum model, and multiple sets of its parameters are optimized via a novel framework. The framework optimizes parameters for different tasks in conjunction with one another, a little-understood problem with substantial practical significance. Detailed experiments show that the UT Austin Villa agent significantly outperforms all the other agents in the competition with the optimized walk being the key to its success.
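The idea of optimizing parameter sets for related tasks in conjunction, rather than independently, can be sketched in miniature. This is only an illustration, not the UT Austin Villa framework itself: the hill-climbing search, fitness surfaces, and all names below are invented, and stand in for running the walk engine on a task and measuring performance.

```python
import random

def hill_climb(evaluate, params, iters=300, step=0.1, seed=0):
    """Minimal stochastic hill climbing over a real-valued parameter
    vector; `evaluate` returns a fitness (higher is better) and stands
    in for evaluating a walk on one task."""
    rng = random.Random(seed)
    best, best_fit = list(params), evaluate(params)
    for _ in range(iters):
        cand = [p + rng.gauss(0, step) for p in best]
        fit = evaluate(cand)
        if fit > best_fit:
            best, best_fit = cand, fit
    return best, best_fit

# Two toy "tasks" with nearby optima (invented fitness surfaces).
task_a = lambda p: -sum((x - 1.0) ** 2 for x in p)
task_b = lambda p: -sum((x - 1.2) ** 2 for x in p)

params_a, fit_a = hill_climb(task_a, [0.0, 0.0])
# Optimizing in conjunction: task B's search is seeded with task A's
# optimized parameters instead of starting from scratch.
params_b, fit_b = hill_climb(task_b, params_a)
```

Because the tasks' optima are close, the warm-started search begins near a good region, which is the intuition behind coupling the optimizations.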
Looking ahead to the time when autonomous cars will be common, Dresner and Stone proposed a multiagent systems-based intersection control protocol called Autonomous Intersection Management (AIM). They showed that by leveraging the capabilities of autonomous vehicles it is possible to dramatically reduce the time wasted in traffic, and therefore also fuel consumption and air pollution. The proposed protocol, however, handles reservation requests one at a time and does not order them according to their relative priorities and waiting times, which can cause large inequities in granting reservations. For example, at an intersection between a main street and an alley, vehicles from the alley can take an excessively long time to get reservations to enter the intersection, wasting time and fuel. The same is true in a network of intersections, where gridlock may occur and cause traffic congestion. In this paper, we introduce the batch processing of reservations in AIM to enforce liveness properties at intersections, and we analyze the conditions under which no vehicle will get stuck in traffic. Our experimental results show that our prioritizing schemes outperform previous intersection control protocols under unbalanced traffic.
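The starvation-avoidance idea behind batch processing can be sketched as follows. The ordering rule, threshold value, and field names below are invented for illustration and are not part of the AIM protocol: a request that has waited long enough is promoted ahead of road priority, so low-priority (alley) traffic cannot be starved indefinitely.

```python
from dataclasses import dataclass

@dataclass
class Request:
    vehicle: str
    road_priority: int   # lower value = higher-priority road
    arrival_time: int

def order_batch(requests, now, threshold=5):
    """Order one batch of reservation requests.

    Hypothetical rule: requests that have waited at least `threshold`
    ticks are promoted to the front, regardless of road priority;
    ties are then broken by road priority, then by arrival time."""
    def key(r):
        promoted = 0 if now - r.arrival_time >= threshold else 1
        return (promoted, r.road_priority, r.arrival_time)
    return sorted(requests, key=key)

batch = [
    Request("main-1", 0, 9),
    Request("alley-1", 1, 2),   # has waited 8 ticks: promoted
    Request("main-2", 0, 8),
]
order = [r.vehicle for r in order_batch(batch, now=10)]
# → ["alley-1", "main-2", "main-1"]
```

Because the wait-time bound is explicit, every request is eventually promoted and granted, which is the liveness property the abstract describes.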
Lin, Raz (Bar-Ilan University) | Kraus, Sarit (Bar-Ilan University) | Agmon, Noa (The University of Texas at Austin) | Barrett, Samuel (The University of Texas at Austin) | Stone, Peter (The University of Texas at Austin)
The interaction of people with autonomous agents has become increasingly prevalent. Some of these settings include security domains, where people can be characterized as uncooperative, hostile, manipulative, and tending to take advantage of the situation for their own needs. This makes it challenging to design proficient agents to interact with people in such environments. Evaluating the success of the agents automatically, before evaluating them with people or deploying them, could alleviate this challenge and result in better-designed agents. In this paper we show how Peer Designed Agents (PDAs) -- computer agents developed by human subjects -- can be used as a method for evaluating autonomous agents in security domains. Such evaluation can reduce the effort and costs involved in evaluating autonomous agents interacting with people to validate their efficacy. Our experiments included more than 70 human subjects and 40 PDAs developed by students. The study provides empirical support that PDAs can be used to compare the proficiency of autonomous agents when matched with people in security domains.
The problem of multiagent patrol has gained considerable attention during the past decade, with the immediate applicability of the problem being one of its main sources of interest. In this paper we concentrate on frequency-based patrol, in which the agents' goal is to optimize a frequency criterion, namely, minimizing the time between visits to a set of interest points. We consider multiagent patrol in environments with complex environmental conditions that affect the cost of traveling from one point to another. For example, in marine environments, the travel time of ships depends on parameters such as wind, water currents, and waves. We demonstrate that such environments call for a new multiagent patrol strategy, one that divides the given area into parts in which more than one agent is active, in order to improve frequency. We show that in general graphs this problem is intractable; we therefore focus on simplified (yet realistic) cyclic graphs with possible inner edges. Although the problem remains generally intractable in such graphs, we provide a heuristic algorithm that is shown to significantly improve point-visit frequency compared to other patrol strategies. To evaluate our work we used a custom-developed ship simulator that realistically models ship movement constraints such as engine force, drag, and the ship's reaction to environmental changes.
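The frequency criterion itself is easy to state concretely. A minimal sketch, under the simplifying assumption (not the paper's algorithm) that agents traverse a patrol cycle in the same direction, evenly spaced in travel time; the edge costs can encode environmental effects such as currents on each leg.

```python
def revisit_interval(edge_costs, n_agents):
    """Worst-case time between visits to any point on a patrol cycle,
    assuming agents move in one direction and are evenly spaced in
    travel time: each point is revisited every total / n_agents."""
    return sum(edge_costs) / n_agents

# Four legs of a cycle with unequal travel costs (e.g., due to
# currents slowing one leg and speeding another).
costs = [4.0, 6.0, 2.0, 8.0]
single = revisit_interval(costs, 1)   # one agent: full-loop time, 20.0
team = revisit_interval(costs, 4)     # four agents share the loop, 5.0
```

Nonuniform costs are what make the real problem hard: spacing agents evenly in travel time is not the same as spacing them evenly in distance, and inner edges break the simple cycle structure entirely.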
Transfer learning has recently gained popularity due to the development of algorithms that can successfully generalize information across multiple tasks. This article focuses on transfer in the context of reinforcement learning domains, a general learning framework where an agent acts in an environment to maximize a reward signal. The goals of this article are to (1) familiarize readers with the transfer learning problem in reinforcement learning domains, (2) explain why the problem is both interesting and difficult, (3) present a selection of existing techniques that demonstrate different solutions, and (4) provide representative open problems in the hope of encouraging additional research in this exciting area.
Karlgren, Jussi, Kanerva, Pentti, Gambäck, Björn, Forbus, Kenneth D., Tumer, Kagan, Stone, Peter, Goebel, Kai, Sukhatme, Gaurav S., Balch, Tucker, Fischer, Bernd, Smith, Doug, Harabagiu, Sanda, Chaudhri, Vinay, Barley, Mike, Guesgen, Hans, Stahovich, Thomas, Davis, Randall, Landay, James
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2002 Spring Symposium Series, held Monday through Wednesday, 25 to 27 March 2002, at Stanford University. The nine symposia were entitled (1) Acquiring (and Using) Linguistic (and World) Knowledge for Information Access; (2) Artificial Intelligence and Interactive Entertainment; (3) Collaborative Learning Agents; (4) Information Refinement and Revision for Decision Making: Modeling for Diagnostics, Prognostics, and Prediction; (5) Intelligent Distributed and Embedded Systems; (6) Logic-Based Program Synthesis: State of the Art and Future Trends; (7) Mining Answers from Texts and Knowledge Bases; (8) Safe Learning Agents; and (9) Sketch Understanding.
Veloso, Manuela M., Balch, Tucker, Stone, Peter, Kitano, Hiroaki, Yamasaki, Fuminori, Endo, Ken, Asada, Minoru, Jamzad, M., Sadjad, B. S., Mirrokni, V. S., Kazemi, M., Chitsaz, H., Heydarnoori, A., Hajiaghai, M. T., Chiniforooshan, E.