If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
This paper develops a path planner that minimizes risk (e.g., the risk of motion execution) while maximizing accumulated reward (e.g., quality of the sensor viewpoint), motivated by visual assistance and tracking scenarios in unstructured or confined environments. In these scenarios, the robot should maintain the best viewpoint as it moves to the goal. However, in unstructured or confined environments, some paths may increase the risk of collision; there is therefore a trade-off between risk and reward. Conventional approaches either use state-dependent risk, which does not capture path-level risk, or rely on probabilistic uncertainty models that are difficult to acquire. This risk-reward planner explicitly represents risk as a function of motion plans, i.e., paths. Without manual assignment of the negative impact that risk imposes on the planner, the planner takes in a pre-established viewpoint quality map and simultaneously plans a target location and the path leading to it, maximizing the overall reward along the entire path while minimizing risk. Exact and approximate algorithms are presented, and their solutions are further demonstrated on a physical tethered aerial vehicle. Beyond the visual assistance problem, the proposed framework also provides a new planning paradigm for minimum-risk planning under dynamic risk and in the absence of optimal substructure, and for balancing the trade-off between reward and risk.
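The joint goal-and-path optimization described above can be illustrated on a toy problem. The sketch below is not the paper's actual algorithm: the reward map, the path-level risk function, and the weight `lam` are all illustrative assumptions. It shows why path-level risk breaks optimal substructure (a path's score cannot be decomposed into independent per-state terms), so the exact solver here simply enumerates all simple paths, which is feasible only on tiny maps.

```python
# Toy risk-reward planning sketch (illustrative, not the paper's method).
# On a small grid, enumerate simple paths from a start cell, score each by
# accumulated viewpoint reward minus a weighted path-level risk term, and
# jointly pick the best target location and the path leading to it.

REWARD = {          # hypothetical viewpoint-quality map: cell -> reward
    (0, 0): 1, (0, 1): 2, (1, 0): 1, (1, 1): 5,
}
OBSTACLE_NEAR = {(1, 0)}   # cells assumed to raise collision risk

def neighbors(cell):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nxt = (x + dx, y + dy)
        if nxt in REWARD:
            yield nxt

def risk(path):
    # Path-level risk: grows with path length and with proximity to
    # obstacles, so it depends on the whole path, not on single states.
    return 0.5 * (len(path) - 1) + sum(2 for c in path if c in OBSTACLE_NEAR)

def best_plan(start, lam=1.0):
    # Exact search: enumerate all simple paths starting at `start`.
    best_path, best_score = None, float("-inf")
    stack = [[start]]
    while stack:
        path = stack.pop()
        score = sum(REWARD[c] for c in path) - lam * risk(path)
        if score > best_score:
            best_path, best_score = path, score
        for nxt in neighbors(path[-1]):
            if nxt not in path:
                stack.append(path + [nxt])
    return best_path, best_score

path, score = best_plan((0, 0))
```

On this map the planner chooses the short detour through the high-reward cell (1, 1) while avoiding the risky cell (1, 0), showing that the target location and path fall out of one joint optimization rather than being fixed in advance.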
Henkel, Zachary (Texas A&M University) | Groom, Victoria (Stanford University) | Srinivasan, Vasant (Texas A&M University) | Murphy, Robin (Texas A&M University) | Nass, Cliff (Stanford University)
As part of the “Survivor Buddy” project, we have created an open source speech translator toolkit which allows written or spoken word from multiple independent controllers to be translated into either a single synthetic voice, synthetic voices for each controller, or the unchanged natural voice of each controller. The human controllers can work via the internet or be physically co-located with the Survivor Buddy robot. The toolkit is expected to be of use for exploring voice in general human-robot interaction.
Groom, Victoria (Stanford University) | Srinivasan, Vasant (Texas A&M University) | Nass, Clifford (Stanford University) | Murphy, Robin (Texas A&M University) | Bethel, Cindy (Yale University)
Robots are being considered for applications where they serve as proxies for humans interacting with another human, such as emergency response, hostage negotiation, and healthcare. In these domains, the human (“dependent”) is connected to multiple other humans (“controllers”) via the robot proxy for long periods of time. The dependent may want to interact with humans but also to engage the robot as a medium to the World Wide Web. In the future, medical personnel may use the robot for victim assistance and comfort while the rescue team plans and monitors extrication. Other applications include healthcare, where the robot is the link between a patient and a medical provider for intermittent, routine interactions, and hostage negotiation, where police may use a bomb squad robot to talk with and build rapport with the suspect while the SWAT team uses the robot’s sensors to build and maintain situation awareness. Under funding from the National Science Foundation, we are finishing the first year of investigating verbal and nonverbal communication strategies for robots that serve as proxies for multiple humans interacting with the humans who are dependent on them. Our work posits that such a robot would occupy a novel social medium position according to the Computers as Social Actors (CASA) model [Nass, Steuer, and Tauber 1994] [Reeves and Nass 1996]. Given that teleoperated robots are treated socially, it is unlikely that a rescue robot would be treated as a pure medium even if playing music or videos. Likewise, the limitations of autonomy and the interactions of specialists with the dependent prevent the robot from being a true social actor.
Instead, social actor and pure medium are two extremes on the agent identity spectrum, with a social medium occupying a middle position. A social medium would be perceived as a loyal, helpful “go-between” who is an advocate for the dependent, rather than a device for accomplishing the goals of multiple controllers (medical specialist, structural engineer, rescue operations official, etc.). To explore the social medium identity, we have built a physical prototype of a Survivor Buddy and are creating autonomous affective behaviors and a social medium toolkit to explore human-robot interaction.
The RoboCup Rescue Physical Agent League Competition was held in the summer of 2001 in conjunction with the AAAI Mobile Robot Competition Urban Search and Rescue event, eerily preceding the September 11 World Trade Center (WTC) disaster. Four teams responded to the WTC disaster through the auspices of the Center for Robot-Assisted Search and Rescue (CRASAR), directed by John Blitch. The four teams were Foster-Miller and iRobot (both robot manufacturers from the Boston area), the United States Navy's Space Warfare Center (SPAWAR) group from San Diego, and the University of South Florida (USF). Blitch, through his position as program manager for the Defense Advanced Research Projects Agency (DARPA) Tactical Mobile Robots Program, was a supporter of the competition; he also served as a member of the rules committee and a judge. USF participated by chairing the rules committee, judging, assisting with the logistics, providing commentary, and demonstrating tethered and wireless robots whenever entrants had to skip around during the competition. Based on our experiences and history, we were asked to comment on the validity of the competition. The CRASAR collective experience suggests that most of the basic rules of the competition matched reality because the rules accurately reflected deployment scenarios, but the National Institute of Standards and Technology (NIST) Standard Test Course, and hardware or software approaches forwarded by competitors in last summer's event, missed the mark. This article briefly reviews the types of robots and missions used by CRASAR at the WTC site, then discusses the robot-assisted search and rescue effort in terms of lessons for the competition.