Making an Impact: Artificial Intelligence at the Jet Propulsion Laboratory
Chien, Steve, DeCoste, Dennis, Doyle, Richard, Stolorz, Paul
The National Aeronautics and Space Administration (NASA) is being challenged to perform more frequent and intensive space-exploration missions at greatly reduced cost. Nowhere is this challenge more acute than among robotic planetary exploration missions that the Jet Propulsion Laboratory (JPL) conducts for NASA. This article describes recent and ongoing work on spacecraft autonomy and ground systems that builds on a legacy of existing success at JPL applying AI techniques to challenging computational problems in planning and scheduling, real-time monitoring and control, scientific data analysis, and design automation.
AAAI News
Ballots will be due back at the AAAI office no later than June 13. The Conference on Innovative Applications of Artificial Intelligence (IAAI-97) and the Conference on Knowledge Discovery and Data Mining (KDD-97), which follows the American Statistical Association annual meeting in Anaheim, are upcoming; registration materials for AAAI-97, IAAI-97, and KDD-97 are now available. The Fall Symposium Series will be held November 8-10 at the Massachusetts Institute of Technology, and the topics of its seven symposia include Context in Knowledge Representation and Natural Language (Sasa Buvac, buvac@cs.stanford.edu). Student scholarships are awarded first to students who have an accepted technical paper and then to students who are actively participating in the conference; all are encouraged to apply, and all recipients will be required to take part in the Student Volunteer Program, whose support of the conference is a valuable contribution. For further information about the Scholarship Program, please contact AAAI at scholarships@aaai.org.
Yoda: The Young Observant Discovery Agent
Shen, Wei-Min, Adibi, Jafar, Cho, Bongham, Kaminka, Gal, Kim, Jihie, Salemi, Behnam, Tejada, Sheila
The YODA Robot Project at the University of Southern California/Information Sciences Institute consists of a group of young researchers who share a passion for autonomous systems that can bootstrap their knowledge from real environments by exploration, experimentation, learning, and discovery. Our goal is to create a mobile agent that can autonomously learn from its environment based on its own actions, percepts, and missions. Our participation in the Fifth Annual AAAI Mobile Robot Competition and Exhibition, held as part of the Thirteenth National Conference on Artificial Intelligence, served as the first milestone in advancing us toward this goal. YODA's software architecture is a hierarchy of abstraction layers, ranging from a set of behaviors at the bottom layer to a dynamic, mission-oriented planner at the top. The planner uses a map of the environment to determine a sequence of goals to be accomplished by the robot and delegates detailed execution to the set of behaviors at the lower layer. This abstraction architecture has proven robust in dynamic and noisy environments, as shown by YODA's performance at the robot competition.
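As a rough illustration of the layered organization described above (a minimal sketch, not YODA's actual software; all class and method names here are hypothetical), a top-level planner can use a map to order goals and delegate each goal to lower-level behaviors:

    class Behavior:
        """Bottom-layer behavior: handles one low-level competence (e.g., obstacle avoidance)."""
        def __init__(self, name):
            self.name = name

        def applicable(self, goal, percepts):
            return True  # placeholder: decide whether this behavior can serve the goal

        def step(self, goal, percepts):
            return f"{self.name}: acting toward {goal}"  # placeholder motor command

    class MissionPlanner:
        """Top-layer planner: orders goals using a map, then delegates execution."""
        def __init__(self, world_map, behaviors):
            self.world_map = world_map
            self.behaviors = behaviors

        def plan(self, mission):
            # Order the mission's goals using the map (here, trivially, in map order).
            return [g for g in self.world_map if g in mission]

        def execute(self, mission, get_percepts):
            for goal in self.plan(mission):
                percepts = get_percepts()
                behavior = next(b for b in self.behaviors if b.applicable(goal, percepts))
                print(behavior.step(goal, percepts))

    # Hypothetical usage: rooms in map order, two behaviors, a stubbed percept source.
    planner = MissionPlanner(
        world_map=["hallway", "room-a", "room-b"],
        behaviors=[Behavior("navigate"), Behavior("avoid-obstacles")],
    )
    planner.execute(mission={"room-a", "room-b"}, get_percepts=lambda: {"sonar": []})

The point of the layering is that the planner reasons only about which goal comes next, while each behavior handles its own low-level responsibility.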
Kansas State's Slick Willie Robot Software
The team's robot software was nicknamed Slick Willie. The project was to develop software on the Nomad 200 robot for tasks such as maze following, office delivery, and office navigation, and the software competed in the competition's Office Navigation event. In both the second and the final rounds, the software completed the task perfectly: it found a route from the director's office to the conference rooms, directed the robot to each of the two conference rooms, and correctly determined which conference room was not occupied. The robot is equipped with two sonar rings of 16 sonars each and with two charge-coupled device (CCD) cameras; it has an on-board 486 processor with a hard drive and 16 megabytes of memory. The behaviors at the bottom level handled low-level responsibilities, such as avoiding obstacles and not hitting walls, but did not need to know about the overall strategy for solving the task.
A Retrospective of the AAAI Robot Competitions
Bonasso, R. Peter, Dean, Thomas
This article is the content of an invited talk given by the authors at the Thirteenth National Conference on Artificial Intelligence (AAAI-96). The piece begins with a short history of the competition, then discusses the technical challenges and the political and cultural issues associated with bringing it off every year. We also cover the science and engineering involved with the robot tasks and the educational and commercial aspects of the competition. We finish with a discussion of the community formed by the organizers, participants, and the conference attendees. The original talk made liberal use of video clips and slide photographs, so we have expanded the text and added photographs to make up for the lack of such media.
Strong AI Is Simply Silly
That Strong AI is still alive may have a lot to do with its avoidance of true tests. The author takes up the Simon Newcomb Award and the charge that his argument against Strong AI (of which he remains fond) is silly. His reply runs as follows: if P is a precise disproof of a proposition p, then p's defenders cannot rebut P merely by, upon hearing it, classifying P too as silly in defense of their beloved thesis. It is Strong AI, he concludes, that is simply silly.
Dynamic Object Capture Using Fast Vision Tracking
Sargent, Randy, Bailey, Bill, Witty, Carl, Wright, Anne
This article discusses the use of fast (60 frames per second) object tracking using the COGNACHROME VISION SYSTEM, produced by Newton Research Labs. The authors embedded the vision system in a small robot base to tie for first place in the Clean Up the Tennis Court event at the 1996 Annual AAAI Mobile Robot Competition and Exhibition, held as part of the Thirteenth National Conference on Artificial Intelligence. Of particular interest is that the authors' entry was the only robot capable of using a gripper to capture and pick up the motorized, randomly moving squiggle ball. Other examples of robotic systems using fast vision tracking are also presented, such as a robot arm capable of catching thrown objects and the soccer-playing robot team that won the 1996 Micro Robot World Cup Soccer Tournament in Taejon, Korea.
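As a generic sketch of how high-rate blob tracking can drive a capture maneuver (this is not the COGNACHROME interface or the authors' code; the callback names, image-center constant, and thresholds are hypothetical), a 60-hertz control loop might steer toward the tracked blob and close the gripper once the blob is large and centered:

    import time

    FRAME_PERIOD = 1.0 / 60.0          # the tracker reports blob positions at 60 frames per second
    CENTER_X, CLOSE_AREA = 160, 2500   # hypothetical image center (pixels) and "near" blob area

    def track_and_capture(get_largest_blob, set_wheel_speeds, close_gripper):
        """Simple proportional pursuit: steer toward the blob, grab it when it fills the view."""
        while True:
            blob = get_largest_blob()          # e.g., (x_pixel, area) of the tracked ball, or None
            if blob is None:
                set_wheel_speeds(0.2, -0.2)    # spin in place to reacquire the target
            else:
                x, area = blob
                if area > CLOSE_AREA and abs(x - CENTER_X) < 20:
                    set_wheel_speeds(0.0, 0.0)
                    close_gripper()            # ball is centered and near: capture it
                    return
                turn = 0.002 * (CENTER_X - x)  # proportional steering toward the blob
                set_wheel_speeds(0.3 + turn, 0.3 - turn)
            time.sleep(FRAME_PERIOD)           # run the control loop at the tracker's frame rate

Running the control loop at the tracker's full frame rate is what makes it feasible to chase a randomly moving target rather than merely approach a static one.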
Improved Heterogeneous Distance Functions
Wilson, D. R., Martinez, T. R.
Instance-based learning techniques typically handle continuous and linear input values well, but often do not handle nominal input attributes appropriately. The Value Difference Metric (VDM) was designed to find reasonable distance values between nominal attribute values, but it largely ignores continuous attributes, requiring discretization to map continuous values into nominal values. This paper proposes three new heterogeneous distance functions, called the Heterogeneous Value Difference Metric (HVDM), the Interpolated Value Difference Metric (IVDM), and the Windowed Value Difference Metric (WVDM). These new distance functions are designed to handle applications with nominal attributes, continuous attributes, or both. In experiments on 48 applications, the new distance metrics achieve higher average classification accuracy than three previous distance functions on those datasets that have both nominal and continuous attributes.
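As a concrete illustration of the heterogeneous idea (a minimal Python sketch following the general HVDM recipe, not the authors' implementation; the normalization by four standard deviations, the clipping to [0, 1], and the handling of unseen values are assumptions of the sketch), continuous attributes can be compared by a normalized absolute difference and nominal attributes by a normalized VDM computed from class-conditional value frequencies:

    import math
    from collections import defaultdict

    class HVDMSketch:
        """HVDM-style heterogeneous distance over mixed nominal/continuous attributes."""

        def __init__(self, X, y, nominal):
            self.nominal = set(nominal)        # indices of nominal attributes
            self.classes = sorted(set(y))
            n_attrs = len(X[0])

            # Standard deviation of each continuous attribute, used for normalization.
            self.sigma = {}
            for a in range(n_attrs):
                if a in self.nominal:
                    continue
                vals = [row[a] for row in X]
                mean = sum(vals) / len(vals)
                self.sigma[a] = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals)) or 1.0

            # Class-conditional value counts for each nominal attribute (for the VDM part).
            self.counts = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))  # [a][value][class]
            self.totals = defaultdict(lambda: defaultdict(int))                       # [a][value]
            for row, label in zip(X, y):
                for a in self.nominal:
                    self.counts[a][row[a]][label] += 1
                    self.totals[a][row[a]] += 1

        def _d_nominal(self, a, u, v):
            # Normalized VDM: compare class-conditional probabilities of the two values.
            total = 0.0
            for c in self.classes:
                pu = self.counts[a][u][c] / self.totals[a][u] if self.totals[a][u] else 0.0
                pv = self.counts[a][v][c] / self.totals[a][v] if self.totals[a][v] else 0.0
                total += (pu - pv) ** 2
            return math.sqrt(total)

        def _d_continuous(self, a, u, v):
            # Absolute difference normalized by four standard deviations, clipped to [0, 1].
            return min(abs(u - v) / (4.0 * self.sigma[a]), 1.0)

        def distance(self, x, z):
            total = 0.0
            for a, (u, v) in enumerate(zip(x, z)):
                d = self._d_nominal(a, u, v) if a in self.nominal else self._d_continuous(a, u, v)
                total += d * d
            return math.sqrt(total)

Built from training data X, class labels y, and the set of nominal attribute indices, such a metric can then be dropped into a nearest-neighbor classifier, which is the intended setting for these distance functions.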