ALVINN: An Autonomous Land Vehicle in a Neural Network
ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.
- Transportation > Ground > Road (0.67)
- Automobiles & Trucks (0.67)
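The abstract above describes ALVINN's core design: a 3-layer back-propagation network that maps a sensor image to a preferred steering direction, trained on simulated road images. A minimal sketch of that idea follows; the layer sizes, the toy road generator, and the discrete steering bins are illustrative assumptions, not the original system (which used a 30x32 video retina and a larger output layer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy-scale stand-ins for ALVINN's layers: an 8x8 "retina" input,
# a small hidden layer, and one output unit per steering direction.
N_IN, N_HID, N_OUT = 64, 5, 8

W1 = rng.normal(0.0, 0.1, (N_HID, N_IN))
W2 = rng.normal(0.0, 0.1, (N_OUT, N_HID))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Forward pass through the 3-layer network."""
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    return h, y

def steering_direction(y):
    """Read the most active output unit as a discrete steering choice."""
    return int(np.argmax(y))

def backprop_step(x, target, lr=0.5):
    """One stochastic back-propagation step on squared error."""
    global W1, W2
    h, y = forward(x)
    delta2 = (y - target) * y * (1.0 - y)      # output-layer error signal
    delta1 = (W2.T @ delta2) * h * (1.0 - h)   # back-propagated to hidden
    W2 -= lr * np.outer(delta2, h)
    W1 -= lr * np.outer(delta1, x)

def simulated_road(col):
    """Crude simulated road image: a bright vertical band at `col`."""
    img = np.zeros((8, 8))
    img[:, col] = 1.0
    return img.reshape(-1)

# Train on simulated roads: a road at column c maps to steering bin c.
for _ in range(20000):
    c = int(rng.integers(0, 8))
    target = np.zeros(N_OUT)
    target[c] = 1.0
    backprop_step(simulated_road(c), target)
```

After training, a forward pass on a simulated road image yields a steering bin matching the road's position, mirroring (in miniature) the camera-in, direction-out loop the abstract describes.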
Rapidly Adapting Artificial Neural Networks for Autonomous Navigation
The ALVINN (Autonomous Land Vehicle In a Neural Network) project addresses the problem of training artificial neural networks in real time to perform difficult perception tasks. ALVINN is a back-propagation network that uses inputs from a video camera and an imaging laser rangefinder to drive the CMU Navlab, a modified Chevy van. This paper describes training techniques that allow ALVINN to learn in under 5 minutes to autonomously control the Navlab by watching a human driver's response to new situations. Using these techniques, ALVINN has been trained to drive in a variety of circumstances including single-lane paved and unpaved roads, multilane lined and unlined roads, and obstacle-ridden on- and off-road environments, at speeds of up to 20 miles per hour.
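Learning "by watching a human driver's response" amounts to online supervised learning: each sensor frame observed while the human drives is paired with the human's steering command and used immediately as a training example. A deliberately simplified sketch, using a linear model and a synthetic teacher; the feature encoding, the proportional steering rule, and all names here are assumptions for illustration, not the ALVINN code:

```python
import random

class OnlineSteeringModel:
    """Tiny linear model trained online (LMS rule) while a teacher drives.

    An illustrative stand-in for on-the-fly training, not the original
    network: one weight per input feature, nudged a little after every
    observed human steering command.
    """

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def update(self, x, human_steering):
        # Nudge weights toward reproducing the human's command.
        err = self.predict(x) - human_steering
        for i, xi in enumerate(x):
            self.w[i] -= self.lr * err * xi

def road_features(offset):
    # Fake "sensor frame": the lateral road offset plus a bias term.
    return [offset, 1.0]

def human_command(offset):
    # The hypothetical teacher steers back toward the road centre.
    return -0.5 * offset

rng = random.Random(0)
model = OnlineSteeringModel(n_features=2)

# Watch the teacher for a few hundred frames, learning after each one.
for _ in range(500):
    offset = rng.uniform(-1.0, 1.0)
    model.update(road_features(offset), human_command(offset))
```

The per-frame update is what makes the "learn in under 5 minutes" claim plausible: no separate offline training phase is needed, only a stream of (frame, command) pairs gathered while the human drives.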
Life in the Fast Lane
Giving robots the ability to operate in the real world has been, and continues to be, one of the most difficult tasks in AI research. Since 1987, researchers at Carnegie Mellon University have been investigating one such task. Their research has been focused on using adaptive, vision-based systems to increase the driving performance of the Navlab line of on-road mobile robots. This research has led to the development of a neural network system that can learn to drive on many road types simply by watching a human teacher. This article describes the evolution of this system from a research project in machine learning to a robust driving system capable of executing tactical driving maneuvers such as lane changing and intersection navigation.
- Transportation > Ground > Road (1.00)
- Information Technology (1.00)
Carnegie Mellon's 1986 Self-Driving Van Was Adorable
Computer scientists have been at the self-driving vehicle problem for longer than you might think. Early research into the automated logic required for autonomous cars was published in the mid-1970s, while the first fully robotic van came around in the early 1980s courtesy of Ernst Dickmanns and his team at Bundeswehr University Munich. Efforts at Carnegie Mellon, meanwhile, were pushing the technology on the other side of the Atlantic. Then came NavLab, in 1986. Yeah, it's pretty quaint, but machine vision algorithms, in particular, were still young.
Life in the Fast Lane: The Evolution of an Adaptive Vehicle Control System
Giving robots the ability to operate in the real world has been, and continues to be, one of the most difficult tasks in AI research. Since 1987, researchers at Carnegie Mellon University have been investigating one such task. Their research has been focused on using adaptive, vision-based systems to increase the driving performance of the Navlab line of on-road mobile robots. This research has led to the development of a neural network system that can learn to drive on many road types simply by watching a human teacher. This article describes the evolution of this system from a research project in machine learning to a robust driving system capable of executing tactical driving maneuvers such as lane changing and intersection navigation.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > United States > Michigan > Wayne County > Detroit (0.04)
- North America > United States > Maryland (0.04)
- (8 more...)
- Transportation > Infrastructure & Services (1.00)
- Transportation > Ground > Road (1.00)
- Government > Regional Government > North America Government > United States Government (0.67)
Rapidly Adapting Artificial Neural Networks for Autonomous Navigation
Dean A. Pomerleau, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. Abstract: The ALVINN (Autonomous Land Vehicle In a Neural Network) project addresses the problem of training artificial neural networks in real time to perform difficult perception tasks. ALVINN is a back-propagation network that uses inputs from a video camera and an imaging laser rangefinder to drive the CMU Navlab, a modified Chevy van. This paper describes training techniques that allow ALVINN to learn in under 5 minutes to autonomously control the Navlab by watching a human driver's response to new situations. Using these techniques, ALVINN has been trained to drive in a variety of circumstances including single-lane paved and unpaved roads, multilane lined and unlined roads, and obstacle-ridden on- and off-road environments, at speeds of up to 20 miles per hour. 1 INTRODUCTION: Previous trainable connectionist perception systems have often ignored important aspects of the form and content of available sensor data. Because of the assumed impracticality of training networks to perform realistic high level perception tasks, connectionist researchers have frequently restricted their task domains to either toy problems (e.g. the TC identification problem [11] [6]) or fixed low level operations (e.g.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.24)
- North America > United States > Massachusetts > Suffolk County > Boston (0.05)
- North America > United States > California > San Mateo County > San Mateo (0.05)
- (3 more...)
ALVINN: An Autonomous Land Vehicle in a Neural Network
ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.
- North America > United States > Massachusetts > Hampshire County > Amherst (0.14)
- North America > United States > California > San Diego County > San Diego (0.05)
- Asia > Middle East > Jordan (0.05)
- (4 more...)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.94)
- Automobiles & Trucks (0.72)
- Transportation > Ground > Road (0.62)