Life in the Fast Lane

AI Magazine

Giving robots the ability to operate in the real world has been, and continues to be, one of the most difficult tasks in AI research. Since 1987, researchers at Carnegie Mellon University have been investigating one such task. Their research has been focused on using adaptive, vision-based systems to increase the driving performance of the Navlab line of on-road mobile robots. This research has led to the development of a neural network system that can learn to drive on many road types simply by watching a human teacher. This article describes the evolution of this system from a research project in machine learning to a robust driving system capable of executing tactical driving maneuvers such as lane changing and intersection navigation.


Self-driving Audis of 2020 will be powered by Nvidia artificial intelligence

#artificialintelligence

Audi and Nvidia have been collaborating for some time, but at CES 2017, the companies made their biggest joint announcement yet. Using artificial intelligence and deep learning technology, the companies will bring fully automated driving to the roads by 2020. To achieve this, Audi will leverage Nvidia's expertise in artificial intelligence, the fruits of which are already being shown at CES. Audi's Q7 Piloted Driving Concept is fitted with Nvidia's Drive PX 2 processor and after only four days of "training," the vehicle is already driving itself over a complex road course. This is due to the Drive PX 2's incredible ability to learn on the go, which is a far cry from the first driverless cars that needed pre-mapped routes to function properly. "Nvidia is pioneering the use of deep learning AI to revolutionize transportation," Nvidia CEO Jen-Hsun Huang said.


After Mastering Singapore's Streets, NuTonomy's Robo-taxis Are Poised to Take on New Cities

IEEE Spectrum Robotics Channel

Take a short walk through Singapore's city center and you'll cross a helical bridge modeled on the structure of DNA, pass a science museum shaped like a lotus flower, and end up in a towering grove of artificial Supertrees that pulse with light and sound. It's no surprise, then, that this is the first city to host a fleet of autonomous taxis. Since last April, robo-taxis have been exploring the 6 kilometers of roads that make up Singapore's One-North technology business district, and people here have become used to hailing them through a ride-sharing app. Maybe that's why I'm the only person who seems curious when one of the vehicles--a slightly modified Renault Zoe electric car--pulls up outside of a Starbucks. Seated inside the car are an engineer, a safety driver, and Doug Parker, chief operating officer of nuTonomy, the MIT spinout that's behind the project.


How Intelligent is Artificial Intelligence?

#artificialintelligence

There is no question that the portability and omnipresence of cameras in today's society have improved driver safety -- video of a vehicle crash helps people find out specifically what went wrong. But what if you could build artificial intelligence into those camera systems and predict problems on the road in time to prevent disaster? San Diego, California-based Netradyne has developed technology designed to do just that, integrating cameras and deep learning in Driver-i, a vision-based system mounted in or on commercial vehicles to predict and prevent accidents in the commercial transportation industry. According to Pandya, the age of machines controlling humans is far off.


Nexar Joins Berkeley DeepDrive Consortium to Shape the Future Of Driving

#artificialintelligence

Alongside fellow industry leaders participating in the BDD Consortium, the Nexar team will apply its rapidly expanding data network and industry know-how to infuse state-of-the-art deep learning techniques into the pursuit of an optimal, safer driving experience. Backed by the BDD Consortium's advanced research, Nexar can better analyze and understand the data its network collects, ultimately gaining a sharper perception of driving and road conditions. With this information, Nexar further develops its vehicle-to-vehicle network, essential for the future of both autonomous and human-driven cars. The BDD Industry Consortium investigates state-of-the-art technologies in computer vision and machine learning for automotive applications.


Israeli company developing system to allow cars to learn how to drive through experience

#artificialintelligence

Mobileye has been in the news of late for another reason: its system was the one in use by the Tesla vehicle involved in a recent car crash in Florida, an incident still under investigation by the NHTSA. Cofounder Amnon Shashua does not believe that will harm the company's new initiative, though: building a system based on neural networks which, if all goes according to plan, will allow a car or truck to learn how to drive in much the same way that humans do. This approach allows for learning all aspects of driving the way that people do as they grow older -- initially recognizing road signs, for example, or telling cars, buildings, pedestrians and other objects apart -- and later coming to understand things like braking distance, road handling and the habits of other drivers on the road. But there is one catch to creating such a system: neural networks learn by example, which means they need a lot of examples.


Intelligent vision systems and AI for the development of autonomous driving

#artificialintelligence

Like much of the technology needed to support and enable autonomous vehicles, intelligent vision systems already exist and are used in other industries, for example in industrial robots. Bringing them to the road will require processing power that is only now becoming available, thanks to advances in System-on-Chip platforms, advanced software, deep learning algorithms and open source projects. It is enabled by the development of Heterogeneous System Architectures (HSA): platforms that combine powerful general-purpose microprocessing units (MPUs) with very powerful, highly parallel graphics processing units (GPUs). The software infrastructure needed to develop intelligent vision systems, such as OpenCV (Open Source Computer Vision) and OpenCL (Open Computing Language), requires high-performance processing platforms to execute its advanced algorithms.
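
To make this less abstract, the short Python sketch below shows the kind of pipeline OpenCV supports: finding lane-like line segments in a road image with Canny edge detection and a probabilistic Hough transform. The file names and threshold values are illustrative assumptions, not taken from the article, and a production vision system would be far more elaborate.

    import cv2
    import numpy as np

    frame = cv2.imread("road.jpg")                  # hypothetical camera frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # single-channel intensity
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)             # gradient-based edge map

    # Fit straight segments to the edge pixels; thresholds must be tuned
    # to the camera and scene, so treat these values as placeholders.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)

    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # overlay
    cv2.imwrite("road_lanes.jpg", frame)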


NVIDIA Brings Artificial Intelligence Technology To The Street

#artificialintelligence

A team of NVIDIA engineers working out of a former Bell Labs office in New Jersey decided to use deep learning to teach an automobile how to drive. They wrote no hand-coded rules for lane following or obstacle detection; instead, they used an NVIDIA DevBox and Torch 7 (a machine learning library) for training, and an NVIDIA DRIVE PX self-driving car computer to do all the processing. The team trained a CNN with time-stamped video from a front-facing camera in the car, synced with the steering wheel angle applied by the human driver. You can read the entire NVIDIA research paper, "End to End Learning for Self-Driving Cars," for yourself or watch the video to learn more about how artificial intelligence is teaching cars to drive themselves.
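
The article describes the pipeline without showing code. As a rough illustration of the training loop it outlines -- camera frames in, a steering value out, trained by regression against the human's command -- here is a minimal sketch. It uses PyTorch rather than the Torch 7 library the team actually used, and the tiny model, random stand-in tensors and hyperparameters are placeholders, not NVIDIA's.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-ins for the logged data: front-camera frames paired with the
    # steering angle the human driver applied at the same timestamp.
    frames = torch.randn(256, 3, 66, 200)
    angles = torch.randn(256, 1)
    loader = DataLoader(TensorDataset(frames, angles), batch_size=32,
                        shuffle=True)

    model = nn.Sequential(                          # deliberately tiny CNN
        nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(24 * 31 * 98, 1),                 # (66, 200) -> (31, 98)
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()                          # regression on the angle

    for epoch in range(5):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)  # deviation from the human command
            loss.backward()
            optimizer.step()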


Impact of deep learning on computer vision

#artificialintelligence

The technological challenges that must be addressed before autonomous cars can be unleashed onto the streets are significant. Using deep learning techniques, a computer can look at hundreds of thousands of pictures of, say, an electric guitar, and start to learn what an electric guitar looks like across different configurations, contexts, levels of daylight, backgrounds and environments. Sitting behind all this intelligence are neural networks: computer models designed to mimic our understanding of how the human brain works. In the years following deep learning's 2012 ImageNet breakthrough, multiple deep learning models appeared, and Microsoft broke records when its system surpassed human-level performance on the ImageNet challenge.
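
As a concrete taste of the "look at many labelled pictures" idea, the sketch below classifies a single image with a network pretrained on ImageNet. The use of torchvision and the file name are assumptions of this example, not anything named in the article.

    import torch
    from PIL import Image
    from torchvision import models, transforms

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet stats
                             std=[0.229, 0.224, 0.225]),
    ])

    model = models.resnet18(pretrained=True)  # CNN trained on ~1.2M photos
    model.eval()

    img = Image.open("guitar.jpg")            # hypothetical input image
    batch = preprocess(img).unsqueeze(0)      # shape: (1, 3, 224, 224)

    with torch.no_grad():
        logits = model(batch)
    print(logits.argmax(dim=1).item())        # index of the predicted class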


End-to-End Deep Learning for Self-Driving Cars

#artificialintelligence

The system is trained to automatically learn internal representations of the necessary processing steps, such as detecting useful road features, with only the human steering angle as the training signal. We train the weights of our network to minimize the mean-squared error between the steering command output by the network and either the command of the human driver or the adjusted steering command for off-center and rotated images (see "Augmentation", later). Figure 5 shows the network architecture, which consists of 9 layers: a normalization layer, 5 convolutional layers, and 3 fully connected layers. The three fully connected layers that follow the convolutional stack lead to a single output control value, the inverse turning radius.
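
For concreteness, the architecture described above can be sketched in PyTorch as below. The filter sizes and layer widths follow the published paper, but this is an illustrative reconstruction under that assumption, not NVIDIA's actual code; the normalization layer is folded into the forward pass.

    import torch
    import torch.nn as nn

    class PilotNet(nn.Module):
        """Normalization, 5 convolutional layers, 3 fully connected layers,
        and a single output unit: the inverse turning radius."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
            )
            self.fc = nn.Sequential(
                nn.Linear(64 * 1 * 18, 100), nn.ReLU(),  # 3 hidden FC layers
                nn.Linear(100, 50), nn.ReLU(),
                nn.Linear(50, 10), nn.ReLU(),
                nn.Linear(10, 1),                        # output control value
            )

        def forward(self, x):          # x: (N, 3, 66, 200) camera frames
            x = x / 127.5 - 1.0        # the normalization layer, hard-coded
            x = self.conv(x)           # (N, 64, 1, 18) for 66x200 input
            return self.fc(torch.flatten(x, 1))

    # e.g.: PilotNet()(torch.randn(1, 3, 66, 200)) -> tensor of shape (1, 1)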