Nvidia Is Pushing the Limits in Autonomous Driving

#artificialintelligence

For those not familiar with TuSimple, they are a publicly traded company focused on creating autonomous self-driving trucks, which I think is pretty impressive. Trucks are such huge machines that you need that many more sensors, because while a self-driving car can cause a lot of damage if something goes wrong, a self-driving truck can cause far more. They partnered with Nvidia to use its chips for their autonomous driving computing. For those not familiar with Nvidia, it is a semiconductor company that makes chips, processors, and accelerators. The reason you need these accelerators is that autonomous driving requires processing all the information from all of these sensors in real time.


Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world. Heavy investments are being made in AI in both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the industry has seen many ups and downs due to over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI: its history, its potential, and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations and methods for AI and machine learning are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, a summary is made of the current state of AI and what to do in the future. In the appendix, we look at the development of AI education, especially from the perspective of the contents at our own university.


Image Classification with CondenseNeXt for ARM-Based Computing Platforms

arXiv.org Artificial Intelligence

In this paper, we demonstrate the implementation of our ultra-efficient deep convolutional neural network architecture, CondenseNeXt, on the NXP BlueBox, an autonomous driving development platform for self-driving vehicles. We show that CondenseNeXt is remarkably efficient in terms of FLOPs, is designed for ARM-based embedded computing platforms with limited computational resources, and can perform image classification without the need for a CUDA-enabled GPU. CondenseNeXt utilizes state-of-the-art depthwise separable convolution and model compression techniques to achieve remarkable computational efficiency. Extensive analyses are conducted on the CIFAR-10, CIFAR-100, and ImageNet datasets to verify the performance of the CondenseNeXt Convolutional Neural Network (CNN) architecture. It achieves state-of-the-art image classification performance on three benchmark datasets: CIFAR-10 (4.79% top-1 error), CIFAR-100 (21.98% top-1 error), and ImageNet (7.91% single-model, single-crop top-5 error). Compared to CondenseNet, CondenseNeXt reduces the final trained model size by 2.9+ MB and forward FLOPs by up to 59.98%, and it can perform image classification on ARM-based computing platforms without CUDA-enabled GPU support, with outstanding efficiency.
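
Depthwise separable convolution, the core building block named in the abstract, factors a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise mixing step, which is what cuts the FLOP count so sharply. The PyTorch sketch below is a minimal illustration of the general technique, not CondenseNeXt's actual block structure; all layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A standard KxK convolution factored into depthwise + pointwise steps."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # Depthwise: one KxK filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Rough multiply-add comparison for a 3x3 conv, 64 -> 128 channels, 32x32 map:
#   standard:  3*3*64*128*32*32          ~= 75.5 M
#   separable: (3*3*64 + 64*128)*32*32   ~=  9.0 M  (~8.4x fewer)
x = torch.randn(1, 64, 32, 32)
y = DepthwiseSeparableConv(64, 128)(x)
print(y.shape)  # torch.Size([1, 128, 32, 32])
```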


Experimental Analysis of Trajectory Control Using Computer Vision and Artificial Intelligence for Autonomous Vehicles

arXiv.org Artificial Intelligence

Perception of the lane boundaries is crucial for tasks related to autonomous trajectory control. In this paper, several methodologies for lane detection are discussed with experimental illustration: Hough transformation, blob analysis, and bird's-eye view. Following the extraction of lane marks from the boundary, the next step is applying a control law based on that perception to control steering and speed. A comparative analysis is then made between an open-loop response, PID control, and a neural network control law through graphical statistics. To perceive the surroundings, a wireless streaming camera connected to a Raspberry Pi is used. After pre-processing, the signal received from the camera is sent back to the Raspberry Pi, which processes the input and communicates the control to the motors through an Arduino via serial communication.
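
As a concrete illustration of the Hough-transform approach to lane detection, the sketch below runs the classic OpenCV pipeline: grayscale, edge detection, then probabilistic Hough line extraction. It is a generic example of the technique named in the abstract, not the authors' code; the Canny thresholds, Hough parameters, and the file name road.jpg are all assumptions for illustration.

```python
import cv2
import numpy as np

def detect_lane_lines(frame: np.ndarray) -> np.ndarray:
    """Detect candidate lane-line segments with a probabilistic Hough transform."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Canny edge detection; thresholds are tuning parameters, assumed here.
    edges = cv2.Canny(blurred, 50, 150)
    # Keep only the lower half of the image, where the road usually is.
    mask = np.zeros_like(edges)
    mask[edges.shape[0] // 2:, :] = 255
    roi = cv2.bitwise_and(edges, mask)
    # Probabilistic Hough transform returns line segments (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    return frame

# Example usage on a single image (file name assumed):
annotated = detect_lane_lines(cv2.imread("road.jpg"))
cv2.imwrite("road_lanes.jpg", annotated)
```

In a closed-loop setup like the paper's, the detected segments would then feed the chosen control law (open-loop, PID, or neural network) to compute steering commands.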


Buy Intel Stock Because It Dominates AI and Autonomous Driving, Analyst Says

#artificialintelligence

Intel makes processors that act as the main computing brains for PCs and servers. Nomura Instinet chip analyst David Wong initiated coverage on Intel on Tuesday with a Buy rating, predicting long-term sales growth of 8% to 10% annually for the technology giant. "Intel is the world leader in processors for artificial intelligence and autonomous driving," he wrote. "We think that microprocessor growth could well be above overall semiconductor industry growth over the next decade, fueling long-term top-line growth for Intel." The analyst set his price target for Intel at $65, representing 17% upside to the current stock price.


DRIVE Labs: Detecting the Distance NVIDIA Blog

#artificialintelligence

Editor's note: This is the latest post in our NVIDIA DRIVE Labs series, which takes an engineering-focused look at individual autonomous vehicle challenges and how NVIDIA DRIVE addresses them. The problem: judging distances is anything but simple. We humans, of course, have two high-resolution, highly synchronized visual sensors -- our eyes -- that let us gauge distances using stereo-vision processing in our brain. A comparable dual-camera stereo vision system in a self-driving car, however, would be very sensitive to synchronization errors. If the cameras are even slightly out of sync, the result is what's known as "timing misalignment," which creates inaccurate distance estimates.
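
The fragility described here follows from basic stereo geometry: depth is inversely proportional to disparity, so a small disparity error (such as one introduced by timing misalignment on a moving scene) becomes a large range error at distance. Below is a minimal sketch of the textbook relationship, with the focal length and baseline values assumed purely for illustration:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Textbook pinhole stereo model: depth = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Assumed camera parameters for illustration only.
f = 1000.0   # focal length in pixels
B = 0.3      # baseline between the two cameras in meters

nominal = depth_from_disparity(f, B, disparity_px=3.0)   # 100.0 m
shifted = depth_from_disparity(f, B, disparity_px=2.5)   # 120.0 m
print(nominal, shifted)
# A half-pixel disparity error at 100 m shifts the estimate by 20 m,
# which is why even slight camera desynchronization is so damaging.
```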


How We Taught Neural Nets to Predict Lane Lines NVIDIA Blog

#artificialintelligence

Editor's note: This is the latest post in our NVIDIA DRIVE Labs series, which takes an engineering-focused look at individual autonomous vehicle challenges and how NVIDIA DRIVE addresses them. Lane markings are critical guides for autonomous vehicles, providing vital context for where they are and where they're going. That's why detecting them with pixel-level precision is fundamentally important for self-driving cars. To begin with, AVs need long lane detection range -- which means the AV system needs to perceive lanes at long distances from the ego car, or the vehicle in which the perception algorithms are operating. Detecting more lane line pixels near the horizon in the image adds tens of meters to lane-detection range in real life.
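
A back-of-the-envelope ground-plane model shows why pixels near the horizon matter so much. Under a flat-road assumption, the distance to a lane pixel grows roughly as camera_height * focal_length / (row - horizon_row), so each additional pixel row resolved near the horizon covers far more road than a row near the bottom of the image. A small sketch of that relationship follows; all camera parameters are assumed for illustration, and this is not NVIDIA's model:

```python
def ground_distance_m(row: int, horizon_row: int,
                      focal_px: float = 1000.0,
                      cam_height_m: float = 1.5) -> float:
    """Flat-ground pinhole model: distance ~= h * f / (row - horizon_row)."""
    return cam_height_m * focal_px / (row - horizon_row)

# Road distance covered by one extra pixel row at two places in the image:
near_bottom = ground_distance_m(399, 300) - ground_distance_m(400, 300)
near_horizon = ground_distance_m(315, 300) - ground_distance_m(316, 300)
print(f"{near_bottom:.2f} m per pixel near the bottom")    # ~0.15 m
print(f"{near_horizon:.2f} m per pixel near the horizon")  # ~6.25 m
```

Under these assumed numbers, resolving just a few extra rows near the horizon adds tens of meters of range, consistent with the post's claim.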


NVIDIA Builds Supercomputer to Build Self-Driving Cars NVIDIA Blog

#artificialintelligence

In a clear demonstration of why AI leadership demands the best compute capabilities, NVIDIA today unveiled the world's 22nd fastest supercomputer -- DGX SuperPOD -- which provides AI infrastructure that meets the massive demands of the company's autonomous-vehicle deployment program. The system was built in just three weeks with 96 NVIDIA DGX-2H supercomputers and Mellanox interconnect technology. Delivering 9.4 petaflops of processing capability, it has the muscle for training the vast number of deep neural networks required for safe self-driving vehicles. Customers can buy this system in whole or in part from any DGX-2 partner based on our DGX SuperPOD design. AI training of self-driving cars is the ultimate compute-intensive challenge.


Apple updates TV app so people can watch original shows and host of other channels on iPhones and Macs

The Independent - Tech

Apple has announced a complete update for its TV app – as well as changes that will bring it to other companies' smart TVs. The update brings the company's new streaming service, known as Apple TV, to all of the supported devices. But it also allows for new ways of watching content from other companies, too, letting people watch video from providers such as HBO or Amazon Prime. When unveiling the Apple TV service, Apple said it had worked with the likes of Steven Spielberg, Reese Witherspoon and Jennifer Aniston, among others.


Nvidia releases Drive Constellation simulation platform for autonomous vehicle testing

#artificialintelligence

Autonomous vehicle development is a time- and resource-intensive business, requiring dozens of test vehicles, thousands of hours of data collection and millions of miles of driving to hone the artificial brains of the cars of tomorrow. What if you could do most of that in the cloud? That's the question Nvidia hopes to answer with the release of its Nvidia Drive Constellation testing platform for self-driving cars. The announcement came during the keynote address at Nvidia's 2019 GPU Technology Conference in San Jose on Monday. Drive Constellation is, basically, a simulation and validation platform that allows automakers and developers to test their autonomous vehicles and technologies in a virtual environment that lives in a specially designed cloud server.
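
The core idea behind such a platform is a closed loop: a simulator renders virtual sensor data, the vehicle's software stack consumes it exactly as it would real camera or lidar input, and the resulting driving commands are fed back to advance the simulation. The sketch below is a hypothetical, heavily simplified illustration of that loop; none of these class or function names come from the Drive Constellation API.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """Virtual sensor output for one simulation tick (hypothetical schema)."""
    camera_image: bytes
    ego_speed_mps: float

@dataclass
class Command:
    """Driving commands produced by the stack under test (hypothetical)."""
    steering_rad: float
    throttle: float

def run_validation_loop(simulator, driving_stack, ticks: int = 1000) -> list:
    """Closed-loop test: simulate -> perceive/plan -> actuate -> repeat."""
    log = []
    frame = simulator.reset()
    for _ in range(ticks):
        # The stack under test sees only simulated sensor data,
        # just as it would see real sensor data on the road.
        cmd = driving_stack.step(frame)
        # The simulator advances the virtual world using those commands.
        frame = simulator.advance(cmd)
        log.append((frame, cmd))
    return log
```

Running many such loops in parallel on cloud servers is what lets a virtual fleet accumulate test miles far faster than physical vehicles could.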