autonomous driving


Tesla Director Of AI Discusses Programming A Neural Net For Autopilot (Video) - CleanTechnica

#artificialintelligence

Tesla's Director of AI, Andrej Karpathy, took the stage at TRAIN AI 2018 to unpack the company's approach to building its Autopilot computer vision solution. His talk was titled "Building the Software 2.0 Stack." Andrej took on the task of delineating traditional rule-based programming methods from the programming methods used when a neural network -- also known as machine learning or artificial intelligence -- runs the show. In typical internet lingo, he dubs neural net programming "Software 2.0," with rule-based programming taking up the "Software 1.0" moniker. It turns out that the differences are considerable: programming a neural net is very different from programming a webpage or smartphone app.
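To make the distinction concrete, here is a minimal, purely illustrative sketch (not Tesla's code; the thresholds and training data are invented): a hand-written braking rule in the Software 1.0 style next to a tiny logistic-regression "Software 2.0" model whose behaviour comes from labelled examples rather than explicit rules.

    import numpy as np

    # --- Software 1.0: behaviour is written down as an explicit rule ---
    def should_brake_v1(distance_m, speed_mps):
        # Hand-tuned threshold chosen by an engineer (illustrative value).
        return distance_m / speed_mps < 2.0      # brake if headway is under 2 s

    # --- Software 2.0: behaviour is induced from labelled examples ---
    # Toy data: headway in seconds -> brake (1) or don't (0).
    headway = np.array([0.5, 1.0, 1.5, 2.5, 3.0, 5.0])
    label   = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

    w, b = 0.0, 0.0
    for _ in range(20000):                       # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-(w * headway + b)))
        w -= 0.1 * ((p - label) * headway).mean()
        b -= 0.1 * (p - label).mean()

    def should_brake_v2(distance_m, speed_mps):
        z = w * (distance_m / speed_mps) + b
        return 1.0 / (1.0 + np.exp(-z)) > 0.5    # decision learned from data

    print(should_brake_v1(15.0, 10.0), should_brake_v2(15.0, 10.0))

In the first function the behaviour lives in the source code; in the second it lives in the weights, which is exactly why the tooling and workflow for "programming" such a system look so different.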


The robotaxi's here – Enrique Dans – Medium

#artificialintelligence

Waymo's application for a taxi service license in Phoenix, Arizona, was granted almost immediately, in line with the pro-autonomous-driving policies of state governor Doug Ducey, which aim to reduce traffic accidents and improve mobility for the elderly and disabled. The company plans to use the license to launch a stand-alone taxi service this year, competing directly with companies such as Uber and Lyft, with the advantage that it will not have to pay drivers. Waymo plans a speedy rollout in 24 US cities with a wide variety of weather conditions, where it is currently carrying out road tests. Perhaps the ultimate in road testing is being carried out by Russian tech giant Yandex on the streets of Moscow, which have been hit by spectacularly heavy snowfalls and are also prone to heavy traffic and undisciplined pedestrians. So far, Yandex's self-driving vehicles have adapted easily to the Russian capital.


The Mobility Revolution – A Safe, Secure and Autonomous Future

#artificialintelligence

Many OEMs today and their key partners have recognized that autonomous driving systems and software are a core competency needed for their future sustainability. These OEMs are investing in, acquiring, or partnering with third-party technology companies to further their autonomous driving development programs. In collaborative discussions with these OEMs, we have learned that no single entity can acquire all the software tools and capabilities needed for the development of autonomous driving. From in-vehicle hardware and software platforms, to sensor fusion, deep learning, simulation, testing, and validation, to massive-volume data ingestion and management, the immense range of capabilities and sophistication required to secure the engineering objectives is beyond the reach of any one company. The industry estimates that billions of miles of driving in simulated real-world conditions (in the cloud) will be needed to sufficiently test AD algorithms before they can be expected to handle real-world contingencies.
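A rough back-of-the-envelope estimate, using invented fleet figures rather than any OEM's actual numbers, shows why those miles are sought in simulation rather than on the road:

    # Back-of-the-envelope estimate (all figures are illustrative assumptions).
    target_miles  = 1e9     # order of magnitude often cited for validation
    fleet_size    = 100     # test vehicles
    hours_per_day = 8       # driving time per vehicle per day
    avg_speed_mph = 30      # mixed urban/highway average

    miles_per_year = fleet_size * hours_per_day * 365 * avg_speed_mph
    years_needed = target_miles / miles_per_year
    print(f"{miles_per_year:,.0f} miles/year -> {years_needed:.0f} years for 1B miles")
    # ~8.8 million miles/year -> roughly 114 years, hence the push toward cloud simulation.

Even a sizeable physical fleet accumulates miles far too slowly, so the bulk of the validation mileage has to come from simulated driving.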


Byton's new electric sedan concept car is even flashier than its SUV

Mashable

Byton, the Chinese electric vehicle company, made a big splash earlier this year with its M-Byte SUV, featuring four screens, facial recognition, and a fitness tracker. Now it has another concept vehicle to show off, this time a luxury electric sedan with more autonomous driving capabilities. At CES Asia, the electric car startup is introducing the K-Byte, built on the same technical platform as the SUV. That means it shares the same specs and features as the SUV, including a battery with a maximum range of 323 miles on a single charge. The car is "easy to recognize" as a Byton vehicle, CEO Carsten Breitfeld said in a phone call ahead of the unveiling.


Motorists 'are being misled by autonomous driving aids' - report

The Guardian

The marketing of driving assistance features such as Autopilot, ProPilot and others as "autonomous" is setting unrealistic expectations and causing dangerous driving, according to insurers and vehicle safety researchers. In a report, Thatcham Research and the Association of British Insurers (ABI) say that drivers are being lulled into a false sense of security by the marketing of new driver assistance features making their way into cars costing upwards of £20,000. Features such as Tesla's Enhanced Autopilot and Nissan's ProPilot, as well as terms such as "full self-driving capability" and "capable of driving autonomously," are giving the false impression of a level of autonomy that is not yet available. As a result, drivers are not treating these features with the level of scrutiny and attention required, resulting in crashes and dangerous driving. "We are starting to see real-life examples of the hazardous situations that occur when motorists expect the car to drive and function on its own," said Matthew Avery, head of research at Thatcham Research.


Why This Startup Created A Deep Learning Chip For Autonomous Vehicles

Forbes Technology

Israeli artificial intelligence (AI) startup Hailo Technologies has closed a $12.5 million series A from Maniv Mobility, OurCrowd, and NextGear to develop a chip for deep learning on edge devices and real-time processing of high-resolution sensory data. According to a report from Markets and Markets, edge computing will be worth $6.72 billion by 2020, and IC Insights reported that integrated circuits in cars are expected to generate global sales of $42.9 billion in 2021. In 2017, McKinsey reported in the study "Self Driving Car Technology: When Will Robots Hit the Road?" that ADAS systems grew to 140 million units in 2016 from 90 million units in 2014. "Because of the low latency required for autonomous driving and advanced driving assistance, deep learning with convolutional neural networks, running on in-vehicle hardware, is necessary," offers Tom Coughlin, IEEE Fellow and President at Coughlin Associates.
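Coughlin's point about latency can be illustrated with a simple budget calculation; all of the figures below are assumptions made for the sake of the example, not numbers from the article. At typical camera frame rates the perception stack has only tens of milliseconds per frame, and a network round trip to a remote server can consume that on its own, which is why inference has to run on in-vehicle hardware.

    # Illustrative latency budget for on-board vs. off-board inference.
    # All numbers are assumptions for the sake of the example.
    frame_rate_hz = 30
    budget_ms = 1000.0 / frame_rate_hz            # ~33 ms available per camera frame

    edge_inference_ms   = 15.0   # assumed CNN forward pass on a dedicated in-car chip
    cloud_round_trip_ms = 60.0   # assumed cellular network round trip alone
    cloud_inference_ms  = 10.0   # assumed inference time on a data-center GPU

    print(f"per-frame budget:    {budget_ms:.1f} ms")
    print(f"on-vehicle pipeline: {edge_inference_ms:.1f} ms -> fits")
    print(f"cloud pipeline:      {cloud_round_trip_ms + cloud_inference_ms:.1f} ms -> blows the budget")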


Campus artificial intelligence researchers aim to improve self-driving cars

#artificialintelligence

The Berkeley Artificial Intelligence Research Lab, or BAIR, released a study on May 12 about BDD100K, a driving database that can be used to train the algorithms of self-driving cars. The data set can be used to train self-driving cars' artificial intelligence programs, according to BAIR's website. The study concluded that the data set can help researchers understand how different scenarios affect current self-driving car programs. The study by the research team that created the data set describes two contributions to self-driving cars: the data set itself and its video annotation system. According to BAIR's website, BDD100K is "the largest and most diverse driving video dataset," containing 100,000 driving clips.
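As a sketch of how such a data set might be explored, the snippet below tallies scene attributes from a labels file. The file name and the exact field names ("attributes", "weather", "timeofday") are assumptions made for illustration and may not match the released BDD100K format exactly.

    import json
    from collections import Counter

    # Hypothetical path; the real BDD100K release ships its own label files.
    with open("bdd100k_labels_train.json") as f:
        frames = json.load(f)

    weather = Counter()
    time_of_day = Counter()
    for frame in frames:
        attrs = frame.get("attributes", {})   # assumed per-frame attribute dict
        weather[attrs.get("weather", "unknown")] += 1
        time_of_day[attrs.get("timeofday", "unknown")] += 1

    print("weather distribution:    ", weather.most_common())
    print("time-of-day distribution:", time_of_day.most_common())

Tallies like these are one way to check how evenly a training set covers the "different scenarios" the study refers to, such as night driving or rain.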


r/MachineLearning - [P] 3D Object Detection for Autonomous Driving using Deep Learning

@machinelearnbot

In this thesis we study a perception problem in the context of autonomous driving. Specifically, we study the computer vision problem of 3D object detection, in which objects should be detected from various sensor data and their position in the 3D world should be estimated. We also study the application of Generative Adversarial Networks in domain adaptation techniques, aiming to improve the 3D object detection model's ability to transfer between different domains. The state-of-the-art Frustum-PointNet architecture for LiDAR-based 3D object detection was implemented and found to closely match its reported performance when trained and evaluated on the KITTI dataset. The architecture was also found to transfer reasonably well from the synthetic SYN dataset to KITTI, and is thus believed to be usable in a semi-automatic 3D bounding box annotation process.
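The "frustum" in Frustum-PointNet is the 3D region of space that a 2D image detection projects into. A minimal sketch of that geometry, using the standard pinhole camera model with made-up, roughly KITTI-like intrinsics (this is not the thesis code), looks like this:

    import numpy as np

    def frustum_corners(box_2d, fx, fy, cx, cy, z_near=1.0, z_far=80.0):
        """Back-project a 2D detection box into its 3D viewing frustum.

        box_2d: (u_min, v_min, u_max, v_max) in pixels.
        Returns an (8, 3) array of frustum corner points in camera coordinates.
        """
        u_min, v_min, u_max, v_max = box_2d
        corners = []
        for z in (z_near, z_far):
            for u, v in ((u_min, v_min), (u_max, v_min), (u_max, v_max), (u_min, v_max)):
                x = (u - cx) * z / fx      # pinhole-model back-projection
                y = (v - cy) * z / fy
                corners.append((x, y, z))
        return np.array(corners)

    # Made-up detection box and intrinsics for a ~1242x375 image.
    print(frustum_corners((600, 150, 700, 250), fx=721.5, fy=721.5, cx=609.6, cy=172.9))

The LiDAR points that fall inside this region are what the subsequent PointNet stages then segment and fit a 3D bounding box to.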


NVIDIAVoice: Building The AI Architecture To Train, Simulate And Test AI Self-Driving Cars

Forbes Technology

Developing an autonomous vehicle requires a massive amount of data. Before any AV can safely navigate on the road, engineers must first train the artificial intelligence (AI) algorithms that enable the car to drive itself. Deep learning, a form of AI, is used to perceive the environment surrounding the car and to make driving decisions with superhuman levels of performance and precision. This is an enormous big data challenge. A single test vehicle can generate petabytes of data a year.
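The petabyte figure is easy to sanity-check with assumed sensor rates; the numbers below are illustrative guesses, not NVIDIA's.

    # Rough estimate of raw data produced by one test vehicle per year.
    # All sensor rates and utilisation figures are assumptions for illustration.
    camera_rate_gbps = 1.0   # several uncompressed HD cameras, GB per second
    other_rate_gbps  = 0.5   # lidar, radar, and other sensors, GB per second
    hours_per_day    = 8
    days_per_year    = 250

    seconds = hours_per_day * 3600 * days_per_year
    total_tb = (camera_rate_gbps + other_rate_gbps) * seconds / 1000.0
    print(f"~{total_tb:,.0f} TB per vehicle per year (~{total_tb / 1000.0:.1f} PB)")

Even with modest assumptions the raw stream lands comfortably in the petabyte range, which is what drives the training and simulation infrastructure described here.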


Why This Country (Not The USA) Will Be First To Adopt Driverless Cars

#artificialintelligence

Baidu and its partner, the Pand-auto car-sharing service, started a one-month trial of an autonomous car-sharing service at Chongqing Internet Industrial Park on May 24, 2018, using Pand-auto electric cars controlled by Baidu's Apollo autonomous driving system. Two Silicon Valley executives have a friendly bet on when a commercially available autonomous car will transport them within Las Vegas with no intervention from a human driver. The more bullish one has placed his bet on May 27, 2024. A driverless future will happen sooner than 2024, and it won't happen first in the U.S., Europe or Japan.