
3D Machine Learning 201 Guide: Point Cloud Semantic Segmentation


Having the skills and the knowledge to tackle every aspect of point cloud processing opens up many ideas and development doors. It is like a toolbox for 3D research creativity and development agility. And at the core, there is this incredible Artificial Intelligence space that targets 3D scene understanding. It is especially relevant given its importance for many applications, such as self-driving cars, autonomous robots, 3D mapping, virtual reality, and the Metaverse. And if you are an automation geek like me, it is hard to resist the temptation to explore new paths to answer these challenges! This tutorial aims to give you what I consider the essential footing to do just that: the knowledge and code skills for developing 3D Point Cloud Semantic Segmentation systems. But how can we actually apply semantic segmentation? And how challenging is 3D Machine Learning? Let me present a clear, in-depth, hands-on 201 course focused on 3D Machine Learning.
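Since the tutorial's topic is per-point labeling, here is a minimal, hand-rolled sketch of what semantic segmentation of a point cloud means in practice. Everything in it (the synthetic scene, the height threshold) is an illustrative assumption; real systems replace the hand-coded rule with a learned per-point classifier such as PointNet.

```python
import numpy as np

# Hypothetical toy scene: points with (x, y, z) coordinates.
rng = np.random.default_rng(0)
ground = rng.uniform([0, 0, 0.0], [10, 10, 0.2], (500, 3))  # flat, low z
trees = rng.uniform([0, 0, 2.0], [10, 10, 8.0], (300, 3))   # tall, high z
cloud = np.vstack([ground, trees])

# Minimal "semantic segmentation": assign one label per point,
# here via a threshold on a per-point feature (height above ground).
labels = np.where(cloud[:, 2] < 1.0, "ground", "vegetation")

print(labels[:3], labels[-3:])  # low points labeled ground, high ones vegetation
```

The output of a real segmentation network has exactly this shape, one semantic label per input point; only the decision rule is learned rather than hand-written.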

Predicting Others' Behavior on the Road With Artificial Intelligence


Researchers have created a machine-learning system that efficiently predicts the future trajectories of multiple road users, like drivers, cyclists, and pedestrians, which could enable an autonomous vehicle to navigate city streets more safely. If a robot is going to navigate a vehicle safely through downtown Boston, it must be able to predict what nearby drivers, cyclists, and pedestrians are going to do next. Humans may be one of the biggest roadblocks to fully autonomous vehicles operating on city streets, and a system like this one may someday help driverless cars anticipate their next moves in real time.
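As a hedged illustration of the prediction task (not the researchers' actual model), here is the classic constant-velocity baseline that learned trajectory predictors are typically benchmarked against; the track data and time step are made up for the example.

```python
import numpy as np

def predict_constant_velocity(track, horizon, dt=0.1):
    """Extrapolate future (x, y) positions assuming constant velocity.

    track: array of shape (T, 2) with past observed positions.
    horizon: number of future steps to predict.
    dt: seconds between observations (an assumed sampling rate).
    """
    track = np.asarray(track, dtype=float)
    velocity = (track[-1] - track[-2]) / dt           # last observed velocity
    steps = np.arange(1, horizon + 1)[:, None] * dt   # (horizon, 1) time offsets
    return track[-1] + steps * velocity               # (horizon, 2) predictions

# Agent moving along +x at 1 m/s.
past = [[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]]
future = predict_constant_velocity(past, horizon=3)
print(future)  # approximately [[0.3 0.], [0.4 0.], [0.5 0.]]
```

Learned models earn their keep by beating this baseline precisely where it fails: turns, stops, and interactions between road users.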

Powering Data-Driven Autonomy at Scale with Camera Data


At Woven Planet Level 5, we're using machine learning (ML) to build an autonomous driving system that improves as it observes more human driving. This is based on our Autonomy 2.0 approach, which leverages machine learning and data to solve the complex task of driving safely. This is unlike traditional systems, where engineers hand-design rules for every possible driving event. Last year, we took a critical step in delivering on Autonomy 2.0 by using an ML model to power our motion planner, the core decision-making module of our self-driving system. We saw the ML Planner's performance improve as we trained it on more human driving data.
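To make the idea of a planner that improves as it sees more human driving concrete, here is a toy behavioral-cloning sketch, not Woven Planet's actual ML Planner: it fits a steering rule from synthetic "human" demonstrations with ordinary least squares. All names and numbers are illustrative assumptions.

```python
import numpy as np

# Synthetic demonstrations: a "human" steers back toward the lane center
# in proportion to the lateral offset, plus a little noise.
rng = np.random.default_rng(1)
offsets = rng.uniform(-1.0, 1.0, 200)                  # meters from lane center
steering = -0.5 * offsets + rng.normal(0, 0.01, 200)   # demonstrated commands

# Behavioral cloning in its simplest form: least-squares fit of
# steering = w * offset from the demonstration data.
w = np.linalg.lstsq(offsets[:, None], steering, rcond=None)[0][0]
print(w)  # close to the demonstrator's gain of -0.5
```

The same supervised-learning loop, scaled up to rich perception features and deep networks, is what lets a data-driven planner improve as more human driving is collected.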

Recruiting Manager - Autonomous Driving


We are currently looking for an experienced Recruiting Manager to join our Mountain View office. We are looking for a team player who takes strong ownership of the entire recruiting process and will take charge of scaling our DiDi autonomous driving teams. Didi Chuxing ("DiDi") is the world's leading mobile transportation platform. We're committed to working with communities and partners to solve the world's transportation, environmental, and employment challenges by using big-data-driven deep-learning algorithms that optimize resource allocation. DiDi's autonomous driving team was established in 2016 and has grown into a comprehensive research and development organization covering HD mapping, perception, behavior prediction, planning and control, infrastructure and simulation, labeling, hardware, mechanical engineering, problem diagnosis, vehicle modifications, connected cars, and security, among others.

Deep Learning First: Drive's Path to Autonomous Driving


Last month, IEEE Spectrum went out to California to take a ride in one of Drive's vehicles. It's only been about a year since Drive went public. "This is in contrast to a traditional robotics approach," says Sameep Tandon, one of Drive's cofounders. "A lot of companies are just using deep learning for this component or that component, while we view it more holistically." Often, deep learning is used in perception, since there's so much variability inherent in how robots see the world.

How Drive Is Mastering Autonomous Driving with Deep Learning


Among all of the self-driving startups working toward Level 4 autonomy (a self-driving system that doesn't require human intervention in most scenarios), Mountain View, Calif.-based Drive sees deep learning as the only viable way to make a truly useful autonomous car in the near term, says Sameep Tandon, cofounder and CEO. "If you look at the long-term possibilities of these algorithms and how people are going to build [self-driving cars] in the future, having a learning system just makes the most sense. There's so much complication in driving, there are so many things that are nuanced and hard, that if you have to do this in ways that aren't learned, then you're never going to get these cars out there." It's only been about a year since Drive went public, but already the company has a fleet of four vehicles navigating (mostly) autonomously around the San Francisco Bay Area, even in situations (such as darkness, rain, or hail) that are notoriously difficult for self-driving cars.

Is Artificial Intelligence as Intelligent as We Think it is?


Artificial intelligence (AI), like robotics, has long been seen as a "future technology". However, much as with robots, we can now affirm that AI is not just science fiction. AI is very much alive in our personal and professional lives, and it is swiftly catching up to mobile devices in popularity. There is hardly an activity in our daily lives that AI does not impact. From Alexa and Siri to self-driving cars, AI is stepping up to assist us just as a human would.

Automotive Camera [Apply Computer vision, Deep learning] - 1


Those who want to learn and understand only the concepts can take course 1 alone. Those who want to understand the concepts and also know how to program them should take both course 1 and course 2. It is highly recommended to complete course 1 before starting course 2. NOTE: This course does not teach computer vision, deep learning, Python, or OOP from scratch; instead, it uses all of these to develop camera perception algorithms for ADAS and autonomous driving applications.

AI-as-a-Service Toolkit for Human-Centered Intelligence in Autonomous Driving

This paper presents a proof-of-concept implementation of the AI-as-a-Service toolkit developed within the H2020 TEACHING project, designed to implement an autonomous driving personalization system driven by the output of an automatic driver stress recognition algorithm, the two together realizing a Cyber-Physical System of Systems. In addition, we implemented a data-gathering subsystem to collect data from different sensors, i.e., wearables and cameras, to automate stress recognition. For testing, the system was attached to the CARLA driving simulator, which allows assessing the approach's feasibility at minimal cost and without putting drivers and passengers at risk. At the core of the respective subsystems, different learning algorithms were implemented using Deep Neural Networks, Recurrent Neural Networks, and Reinforcement Learning.
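As a rough illustration of such a stress-recognition pipeline (not the paper's deep models), the sketch below windows a synthetic wearable heart-rate stream into simple features and applies a toy threshold rule; every value and function name here is an assumption made for the example.

```python
import numpy as np

def window_features(hr, window=10):
    """Split a 1-D heart-rate stream into fixed windows of simple features.

    Returns an array of shape (n_windows, 2): per-window mean and std.
    """
    hr = np.asarray(hr, dtype=float)
    n = len(hr) // window
    windows = hr[: n * window].reshape(n, window)
    return np.stack([windows.mean(axis=1), windows.std(axis=1)], axis=1)

def classify_stress(features, hr_threshold=90.0):
    """Toy rule: flag a window as stressed when mean heart rate is elevated."""
    return features[:, 0] > hr_threshold

# Synthetic stream: 20 calm samples, then 20 elevated ones.
calm = np.full(20, 70.0)       # resting heart rate, bpm
stressed = np.full(20, 110.0)  # elevated heart rate, bpm
feats = window_features(np.concatenate([calm, stressed]))
print(classify_stress(feats))  # first two windows calm, last two stressed
```

In the paper's setting, the threshold rule is replaced by recurrent models over the same kind of windowed sensor features, and the per-window stress labels drive the driving-style personalization.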

Safe AI -- How is this Possible?

A new generation of increasingly autonomous and self-learning cyber-physical systems (CPS) is being developed for control applications in the real world. These systems are AI-based in that they leverage techniques from the field of Artificial Intelligence (AI) to cope flexibly with imprecision, inconsistency, and incompleteness; to learn inherently from experience; and to adapt to changing and even unforeseen situations. This extra flexibility of AI systems makes it harder to predict their behavior. Moreover, AI systems are usually safety-critical in that they may cause real harm in (and to) the real world. Consequently, the central question regarding the development of such systems is how to handle or even overcome this basic dichotomy between unpredictable and safe behavior of AI systems. In other words, how can we best construct systems that exploit AI techniques without incurring the frailties of "AI-like" behavior?