

3D Machine Learning 201 Guide: Point Cloud Semantic Segmentation


Having the skills and knowledge to tackle every aspect of point cloud processing opens many doors: it is like a toolbox for 3D research creativity and development agility. At its core sits an incredible Artificial Intelligence space that targets 3D scene understanding. It is particularly relevant given its importance for many applications, such as self-driving cars, autonomous robots, 3D mapping, virtual reality, and the Metaverse. And if you are an automation geek like me, it is hard to resist exploring new paths to answer these challenges! This tutorial aims to give you what I consider the essential footing to do just that: the knowledge and coding skills for developing 3D Point Cloud Semantic Segmentation systems. But how, exactly, can we apply semantic segmentation? And how challenging is 3D Machine Learning? Let me present a clear, in-depth, hands-on 201 course focused on 3D Machine Learning.
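Before diving in, it helps to see the input/output contract of a semantic segmentation system: every 3D point gets exactly one class label. The sketch below is a hand-written, rule-based baseline in plain Python (the function name, thresholds, and class names are illustrative assumptions, not the tutorial's code); a real system replaces the hand-written rule with a learned model, but the shape of the problem stays the same.

```python
# Rule-based baseline for point cloud semantic segmentation:
# input is a list of (x, y, z) points, output is one label per point.
# The height thresholds are illustrative, not tuned values.

def segment_points(points, ground_z=0.2, low_z=2.0):
    """Assign a coarse semantic label to every (x, y, z) point by height."""
    labels = []
    for x, y, z in points:
        if z <= ground_z:
            labels.append("ground")
        elif z <= low_z:
            labels.append("low_object")   # e.g. cars, shrubs
        else:
            labels.append("high_object")  # e.g. buildings, tree canopies
    return labels

cloud = [(0.0, 0.0, 0.05), (1.0, 2.0, 1.5), (3.0, 1.0, 8.0)]
print(segment_points(cloud))  # → ['ground', 'low_object', 'high_object']
```

A learned segmentation network consumes the same per-point input and emits the same per-point labels; only the decision rule in the middle changes.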

Recruiting Manager - Autonomous Driving


We are currently looking for an experienced Recruiting Manager to join our Mountain View office. We are looking for a team player who will take strong ownership of the entire recruiting process and take charge of scaling our DiDi autonomous-driving teams. Didi Chuxing ("DiDi") is the world's leading mobile transportation platform. We're committed to working with communities and partners to solve the world's transportation, environmental, and employment challenges by using big data-driven deep-learning algorithms that optimize resource allocation. Didi Chuxing's Autonomous-Driving team was established in 2016 and has grown into a comprehensive research and development organization covering HD mapping, perception, behavior prediction, planning and control, infrastructure and simulation, labeling, hardware, mechanical, problem diagnosis, vehicle modifications, connected car, and security, among others.

Deep Learning First: Drive's Path to Autonomous Driving


Last month, IEEE Spectrum went out to California to take a ride in one of Drive's vehicles. "This is in contrast to a traditional robotics approach," says Sameep Tandon, one of Drive's cofounders. "A lot of companies are just using deep learning for this component or that component, while we view it more holistically." Often, deep learning is used in perception, since there's so much variability inherent in how robots see the world.

How Drive Is Mastering Autonomous Driving with Deep Learning


Among all of the self-driving startups working towards Level 4 autonomy (a self-driving system that doesn't require human intervention in most scenarios), Mountain View, Calif.-based Drive sees deep learning as the only viable way to make a truly useful autonomous car in the near term, says Sameep Tandon, cofounder and CEO. "If you look at the long-term possibilities of these algorithms and how people are going to build [self-driving cars] in the future, having a learning system just makes the most sense. There's so much complication in driving, there are so many things that are nuanced and hard, that if you have to do this in ways that aren't learned, then you're never going to get these cars out there." It's only been about a year since Drive went public, but already, the company has a fleet of four vehicles navigating (mostly) autonomously around the San Francisco Bay Area--even in situations (such as darkness, rain, or hail) that are notoriously difficult for self-driving cars.

Automotive Camera [Apply Computer vision, Deep learning] - 1


Those who want to learn and understand only the concepts can take Course 1 alone. Those who want to understand the concepts and also learn how to program them should take both Course 1 and Course 2. It is highly recommended to complete Course 1 before starting Course 2. NOTE: This course does not teach computer vision, deep learning, Python, or OOP from scratch; instead, it uses all of these to develop camera perception algorithms for ADAS and Autonomous Driving applications.

AI-as-a-Service Toolkit for Human-Centered Intelligence in Autonomous Driving Artificial Intelligence

This paper presents a proof-of-concept implementation of the AI-as-a-Service toolkit developed within the H2020 TEACHING project, designed to implement an autonomous driving personalization system driven by the output of an automatic driver stress recognition algorithm, the two together realizing a Cyber-Physical System of Systems. In addition, we implemented a data-gathering subsystem to collect data from different sensors, i.e., wearables and cameras, to automate stress recognition. For testing, the system was attached to the CARLA driving simulator, which allows assessing the approach's feasibility at minimum cost and without putting drivers and passengers at risk. At the core of the respective subsystems, different learning algorithms were implemented using Deep Neural Networks, Recurrent Neural Networks, and Reinforcement Learning.
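The paper's stress-recognition pipeline is built on recurrent networks; the toy sketch below substitutes a windowed-threshold classifier (the function names, window size, and heart-rate threshold are all illustrative assumptions, not the project's code) purely to show the data flow from a wearable heart-rate stream to per-window stress labels that a personalization system could consume.

```python
# Toy stand-in for stress recognition from a wearable heart-rate stream:
# windowed sensor samples in, one binary stress label per window out.

def window_features(hr_samples, window=4):
    """Mean heart rate per non-overlapping window of samples."""
    return [sum(hr_samples[i:i + window]) / window
            for i in range(0, len(hr_samples) - window + 1, window)]

def stress_labels(hr_samples, threshold=100.0, window=4):
    """1 = stressed window, 0 = calm window (toy threshold model)."""
    return [1 if mean_hr > threshold else 0
            for mean_hr in window_features(hr_samples, window)]

hr_stream = [72, 75, 74, 73, 110, 118, 121, 115]  # beats per minute
print(stress_labels(hr_stream))  # → [0, 1]
```

In the real toolkit, the threshold rule would be replaced by a trained recurrent model, and the per-window labels would feed the driving personalization component.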

Safe AI -- How is this Possible? Artificial Intelligence

A new generation of increasingly autonomous and self-learning cyber-physical systems (CPS) is being developed for control applications in the real world. These systems are AI-based in that they leverage techniques from the field of Artificial Intelligence (AI) to flexibly cope with imprecision, inconsistency, and incompleteness, to have an inherent ability to learn from experience, and to adapt to changing and even unforeseen situations. This extra flexibility of AI systems makes it harder to predict their behavior. Moreover, AI systems are usually safety-critical in that they may cause real harm in (and to) the real world. Consequently, the central question regarding the development of such systems is how to handle or even overcome this basic dichotomy between unpredictable and safe behavior of AI systems. In other words, how can we best construct systems that exploit AI techniques without incurring the frailties of "AI-like" behavior?

Tesla's Cameras-only Autonomous System Stirs Controversy

International Business Times

As it pursues the goal of fully autonomous driving, Tesla has bet entirely on cameras and artificial intelligence, shunning other commonly used tools such as laser detection. Tesla Chief Executive Elon Musk has touted a system built around eight "surround" cameras that feed data into the auto's "deep neural network," according to Tesla's website. But as with so many other things involving Tesla, there is controversy. At the giant Consumer Electronics Show (CES) in Las Vegas, Luminar Technologies has set up a demonstration of two autos moving at about 30 miles per hour towards the silhouette of a child. A car utilizing Luminar's lidar, a laser-based system, stops in advance of trouble, while its rival, a Tesla, careens into the mannequin.
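The safety argument behind such demonstrations is mostly arithmetic: total stopping distance is reaction (perception latency) distance plus braking distance. The sketch below uses illustrative numbers only (the latency and deceleration values are assumptions, not measured data from either Tesla's or Luminar's system) to show how a slower perception pipeline needs more room at the same 30 mph approach speed.

```python
# Stopping distance = reaction distance + braking distance.
# All latency/deceleration figures are illustrative assumptions.

def stopping_distance(speed_mph, latency_s, decel_ms2):
    """Metres travelled before a full stop from the given speed."""
    v = speed_mph * 0.44704                 # mph → m/s
    reaction = v * latency_s                # distance covered before braking starts
    braking = v * v / (2.0 * decel_ms2)     # kinematics: v^2 / (2a)
    return reaction + braking

fast_perception = stopping_distance(30, 0.2, 8.0)  # 0.2 s latency
slow_perception = stopping_distance(30, 1.0, 8.0)  # 1.0 s latency
print(round(fast_perception, 1), round(slow_perception, 1))  # → 13.9 24.7
```

The braking term is identical in both cases; the entire gap comes from how early the obstacle is detected, which is why sensing range and latency dominate these comparisons.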

Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence Artificial Intelligence

Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world. AI is the subject of heavy investment in both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs due to over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI, its history, its potential, and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common AI representations, methods, and machine learning approaches are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision, and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what to do in the future. In the appendix, we look at the development of AI education, especially from the perspective of the contents at our own university.

An Intelligent Self-driving Truck System For Highway Transportation Artificial Intelligence

Recently, there have been many advances in the autonomous driving community, attracting a lot of attention from academia and industry. However, existing works mainly focus on cars; extra development is still required for self-driving truck algorithms and models. In this paper, we introduce an intelligent self-driving truck system. Our presented system consists of three main components: 1) a realistic traffic simulation module for generating realistic traffic flow in testing scenarios, 2) a high-fidelity truck model which is designed and evaluated to mimic real truck responses in real-world deployment, and 3) an intelligent planning module with a learning-based decision-making algorithm and a multi-mode trajectory planner, taking into account the truck's constraints, road slope changes, and the surrounding traffic flow. We provide quantitative evaluations for each component individually to demonstrate the fidelity and performance of each part. We also deploy our proposed system on a real truck and conduct real-world experiments which show our system's capacity to mitigate the sim-to-real gap. Our code is available at
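To make the planning component concrete, here is a hedged sketch of what the selection step of a multi-mode trajectory planner can look like: enumerate candidate maneuvers, score them against constraints such as road slope, and keep the cheapest. All mode names, costs, and thresholds below are illustrative assumptions, not the paper's implementation.

```python
# Toy multi-mode selection: score candidate maneuvers and pick the cheapest.
# Costs and thresholds are illustrative assumptions.

def score(lateral_offset_m, slope_pct, lane_blocked):
    """Lower is better: penalize lateral motion and unsafe choices."""
    cost = abs(lateral_offset_m)
    if lane_blocked and lateral_offset_m == 0.0:
        cost += 50.0   # staying in a blocked lane is expensive
    if slope_pct > 4.0 and abs(lateral_offset_m) > 0.5:
        cost += 100.0  # large lateral shifts on steep slopes are unsafe for a loaded truck
    return cost

def pick_mode(slope_pct, lane_blocked=False):
    """Choose among a few candidate maneuvers (lateral offsets in metres)."""
    candidates = {"keep_lane": 0.0, "shift_left": -3.5, "shift_right": 3.5}
    return min(candidates, key=lambda m: score(candidates[m], slope_pct, lane_blocked))

print(pick_mode(slope_pct=2.0, lane_blocked=True))  # → shift_left
print(pick_mode(slope_pct=6.0, lane_blocked=True))  # → keep_lane
```

A real planner scores full candidate trajectories against many more terms (comfort, traffic prediction, vehicle dynamics), but the structure — generate modes, cost them under constraints, select the minimum — is the same.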