

A new way to train AI systems could keep them safer from hackers


The context: One of the biggest unsolved weaknesses of deep learning is its vulnerability to so-called adversarial attacks. When added to the input of an AI system, these perturbations, seemingly random and undetectable to the human eye, can make things go completely awry. Stickers strategically placed on a stop sign, for instance, can trick a self-driving car into seeing a 45-mile-per-hour speed limit sign, while stickers on a road can confuse a Tesla into drifting into the wrong lane. Why safety matters: Most adversarial research focuses on image recognition systems, but deep-learning-based image reconstruction systems are vulnerable too. This is especially troubling in health care, where such systems are routinely used to reconstruct medical images like CT or MRI scans from raw measurement data.
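The mechanics of such a perturbation can be illustrated with the fast gradient sign method (FGSM): nudge every input value a tiny step in the direction that hurts the model most. The sketch below uses a toy linear classifier (a hypothetical stand-in for a real image model, with made-up class labels), not any system mentioned above:

```python
import numpy as np

# Toy linear "classifier": positive score -> "stop sign", else "speed limit".
# A hypothetical stand-in for a real image model, for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=100)                  # model weights
x = 0.02 * w / np.linalg.norm(w)          # a weakly confident "stop sign" input

def predict(x):
    return "stop sign" if w @ x > 0 else "speed limit"

# FGSM-style perturbation: move each input value a small step (eps) in the
# direction that lowers the correct class's score. For a linear model, the
# gradient of the score with respect to the input is simply w.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(predict(x))      # "stop sign"
print(predict(x_adv))  # "speed limit" -- a tiny change flips the answer
```

Each value moves by at most 0.05, yet the prediction flips, which is exactly the imperceptible-but-catastrophic behavior the article describes.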

How AI is Changing the Mobility Landscape - DATAVERSITY


The automotive industry is attracting significant investment these days, much of it focused on artificial intelligence (AI) and the optimization of self-driving technology. Meanwhile, new mobility systems and players are making their way into the automotive market: Tesla is trying to improve its Autopilot system, Uber is testing robo-taxis, and Google is developing self-driving cars.

Neuromorphic Computing: The Next-Level Artificial Intelligence


Can AI function like a human brain? Armed with neuromorphic computing, researchers are ready to show the world that this dream can change it for the better. As the benefits become clear, the success of the machine learning and AI quest seems to depend to a great extent on the success of neuromorphic computing. Future technologies like autonomous vehicles and robots will need to access and process enormous amounts of data and information in real time. Today, to a limited extent, this is done by machine learning and AI systems that depend on supercomputer power.

Defining "Vision" in "Computer Vision"


Computer vision, often referred to simply as vision, is a cutting-edge field within computer science that deals with enabling computers, devices, or machines in general to see, understand, interpret, or manipulate what is being seen. Computer vision builds on deep learning techniques and in a few cases also employs natural language processing, as a natural next step, to analyze text extracted from images. With the advances in deep learning, building functions like image classification, object detection, tracking, and image manipulation has become simpler and more accurate, leading the way to more complex autonomous applications like self-driving cars, humanoids, and drones. With deep learning, we can now manipulate images, for example superimposing Tom Cruise's features onto another face, or converting a photo into a sketch or a watercolor painting.

Best Resources to learn AI & Deep Learning


Over the last few years, deep learning has proven itself to be a game-changer. This area of data science is largely responsible for recent advances in machine learning and artificial intelligence. From academic research to self-driving cars, deep learning now shows up in nearly every corner of the field. It is a complex and vast field made up of several components; it cannot be mastered in a day, and digging deeper into it will take several months.

Improving Movement Predictions of Traffic Actors in Bird's-Eye View Models using GANs and Differentiable Trajectory Rasterization Machine Learning

One of the most critical pieces of the self-driving puzzle is the task of predicting future movement of surrounding traffic actors, which allows the autonomous vehicle to safely and effectively plan its future route in a complex world. Recently, a number of algorithms have been proposed to address this important problem, spurred by a growing interest of researchers from both industry and academia. Methods based on top-down scene rasterization on one side and Generative Adversarial Networks (GANs) on the other have been shown to be particularly successful, obtaining state-of-the-art accuracies on the task of traffic movement prediction. In this paper we build upon these two directions and propose a raster-based conditional GAN architecture, powered by a novel differentiable rasterizer module at the input of the conditional discriminator that maps generated trajectories into the raster space in a differentiable manner. This simplifies the task for the discriminator, as trajectories that are not scene-compliant are easier to discern, and allows gradients to flow back, forcing the generator to output better, more realistic trajectories. We evaluated the proposed method on a large-scale, real-world data set, showing that it outperforms state-of-the-art GAN-based baselines.
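The key idea, rasterizing trajectories so that gradients still flow back to the coordinates, can be sketched with Gaussian splatting: each point contributes a smooth bump to the grid instead of a hard binned pixel. This is only an illustrative sketch of why soft rasterization is differentiable, not the paper's actual rasterizer module:

```python
import numpy as np

def soft_rasterize(points, size=8, sigma=0.5):
    """Splat each (x, y) trajectory point onto a size x size grid with a
    Gaussian kernel. Unlike hard binning (which is piecewise constant),
    the raster is a smooth function of the coordinates, so gradients can
    flow from raster-space losses back to the trajectory itself."""
    ys, xs = np.mgrid[0:size, 0:size]
    raster = np.zeros((size, size))
    for px, py in points:
        raster += np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
    return raster

traj = [(2.0, 3.0), (4.0, 4.5)]   # a hypothetical generated trajectory
raster = soft_rasterize(traj)
print(raster.shape)               # (8, 8)
```

Because moving a point slightly changes the raster smoothly, a discriminator operating on the raster can pass a useful gradient back to the generator, which is the property the paper exploits.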

TensorFI: A Flexible Fault Injection Framework for TensorFlow Applications Machine Learning

As machine learning (ML) has seen increasing adoption in safety-critical domains (e.g., autonomous vehicles), the reliability of ML systems has also grown in importance. While prior studies have proposed efficient error-resilience techniques (e.g., selective instruction duplication), a fundamental requirement for realizing them is a detailed understanding of the application's resilience. In this work, we present TensorFI, a high-level fault injection (FI) framework for TensorFlow-based applications. TensorFI is able to inject both hardware and software faults in general TensorFlow programs. TensorFI is a configurable FI tool that is flexible, easy to use, and portable. It can be integrated into existing TensorFlow programs to assess their resilience for different fault types (e.g., faults in particular operators). We use TensorFI to evaluate the resilience of 12 ML programs, including DNNs used in the autonomous vehicle domain. Our tool is publicly available at
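To make the idea of injecting a hardware fault into an operator concrete, here is a minimal toy sketch (plain Python, not TensorFI's actual API): a ReLU whose output has a single bit flipped in one element's float32 encoding, the way an FI tool might wrap a chosen operator.

```python
import random
import struct

def bitflip(value, bit):
    """Flip one bit in the float32 encoding of `value`, mimicking a
    single-bit hardware fault."""
    packed = struct.unpack("<I", struct.pack("<f", value))[0]
    return struct.unpack("<f", struct.pack("<I", packed ^ (1 << bit)))[0]

def faulty_relu(xs, bit=30, rng=None):
    """ReLU with a fault injected into one randomly chosen output element,
    the way an FI tool might instrument a selected operator."""
    rng = rng or random
    out = [max(0.0, v) for v in xs]
    i = rng.randrange(len(out))
    out[i] = bitflip(out[i], bit)
    return out

clean = [max(0.0, v) for v in [1.0, -2.0, 3.0]]
faulty = faulty_relu([1.0, -2.0, 3.0], rng=random.Random(0))
print(clean)   # [1.0, 0.0, 3.0]
print(faulty)  # same list with one element corrupted by the bit flip
```

A flip in a high exponent bit (bit 30 here) can turn a small activation into infinity or a near-zero subnormal, which is why downstream resilience analysis matters.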

Unsupervised Sequence Forecasting of 100,000 Points for Unsupervised Trajectory Forecasting Artificial Intelligence

Predicting the future is a crucial first step to effective control, since systems that can predict the future can select plans that lead to desired outcomes. In this work, we study the problem of future prediction at the level of 3D scenes, represented by point clouds captured by a LiDAR sensor, i.e., directly learning to forecast the evolution of >100,000 points that comprise a complete scene. We term this Scene Point Cloud Sequence Forecasting (SPCSF). By directly predicting the densest-possible 3D representation of the future, the output contains richer information than other representations such as future object trajectories. We design a method, SPCSFNet, evaluate it on the KITTI and nuScenes datasets, and find that it demonstrates excellent performance on the SPCSF task. To show that SPCSF can benefit downstream tasks such as object trajectory forecasting, we present a new object trajectory forecasting pipeline leveraging SPCSFNet. Specifically, instead of forecasting at the object level as in conventional trajectory forecasting, we propose to forecast at the sensor level and then apply detection and tracking on the predicted sensor data. As a result, our new pipeline removes the need for object trajectory labels and enables large-scale training with unlabeled sensor data. Surprisingly, we found our new pipeline based on SPCSFNet was able to outperform the conventional pipeline using state-of-the-art trajectory forecasting methods, all of which require future object trajectory labels. Finally, we propose a new evaluation procedure and two new metrics to measure the end-to-end performance of the trajectory forecasting pipeline. Our code will be made publicly available at
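Scoring a predicted point cloud against the ground truth needs a set-to-set metric. The abstract does not specify the paper's metrics, but Chamfer distance is the standard choice for point clouds and illustrates the idea; a minimal NumPy sketch:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    mean nearest-neighbour distance in each direction, summed. A standard
    way to score a predicted point cloud against the ground truth."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

pred = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # tiny toy "forecast"
gt   = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0]])   # toy ground truth
print(chamfer_distance(pred, gt))  # 0.1
```

The metric is permutation-invariant, so it never requires per-object correspondences or trajectory labels, matching the label-free spirit of the pipeline described above. (The brute-force (N, M) distance matrix is for clarity; real >100,000-point scenes would use a KD-tree.)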

Inside the lab where Waymo is building the brains for its driverless cars


Right now, a minivan with no one behind the steering wheel is driving through a suburb of Phoenix, Arizona. And while that may seem alarming, the company that built the "brain" powering the car's autonomy wants to assure you that it's totally safe. Waymo, the self-driving unit of Alphabet, is the only company in the world to have fully driverless vehicles on public roads today. That was made possible by a sophisticated set of neural networks powered by machine learning about which very little has been known -- until now. For the first time, Waymo is lifting the curtain on what is arguably the most important (and most difficult-to-understand) piece of its technology stack. The company, which is ahead in the self-driving car race by most metrics, confidently asserts that its cars have the most advanced brains on the road today. Anyone can buy a bunch of cameras and LIDAR sensors, slap them on a car, and call it autonomous. But training a self-driving car to behave like a human driver, or, more importantly, to drive better than a human, is on the bleeding edge of artificial intelligence research.

Towards Safer Self-Driving Through Great PAIN (Physically Adversarial Intelligent Networks) Machine Learning

Automated vehicles' neural networks suffer from overfitting, poor generalizability, and untrained edge cases due to limited data availability. Researchers synthesize randomized edge-case scenarios to assist in the training process, though simulation introduces potential for overfitting to latent rules and features. Automating worst-case scenario generation could yield informative data for improving self-driving. To this end, we introduce a "Physically Adversarial Intelligent Network" (PAIN), wherein self-driving vehicles interact aggressively in the CARLA simulation environment. We train two agents, a protagonist and an adversary, using dueling double deep Q networks (DDDQNs) with prioritized experience replay. The coupled networks alternately seek to collide and to avoid collisions, such that the "defensive" avoidance algorithm increases the mean time to failure and distance traveled under non-hostile operating conditions. The trained protagonist becomes more resilient to environmental uncertainty and less prone to corner-case failures resulting in collisions than an agent trained without an adversary.
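One ingredient named above, prioritized experience replay, is easy to sketch: transitions with larger temporal-difference (TD) error are replayed more often, so the agent spends more updates on surprising events such as near-collisions. A minimal list-based sketch (real implementations use a sum-tree for O(log n) sampling; the transitions here are hypothetical placeholders):

```python
import random

class PrioritizedReplay:
    """Minimal prioritized experience replay buffer. Sampling probability
    is proportional to |TD error| + eps, so surprising transitions are
    replayed more often; eps keeps every priority strictly positive."""
    def __init__(self, eps=1e-3):
        self.buffer, self.priorities, self.eps = [], [], eps

    def add(self, transition, td_error):
        self.buffer.append(transition)
        self.priorities.append(abs(td_error) + self.eps)

    def sample(self, k, rng=random):
        # O(n) weighted sampling; a sum-tree makes this O(log n) at scale.
        return rng.choices(self.buffer, weights=self.priorities, k=k)

buf = PrioritizedReplay()
buf.add(("s0", "a0", 0.0, "s1"), td_error=0.01)   # routine transition
buf.add(("s1", "a1", 1.0, "s2"), td_error=5.0)    # surprising near-miss
batch = buf.sample(100, rng=random.Random(0))
print(batch.count(("s1", "a1", 1.0, "s2")))  # the high-error transition dominates
```

In the adversarial setup above, this bias toward high-error transitions is what lets the protagonist keep learning from the rare, dangerous situations the adversary manufactures.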