
Autonomous Vehicles: Overviews


Why you should learn Computer Vision and how you can get started

#artificialintelligence

In today's world, Computer Vision technologies are everywhere. They are embedded within many of the tools and applications that we use on a daily basis. However, we often pay little attention to the underlying Computer Vision technologies because they tend to run in the background. As a result, only a small fraction of those outside the tech industry know about the importance of those technologies. Therefore, the goal of this article is to provide an overview of Computer Vision to those with little to no knowledge about the field. I attempt to achieve this goal by answering three questions: What is Computer Vision? Why should you learn Computer Vision? And how can you get started?


Intro To Computer Vision - Classification

#artificialintelligence

Thanks to advancements in deep learning & artificial neural networks, computer vision is increasingly capable of mimicking human vision & is paving the way for self-driving cars, medical diagnosis, scanning recorded surveillance, manufacturing & much more. In this introductory workshop, Sage Elliott will give an overview of deep learning as it relates to computer vision, with a focused discussion around image classification. You will also learn about careers in computer vision & who some of the biggest users of this technology are. About Your Instructor: Sage Elliott is a Machine Learning Developer Evangelist for Sixgill with about 10 years of experience in the engineering space. He has a passion for exploring new technologies & building communities.
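At its core, image classification means mapping an image to a label. The workshop above covers deep classifiers; as a hedged illustration of the underlying idea only (not material from the workshop), here is a minimal nearest-centroid classifier over flattened grayscale "images", where each class is summarized by the mean of its training examples:

```python
def nearest_centroid_classify(image, class_centroids):
    """Classify a flattened grayscale image by its nearest class centroid.

    A toy stand-in for a deep classifier: each class is represented by the
    mean of its training images, and a new image receives the label of the
    closest mean under squared Euclidean distance.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(class_centroids, key=lambda label: sq_dist(image, class_centroids[label]))


# Hypothetical 2x2 "images" with pixel intensities in [0, 1]:
centroids = {
    "dark": [0.1, 0.1, 0.1, 0.1],
    "bright": [0.9, 0.9, 0.9, 0.9],
}
```

Real systems replace the raw-pixel distance with learned convolutional features, but the decision rule (assign the nearest class in some feature space) is the same in spirit.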


Everything So Far In CVPR 2020 Conference

#artificialintelligence

The Computer Vision and Pattern Recognition (CVPR) conference is one of the most popular events around the globe, where computer vision experts and researchers gather to share their work and views on trending techniques across various computer vision topics, including object detection, video understanding, and visual recognition, among others. This year, Computer Vision (CV) researchers and engineers have gathered virtually for the conference, which runs from 14 June to 19 June. In this article, we have listed down all the important topics and tutorials that were discussed on the first and second days of the conference. In this tutorial, the researchers presented the latest developments in robust model fitting: recent advancements in new sampling and local optimisation methods, novel branch-and-bound and mathematical programming algorithms among the global methods, as well as the latest developments in differentiable alternatives to the Random Sample Consensus (RANSAC) algorithm. To learn what RANSAC is and how it works, click here.
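For readers unfamiliar with RANSAC: it fits a model robustly by repeatedly fitting to a minimal random sample of the data, then keeping the model that the most points agree with. A minimal sketch for 2-D line fitting (an illustration of the classic algorithm, not code from the CVPR tutorial):

```python
import random


def ransac_line(points, n_iters=200, inlier_tol=0.5, seed=0):
    """Fit a line y = m*x + b robustly with RANSAC.

    Repeatedly fit a line through two randomly sampled points, count how
    many points lie within inlier_tol of that line, and keep the model
    with the largest inlier set. Outliers never dominate the fit because
    a model built from them attracts few inliers.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip vertical sample pairs in this simple sketch
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers


# Points on y = 2x + 1, plus two gross outliers the fit should ignore:
points = [(x, 2 * x + 1) for x in range(10)] + [(3, 30), (7, -20)]
(m, b), inliers = ransac_line(points)
```

The differentiable alternatives discussed at the tutorial replace the hard inlier count with a soft, gradient-friendly score so the sampler can be trained end-to-end.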


Electronics

#artificialintelligence

Various artificial intelligence (AI) technologies have pervaded daily life. For instance, speech recognition has enabled users to interact with a system using their voice, and recent advances in computer vision have made self-driving cars commercially available. However, if not carefully designed, these AI-based approaches may not fully benefit people with different abilities (e.g., loss of vision, weak technical background). This Special Issue focuses on bridging or closing the information gap for people with disabilities and differing needs. Manuscripts should be submitted online at www.mdpi.com


Computer Vision: An overview about the field of computer vision

#artificialintelligence

Computer vision is a field in computer science that falls under the umbrella of artificial intelligence (AI). Computer vision (CV) software developers strive to give computers the ability to process images in much the same way that humans do. They expect the computer will be able to identify objects, to make appropriate decisions based on what it "sees," and then to produce relevant outputs. Today, facial recognition software, autonomous vehicles, certain forms of surveillance, and gesture recognition are just a few examples of CV systems at work. Why is computer vision so complicated? Every parent can recall their child going through phases when "what's that?" became a recurring question.


Seeing the Road Ahead: The Path Toward Fully Autonomous, Self-Driving Cars

#artificialintelligence

The way SPEKTRA scans the environment is also different. Unlike conventional digital radar systems that capture all the information at once, analogous to a powerful flashbulb illuminating a scene, Metawave's radar works more like a laser beam able to see one specific section of space at a time. The beam rapidly sweeps the environment, detecting and classifying all the objects in the vehicle's field of view within milliseconds. Metawave's approach increases range and accuracy while reducing interference and the probability of clutter, all with very little computational overhead. "We're focused on long range and high resolution, which is the hardest problem to solve in automotive radar today," says Zaidi.


Improving Movement Predictions of Traffic Actors in Bird's-Eye View Models using GANs and Differentiable Trajectory Rasterization

arXiv.org Machine Learning

One of the most critical pieces of the self-driving puzzle is the task of predicting the future movement of surrounding traffic actors, which allows the autonomous vehicle to safely and effectively plan its future route in a complex world. Recently, a number of algorithms have been proposed to address this important problem, spurred by a growing interest of researchers from both industry and academia. Methods based on top-down scene rasterization on one side and Generative Adversarial Networks (GANs) on the other have proven particularly successful, obtaining state-of-the-art accuracies on the task of traffic movement prediction. In this paper we build upon these two directions and propose a raster-based conditional GAN architecture, powered by a novel differentiable rasterizer module at the input of the conditional discriminator that maps generated trajectories into the raster space in a differentiable manner. This simplifies the task for the discriminator, as trajectories that are not scene-compliant are easier to discern, and allows the gradients to flow back, forcing the generator to output better, more realistic trajectories. We evaluated the proposed method on a large-scale, real-world data set, showing that it outperforms state-of-the-art GAN-based baselines.
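The key trick in the abstract is making rasterization differentiable so a raster-space loss can push gradients back to trajectory coordinates. One common way to achieve this (a hedged sketch of the general technique, not the paper's actual module) is to splat each waypoint onto the grid as a smooth Gaussian bump instead of a hard pixel assignment:

```python
import numpy as np


def soft_rasterize(trajectory, grid_size=16, sigma=1.0):
    """Rasterize 2-D trajectory waypoints onto a grid with Gaussian kernels.

    Each waypoint (x, y) contributes a smooth Gaussian bump centered at its
    location, so every raster cell is a differentiable function of the
    waypoint coordinates. Gradients from a loss on the raster can therefore
    flow back to the trajectory itself, which a hard (argmax-style)
    rasterizer would block.
    """
    ys, xs = np.mgrid[0:grid_size, 0:grid_size].astype(float)
    raster = np.zeros((grid_size, grid_size))
    for px, py in trajectory:
        sq_dist = (xs - px) ** 2 + (ys - py) ** 2
        raster += np.exp(-sq_dist / (2.0 * sigma ** 2))
    return raster


# Two waypoints produce two smooth peaks on a 16x16 grid:
raster = soft_rasterize([(4.0, 4.0), (8.0, 8.0)])
```

In the paper's setting, such a rasterized trajectory is fed to the conditional discriminator alongside the scene raster, which is what lets the generator receive gradients through the raster space.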


Q&A: Solving connected car challenges with edge AI (Includes interview)

#artificialintelligence

Over 72.5 million connected car units are estimated to be sold by 2023, enabling nearly 70% of all passenger vehicles to actively exchange data with external sources. The amount of data produced by these smart vehicles, and the latency of processing it, will overwhelm traditional data processing solutions, leading to potential life-or-death scenarios, according to Ramya Ravichandar of Foghorn. We speak with Ravichandar about how connected car manufacturers are implementing edge AI solutions for real-time video recognition, multi-factor authentication, and other innovative capabilities to decrease network latency and optimize data gathering, analysis and security. Digital Journal: What are the current trends with autonomous and connected cars? Ramya Ravichandar: Automotive companies are looking to improve real-time functionalities and accelerate autonomous operations of passenger vehicles.


Automotive DevOps: Rules of the Road Ahead

#artificialintelligence

The Indian automotive industry is on the edge of disruption due to increasing automation, new business models and digitization. This disruption also comes through innovation and transformational change, as industry players adapt to shifting preferences on car ownership and new technological developments such as Autonomous Vehicles (AVs), IoT, cloud and the proliferation of electric and connected vehicles. Apart from electric and connected vehicles, the auto industry is also adopting technologies like cloud and IoT to improve the driving experience. From design and operation to servicing, cloud technology will be increasingly used at every stage to reduce costs and eliminate any scope for wastage. Cloud computing enables better vehicle engineering, and thanks to advanced analytic capabilities, design teams can deliver exactly what customers want.


Towards Safer Self-Driving Through Great PAIN (Physically Adversarial Intelligent Networks)

arXiv.org Machine Learning

Automated vehicles' neural networks suffer from overfitting, poor generalizability, and untrained edge cases due to limited data availability. Researchers synthesize randomized edge-case scenarios to assist in the training process, though simulation introduces the potential for overfitting to latent rules and features. Automating worst-case scenario generation could yield informative data for improving self-driving. To this end, we introduce a "Physically Adversarial Intelligent Network" (PAIN), wherein self-driving vehicles interact aggressively in the CARLA simulation environment. We train two agents, a protagonist and an adversary, using dueling double deep Q networks (DDDQNs) with prioritized experience replay. The coupled networks alternately seek to collide and to avoid collisions, such that the "defensive" avoidance algorithm increases the mean time to failure and distance traveled under non-hostile operating conditions. The trained protagonist becomes more resilient to environmental uncertainty and less prone to corner-case failures resulting in collisions than an agent trained without an adversary.
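Prioritized experience replay, mentioned in the abstract, samples stored transitions in proportion to how surprising they were (typically their TD-error), rather than uniformly. A minimal proportional-sampling sketch of the general idea (an illustration, not the paper's implementation; a production version would use a sum-tree and importance-sampling weights):

```python
import random


class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay.

    Transitions are sampled with probability proportional to
    priority ** alpha, so high-priority (e.g. high TD-error)
    experiences are revisited more often during training.
    """

    def __init__(self, capacity, alpha=0.6, seed=0):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []
        self.rng = random.Random(seed)

    def add(self, transition, priority=1.0):
        if len(self.buffer) >= self.capacity:  # evict the oldest transition
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        weights = [p ** self.alpha for p in self.priorities]
        return self.rng.choices(self.buffer, weights=weights, k=batch_size)

    def update_priority(self, index, priority):
        """Refresh a transition's priority after its TD-error is recomputed."""
        self.priorities[index] = priority
```

In the PAIN setup, this biasing matters because collisions and near-collisions are rare but carry most of the training signal for both the adversary and the defensive protagonist.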