

Self-Driving Cars With Convolutional Neural Networks (CNN)


Humanity has been waiting for self-driving cars for several decades. Thanks to the extremely fast evolution of technology, the idea recently went from "possible" to "commercially available in a Tesla". Deep learning is one of the main technologies that enabled self-driving. It's a versatile tool that can solve almost any problem: it is used in physics, for example to analyze proton-proton collisions at the Large Hadron Collider, just as well as in Google Lens to classify pictures. CNN is the primary algorithm these systems use to recognize and classify different parts of the road and to make appropriate decisions. Along the way, we'll see how Tesla, Waymo, and Nvidia use CNN algorithms to make their cars driverless or autonomous. The first self-driving car appeared in 1989: the Autonomous Land Vehicle In a Neural Network (ALVINN). It used neural networks to detect lines, segment the environment, navigate itself, and drive. It worked well, but it was limited by slow processing power and insufficient data.

Three Unique Architectures For Deep Learning Based Recommendation Systems


Deep learning based recommendation system architectures combine multiple simpler approaches in order to remedy the shortcomings of any single approach to extracting, transforming, and vectorizing a large corpus of data into useful recommendations for an end user. High-level extraction architectures are useful for categorization but lack accuracy. Low-level extraction approaches produce committed decisions about what to recommend, but, since they lack context, their recommendations may be banal, repetitive, or even recursive, creating unintelligent 'content bubbles' for the user. High-level architectures cannot 'zoom in' meaningfully, and low-level architectures cannot 'step back' to understand the bigger picture the data is presenting. In this article we'll take a look at three unique approaches that reconcile these two needs into effective, unified frameworks suitable for recommender systems.
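As a rough illustration of how the two levels can be reconciled (this is a toy sketch with invented names and numbers, not one of the article's three architectures), the snippet below blends a low-level embedding similarity with a high-level category affinity into a single ranking score:

```python
import numpy as np

def cosine(a, b):
    """Low-level signal: similarity between user and item embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_score(user_vec, item_vec, item_cat, cat_affinity, alpha=0.5):
    """Blend fine-grained similarity (precise, context-free) with a coarse
    category affinity (contextual, imprecise) into one score."""
    low = cosine(user_vec, item_vec)
    high = cat_affinity.get(item_cat, 0.0)
    return alpha * low + (1 - alpha) * high

# Hypothetical toy data: 3-d item embeddings and per-category user affinities.
user_vec = np.array([1.0, 0.0, 1.0])
items = {
    "doc_a": (np.array([1.0, 0.0, 1.0]), "sports"),
    "doc_b": (np.array([0.0, 1.0, 0.0]), "news"),
}
affinity = {"sports": 0.2, "news": 0.9}

ranked = sorted(items, key=lambda k: hybrid_score(user_vec, *items[k], affinity),
                reverse=True)
print(ranked)
```

The `alpha` weight is where the tension between 'zooming in' and 'stepping back' gets resolved; real systems learn this trade-off rather than fixing it by hand.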

Agent-Based Modeling for Predicting Pedestrian Trajectories Around an Autonomous Vehicle

Journal of Artificial Intelligence Research

This paper addresses modeling and simulating pedestrian trajectories when interacting with an autonomous vehicle in a shared space. Most pedestrian–vehicle interaction models are not suitable for predicting individual trajectories. Data-driven models yield accurate predictions but lack generalizability to new scenarios, usually do not run in real time, and produce results that are poorly explainable. Current expert models do not deal with the diversity of possible pedestrian interactions with the vehicle in a shared space and lack microscopic validation. We propose an expert pedestrian model that combines the social force model with a new decision model for anticipating pedestrian–vehicle interactions. The proposed model integrates different observed pedestrian behaviors, as well as the behaviors of social groups of pedestrians, in diverse interaction scenarios with a car. We calibrate the model by fitting the parameter values on a training set. We validate the model and evaluate its predictive potential through qualitative and quantitative comparisons with ground-truth trajectories. The proposed model reproduces observed behaviors that the social force model has not replicated, and outperforms the social force model at predicting pedestrian behavior around the vehicle on the dataset used. The model generates explainable, real-time trajectory predictions. Additional evaluation on a new dataset shows that the model generalizes well to new scenarios and can be applied to embedded prediction on an autonomous vehicle.
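For intuition about the social force model the paper builds on, here is a simplified, illustrative sketch of one update step: goal attraction plus an exponential repulsion from the vehicle. The parameter values and the single-vehicle repulsion term are assumptions for illustration, not the paper's calibrated model:

```python
import numpy as np

def social_force_step(pos, vel, goal, car_pos, dt=0.1,
                      desired_speed=1.3, tau=0.5, A=2.0, B=1.0):
    """One Euler step of a simplified social force model (illustrative
    parameters): relax toward the desired velocity, pushed away from the car."""
    to_goal = goal - pos
    desired_vel = desired_speed * to_goal / np.linalg.norm(to_goal)
    drive = (desired_vel - vel) / tau                 # attraction toward goal
    away = pos - car_pos
    dist = np.linalg.norm(away)
    repulsion = A * np.exp(-dist / B) * away / dist   # repulsion from vehicle
    vel = vel + (drive + repulsion) * dt
    return pos + vel * dt, vel

# Pedestrian walks toward a goal 10 m ahead; a car sits slightly below the path.
pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])
goal, car = np.array([10.0, 0.0]), np.array([5.0, -1.0])
for _ in range(50):
    pos, vel = social_force_step(pos, vel, goal, car)
print(pos)  # progressed toward the goal, deflected away from the car
```

The paper's contribution is layering a decision model on top of such forces so that anticipatory behaviors (yielding, accelerating, group cohesion) emerge, which the pure force model cannot reproduce.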

Vehicle Violation Detection System using Deep Learning


Nowadays, with a rapid increase in the global population and the number of vehicles, traffic jams and accidents are becoming a worldwide problem. Most accidents and congestion are blamed on illegal driving. In many areas, the management of traffic violations is still largely based on manual monitoring, so even where electromagnetic coil equipment is used to detect illegal vehicles, the punishment that follows is handled manually. An electromagnetic coil, however, is only capable of detecting a small number of violations, such as speeding and running red lights. Researchers have made many advances on these issues, but many of the results remain some distance from industrial implementation in terms of robustness and generalization.

Researchers Develop Parking Analytics Framework Using Deep Learning


Artificial intelligence and deep learning in video analytics are gaining popularity. They have enabled a wide range of industrial applications, including surveillance and public safety, robotics perception, medical intervention, and facial recognition. According to Markets & Markets, the global market for video analytics was valued at USD 5.9 billion in 2021 and is predicted to reach USD 14.9 billion by 2026. Unmanned aerial vehicles (UAVs) have also enabled a wide range of video analytics applications (e.g., aerial surveys), since they provide aerial views of the environment, allowing aerial photos to be collected and processed with deep learning algorithms. Parking analytics is one such critical smart-city application, using deep learning and UAVs to collect real-time data and analyze it in order to maximize parking revenue, enhance parking resource allocation, and better manage public space.

9 Lessons from the Tesla AI Team


Originally published on Towards AI, the world's leading AI and technology news and media company. While OpenAI is famous for its success in NLP and DeepMind is well known for RL and decision making, Tesla is definitely one of the most impactful companies in computer vision.

Deep Learning First: Drive.ai's Path to Autonomous Driving


Last month, IEEE Spectrum went out to California to take a ride in one of Drive.ai's autonomous vehicles. "This is in contrast to a traditional robotics approach," says Sameep Tandon, one of Drive.ai's cofounders. "A lot of companies are just using deep learning for this component or that component, while we view it more holistically." Often, deep learning is used in perception, since there's so much variability inherent in how robots see the world.

How Drive.ai Is Mastering Autonomous Driving with Deep Learning


Among all of the self-driving startups working towards Level 4 autonomy (a self-driving system that doesn't require human intervention in most scenarios), Mountain View, Calif.-based Drive.ai sees deep learning as the only viable way to make a truly useful autonomous car in the near term, says Sameep Tandon, cofounder and CEO. "If you look at the long-term possibilities of these algorithms and how people are going to build [self-driving cars] in the future, having a learning system just makes the most sense. There's so much complication in driving, there are so many things that are nuanced and hard, that if you have to do this in ways that aren't learned, then you're never going to get these cars out there." It's only been about a year since Drive.ai went public, but already, the company has a fleet of four vehicles navigating (mostly) autonomously around the San Francisco Bay Area, even in situations (such as darkness, rain, or hail) that are notoriously difficult for self-driving cars.

State of AI Ethics Report (Volume 6, February 2022)

This report from the Montreal AI Ethics Institute (MAIEI) covers the most salient progress in research and reporting over the second half of 2021 in the field of AI ethics. Particular emphasis is placed on an "Analysis of the AI Ecosystem", "Privacy", "Bias", "Social Media and Problematic Information", "AI Design and Governance", "Laws and Regulations", "Trends", and other areas covered in the "Outside the Boxes" section. The two AI spotlights feature application pieces on "Constructing and Deconstructing Gender with AI-Generated Art" as well as "Will an Artificial Intellichef be Cooking Your Next Meal at a Michelin Star Restaurant?". Given MAIEI's mission to democratize AI, submissions from external collaborators have been featured, such as pieces on the "Challenges of AI Development in Vietnam: Funding, Talent and Ethics" and using "Representation and Imagination for Preventing AI Harms". The report is a comprehensive overview of what the key issues in the field of AI ethics were in 2021, what trends are emergent, what gaps exist, and a peek into what to expect from the field of AI ethics in 2022. It is a resource for researchers and practitioners alike to set their research and development agendas to make contributions to the field of AI ethics.

Automotive Camera [Apply Computer vision, Deep learning] - 1


Those who want to learn and understand only the concepts can take course 1 alone. Those who want to understand the concepts and also want to know and/or do programming of those concepts should take both course 1 and course 2. It is highly recommended to complete course 1 before starting course 2. NOTE: This course does not teach computer vision, deep learning, Python, or OOP from scratch; instead, it uses all of these to develop camera perception algorithms for ADAS and Autonomous Driving applications.