Harnessing Machine Learning to Accelerate Fast-Charging Battery Design

#artificialintelligence

According to a new study in the journal Nature Materials, researchers from Stanford University have harnessed machine learning to overturn long-held assumptions about the way lithium-ion batteries charge and discharge, giving engineers a new list of criteria for making longer-lasting battery cells. This is the first time machine learning has been coupled with knowledge obtained from experiments and physics equations to uncover and describe how lithium-ion batteries degrade over their lifetime. Machine learning accelerates analyses by finding patterns in large amounts of data. In this instance, the researchers taught the model the physics of a battery failure mechanism in order to design superior and safer fast-charging battery packs. Fast charging can be stressful and harmful to lithium-ion batteries, and resolving this problem is vital to the fight against climate change.
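To give a concrete feel for what coupling machine learning with physics can look like, below is a minimal sketch that fits a regressor on physics-motivated features of synthetic cycling data. It is an illustration only, not the Stanford team's pipeline; the square-root-of-cycles fade law, the variable names, and the synthetic data are all assumptions.

```python
# Minimal sketch (not the Stanford pipeline): combining a physics-motivated
# degradation feature with a learned regressor to predict capacity fade.
# The sqrt-of-cycles fade law and all parameters below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic cycling data: cycle count, charge rate (C-rate), and cell temperature (deg C)
n = 500
cycles = rng.uniform(1, 1000, n)
c_rate = rng.uniform(0.5, 4.0, n)          # faster charging stresses the cell more
temp = rng.uniform(20, 45, n)

# Toy "ground truth": fade grows roughly with sqrt(cycles), accelerated by
# charge rate and temperature (assumed functional form, for illustration only).
fade = 0.02 * np.sqrt(cycles) * (1 + 0.1 * c_rate) * (1 + 0.01 * (temp - 25))
fade += rng.normal(0, 0.01, n)             # measurement noise

# Physics-informed features: feed the model the sqrt-cycle term and its
# interactions, so the learned part only has to capture the corrections.
X = np.column_stack([np.sqrt(cycles), np.sqrt(cycles) * c_rate, np.sqrt(cycles) * temp])
model = Ridge(alpha=1.0).fit(X, fade)

print("R^2 on training data:", model.score(X, fade))
```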


AI safety system offers autonomous vehicle drivers seven seconds warning

#artificialintelligence

Researchers in Germany have come up with a safety system that could warn drivers of autonomous cars that they will have to take control up to seven seconds in advance. The team, at the Technical University of Munich (TUM), developed the new early warning system for autonomous vehicles using artificial intelligence that learns from thousands of real traffic situations. The study of the system was carried out in cooperation with the BMW Group. The researchers claim that if used in today's self-driving vehicles, the system could give seven seconds' advance warning of potentially critical situations that the cars cannot handle alone, with over 85 per cent accuracy. To make self-driving cars safe in the future, development efforts often rely on sophisticated models aimed at giving cars the ability to analyse the behaviour of other road users.


New Uses For AI

#artificialintelligence

AI is being embedded into an increasing number of technologies that are commonly found inside most chips, and initial results show dramatic improvements in both power and performance. Unlike high-profile AI implementations, such as self-driving cars or natural language processing, much of this work flies well under the radar for most people. It generally takes the path of least disruption, building on or improving technology that already exists. But in addition to having a significant impact, these developments provide design teams with a baseline for understanding what AI can and cannot do well, how it behaves over time and under different environmental and operating conditions, and how it interacts with other systems. Until recently, the bulk of AI/machine learning has been confined to the data center or specialized mil/aero applications. It has since begun migrating to the edge, which itself is just beginning to take form, driven by a rising volume of data and the need to process that data closer to the source.


Researchers design AI-based early warning system for autonomous cars

#artificialintelligence

With massive importance being given to passenger safety in self-driving cars, a team of researchers from the Technical University of Munich (TUM) has designed a new AI-based early warning system for autonomous vehicles. The study was carried out in association with the BMW Group and published in the journal IEEE Transactions on Intelligent Transportation Systems. Its results show that, when used in today's self-driving vehicles, the system can warn drivers seven seconds in advance, with an accuracy of 85%, of critical situations that the cars cannot handle alone. Notably, the technology uses cameras and sensors to capture the surrounding conditions while simultaneously recording vehicle data such as road conditions, speed, visibility, and steering wheel angle. The AI system, based on a recurrent neural network, then learns to detect patterns in the captured data.
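As a rough illustration of the kind of recurrent classifier described above, the sketch below shows a GRU that reads a short window of vehicle and environment signals and emits a takeover-warning probability. The network size, feature set, sampling rate, and threshold logic are assumptions, not the TUM/BMW implementation.

```python
# Minimal sketch of a recurrent takeover-warning classifier (illustrative only).
import torch
import torch.nn as nn

class TakeoverWarningRNN(nn.Module):
    def __init__(self, n_features: int = 6, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, n_features)
        _, h = self.rnn(x)                       # h: (1, batch, hidden), last hidden state
        return torch.sigmoid(self.head(h[-1]))   # (batch, 1) warning probability

model = TakeoverWarningRNN()
window = torch.randn(4, 70, 6)                   # e.g. 7 s of signals at 10 Hz for 4 scenes (assumed)
p_warn = model(window)
print(p_warn.squeeze(-1))                        # warn the driver if p_warn exceeds a chosen threshold
```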


Artificially Intelligent Cars Are Getting Better at Preventing Your Death

#artificialintelligence

Researchers have developed a new early-warning system for self-driving vehicles, leveraging artificial intelligence (AI) capable of learning from thousands of real traffic scenarios, according to a new study conducted with the BMW Group and published in the journal IEEE Transactions on Intelligent Transportation Systems. In other words, you may soon ride in a self-driving car with an AI's figurative finger on the buzzer, ready to keep you from dying in transit by giving seven seconds' warning of critical situations the cars can't handle on their own. And so far, the AI can do it with more than 85% accuracy. The drive to increase safety for self-driving cars feels almost self-explanatory, but efforts typically rely on complicated models designed to enhance vehicles' ability to analyze the behavior of other road users. But driving on public roads always comes with risk and uncertainty.


MIT Researchers Develop AI System To Cope With Imperfect Inputs

#artificialintelligence

Researchers from MIT have developed a new AI approach that could soon find its way into self-driving cars and industrial robots in smart factories. Designed to handle unpredictable interactions safely, the deep-learning algorithm promises to enhance the robustness of AI systems in safety-critical scenarios. From avoiding a pedestrian dashing across the road in unusually bad weather to overcoming the malicious obstruction of sensors in a manufacturing plant, the new approach enables AI systems to react robustly even when critical inputs deviate because of unreliable measurements or noise. The details of the approach are outlined in a study by Michael Everett, Björn Lütjens, and Jonathan How of MIT. Titled "Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning", the study was published last month in IEEE Transactions on Neural Networks and Learning Systems. The algorithm works by building a healthy "skepticism" of the measurements and inputs an AI system receives, to help machines navigate our real, imperfect world.
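To make the "skepticism" idea concrete, the sketch below shows a robust action-selection rule: each action is scored by its worst case over perturbed observations within an assumed noise bound, and the agent picks the action whose worst case is best. The published method computes certified lower bounds on these values; here the worst case is merely approximated by sampling, and the tiny linear Q-function is a placeholder.

```python
# Minimal sketch of robust action selection under observation uncertainty.
# Sampling approximates the worst case; the real method uses certified bounds.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))            # placeholder linear Q-function: 4 obs dims -> 3 actions

def q_values(obs):
    return W @ obs

def robust_action(obs, eps=0.1, n_samples=256):
    """Pick the action maximizing the minimum Q over sampled perturbations ||delta||_inf <= eps."""
    deltas = rng.uniform(-eps, eps, size=(n_samples, obs.shape[0]))
    worst_q = np.min([q_values(obs + d) for d in deltas], axis=0)   # per-action worst case
    return int(np.argmax(worst_q))

obs = rng.normal(size=4)               # a noisy sensor reading
print("nominal action:", int(np.argmax(q_values(obs))))
print("robust action: ", robust_action(obs))
```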


Investigating Value of Curriculum Reinforcement Learning in Autonomous Driving Under Diverse Road and Weather Conditions

arXiv.org Artificial Intelligence

Applications of reinforcement learning (RL) are popular in autonomous driving tasks. That said, tuning the performance of an RL agent and guaranteeing generalization across a variety of driving scenarios is still largely an open problem. In particular, achieving good performance under complex road and weather conditions requires exhaustive tuning and computation time. Curriculum RL, which focuses on solving simpler automation tasks first in order to transfer knowledge to complex tasks, is attracting attention in the RL community. The main contribution of this paper is a systematic study investigating the value of curriculum reinforcement learning in autonomous driving applications. For this purpose, we set up several driving scenarios in a realistic driving simulator, with varying road complexity and weather conditions. Next, we train and evaluate the performance of RL agents on different sequences of task combinations and curricula. The results show that curriculum RL can yield significant gains in complex driving tasks, both in driving performance and in sample complexity. They also demonstrate that different curricula may offer different benefits, which hints at future research directions for automated curriculum training.
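A minimal sketch of the curriculum idea, assuming a placeholder agent and a training stub rather than the paper's simulator or API: the same agent is trained on scenarios ordered from simple to complex, carrying its parameters forward between stages so knowledge transfers.

```python
# Minimal curriculum-training loop (illustrative only; scenario names, the Agent
# class, and train_on_scenario are assumed stand-ins, not the paper's setup).
from dataclasses import dataclass, field

@dataclass
class Agent:
    params: dict = field(default_factory=dict)    # stands in for the policy network weights

def train_on_scenario(agent: Agent, scenario: str, steps: int) -> float:
    """Placeholder for an RL training loop in the driving simulator."""
    agent.params[scenario] = steps                # pretend the weights were updated here
    return 0.0                                    # would return episode return / success rate

curriculum = [
    ("straight_road_clear", 10_000),
    ("curvy_road_clear",    20_000),
    ("curvy_road_rain",     30_000),
    ("complex_urban_fog",   50_000),
]

agent = Agent()
for scenario, steps in curriculum:                # parameters carry over between stages
    score = train_on_scenario(agent, scenario, steps)
    print(f"finished {scenario}: score={score}")
```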


Explanations in Autonomous Driving: A Survey

arXiv.org Artificial Intelligence

The automotive industry has witnessed increasing development over the past decades, from manufacturing manually operated vehicles to manufacturing vehicles with a high level of automation. With the recent developments in Artificial Intelligence (AI), automotive companies now employ high-performance AI models to enable vehicles to perceive their environment and make driving decisions with little or no human influence. With the hope of deploying autonomous vehicles (AV) on a commercial scale, the acceptance of AVs by society becomes paramount and may depend largely on their degree of transparency, trustworthiness, and compliance with regulations. The assessment of these acceptance requirements can be facilitated by providing explanations for AVs' behaviour. Explainability is therefore seen as an important requirement for AVs: they should be able to explain what they have 'seen', done, and might do in the environments where they operate. In this paper, we provide a comprehensive survey of existing work in explainable autonomous driving. First, we motivate the need for explanations and examine existing standards related to AVs. Second, we identify and categorise the different stakeholders involved in the development, use, and regulation of AVs and show their perceived need for explanation. Third, we provide a taxonomy of explanations and review previous work on explanation in the different AV operations. Finally, we close by pointing out pertinent challenges and future research directions. This survey provides the fundamental knowledge required by researchers who are interested in explanation in autonomous driving.


Researchers suggest pedestrians should wear technology to communicate with driverless cars

#artificialintelligence

In the paper, they say their Reflective Surface for Intelligent Transportation Systems (REITS) adopts a multi-antenna design, which "enables constructive blind beamforming to return an enhanced radar signal in the incident direction", and that preliminary results show REITS improves the detection distance of a self-driving car radar by a factor of 3.63.
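A back-of-envelope way to read the reported factor of 3.63 is through the standard monostatic radar equation, in which received power falls off as the fourth power of range, so the maximum detection range scales as the fourth root of the returned-power gain. This is textbook radar scaling, not the paper's derivation, and the numbers below are purely illustrative.

```python
# Illustrative radar-equation arithmetic (not from the paper): received power ~ 1/R^4,
# so detection range scales as (returned-power gain) ** 0.25.
range_factor = 3.63
required_gain = range_factor ** 4          # returned-power gain implied by the range factor
print(f"a {range_factor}x range improvement implies roughly a {required_gain:.0f}x stronger return")

def range_improvement(power_gain: float) -> float:
    """Range improvement implied by a given returned-power gain."""
    return power_gain ** 0.25

print(range_improvement(required_gain))    # recovers ~3.63
```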


Limitations of Post-Hoc Feature Alignment for Robustness

arXiv.org Artificial Intelligence

Feature alignment is an approach to improving robustness to distribution shift that matches the distribution of feature activations between the training distribution and test distribution. A particularly simple but effective approach to feature alignment involves aligning the batch normalization statistics between the two distributions in a trained neural network. This technique has received renewed interest lately because of its impressive performance on robustness benchmarks. However, when and why this method works is not well understood. We investigate the approach in more detail and identify several limitations. We show that it only significantly helps with a narrow set of distribution shifts and we identify several settings in which it even degrades performance. We also explain why these limitations arise by pinpointing why this approach can be so effective in the first place. Our findings call into question the utility of this approach and Unsupervised Domain Adaptation more broadly for improving robustness in practice.
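For concreteness, here is a minimal sketch, assuming a PyTorch model, of the batch normalization statistic alignment the abstract refers to: after training, only the BatchNorm running means and variances are re-estimated on unlabeled test-distribution batches, while all learned weights are left untouched. The toy network and random test batches are placeholders.

```python
# Minimal sketch of aligning BatchNorm statistics to the test distribution (illustrative only).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
# ... assume the model has already been trained on the source distribution ...

def align_bn_stats(model: nn.Module, test_batches, momentum: float = 0.1) -> None:
    """Update only BN running statistics using (unlabeled) test-distribution batches."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.train()                      # BN layers use batch stats and update running stats
            m.momentum = momentum
    with torch.no_grad():                  # no weight updates; only BN statistics change
        for x in test_batches:
            model(x)
    model.eval()

test_batches = [torch.randn(16, 3, 32, 32) for _ in range(4)]   # stand-in for shifted test data
align_bn_stats(model, test_batches)
```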