For an autonomous car to drive safely, being able to predict the behavior of other road users is essential. A research team at the Massachusetts Institute of Technology's CSAIL, working with researchers at the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University in Beijing, has developed a new ML system that could one day help driverless cars predict in real time the upcoming movements of nearby drivers, cyclists and pedestrians. They titled their study "M2I: From Factored Marginal Trajectory Prediction to Interactive Prediction." Qiao Sun, Junru Gu and Hang Zhao are the IIIS members who participated in this study, while Xin Huang and Brian Williams represented MIT. Humans are unpredictable, which makes predicting road-user behavior in urban environments inherently difficult.
Back in 2013, Tesla CEO Elon Musk said: "Self-driving cars are the natural extension of active safety and obviously something we think we should do." Fully autonomous vehicles (AVs) are no longer a technology of the future. Established and emerging manufacturers have embarked on a journey to produce the most reliable driverless cars to compete in a growing market. But many people still do not trust that AVs are safe, despite their potential benefits of fuel efficiency, reduced emissions and improved mobility. We study the power of brands. Our research found that companies can take advantage of their brand reputation to encourage consumers to adopt driverless cars.
A new study from AutoPacific released on Thursday found that most drivers are not yet ready to fully trust autonomous vehicles. The research and consulting firm surveyed 600 licensed drivers aged 18 to 80 across the U.S., asking for their thoughts on the future of self-driving cars. Only 29 percent of drivers said they would feel safe with their own fully autonomous vehicle, while 26 percent said they would feel comfortable only as a passenger in somebody else's fully autonomous car. Currently, no completely automated vehicles are available for public purchase. "This is technology that most consumers are going to need to see and experience for several years before becoming comfortable," Ed Kim, president and chief analyst of AutoPacific, said in a press release.
Artificial intelligence (AI) has become a part of everyday conversation and of our lives. It has been called the new electricity that is revolutionizing the world. Both industry and academia invest heavily in AI. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results on many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs driven by over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI: its history, its potential and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common AI representations, methods and machine learning techniques are covered, and the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what should be done in the future. In the appendix, we look at the development of AI education, especially from the perspective of course contents at our own university.
Abstract-- Deep neural network (DNN)-based autonomous driving systems (ADSs) are expected to reduce road accidents and improve safety in the transportation domain because they remove the factor of human error from driving tasks. However, a DNN-based ADS may sometimes exhibit erroneous or unexpected behaviour under unexpected driving conditions, which may cause accidents. Safety assurance is therefore vital to the ADS. A DNN-based ADS is a highly complex system that puts forward a strong demand for robustness, more specifically, the ability to detect unexpected driving conditions so as to prevent potential inconsistent behaviour. It is not possible to generalize a DNN model's performance to all driving conditions; driving conditions that were not considered during the training of the ADS may therefore lead to unpredictable consequences for the safety of autonomous vehicles. This study proposes an autoencoder and time-series-analysis-based anomaly detection system to prevent safety-critical inconsistent behaviour of autonomous vehicles at runtime. Our approach, called DeepGuard, consists of two components. The first component, the inconsistent behaviour predictor, is based on an autoencoder and time series analysis to reconstruct the driving scenarios. Based on the reconstruction error (e) and a threshold (θ), it distinguishes normal from unexpected driving scenarios and predicts potential inconsistent behaviour. The second component provides on-the-fly safety guards; that is, it automatically activates healing strategies to prevent inconsistencies in the behaviour. We evaluated the performance of DeepGuard in predicting injected anomalous driving scenarios using already available open-sourced DNN-based ADSs in the Udacity simulator. Our simulation results show that the best variant of DeepGuard can predict up to 93% of inconsistent behaviours on the CHAUFFEUR ADS, 83% on the DAVE-2 ADS, and 80% on the EPOCH ADS, outperforming SELFORACLE and DeepRoad.
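The thresholding idea at the heart of the inconsistent behaviour predictor (compare reconstruction error e against a threshold θ fitted on nominal data) can be sketched in a few lines. This is a minimal illustration, not DeepGuard's implementation: the error values, the Gaussian noise model, and the mean-plus-three-sigma threshold rule are all assumptions made for the example.

```python
import random
import statistics

random.seed(0)

# Hypothetical reconstruction errors of an autoencoder on nominal
# driving frames: small and roughly Gaussian (illustrative values).
nominal_errors = [random.gauss(0.05, 0.01) for _ in range(1000)]

# Fit the threshold theta on nominal data only, here as mean + 3*stdev;
# DeepGuard-style systems tune this trade-off between missed anomalies
# and false alarms.
mu = statistics.mean(nominal_errors)
sigma = statistics.stdev(nominal_errors)
theta = mu + 3 * sigma

# At runtime, frames whose reconstruction error exceeds theta are
# flagged as unexpected scenarios, triggering a safety guard.
runtime_errors = [0.04, 0.06, 0.30, 0.05, 0.45]
flags = [e > theta for e in runtime_errors]
print(flags)  # → [False, False, True, False, True]
```

Only the two clearly out-of-distribution errors trip the guard; the threshold choice controls how aggressively borderline frames are flagged.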
Overall, DeepGuard can prevent up to 89% of all predicted inconsistent behaviours of the ADS by executing predefined safety guards.

I. INTRODUCTION

Autonomous vehicles are among the most promising applications of artificial intelligence and promise a technological revolution in the transportation industry in the near future. Autonomous driving systems (ADSs) use sensors such as cameras, radar, LiDAR, and GPS to automatically produce driving parameters such as vehicle velocity, throttle, braking, steering angle, and direction. Advancements in deep learning have driven progress in autonomous systems such as autonomous vehicles and unmanned aerial vehicles.
Autonomous car racing is a challenging task in robotic control. Traditional modular methods require accurate mapping, localization and planning, which makes them computationally inefficient and sensitive to environmental changes. Recently, deep-learning-based end-to-end systems have shown promising results for autonomous driving and racing. However, they are commonly implemented by supervised imitation learning (IL), which suffers from the distribution-mismatch problem, or by reinforcement learning (RL), which requires a huge amount of risky interaction data. In this work, we present a general deep imitative reinforcement learning approach (DIRL) that achieves agile autonomous racing using visual inputs. The driving knowledge is acquired from both IL and model-based RL: the agent learns from human teachers and also improves itself by safely interacting with an offline world model. We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC car with limited onboard computation. The evaluation results demonstrate that our method outperforms previous IL and RL methods in terms of sample efficiency and task performance. Demonstration videos are available at https://caipeide.github.io/autorace-dirl/
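The combination of imitation and model-based objectives that DIRL describes can be sketched abstractly as minimizing a weighted sum of an imitation loss and a negated model-predicted reward. Everything below is illustrative, not the paper's method: the scalar state/action spaces, the hand-written expert and world model, and the brute-force search that stands in for gradient-based training are all assumptions for the sake of a runnable example.

```python
# Toy sketch: a linear policy a = w*s trained against two terms,
# an imitation (behaviour cloning) loss and a model-based RL term.

def expert_action(s):            # human teacher's demonstration policy
    return 0.5 * s

def world_model(s, a):           # learned offline dynamics: next state, reward
    return s + a, -(s + a) ** 2  # reward peaks when the next state is 0

def total_loss(w, states, lam=0.5):
    il, rl = 0.0, 0.0
    for s in states:
        a = w * s                            # linear policy
        il += (a - expert_action(s)) ** 2    # imitation: match the expert
        _, r = world_model(s, a)
        rl += -r                             # RL: maximize model reward
    return il + lam * rl

states = [1.0, -2.0, 0.5]
# Crude grid search over policy weights stands in for gradient descent.
best_w = min((w / 100 for w in range(-200, 201)),
             key=lambda w: total_loss(w, states))
print(best_w)  # → 0.0
```

IL alone would prefer w = 0.5 (copy the expert) while the RL term alone prefers w = -1 (drive the next state to 0); with lam = 0.5 the combined objective settles exactly between them, illustrating how the two knowledge sources trade off.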
There is mounting public concern over the influence that AI-based systems have on our society. Coalitions in all sectors are acting worldwide to resist harmful applications of AI. From indigenous people addressing the lack of reliable data, to smart-city stakeholders, to students protesting the academic relationships with sex trafficker and MIT donor Jeffrey Epstein, the questionable ethics and values of those heavily investing in and profiting from AI are under global scrutiny. Biased, wrongful, and disturbing assumptions are embedded in AI algorithms and could become locked in without intervention. Our best human judgment is needed to contain AI's harmful impact. Perhaps one of the greatest contributions of AI will be to make us ultimately understand how important human wisdom truly is in life on earth.
While AI has benefited humans, it may also harm them if not appropriately developed. We conducted a literature review of current work on developing AI systems from an HCI perspective. Different from other approaches, our focus is on the unique characteristics of AI technology and the differences between non-AI computing systems and AI systems. We further elaborate on the human-centered AI (HCAI) approach that we proposed in 2019. Our review and analysis highlight unique issues in developing AI systems that HCI professionals have not encountered in non-AI computing systems. To further enable the implementation of HCAI, we promote the research and application of human-AI interaction (HAII) as an interdisciplinary collaboration. There are many opportunities for HCI professionals to play a key role and make unique contributions in the main HAII areas we identified. To support future HCI practice in the HAII area, we also offer enhanced HCI methods and strategic recommendations. In conclusion, we believe that promoting HAII research and application will further enable the implementation of HCAI, allowing HCI professionals to address the unique issues of AI systems and develop human-centered AI systems.
Hervé Delseny, Christophe Gabreau, Adrien Gauffriau, Bernard Beaudouin, Ludovic Ponsolle, Lucian Alecu, Hugues Bonnin, Brice Beltran, Didier Duchel, Jean-Brice Ginestet, Alexandre Hervieu, Ghilaine Martinez, Sylvain Pasquet, Kevin Delmas, Claire Pagetti, Jean-Marc Gabriel, Camille Chapdelaine, Sylvaine Picard, Mathieu Damour, Cyril Cappi, Laurent Gardès, Florence De Grancey, Eric Jenn, Baptiste Lefevre, Gregory Flandin, Sébastien Gerchinovitz, Franck Mamalet, Alexandre Albore
Machine Learning (ML) seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles or recognizing speech. It is also an opportunity to implement and embed new capabilities that are out of the reach of classical implementation techniques. However, ML techniques introduce new potential risks. Therefore, they have so far only been applied in systems where their benefits are considered worth the increase in risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT) as part of the DEEL Project.
The localization of self-driving cars is needed for several tasks, such as keeping maps updated, tracking objects, and planning. Localization algorithms often take advantage of maps for estimating the car's pose. Since maintaining and using several maps is computationally expensive, it is important to analyze which type of map is most suitable for each application. In this work, we provide data for such an analysis by comparing the accuracy of particle filter localization when using occupancy, reflectivity, colour, or semantic grid maps. To the best of our knowledge, such an evaluation is missing in the literature. For building semantic and colour grid maps, point clouds from a Light Detection and Ranging (LiDAR) sensor are fused with images captured by a front-facing camera, and semantic information is extracted from the images with a deep neural network. Experiments are performed in varied environments, under diverse conditions of illumination and traffic. Results show that occupancy grid maps lead to the most accurate localization, followed by reflectivity grid maps. In most scenarios, localization with semantic grid maps kept the position tracking without catastrophic losses, but with errors 2 to 3 times larger than those of the previous maps. Colour grid maps led to inaccurate and unstable localization, even when using a robust metric, the entropy correlation coefficient, for comparing online data with the map.
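For reference, one common definition of the entropy correlation coefficient is 2·I(X;Y)/(H(X)+H(Y)), i.e. mutual information normalized to [0, 1] by the marginal entropies. The sketch below computes it from discrete value lists; the "cell" values are made up for illustration, and this is only the metric, not the paper's localization pipeline.

```python
import math
from collections import Counter

def entropy(counts, n):
    # Shannon entropy (bits) of a discrete distribution given its counts.
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def ecc(xs, ys):
    # Entropy correlation coefficient: 2*I(X;Y) / (H(X) + H(Y)),
    # where I(X;Y) = H(X) + H(Y) - H(X,Y). Ranges from 0 to 1.
    n = len(xs)
    hx = entropy(Counter(xs).values(), n)
    hy = entropy(Counter(ys).values(), n)
    hxy = entropy(Counter(zip(xs, ys)).values(), n)
    denom = hx + hy
    return 0.0 if denom == 0 else 2 * (hx + hy - hxy) / denom

# Illustrative map "cells" versus perfectly matching / weakly related data.
patch     = [0, 0, 1, 1, 2, 2, 0, 1]
matching  = [5, 5, 7, 7, 9, 9, 5, 7]  # one-to-one remapping of patch
unrelated = [0, 1, 2, 0, 1, 2, 0, 1]
print(round(ecc(patch, matching), 3), round(ecc(patch, unrelated), 3))
# → 1.0 0.239
```

A deterministic one-to-one remapping scores exactly 1 even though the raw values differ, which is why such information-theoretic metrics are considered robust for comparing sensor readings against maps built from different modalities.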