Delseny, Hervé, Gabreau, Christophe, Gauffriau, Adrien, Beaudouin, Bernard, Ponsolle, Ludovic, Alecu, Lucian, Bonnin, Hugues, Beltran, Brice, Duchel, Didier, Ginestet, Jean-Brice, Hervieu, Alexandre, Martinez, Ghilaine, Pasquet, Sylvain, Delmas, Kevin, Pagetti, Claire, Gabriel, Jean-Marc, Chapdelaine, Camille, Picard, Sylvaine, Damour, Mathieu, Cappi, Cyril, Gardès, Laurent, De Grancey, Florence, Jenn, Eric, Lefevre, Baptiste, Flandin, Gregory, Gerchinovitz, Sébastien, Mamalet, Franck, Albore, Alexandre
Machine Learning (ML) seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles or recognizing speech. It is also an opportunity to implement and embed new capabilities that are out of reach of classical implementation techniques. However, ML techniques introduce new potential risks, and they have therefore only been applied in systems where their benefits are considered worth the increased risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification 3 Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT), as part of the DEEL Project.
The automotive industry has witnessed an increasing level of development in the past decades, from manufacturing manually operated vehicles to manufacturing vehicles with a high level of automation. With the recent developments in Artificial Intelligence (AI), automotive companies now employ high-performance AI models to enable vehicles to perceive their environment and make driving decisions with little or no influence from a human. With the hope of deploying autonomous vehicles (AV) on a commercial scale, the acceptance of AVs by society becomes paramount and may largely depend on their degree of transparency, trustworthiness, and compliance with regulations. The assessment of these acceptance requirements can be facilitated through the provision of explanations for AVs' behaviour. Explainability is therefore seen as an important requirement for AVs. AVs should be able to explain what they have 'seen', done, and might do in the environments where they operate. In this paper, we provide a comprehensive survey of the existing work in explainable autonomous driving. First, we open by providing a motivation for explanations and examining existing standards related to AVs. Second, we identify and categorise the different stakeholders involved in the development, use, and regulation of AVs and show their perceived need for explanation. Third, we provide a taxonomy of explanations and review previous work on explanation in the different AV operations. Finally, we conclude by pointing out pertinent challenges and future research directions. This survey provides the fundamental knowledge required by researchers who are interested in explanation in autonomous driving.
If you thought rocket science was hard, try training a computer to safely change lanes while behind the wheel of a full-size SUV in heavy drive-time traffic. Autonomous vehicle developers have faced myriad similar challenges over the past three decades, but nothing, it seems, turns the wheels of innovation quite like a bit of good, old-fashioned competition -- one which DARPA was more than happy to provide. In Driven: The Race to Create the Autonomous Car, Insider senior editor and former Wired Transportation editor Alex Davies takes the reader on an immersive tour of DARPA's "Grand Challenges" -- the agency's autonomous vehicle trials, which drew top talent from across academia and the private sector in an effort to spur on the state of autonomous vehicle technology -- and profiles many of the elite engineers who took part in the competitions. In the excerpt below, however, Davies recalls how, back in 2014, then-CEO Travis Kalanick steered Uber into the murky waters of autonomous vehicle technology, setting off a flurry of acquihires, buyouts, furious R&D efforts, and one fatal accident -- only to end up selling off the division this past December. Excerpt from Driven: The Race to Create the Autonomous Car by Alex Davies.
Dubljević, Veljko, List, George F., Milojevich, Jovan, Ajmeri, Nirav, Bauer, William, Singh, Munindar P., Bardaka, Eleni, Birkland, Thomas, Edwards, Charles, Mayer, Roger, Muntean, Ioan, Powers, Thomas, Rakha, Hesham, Ricks, Vance, Samandar, M. Shoaib
The expansion of artificial intelligence (AI) and autonomous systems has shown the potential to generate enormous social good while also raising serious ethical and safety concerns. AI technology is increasingly adopted in transportation. A survey of various in-vehicle technologies found that approximately 64% of the respondents used a smartphone application to assist with their travel. The top-used applications were navigation and real-time traffic information systems. Among those who used smartphones during their commutes, the top-used applications were navigation and entertainment. There is a pressing need to address relevant social concerns to allow for the development of systems of intelligent agents that are informed and cognizant of ethical standards. Doing so will facilitate the responsible integration of these systems in society. To this end, we have applied Multi-Criteria Decision Analysis (MCDA) to develop a formal Multi-Attribute Impact Assessment (MAIA) questionnaire for examining the social and ethical issues associated with the uptake of AI. We have focused on the domain of autonomous vehicles (AVs) because of their imminent expansion. However, AVs could serve as a stand-in for any domain where intelligent, autonomous agents interact with humans, either on an individual level (e.g., pedestrians, passengers) or a societal level.
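The MAIA questionnaire's exact scoring scheme is not detailed in the abstract above; the simplest and most common MCDA baseline is a weighted sum over normalized per-criterion scores. The criteria names and weights below are hypothetical, for illustration only:

```python
def weighted_sum_score(scores, weights):
    """Aggregate per-criterion scores (each in [0, 1]) into a single value
    using the simplest MCDA model: a weighted sum, with weights summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[criterion] * w for criterion, w in weights.items())

# Hypothetical criteria and weights for comparing AV deployment options.
weights = {"safety": 0.5, "privacy": 0.2, "fairness": 0.2, "cost": 0.1}
option = {"safety": 0.9, "privacy": 0.6, "fairness": 0.7, "cost": 0.4}
print(weighted_sum_score(option, weights))  # 0.75
```

Real MCDA instruments often use more elaborate aggregation (e.g. outranking methods), but the weighted-sum form makes the trade-off structure explicit.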
When the Inquisition required him to drop his study of what the Roman Catholic Church insisted was not a heliocentric solar system, Galileo Galilei turned his energy to the less controversial question of how to stick a telescope onto a helmet. The king of Spain had offered a hefty reward to anyone who could solve the stubborn mystery of how to determine a ship's longitude while at sea: 6,000 ducats up front and another 2,000 per year for life. Galileo thought his headgear, with the telescope fixed over one eye and making its wearer look like a misaligned unicorn, would net him the reward. Determining latitude is easy for any sailor who can pick out the North Star, but finding longitude escaped the citizens of the 17th century, because it required a precise knowledge of time. That's based on a simple principle: Say you set your clock before sailing west from Greenwich.
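The arithmetic behind that principle is simple: the Earth rotates 360 degrees in 24 hours, so every hour your local solar noon lags behind the Greenwich-set clock corresponds to 15 degrees of longitude west. A minimal sketch of the calculation:

```python
def longitude_from_time(greenwich_time_h, local_solar_time_h):
    """Degrees of longitude west of Greenwich (positive = west),
    from the gap between a Greenwich-set clock and local solar time."""
    DEGREES_PER_HOUR = 360.0 / 24.0  # Earth rotates 15 degrees per hour
    return (greenwich_time_h - local_solar_time_h) * DEGREES_PER_HOUR

# If the ship's Greenwich-set clock reads 14:00 when the sun says local noon,
# the ship is 2 hours behind Greenwich, i.e. 30 degrees west.
print(longitude_from_time(14.0, 12.0))  # 30.0
```

The hard part in the 17th century was not this division but building a clock that kept Greenwich time accurately through months at sea.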
As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we don't end up in technology-induced dystopias. As strongly argued by Green in his book Smart Enough City, the incorporation of technology in city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and designing for. There are philosophical and ethical questions involved, along with various challenges relating to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally there are calls for technology to be made more humane and human-compatible. For example, Stuart Russell has written a book called Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical challenges, to a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving other challenges. The paper also advises on the current limitations, pitfalls, and future directions of research in these domains, and how it can fill the current gaps and lead to better solutions.
Autonomous vehicle design involves an almost incomprehensible combination of engineering tasks including sensor fusion, path planning, and predictive modeling of human behavior. But despite the best efforts to consider all possible real world outcomes, things can go awry. More than two and a half years ago, in Tempe, Arizona, an Uber "self-driving" car crashed into pedestrian Elaine Herzberg, killing her. In mid-September, the safety driver behind the wheel of that car, Rafaela Vasquez, was charged with negligent homicide. Uber's test vehicle was driving 39 mph when it struck Herzberg. Uber's sensors detected her six seconds before impact but determined that the object sensed was a false positive.
Over the years, Fox's animated comedy The Simpsons has successfully predicted several real-life developments. From Donald Trump becoming the United States President to Disney purchasing 20th Century Fox, the show's writers have been correct more than a few times. Though not all their predictions have received the attention they deserve. Back in Season 5, in the episode entitled "Homer Loves Flanders," the frenemy neighbors become better acquainted with each other. At one point, Ned takes his new best friend to a baseball game.
The localization of self-driving cars is needed for several tasks, such as keeping maps updated, tracking objects, and planning. Localization algorithms often take advantage of maps for estimating the car pose. Since maintaining and using several maps is computationally expensive, it is important to analyze which type of map is more adequate for each application. In this work, we provide data for such analysis by comparing the accuracy of a particle filter localization when using occupancy, reflectivity, color, or semantic grid maps. To the best of our knowledge, such an evaluation is missing in the literature. For building semantic and color grid maps, point clouds from a Light Detection and Ranging (LiDAR) sensor are fused with images captured by a front-facing camera. Semantic information is extracted from images with a deep neural network. Experiments are performed in varied environments, under diverse conditions of illumination and traffic. Results show that occupancy grid maps lead to more accurate localization, followed by reflectivity grid maps. In most scenarios, localization with semantic grid maps maintained position tracking without catastrophic losses, but with errors 2 to 3 times larger than with occupancy grid maps. Color grid maps led to inaccurate and unstable localization, even when using a robust metric, the entropy correlation coefficient, to compare online data with the map.
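The particle filter localization evaluated above can be sketched in a heavily simplified 1-D form. The real system operates on 2-D grid maps built from LiDAR and camera data and, for some map types, uses different comparison metrics (e.g. the entropy correlation coefficient); this sketch only shows the generic predict-update-resample cycle against a map:

```python
import math
import random

def localize_step(particles, weights, motion, measurement, grid_map, noise=0.1):
    """One predict-update-resample cycle of a 1-D particle filter.

    particles:   list of poses (cell indices as floats)
    grid_map:    expected sensor value per cell (e.g. occupancy in [0, 1])
    measurement: the actual sensor reading at the true pose
    """
    # Predict: propagate each particle by the odometry estimate plus noise.
    particles = [p + motion + random.gauss(0.0, noise) for p in particles]

    # Update: weight each particle by how well the map value at its pose
    # matches the measurement (Gaussian likelihood of the residual).
    new_weights = []
    for p, w in zip(particles, weights):
        cell = min(max(int(round(p)), 0), len(grid_map) - 1)
        err = measurement - grid_map[cell]
        new_weights.append(w * math.exp(-err * err / (2 * noise * noise)))
    total = sum(new_weights) or 1.0  # guard against all-zero weights
    weights = [w / total for w in new_weights]

    # Resample: draw particles in proportion to their weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)
```

The quality of the map directly drives the update step: a map whose cell values discriminate poorly between poses (as the abstract reports for color grid maps) produces flat likelihoods and unstable tracking.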
Jaynarayan H. Lala is a Senior Principal Engineering Fellow at Raytheon Technologies, San Diego, CA, USA. Carl E. Landwehr is a Research Scientist at George Washington University and a Visiting Professor at University of Michigan, Ann Arbor, MI, USA. John F. Meyer is a Professor Emeritus of Computer Science and Engineering at University of Michigan, Ann Arbor, MI, USA. This Viewpoint is derived from material produced as part of the Intelligent Vehicle Dependability and Security (IVDS) project of IFIP Working Group 10.4.