The localization of self-driving cars is needed for several tasks, such as keeping maps updated, tracking objects, and planning. Localization algorithms often take advantage of maps to estimate the car's pose. Since maintaining and using several maps is computationally expensive, it is important to analyze which type of map is more adequate for each application. In this work, we provide data for such an analysis by comparing the accuracy of particle filter localization when using occupancy, reflectivity, color, or semantic grid maps. To the best of our knowledge, such an evaluation is missing in the literature. To build semantic and color grid maps, point clouds from a Light Detection and Ranging (LiDAR) sensor are fused with images captured by a front-facing camera. Semantic information is extracted from the images with a deep neural network. Experiments are performed in varied environments, under diverse conditions of illumination and traffic. Results show that occupancy grid maps lead to the most accurate localization, followed by reflectivity grid maps. In most scenarios, localization with semantic grid maps kept position tracking without catastrophic losses, but with errors 2 to 3 times larger than those of the former two. Color grid maps led to inaccurate and unstable localization, even when using a robust metric, the entropy correlation coefficient, to compare online data with the map.
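The entropy correlation coefficient mentioned above can be computed from the joint histogram of map and observation values. Below is a minimal NumPy sketch, assuming grayscale intensity patches and a hypothetical bin count; the abstract does not specify the actual binning or patch extraction, so those details are illustrative assumptions:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a normalized probability distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropy_correlation_coefficient(map_patch, obs_patch, bins=16):
    """ECC between a map patch and an online observation patch.

    ECC = 2 * I(X; Y) / (H(X) + H(Y)), ranging over [0, 1];
    higher values indicate a better statistical match.
    """
    joint, _, _ = np.histogram2d(map_patch.ravel(), obs_patch.ravel(), bins=bins)
    joint = joint / joint.sum()            # normalize counts to probabilities
    hx = entropy(joint.sum(axis=1))        # marginal entropy of the map values
    hy = entropy(joint.sum(axis=0))        # marginal entropy of the observation
    hxy = entropy(joint.ravel())           # joint entropy
    if hx + hy == 0:
        return 0.0                         # both patches are constant
    mi = hx + hy - hxy                     # mutual information
    return 2.0 * mi / (hx + hy)
```

An ECC of 1 indicates a perfect statistical match between the patches, while values near 0 mean the observation shares little information with the map; such a normalized, histogram-based metric is robust to illumination changes, which is why it suits noisy camera data.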
The investigation of factors that contribute to making humans trust Autonomous Vehicles (AVs) will play a fundamental role in the adoption of this technology. The user's ability to form a mental model of the AV, which is crucial to establishing trust, depends on effective user-vehicle communication; thus, the importance of Human-Machine Interaction (HMI) is poised to increase. In this work, we propose a methodology to validate the user experience in AVs based on continuous, objective information gathered from physiological signals while the user is immersed in a Virtual Reality-based driving simulation. We applied this methodology to the design of a head-up display interface delivering visual cues about the vehicle's sensory and planning systems. Through this approach, we obtained qualitative and quantitative evidence that a complete picture of the vehicle's surroundings, despite the higher cognitive load, is conducive to a less stressful experience. Moreover, after having been exposed to a more informative interface, users involved in the study were also more willing to test a real AV. The proposed methodology could be extended by adjusting the simulation environment, the HMI, and/or the vehicle's Artificial Intelligence modules to investigate other aspects of the user experience.
In recent years, many sectors have experienced significant progress in automation, associated with growing advances in artificial intelligence and machine learning. There are already automated robotic weapons, which are able to evaluate and engage targets on their own, and there are already autonomous vehicles that do not need a human driver. It is argued that the use of increasingly autonomous systems (AS) should be guided by the policy of human control, according to which humans should exercise a significant level of judgment over AS. While in the military sector there is a fear that AS could mean that humans lose control over life-and-death decisions, in the transportation domain, on the contrary, there is a strongly held view that autonomy could bring significant operational benefits by removing the need for a human driver. This article explores the notion of human control in the United States in the two domains of defense and transportation. The operationalization of emerging policies of human control results in a typology of direct and indirect human control exercised over the use of AS. The typology helps to steer the debate away from the linguistic complexities of the term "autonomy." It instead identifies where human factors are undergoing important changes and ultimately informs the formulation of more detailed rules and standards, which differ across domains, applications, and sectors.
In 'The Terminator' series of action films starring Arnold Schwarzenegger, a cybernetic organism (cyborg) is sent back in time from the future to kill the mother of the man who will lead the fight against Skynet, an artificial intelligence system that will cause a nuclear holocaust. Terrifying and at times comical ("I'll be back"), The Terminator cyborg was among the first presentations of artificial intelligence (AI) to a global audience. While numerous facets of AI have been developed over the past couple of decades, many with positive outcomes, the fear of AI being programmed to do something devastating to the human race, of computers "going rogue", continues to persist. On the other hand, AI holds tremendous potential for benefiting humanity in ways we are only just starting to recognize. This article gives an overview of artificial intelligence, including some of its most interesting manifestations. The first step is defining what we mean by artificial intelligence. One definition of AI is "the simulation of human intelligence processes by machines, especially computers." Such processes include learning by acquiring information, understanding the rules for using that information, employing reasoning to reach conclusions, and self-correcting.
This ebook, based on the latest ZDNet / TechRepublic special feature, examines how driverless cars, trucks, semis, delivery vehicles, drones, and other UAVs are poised to unleash a new level of automation in the enterprise. Few technologies have been more anticipated heading into the 2020s than autonomous vehicles. Tantalizingly close and yet still perhaps decades from market adoption in some use cases, the technology is as promising as it is misunderstood. You've heard the consumer hype, but what gets less ink are the transformative changes that autonomous vehicles will bring -- in some cases already are bringing -- to the enterprise. Affecting sectors as disparate as shipping and logistics, energy, agriculture, transportation, construction, and infrastructure -- to name just a few -- it's hard to overstate the impact of the diverse and versatile set of technologies lumped into the decidedly broad category of 'autonomous vehicles'. This guide will help you sort the hype from the business reality and tell you all you need to know about the autonomous vehicle revolution on the ground, in the air, and even at sea. In 1939, General Motors predicted we'd have an autonomous vehicle highway system up and running by the dawn of the 1960s. As with a lot of autonomous vehicle hype, that prediction was a tad premature, but it demonstrates the long history of autonomous vehicle development.
Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.
Connected and autonomous vehicles (CAVs) will form the backbone of future next-generation intelligent transportation systems (ITS), providing travel comfort and road safety along with a number of value-added services. Such a transformation---which will be fuelled by concomitant advances in technologies for machine learning (ML) and wireless communications---will enable a future vehicular ecosystem that is more capable and more efficient. However, there are lurking security problems related to the use of ML in such a critical setting, where an incorrect ML decision may not only be a nuisance but can lead to the loss of precious lives. In this paper, we present an in-depth overview of the various challenges associated with the application of ML in vehicular networks. In addition, we formulate the ML pipeline of CAVs and present various potential security issues associated with the adoption of ML methods. In particular, we focus on the perspective of adversarial ML attacks on CAVs and outline a solution to defend against adversarial attacks in multiple settings.
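As a concrete illustration of the adversarial ML threat surveyed above, consider the fast gradient sign method (FGSM), one of the simplest evasion attacks. The sketch below applies it to a toy logistic-regression "perception" model rather than the deep networks used in real CAV stacks; all names and parameters here are illustrative assumptions, not the paper's attack or defense pipeline:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps=0.1):
    """Fast Gradient Sign Method against a logistic-regression classifier.

    Perturbs the input x (within an L-infinity budget eps) in the
    direction that increases the cross-entropy loss for true label y.
    """
    z = float(np.dot(w, x) + b)
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1
    grad_x = (p - y) * w              # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)  # bounded, sign-based perturbation
```

The key point for safety-critical vehicular settings is that a small, norm-bounded perturbation, often imperceptible in the original sensor data, can be enough to flip the classifier's decision.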
Earlier this month, I crawled into Dr. Wendy Ju's autonomous car simulator to explore the future of human-machine interfaces at Cornell Tech's Tata Innovation Center. Dr. Ju recently moved to the Roosevelt Island campus from Stanford University. While in California, the roboticist was famous for making videos capturing people's reactions to self-driving cars using students disguised as "ghost-drivers" in car-seat costumes. Professor Ju's work raises serious questions about the metaphysical impact of docility.
More than a dozen manufacturers are currently developing either partially or fully automated vehicles (AVs), and it is plausible to expect that more will join this promising area soon (Endsley 2017, KPMG 2016). While the major automobile companies are already convinced that the future of transportation lies in AVs, the general public as well as policy makers and scholars are not so united in their attitude, expressing worries and doubts about the safety and reliability of self-driving cars. While the technical problems seem to be just a matter of time today, the governance, policy, and legal issues persist and naturally dominate the discussions of AV implementation in our everyday commuting and transportation, as these are watched with great interest by a public that is usually very hard to bring to a consensus. As an even broader discussion of AVs at the European level can be expected in the coming years, the expectations, presumptions, and overall feelings toward AVs are already being analysed on local and international markets (Volvo Car USA 2016, Bansal & Kockelman 2017, Kyriakidis et al. 2015, Payre et al. 2014). From the perspective of ethics and responsible research and innovation (RRI), many experts call for action to assess the pros and cons of this emerging technology in terms of public health and safety, ethical, legal, and social issues, as well as to strengthen public engagement in the ongoing discussions (Goodall 2014, Fagnant and Kockelman 2015, Hevelke & Nida-Rümelin 2015, Gogoll & Müller 2016, Fleetwood 2017, Geistfeld 2017).