Results


Uber's Self-Driving Car Didn't Malfunction, It Was Just Bad

The Atlantic

On March 18, at 9:58 p.m., a self-driving Uber car killed Elaine Herzberg. The vehicle was driving itself down an uncomplicated road in suburban Tempe, Arizona, when it hit her. Herzberg, who was walking across the mostly empty street, was the first pedestrian killed by an autonomous vehicle. The preliminary National Transportation Safety Board report on the incident, released on Thursday, shows that Herzberg died because of a cascading series of errors, human and machine, which present a damning portrait of Uber's self-driving testing practices at the time. Perhaps the worst part of the report is that Uber's system functioned as designed.


Is the Law Ready for Driverless Cars?

Communications of the ACM

I am a law professor who teaches torts and has been studying driverless cars for almost a decade. Despite the headlines, I am reasonably convinced U.S. common law is going to adapt to driverless cars just fine. The courts have seen hundreds of years of new technology, including robots. American judges have had to decide, for example, whether a salvage operation exercises exclusive possession over a shipwreck by visiting it with a robot submarine (it does) and whether a robot copy of a person can violate their rights of publicity (it can). Assigning liability in the event of a driverless car crash is not, in the run of things, all that tall an order.


The Most Important Self-Driving Car Announcement Yet

The Atlantic

The company's autonomous vehicles have driven 5 million miles since Alphabet began the program back in 2009. The first million miles took roughly six years. The next million took about a year. The third million took less than eight months. The fourth million took six months.


Can You Sue a Robocar?

The Atlantic

On Sunday night, a self-driving car operated by Uber struck and killed a pedestrian, 49-year-old Elaine Herzberg, on North Mill Avenue in Tempe, Arizona. It appears to be the first time an automobile driven by a computer has killed a human being by force of impact. The car was traveling at 38 miles per hour. An initial investigation by Tempe police indicated that the pedestrian might have been at fault. According to that report, Herzberg appears to have come "from the shadows," stepping off the median into the roadway, and ending up in the path of the car while jaywalking across the street.


A Comprehensive Self-Driving Car Test

Communications of the ACM

Every few years, I have to pass a test from the Department of Motor Vehicles to drive my car in Virginia (and the rest of the U.S.). Shouldn't a self-driving car be required to do the same thing? Actually, the Waymo self-driving car passes a more comprehensive set of tests than humans do, as I found out after asking about its safety report. Disclaimer: I work for Google, which is an Alphabet company, and Waymo is a sister company.


Model-Based Systems in the Automotive Industry

AI Magazine

The automotive industry was the first to promote the development of applications of model-based systems technology on a broad scale and, as a result, has produced some of the most advanced prototypes and products. In this article, we illustrate the features and benefits of model-based systems and qualitative modeling with prototypes and application systems that were developed in the automotive industry to support on-board diagnosis, design for diagnosability, and failure modes and effects analysis. Car manufacturers and their suppliers face increasingly serious challenges, particularly related to fault analysis and diagnosis during the life cycle of their products. On the one hand, the complexity and sophistication of vehicles are growing, so it is becoming harder to predict interactions between vehicle systems, especially when failures occur. On the other hand, legal regulations and the demand for safety impose strong requirements on the detection and identification of faults and the prevention of their effects, whether harm to the environment or dangerous situations for passengers and other people.
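
To make the flavor of model-based diagnosis concrete, here is a minimal Python sketch (an illustrative toy of our own, not a system from the article; the component names and values are all hypothetical). It predicts observable behavior from component models and treats a component as a single-fault candidate when assuming its failure restores consistency with the observations.

```python
# Minimal consistency-based diagnosis sketch (hypothetical components/values).
# A component is a fault candidate if dropping its "working" assumption
# makes the model's predictions consistent with what was observed.

def predict(health):
    """Predict observable values given which components are assumed healthy."""
    voltage = 12.0 if health.get("battery", True) else 0.0
    current = voltage / 6.0 if health.get("fuse", True) else 0.0
    lamp_on = current > 1.0 if health.get("lamp", True) else False
    return {"lamp_on": lamp_on}

def diagnose(observed):
    components = ["battery", "fuse", "lamp"]
    nominal = {c: True for c in components}
    if predict(nominal) == observed:
        return []  # consistent with all components working: no fault
    # Single-fault candidates: components whose failure explains the data.
    return [c for c in components
            if predict({**nominal, c: False}) == observed]

print(diagnose({"lamp_on": False}))  # -> ['battery', 'fuse', 'lamp']
```

Real model-based systems replace these hard-coded behaviors with component libraries and qualitative models, but the candidate-generation logic keeps this shape.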


The Distributed Vehicle Monitoring Testbed: A Tool for Investigating Distributed Problem Solving Networks

AI Magazine

There are two major themes of this article. First, we introduce readers to the emerging subdiscipline of AI called Distributed Problem Solving, and more specifically the authors' research on Functionally Accurate, Cooperative systems. Second, we discuss the structure of tools that allow more thorough experimentation than has typically been performed in AI research. An example of such a tool, the Distributed Vehicle Monitoring Testbed, will be presented. The testbed simulates a class of distributed knowledge-based problem solving systems operating on an abstracted version of a vehicle monitoring task. This presentation emphasizes how the testbed is structured to facilitate the study of a wide range of issues faced in the design of distributed problem solving networks. Distributed Problem Solving (also called Distributed AI) combines the research interests of the fields of AI and Distributed Processing (Chandrasekaran 1981; Davis 1980, 1982; Fehling & Erman 1983).
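
As a rough illustration of the functionally accurate, cooperative style described above, the following sketch (entirely hypothetical and far simpler than the actual testbed) lets two nodes each form a partial vehicle track from their own sensor region and then merge exchanged partial results into a complete track.

```python
# Toy sketch of cooperating monitoring nodes (hypothetical data/structure).
# Each node covers part of the map, hypothesizes a partial vehicle track
# from local sensor hits, and exchanges partial tracks to build a global one.

class Node:
    def __init__(self, name, region):
        self.name, self.region = name, region  # region = x-range covered
        self.partial_track = []

    def sense(self, ground_truth):
        lo, hi = self.region
        self.partial_track = [(t, x) for t, x in ground_truth if lo <= x < hi]

    def merge(self, other_track):
        # Functionally accurate cooperation: combine possibly incomplete
        # partial results rather than requiring each node to be complete.
        self.partial_track = sorted(set(self.partial_track) | set(other_track))

# Vehicle positions over time steps 0..5 (hypothetical straight-line track).
truth = [(t, 2 * t) for t in range(6)]
a, b = Node("A", (0, 6)), Node("B", (6, 12))
a.sense(truth); b.sense(truth)
a.merge(b.partial_track)
print(a.partial_track)  # full track reassembled from partial local views
```

The interesting questions in the testbed arise when the exchanged partial results are incomplete or inconsistent, which is exactly what the "functionally accurate" framing anticipates.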


The DARPA High-Performance Knowledge Bases Project

AI Magazine

Now completing its first year, the High-Performance Knowledge Bases Project promotes technology for developing very large, flexible, and reusable knowledge bases. The project is supported by the Defense Advanced Research Projects Agency and includes more than 15 contractors in universities, research laboratories, and companies. Programs lack sufficient knowledge about the world to understand and adjust to new situations as people do. Consequently, programs have been poor at interpreting and reasoning about novel and changing events, such as international crises and battlefield situations. These problems are more open-ended than chess.


Steps toward a Cognitive Vision System

AI Magazine

An adequate natural language description of developments in a real-world scene can be taken as proof of "understanding what is going on." An algorithmic system that generates natural language descriptions from video recordings of road traffic scenes can be said to "understand" its input to the extent that the algorithmically generated text is acceptable to the humans judging it. The ability to present a "variant formulation" without distorting the essential parts of the original message is taken as a cue that these essentials have been "understood."

During art lessons, in particular those concerned with classical or ecclesiastic paintings, students are initially invited to merely describe what they see. Frequently, considerable a priori knowledge about ancient mythology or biblical traditions is required to succinctly characterize the depicted scene. Lack of the corresponding knowledge about other cultures can make it difficult for someone with only a European education to really understand, and describe in an appropriate manner, a painting by, for example, a classical Far Eastern artist.

These familiar human experiences will now be "morphed" into a scientific challenge: to design and implement an algorithmic engine that generates an appropriate textual description of essential developments in a video sequence recorded from a real-world scene. Such an algorithmic engine will serve as one example of a cognitive vision system (CVS), which leaves room, as the experienced reader will have noticed, for more than one way to introduce the concept of a CVS. An alternative clearly consists in coupling a computer vision system with a robotic system of some kind and assessing the reactions of such a compound system. Whoever accepts the formulation "one of the actions available to an agent is to produce language. This is called a speech act" (Russell and Norvig 1995) is unlikely to consider the two variants of a CVS alluded to previously as fundamentally different.

With regard to the first CVS version in particular, the following remark is submitted for consideration: obviously, we avoid a precise definition of understanding in favor of having humans compare the reaction of an algorithmic engine to that expected from a human. This fuzzy approach toward the circumscription of a CVS opens the road to constructive criticism (that is, to incremental system improvement) by pinpointing aspects of an output text that are not yet considered satisfactory.
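
To make the first CVS variant concrete, the toy sketch below (our own illustration, not the authors' system; the event schema and templates are invented) maps detected traffic events to text through simple templates. Whether such output is "acceptable" to human judges is precisely the fuzzy evaluation criterion proposed above.

```python
# Toy sketch: from detected traffic events to a textual scene description.
# The event schema and templates are invented for illustration; a real
# system would ground the events in video analysis.

TEMPLATES = {
    "stop": "{agent} stopped at the traffic light",
    "turn": "{agent} turned {direction} at the intersection",
    "pass": "{agent} overtook {other}",
}

def describe(events):
    """Render each detected event with its template and join into prose."""
    sentences = (TEMPLATES[e["type"]].format(**e) for e in events)
    return " ".join(s[0].upper() + s[1:] + "." for s in sentences)

events = [
    {"type": "stop", "agent": "a white van"},
    {"type": "turn", "agent": "a red car", "direction": "left"},
]
print(describe(events))
# -> A white van stopped at the traffic light. A red car turned left at
#    the intersection.
```

Pinpointing which generated sentences human readers reject is then what drives the incremental improvement loop the article advocates.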


Life in the Fast Lane

AI Magazine

Giving robots the ability to operate in the real world has been, and continues to be, one of the most difficult tasks in AI research. Since 1987, researchers at Carnegie Mellon University have been investigating one such task. Their research has been focused on using adaptive, vision-based systems to increase the driving performance of the Navlab line of on-road mobile robots. This research has led to the development of a neural network system that can learn to drive on many road types simply by watching a human teacher. This article describes the evolution of this system from a research project in machine learning to a robust driving system capable of executing tactical driving maneuvers such as lane changing and intersection navigation.
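
The imitation-learning idea at the heart of this work can be sketched compactly. The fragment below is a schematic behavioral-cloning loop in the spirit of the Navlab research, not the actual system: a linear model stands in for the neural network, and the camera frames and steering angles are synthetic placeholders.

```python
# Schematic "learn to steer by watching a human" loop (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

# Pretend each 30x32 camera frame is flattened into a feature vector and
# paired with the steering angle the human teacher chose at that moment.
frames = rng.normal(size=(500, 30 * 32))      # recorded camera frames
teacher_w = rng.normal(size=30 * 32)          # stands in for the human teacher
steering = frames @ teacher_w * 0.01          # demonstrated steering angles

w = np.zeros(30 * 32)                         # model weights to learn
lr = 1e-4
for _ in range(200):                          # plain gradient descent on MSE
    residual = frames @ w - steering
    w -= lr * frames.T @ residual / len(frames)

new_frame = rng.normal(size=30 * 32)          # "driving": predict steering
print("predicted steering angle:", float(new_frame @ w))
```

Watching the teacher thus reduces driving to supervised regression from image to steering command, which is why such a system can adapt to new road types simply by collecting more demonstration data.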