Collaborating Authors

The Controllability of Planning, Responsibility, and Security in Automatic Driving Technology Artificial Intelligence

Both traditional automakers and Internet companies have long been involved in developing automated driving technology and have achieved notable results. In 2017, GM equipped the Cadillac CT6 with its Super Cruise automated driving function. In April of the same year, Baidu released the Apollo self-driving vehicle platform. In July, Audi officially released the Audi A8, whose automated driving system, Traffic Jam Pilot, reached Level 3. In October, Waymo completed the first public-road test of Level 4 self-driving vehicles. In April 2018, Baidu launched test rides of its Level 4 driverless bus "Apolon," and in July announced that the automated driving bus had entered the mass-production phase. The rapid development of automated driving technology has also sparked extensive discussion, much of it concerned with the consequences of the technology's widespread use.

New study calls for 'urgent' debate over the ethics of autonomous vehicles

Daily Mail - Science & tech

Self-driving vehicles have been proposed as a solution to the rapidly increasing number of fatal traffic accidents, which now claim roughly 1.3 million lives each year. While we have made strides in advancing self-driving technology, we have yet to explore at length how autonomous vehicles will be programmed to deal with situations that endanger human life, according to a new study published in Frontiers in Behavioral Neuroscience. To understand how self-driving cars might make these judgments, the researchers examined how humans handle similar driving dilemmas. The study, which looked at the ethics behind decisions self-driving cars make, found that the majority of people do not agree with guidelines drawn up by an ethics committee: when faced with driving dilemmas, people show a high willingness to sacrifice themselves for others, make decisions based on the victim's age, and swerve onto sidewalks to minimize the number of lives lost.

Away from Trolley Problems and Toward Risk Management Artificial Intelligence

As automated vehicles receive more attention from the media, there has been an equivalent increase in the coverage of the ethical choices a vehicle may be forced to make in certain crash situations with no clear safe outcome. Much of this coverage has focused on a philosophical thought experiment known as the "trolley problem," substituting an automated vehicle for the trolley and the car's software for the bystander. While this is a stark and straightforward example of ethical decision making for an automated vehicle, it risks marginalizing the entire field if it becomes the only ethical problem in the public's mind. In this chapter, I discuss the shortcomings of the trolley problem and introduce more nuanced examples that involve crash risk and uncertainty. Risk management is introduced as an alternative approach, and its ethical dimensions are discussed.

The Trolley Problem

A self-driving car is driving toward a tunnel when suddenly a child runs into the road from behind a rock. The car begins to brake, but its software determines that braking alone will not slow the car enough to stop in time, or even to reach a survivable impact speed.
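The risk-management framing advocated here can be made concrete: instead of a binary trolley-style choice, the vehicle scores each candidate maneuver by its expected harm (probability of each outcome times its severity) and picks the minimum. The sketch below is purely illustrative; the maneuver names, probabilities, and severity scores are hypothetical, not from the chapter.

```python
# Hypothetical sketch of risk-based maneuver selection: each candidate
# maneuver has possible outcomes with an estimated probability and a harm
# severity; the planner picks the maneuver with minimum expected harm.

def expected_harm(outcomes):
    """outcomes: list of (probability, severity) pairs."""
    return sum(p * severity for p, severity in outcomes)

def choose_maneuver(candidates):
    """candidates: dict mapping maneuver name -> list of (p, severity)."""
    return min(candidates, key=lambda m: expected_harm(candidates[m]))

# Illustrative numbers only.
candidates = {
    "brake_hard":    [(0.7, 0.0), (0.3, 8.0)],   # likely stops; small chance of severe impact
    "swerve_left":   [(0.9, 1.0), (0.1, 10.0)],  # minor harm likely; small chance of worse
    "maintain_lane": [(1.0, 6.0)],               # certain moderate-severity impact
}

print(choose_maneuver(candidates))  # → swerve_left (expected harm 1.9 vs 2.4 and 6.0)
```

The ethical questions the chapter raises do not disappear in this framing; they move into how the severity scores are assigned and whose harm they count.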

AI and machine learning has its own trolley problem debate


Advances in robotics mean autonomous vehicles, industrial robots and medical robots will be more capable, independent and pervasive over the next 20 years. Eventually, these autonomous machines could make decision-making errors that lead to hundreds of thousands of deaths that could have been avoided if humans were in the loop. Such a future is reasonably frightening, but more lives would be saved than lost if society adopts robotic technologies responsibly. Robots aren't "programmed" by humans to mimic human decision-making; they learn from large datasets to perform tasks like "recognize a red traffic light" using complex mathematical formulas induced from data. This machine learning process requires much more data than humans need. However, once trained, such systems can outperform humans at the task in question, and AI and robotics have dramatically improved their performance over the past five years through machine learning.
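The contrast between "programmed" rules and formulas induced from data can be shown with a deliberately tiny example. The sketch below (entirely hypothetical; no real perception system works on three-number colour samples) trains a nearest-centroid classifier from labelled examples rather than hand-written rules for a "recognize a red traffic light"-style task.

```python
# Minimal, hypothetical illustration of learning from examples: a
# nearest-centroid classifier labels an (R, G, B) colour sample as
# "red_light" or "green_light" based on labelled training data.

def train(samples):
    """samples: list of (features, label); returns label -> centroid."""
    sums, counts = {}, {}
    for feats, label in samples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, feats):
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(feats, c))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl]))

# Toy RGB training data; as the text notes, real systems need far more data.
data = [([220, 30, 40], "red_light"), ([200, 60, 50], "red_light"),
        ([30, 210, 60], "green_light"), ([50, 190, 80], "green_light")]
model = train(data)
print(predict(model, [210, 45, 55]))  # → red_light
```

Nothing in `train` encodes what "red" means; the decision boundary is induced entirely from the labelled examples, which is the point the passage is making.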

Crowdsourcing Moral Machines

Communications of the ACM

Robots and other artificial intelligence (AI) systems are transitioning from performing well-defined tasks in closed environments to becoming significant physical actors in the real world. No longer confined within the walls of factories, robots will permeate the urban environment, moving people and goods around and performing tasks alongside humans. Perhaps the most striking example of this transition is the imminent rise of automated vehicles (AVs). They are expected to increase the efficiency of transportation and free up millions of person-hours of productivity. Even more importantly, they promise to drastically reduce the number of deaths and injuries from traffic accidents [12, 30]. Indeed, AVs are arguably the first human-made artifacts to make autonomous decisions with potential life-and-death consequences on a broad scale. This marks a qualitative shift in the consequences of design choices made by engineers. The decisions of AVs will also generate indirect negative consequences, such as risks to the physical integrity of third parties not involved in their adoption; for example, AVs may prioritize the safety of their passengers over that of pedestrians.
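The crowdsourcing approach named in the article's title ultimately reduces to collecting many respondents' choices on the same dilemmas and aggregating them. A minimal sketch of that aggregation step is below; the dilemma and option names are hypothetical, and real preference aggregation (as in the Moral Machine project) is far more elaborate than a majority vote.

```python
# Hypothetical sketch of aggregating crowdsourced moral judgments:
# for each dilemma, report the majority option and its support fraction.

from collections import Counter

def aggregate(responses):
    """responses: dict mapping dilemma -> list of chosen options."""
    summary = {}
    for dilemma, choices in responses.items():
        option, votes = Counter(choices).most_common(1)[0]
        summary[dilemma] = (option, votes / len(choices))
    return summary

# Illustrative responses only.
responses = {
    "swerve_vs_stay": ["swerve", "swerve", "stay", "swerve"],
    "passenger_vs_pedestrian": ["pedestrian", "passenger", "pedestrian"],
}
print(aggregate(responses))  # majority option and support share per dilemma
```

Even this toy version surfaces the tension in the passage: aggregated majority preferences may conflict with what an ethics committee, or the affected third parties, would endorse.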