Google: People Trusted Our Self-Driving Cars Too Much


The fact that Google's bubble-like self-driving car, unveiled this week, lacks a steering wheel might be seen as evidence that the company's software is close to mastering the challenges of piloting a vehicle. But the car's design is just as much a consequence of what Google's existing fleet of automated Lexus SUVs revealed about human laziness. Google's engineers had focused on perfecting how well those modified cars could handle freeway driving, and they imagined their technology hitting the market in a way that left humans sharing driving duties with their vehicle. "The idea was that the human drives onto the freeway, engages the system, [and] it takes them on the bulk of the trip (the boring part) and then they reëngage," said Nathaniel Fairfield, a technical lead on the project, speaking at the Embedded Vision Summit in Santa Clara, California, on Thursday.

That approach had to be scrapped after tests showed that human drivers weren't trustworthy enough to be co-pilots to Google's software. Once people began riding in one of the vehicles, they quickly came to trust the system and stopped paying close attention to what the car was doing and to activity on the road around them, which meant the hand-off between person and machine could not be relied on to be smooth.
