Last Monday, the Obama Administration released a hundred-and-twelve-page policy tome, "Federal Automated Vehicles Policy," which, despite its sleep-inducing title, found an eager readership. The document contained long-awaited regulatory guidance on self-driving cars--a concept that has gone from sci-fi fantasy to legitimate industry in just a few short years. The official reaction from manufacturers has been muted; privately, though, the reaction was likely one of relief. Without federal recognition and regulatory authority, the autonomous-vehicle industry exists in legal limbo.
One day they just appeared--Ford Fusions, some black, some white, with UBER stamped on the side. With their twenty cameras, seven lasers, and rooftop-mounted G.P.S., the self-driving cars stood out. People stopped and stared as they took trial journeys around Pittsburgh. That was in the spring. Now, in the waning days of summer, passengers hailing an Uber X may be picked up by one of the city's many human drivers, or by one of a tiny fleet of autonomous vehicles.
The ethics of driverless car technology is . . . A report [in June] in the journal Science found that most people surveyed think that it would be more moral for a driverless car to be programmed to crash into a wall and sacrifice its passengers rather than hit a larger number of pedestrians, if it only had those two choices. If you don't brake, you will kill the squirrel. However, you happen to know that the squirrel is on his way to kill two other squirrels. What if the two other squirrels are known arsonists?
On a clear morning in early May, Brian Lathrop, a senior engineer for Volkswagen's Electronics Research Laboratory, was in the driver's seat of a Tesla Model S as it travelled along a stretch of road near Blacksburg, Virginia, when the car began to drift from its lane. Lathrop had his hands on the wheel but was not in control of the vehicle. The Tesla was in Autopilot mode, a highly evolved version of cruise control that, via an array of sensors, allows the car to change lanes, steer through corners, and match the lurching of traffic unaided. As the vehicle--one of a fleet belonging to Virginia Tech's Transportation Institute, which Lathrop was visiting that day--lost track of the road markings, he shook the wheel to disengage Autopilot. "If I hadn't been aware of what was happening, it could have been a completely different outcome," Lathrop told me recently.
On the evening of March 31st, Elon Musk unveiled Tesla's sinuous Model 3, the company's first "affordable" electric car. After touting the sedan's punchy acceleration, two-hundred-and-fifteen-mile battery range, and sweeping, seamless glass roof, he mentioned its base price of thirty-five thousand dollars and told the audience that prospective buyers had already reserved more than a hundred and fifteen thousand of the vehicles, to rapturous applause and shouts of "You did it!" Not one to miss a marketing trick, Musk capped the night on Twitter, with a cryptic thank-you message that promised more: "Thanks for tuning in to the Model 3 unveil Part 1! Part 2 is super next level, but that's for later . . ." Within hours, the tech community was awash in speculation about what more Tesla could have in store for the Model 3. Some wondered, specifically, whether it would be the world's first mass-market, fully autonomous self-driving car. Spurred on by Google and other Silicon Valley companies, the auto industry has been tinkering with autonomous vehicles for years.