Starting next year, New Yorkers could join Silicon Valley workers and residents of cities like Phoenix, Pittsburgh, and Boston as players in a grand, growing, autonomous car experiment. General Motors, through its self-driving startup Cruise Automation, plans to put a fleet of autonomous Chevrolet Bolts onto the streets of lower Manhattan in early 2018. The company is already testing in San Francisco, and once it finalizes its application to run in New York (the governor loves the idea), it expects to learn valuable lessons from the city's colorful chaos. Those will be important lessons, to be sure. But the move across the country raises a novel question.
The idea of a paperclip-making AI didn't originate with Lantz. Most people attribute it to Nick Bostrom, a philosopher at Oxford University and the author of the book Superintelligence. The New Yorker (owned by Condé Nast, which also owns Wired) called Bostrom "the philosopher of doomsday," because he writes and thinks deeply about what would happen if a computer got really, really smart. Not, like, "wow, Alexa can understand me when I ask it to play NPR" smart, but like really smart. In 2003, Bostrom wrote that the idea of a superintelligent AI serving humanity or a single person was perfectly reasonable.
Amidst the robocar hype, it's easy to forget that for all their powers, computers are still lousy drivers compared to humans. This week, Eric Adams introduced us to the people working to interpret hominid behavior for driving robots. Turns out perception is a remarkable, variegated thing, and cars need to learn how to do all the cool stuff we the fleshy can before performing seamlessly on the road. The same goes for companies. Google parent company Alphabet announced this week it will construct a techified neighborhood in Toronto.
The epidemic of sexual harassment and abuse--you saw its prevalence in the hashtag #metoo on social media in recent weeks--isn't confined to Harvey Weinstein's casting couches. Decades of harassment by a big-shot producer put famous faces on the problem, but women in every field have grappled with it forever, trading warnings through whisper networks. Last summer, the story was women in Silicon Valley. Earthquakes of this magnitude are never any fun for people atop shifting tectonic plates. But the new world they create can be a better one.
Next time you're driving down the road or walking down the street, pause to consider how you read your surroundings. How you pay extra attention to the kid kicking a soccer ball around her front lawn and the slightly wobbly, nervous-looking cyclist. How you deprioritize the woman striding toward the street, knowing she's heading for the group of friends waving to her from the sidewalk. You make these calls by drawing on a lifetime of social and cultural experience so ingrained you hardly need to think about it. But imagine you're an autonomous car trying to do the same thing, without that accumulated knowledge or the shared humanity that lets you read others' nuanced behavioral cues.
Tony Fadell is at the Grove, a spectacularly beautiful country estate outside London. The event is Founders Forum, the ultra-exclusive, invite-only tech conference. Prince William is in the house. The guest list is lousy with knights and lesser officers of the Most Excellent Order of the British Empire. Marissa Mayer, the now ex-CEO of Yahoo, and Biz Stone, recently returned to Twitter, are mingling with the other hundred or so invitees.
Payload secured, it backs up--beep, beep, beep--whips around, and speeds to its dirt pile, stopping so quickly that it tips forward on two wheels. It drops its quarry and backs up--beep, beep, beep--then speeds back to its excavation for another bucketful. Atop the tractor is, of all things, a cargo carrier, like one you'd put on your car. But instead of carrying camping gear, it's packed with electronics. Because no one is sitting in this Bobcat tractor--it's operating itself, autonomously zipping around a lot lined with 4,500-pound concrete blocks.
The right to due process was inscribed into the US Constitution with a pen. A new report from leading researchers in artificial intelligence cautions that it is now being undermined by computer code. Public agencies responsible for areas such as criminal justice, health, and welfare increasingly use scoring systems and software to steer or make decisions on life-changing events like granting bail, sentencing, enforcement, and prioritizing services. The report, from AI Now, a research institute at NYU that studies the social implications of artificial intelligence, says too many of those systems are opaque to the citizens they hold power over, and it calls on agencies to refrain from using such "black box" systems that are closed to outside scrutiny.
At one point during his historic defeat to the software AlphaGo last year, world champion Go player Lee Sedol abruptly left the room. The bot had played a move that confounded established theories of the board game, in a moment that came to epitomize the mystery and mastery of AlphaGo. A new and much more powerful version of the program called AlphaGo Zero unveiled Wednesday is even more capable of surprises. In tests, it trounced the version that defeated Lee by 100 games to nothing, and has begun to generate its own new ideas for the more than 2,000-year-old game. AlphaGo Zero showcases an approach to teaching machines new tricks that makes them less reliant on humans.
While working at Tesla, I always enjoyed talking to people after they finished a factory tour. As much as they raved about the amazing automation, gigantic presses, and hundreds of robots, the reality was that they saw only half of the manufacturing taking place in the building. Unknown to most visitors, the factory's "secret" second floor built many of Tesla's battery, power-electronics, and drivetrain systems. It was home to some of the most advanced manufacturing and automation systems in the company. Some of the robots moved at such high speeds that their arms had to be built from carbon fiber instead of steel.