In a video, a rodent reaches out and grabs a morsel of food, while small, colored dots highlight the positions of its knuckles. In another clip, a racehorse gallops along a track; again, small, colored dots track the position of its body parts. In a third video, two human dancers circle around each other as those same dots unfailingly follow the sometimes fluid, sometimes jerky movements of their limbs. These videos are showcases for DeepLabCut, a tool that can automatically track and label the body parts of moving animals. Developed this year by Mackenzie Mathis and Alexander Mathis, a pair of married neuroscientists, DeepLabCut is remarkable in its simplicity.
A mass shooting in Annapolis, Maryland, at the Capital Gazette yesterday killed five journalists, making it the deadliest domestic attack on the press since 9/11. Local police say a suspect in custody, Jarrod Ramos, appears to have acted alone, motivated by retribution for a defamation lawsuit he had filed against the paper and lost. As accounts of the shooting and its aftermath arrived, one detail stood out: The suspect was uncooperative after apprehension, and the county police used facial-recognition technology to identify him. Some would celebrate the use of any available technology to name an unidentified and uncooperative suspect caught in the act of a mass shooting, especially before the incident is clearly contained. But recently, complex surveillance technologies, like a service that Amazon pitched to law enforcement, have come under scrutiny.
In a world where the most famous dorm-room-born internet company has developed a reputation as a matrix of fake users and misleading posts, Ash Bhat and Rohan Phadte are hoping that the answer to online disinformation could come out of their own college apartment. Bhat and Phadte, both 21, are the founders of Robhat Labs, which they launched while students at UC Berkeley. Last year, they debuted their first two misinformation-fighting projects. One is NewsBot, an app for Facebook Messenger that aims to identify the political leaning of a given news piece. The duo's third project, set to be released next month, is a free browser extension called SurfSafe.
The New York City subway is a miracle, especially at 3 a.m. on a Friday night. But the system is also falling apart, and it's going to cost billions to keep the old trains running: $19 billion, at least according to one estimate from city planners. The time has come to give up on the 19th-century idea of public transportation and leap into the autonomous future. Right now, fully autonomous cars are rolling around Pittsburgh, the San Francisco Bay Area, and parts of Michigan, shuttling people from here to there with minimal manual intervention. Instead of fixing the old trains, let's rip out the tracks and fill the tunnels with fleets of autonomous vehicles running on pavement.
On March 18, at 9:58 p.m., a self-driving Uber car killed Elaine Herzberg. The vehicle was driving itself down an uncomplicated road in suburban Tempe, Arizona, when it hit her. Herzberg, who was walking across the mostly empty street, was the first pedestrian killed by an autonomous vehicle. The preliminary National Transportation Safety Board report on the incident, released on Thursday, shows that Herzberg died because of a cascading series of errors, human and machine, that presents a damning portrait of Uber's self-driving testing practices at the time. Perhaps the worst part of the report is that Uber's system functioned as designed.
Save for a few millisecond-long slips, Duplex sounds like a human, complete with mmms and uhhs and cheery colloquialisms. The ability of the AI to respond to real, messy language and unexpected sequences is also incredible. Generating conversational human sentences in real time can probably be considered a "solved problem," as people around here like to say. The audio suggests that computer voices just skipped right past 2001: A Space Odyssey, and it turns out robots won't sound like an overlord, but like a, uhh, Millennial. The way Google presented the technology encouraged people to think of themselves as powerful users, casting magic bots out across the world to do our bidding.
Nearly 20 years earlier, a young roboticist named Helen Greiner was lecturing at a tech company in Boston. Standing in front of the small crowd, Greiner would have been in her late 20s, with hooded eyes, blonde hair, and a faint British accent masked by a lisp. She was showing off videos of Pebbles, a bright-blue robot built out of sheet metal. For many years, the field of AI struggled with a key problem: How do you make robots for the real world? A robot that followed a script was simple, but to handle the unforeseen (say, a pothole or a fence), programmers would have to code instructions for every imaginable scenario.
In a dank corner of the internet, it is possible to find actresses from Game of Thrones or Harry Potter engaged in all manner of sex acts. Or at least, to the world, the carnal figures look like those actresses, and the faces in the videos are indeed their own. Everything south of the neck, however, belongs to different women. An artificial intelligence has almost seamlessly stitched the familiar visages into pornographic scenes, one face swapped for another. The genre is one of the cruelest, most invasive forms of identity theft invented in the internet era.
You know the drill by now: A runaway trolley is careening down a track. There are five workers ahead, sure to be killed if the trolley reaches them. You can throw a lever to switch the trolley to a neighboring track, but there's a worker on that one as well who would likewise be doomed. Do you hit the switch and kill one person, or do nothing and kill five? That's the most famous version of the trolley problem, a philosophical thought experiment popularized in the 1970s.
Chris Urmson led Google's self-driving car team from its early days all the way until the company shed its Google skin and emerged under the Alphabet umbrella as Waymo, the obvious leader in driverless cars. But though Urmson pushed the organization far enough up the technological mountain to see the possibility that Waymo would be the first to commercially deploy automated vehicles, he did not make it to the promised land. Instead, after current Waymo CEO John Krafcik took control of the enterprise, Urmson left in December of 2016. After a few months pondering his next move, he cofounded Aurora, a new self-driving car start-up, with Sterling Anderson, who'd launched Autopilot at Tesla, and Drew Bagnell, a machine-learning expert who'd been at Uber. When the company came out of stealth in early 2017, it was greeted with something like awe.