Although he does not see very well, Alexei Efros, recipient of the 2016 ACM Prize in Computing and a professor at the University of California, Berkeley, has spent most of his career trying to understand, model, and recreate the visual world. Drawing on the massive collection of images on the Internet, he has used machine learning algorithms to manipulate objects in photographs, translate black-and-white images into color, and identify architecturally revealing details about cities. Here, he talks about harnessing the power of visual complexity. You were born in St. Petersburg (Russia), and were 14 when you came to the U.S. What drew you to computer science?
As excited as we are about the forthcoming generation of social home robots (including Jibo, Kuri, and many others), it's hard to ignore the fact that most of them look somewhat similar. They tend to feature lots of shiny white and black plasticky roundness. That's for admittedly very good reasons, but it comes at the cost of both uniqueness and visual and tactile personality. Guy Hoffman, who is well known for the fascinating creativity of his robot designs, has been working on a completely new kind of social robot in a collaboration between his lab at Cornell and Google ZOO's creative technology team in APAC. The robot is called Blossom, and we'd describe it for you, except that it's designed to be handmade out of warm natural materials like wool and wood so that every single one is a little bit different.
One response to the call by experts in robotics and artificial intelligence for a ban on "killer robots" ("lethal autonomous weapons systems," or LAWS in the language of international treaties) is to say: shouldn't you have thought about that sooner? Figures such as Tesla's CEO, Elon Musk, are among the 116 specialists calling for the ban. "We do not have long to act," they say. "Once this Pandora's box is opened, it will be hard to close." But such systems are arguably already here, such as the "unmanned combat air vehicle" Taranis developed by BAE and others, or the autonomous SGR-A1 sentry gun made by Samsung and deployed along the South Korean border.
Before autonomous trucks and taxis hit the road, manufacturers will need to solve problems far more complex than collision avoidance and navigation (see "10 Breakthrough Technologies 2017: Self-Driving Trucks"). These vehicles will have to anticipate and defend against a full spectrum of malicious attackers wielding both traditional cyberattacks and a new generation of attacks based on so-called adversarial machine learning (see "AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks"). As consensus grows that autonomous vehicles are just a few years away from being deployed in cities as robotic taxis, and on highways to ease the mind-numbing boredom of long-haul trucking, this risk of attack has been largely missing from the breathless coverage. It reminds me of numerous articles promoting e-mail in the early 1990s, before the newfound world of electronic communications was awash in unwanted spam. Back then, the promise of machine learning was seen as a solution to the world's spam problems.
You need just two eyes and two ears to drive. Those remarkable sensors provide all the information you need to know, say, that a fire engine is coming up fast behind you and that you should get out of the way. Autonomous vehicles need a whole lot more than that. They use half a dozen cameras to see everything around them, radars to know how far away it all is, and at least one lidar laser scanner to map the world. Yet even that may not be enough.
When it comes to digital assistants like Amazon's Alexa, my four-year-old niece Hannah Metz is an early adopter. Her family has four puck-like Amazon Echo Dot devices plugged in around her house--including one in her bedroom--that she can use to call on Alexa at any moment. "Alexa, play 'It's Raining Tacos,'" she commanded on a recent sunny afternoon, and the voice-controlled helper immediately complied, blasting through its speaker a confection of a song with lines like "It's raining tacos from out of the sky" and "Yum, yum, yum, yum, yumidy yum." I think this ability to get music on demand is neat, too, and I didn't want to be rude, so I danced with her. But at the same time I was wondering what it's going to mean for her to grow up with computers as servants.
Poke a hole in a human and something remarkable happens. First of all, you go to jail. Poke a hole in a robot, however, and prepare for a long night of repairs. The machines may be stronger than us, but they're missing out on a vital superpower. Researchers at Belgium's Vrije Universiteit Brussel report this week in Science Robotics that they've developed a squishy, self-healing robot.
Henri Waelbroeck, director of research at Portware, a machine learning trade execution system, says rather poetically that the system "reads the tea leaves" in market data to distinguish different sorts of orders and execute trades more efficiently. Portware uses artificial intelligence to help traders select the best algorithm for particular market conditions, asset classes, brokers, venues, and so on, interacting with the order flow and computing a mind-boggling array of variables in real time. Say you are buying a stock and you predict there are likely to be more orders hitting the bid side of the spread in the next five minutes; you should then be able to run an efficient algorithm that only posts limit orders and collects the spread as it executes. Using an algorithm that crosses the spread in this instance would be wasteful, since you expect order flow to be coming your way. Waelbroeck, formerly a professor at the Institute of Nuclear Sciences at the National University of Mexico, whose specialisms include genetic algorithms and chaos theory, said: "Just throwing machine learning at problems usually doesn't give a very good answer."
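The passive-versus-aggressive choice described above can be sketched as a toy decision rule. This is a minimal illustration, not Portware's actual logic; the function name, threshold, and predicted-flow input are all hypothetical, standing in for whatever forecast a real model would produce.

```python
def choose_execution_style(predicted_hits_to_bid: int, threshold: int = 10) -> str:
    """Toy decision rule for a buyer (hypothetical names and threshold).

    If we forecast heavy selling into the bid (order flow coming our way),
    we can stay passive: post limit orders and collect the spread as they
    fill. Otherwise, cross the spread and pay for immediacy.
    """
    if predicted_hits_to_bid >= threshold:
        return "post_limit_orders"  # passive: earn the spread
    return "cross_spread"           # aggressive: pay the spread

# A buyer expecting many sell orders to hit the bid stays passive:
print(choose_execution_style(25))  # post_limit_orders
# With little expected flow, waiting passively risks never filling:
print(choose_execution_style(3))   # cross_spread
```

A production system would replace the single integer forecast with the "mind-boggling array of variables" the article mentions, but the trade-off it encodes is the same one Waelbroeck describes.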
UAV designs are a perpetual compromise between the ability to fly long distances efficiently with payloads (fixed-wing) and the ability to maneuver, hover, and land easily (rotorcraft). With a very few rather bizarre exceptions, any aircraft that try to offer the best of both worlds end up relatively complicated, inefficient, and expensive. The ideal fantasy UAV would be a fixed-wing aircraft with the magical ability to land on a dime, and a group of researchers from the University of Sherbrooke in Canada have come very close to making that happen, with a little airplane that uses legs and claws to reliably perch on walls. The majority of the perching robots that we've seen are quadrotors. Perching with a quadrotor is significantly easier than perching with a fixed-wing aircraft, because you have many more degrees of control, and you're not obligated to keep the vehicle moving forward all the time.
If there aren't enough examples of a particular accent or vernacular, then these systems may simply fail to understand you (see "AI's Language Problem"). "If you analyze Twitter for people's opinions on a politician and you're not even considering what African-Americans are saying or young adults are saying, that seems problematic," O'Connor says. Solon Barocas, an assistant professor at Cornell and a cofounder of the event, says the field is growing, with more and more researchers exploring the issue of bias in AI systems. Sharad Goel, an assistant professor at Stanford University who studies algorithmic fairness and public policy, says the issue is not always straightforward.