Finally, a Driverless Car with Some Common Sense

MIT Technology Review

Instead of relying on simple rules or machine-learning algorithms to train cars to drive, the startup is taking inspiration from cognitive science to give machines a kind of common sense and the ability to quickly deal with new situations. Trying to reverse-engineer the ways in which even a young baby is smarter than the cleverest existing AI system could eventually lead to many smarter AI systems, Tenenbaum says. A related approach might eventually give a self-driving car a rudimentary form of common sense in unfamiliar scenarios. "This is a very different approach, and I completely applaud it," says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, a research institute created by Microsoft cofounder Paul Allen to explore new ideas in AI, including ones inspired by cognitive psychology.


Facial Recognition Is Only the Beginning: Here's What to Expect Next in Biometrics on Your Phone

MIT Technology Review

Apple says its version of the technology, called Face ID and available when the phone ships in November, uses a suite of sensors to map your face in 3-D. An infrared light illuminates your face, and a projector casts an array of infrared dots onto it. Anil Jain, a Michigan State University professor who studies biometric recognition and computer vision, notes that it uses an existing technique called structured light to capture your visage in three dimensions--something he employed for object recognition back in the 1980s. Beyond the work the company has done to keep the wrong people out of the phone, Apple claims that Face ID will let the right person in even in the dark, while wearing glasses or a hat, and after growing a beard. Jain says it's conceivable that smartphones will eventually include sensors for face, iris, and fingerprint recognition--a rarity now.
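
Apple hasn't published Face ID's internals, but the structured-light approach Jain describes has a simple core: the projector casts a known dot pattern, an offset infrared camera records where each dot actually lands on the face, and the shift (disparity) of each dot reveals its depth by triangulation. The Python sketch below illustrates only that generic geometry; the baseline, focal length, and dot positions are invented for illustration, not Apple's specifications.

```python
# Minimal sketch of the structured-light geometry Jain describes: a projector
# casts a known infrared dot pattern, an offset camera sees where each dot
# lands, and the horizontal shift (disparity) of each dot gives its depth by
# triangulation. Baseline, focal length, and dot positions are invented numbers.

def depth_from_disparity(disparity_px, baseline_m=0.01, focal_px=1400.0):
    """Depth in meters of one dot, given its shift in pixels between views."""
    if disparity_px <= 0:
        raise ValueError("dot not displaced; cannot triangulate")
    return baseline_m * focal_px / disparity_px

def depth_map(projected_dots, observed_dots):
    """Pair projected and observed dot positions and estimate a depth per dot."""
    return [
        depth_from_disparity(abs(obs_x - proj_x))
        for (proj_x, _), (obs_x, _) in zip(projected_dots, observed_dots)
    ]

# Three dots whose observed positions shifted by 40, 35, and 50 pixels.
projected = [(100, 80), (200, 80), (300, 80)]
observed = [(140, 80), (235, 80), (350, 80)]
print([round(d, 3) for d in depth_map(projected, observed)])  # [0.35, 0.4, 0.28]
```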


Prospect of Synthetic Embryos Sparks New Bioethics Debate

MIT Technology Review

Two years ago, Shao, a mechanical engineer with a flair for biology, was working with embryonic stem cells, the kind derived from human embryos able to form any cell type. Scientists have started seeking ways to coax stem cells to form more complicated, organized tissues, called organoids. The work in Michigan is part of a larger boom in organoid research--scientists are using stem cells to create clumps of cells that increasingly resemble bits of brain, lungs, or intestine (see "10 Breakthrough Technologies: Brain Organoids"). Following guidelines promulgated last year by Kimmelman's international stem-cell society, Fu's team destroys the cells just five days after they're made.


Why 500 Million People in China Are Talking to This AI

MIT Technology Review

Some also use it to send text messages through voice commands while driving, or to communicate with a speaker of another Chinese dialect. But while some impressive progress in voice recognition and instant translation has enabled Xu to talk with his Canadian tenant, language understanding and translation for machines remain incredibly challenging tasks (see "AI's Language Problem"). In August, iFlytek launched a voice assistant for drivers called Xiaofeiyu (Little Flying Fish). Min Chu, the vice president of AISpeech, another Chinese company working on voice-based human-computer interaction technologies, says voice assistants for drivers are in some ways more promising than smart speakers and virtual assistants embedded in smartphones.


Drones and Robots Are Taking Over Industrial Inspection

MIT Technology Review

The effort shows how low-cost drones and robotic systems--combined with rapid advances in machine learning--are making it possible to automate whole sectors of low-skill work. Avitas uses drones, wheeled robots, and autonomous underwater vehicles to collect images required for inspection from oil refineries, gas pipelines, cooling towers, and other equipment. Nvidia's system employs deep learning, an approach that involves training a very large simulated neural network to recognize patterns in data, and which has proven especially good for image processing. It is possible, for example, to train a deep neural network to automatically identify faults in a power line by feeding in thousands of previous examples.
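
That example, training a deep neural network on thousands of labeled photos to flag faults in a power line, amounts to ordinary supervised image classification. Below is a minimal PyTorch sketch of that kind of classifier; the tiny network, the 64x64 image size, and the random stand-in batch are illustrative assumptions, not Avitas's or Nvidia's actual system.

```python
# Minimal PyTorch sketch of the idea described above: train a small convolutional
# network on labeled inspection photos ("fault" vs. "no fault"). The architecture,
# 64x64 image size, and random stand-in batch are illustrative, not Avitas's or
# Nvidia's actual system.
import torch
import torch.nn as nn

class FaultClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 2)  # assumes 64x64 RGB input images

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = FaultClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch; in practice these would be thousands of drone photos of power
# lines, each labeled fault / no-fault by inspectors.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(round(float(loss), 3))
```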


I Tried Shoplifting in a Store without Cashiers and Here's What Happened

MIT Technology Review

At a prototype store in Santa Clara, California, you grab a plastic basket, fill it up as you amble down an aisle packed with all kinds of things--Doritos, hand soap, Coke, and so on--then walk to a tablet computer near the door. This store is actually the demonstration space of a startup called Standard Cognition, which is using a network of cameras together with machine-vision and deep-learning techniques to create an autonomous checkout experience. A Stockholm-based startup called Wheelys is testing a similar store in China. It missed one of my two bottles of Coke and added an extra bottle of soap--things we could edit in the checkout app on the tablet.
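
Standard Cognition hasn't published how its pipeline works, but the bookkeeping implied by the demo, turning a stream of camera-derived "picked up" and "put back" events into an editable basket at the door, can be sketched in a few lines. Everything below, including the event format and prices, is hypothetical.

```python
# Conceptual sketch only; Standard Cognition hasn't published its pipeline.
# The idea shown here is the bookkeeping step: aggregate "pick up" / "put back"
# events inferred by the cameras into a basket the tablet can display and edit.
# The event format and prices are hypothetical.
from collections import Counter

PRICES = {"coke": 1.99, "doritos": 3.49, "hand soap": 2.79}  # illustrative prices

def build_basket(events, shopper):
    """events: (shopper_id, item, action) tuples emitted by the vision system."""
    basket = Counter()
    for shopper_id, item, action in events:
        if shopper_id != shopper:
            continue
        if action == "pick_up":
            basket[item] += 1
        elif action == "put_back":
            basket[item] -= 1
    return +basket  # drop anything picked up and then put back

events = [
    ("s1", "coke", "pick_up"),
    ("s1", "coke", "pick_up"),       # the second bottle the demo system missed
    ("s1", "doritos", "pick_up"),
    ("s1", "hand soap", "pick_up"),
    ("s1", "hand soap", "put_back"),
]
basket = build_basket(events, "s1")
total = sum(PRICES[item] * count for item, count in basket.items())
print(dict(basket), round(total, 2))  # {'coke': 2, 'doritos': 1} 7.47
```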


Modern Apothecary

MIT Technology Review

A few weeks earlier, the two of them had won Hacking Medicine, a competition Cohen had previously helped found to pursue innovations in health care. That weekend, Cohen and Parker talked to doctors about their patients' difficulties sorting their medicines and taking them as prescribed, and the health problems that resulted. This summer, PillPack launched custom software that helps streamline the prescription-filling process and gives its pharmacists a more holistic view of customers so they can offer more personalized service--all at the same cost as filling pill jars at CVS or Walgreens. By building a more complete view of each customer, PillPack has created that environment, allowing its pharmacists to deliver better care.


Amazon Has Developed an AI Fashion Designer

MIT Technology Review

The effort points to ways in which Amazon and other companies could try to improve the tracking of trends in other areas of retail--making recommendations based on products popping up in social-media posts, for instance. For instance, one group of Amazon researchers based in Israel developed machine learning that, by analyzing just a few labels attached to images, can deduce whether a particular look can be considered stylish. An Amazon team at Lab126, a research center based in San Francisco, has developed an algorithm that learns about a particular style of fashion from images, and can then generate new items in similar styles from scratch--essentially, a simple AI fashion designer. The event included mostly academic researchers who are exploring ways for machines to understand fashion trends.
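
Amazon hasn't released the Lab126 work, but an algorithm that learns a particular style from images and then generates new items in similar styles matches, in broad strokes, how a generative adversarial network is trained: a generator proposes images and a discriminator learns to tell them from real examples of the style. The PyTorch sketch below shows that loop at toy scale; the shapes, random stand-in data, and hyperparameters are all illustrative assumptions, not Amazon's implementation.

```python
# Minimal sketch of a generative adversarial setup of the kind such a system
# could build on: a generator proposes garment images, a discriminator learns
# to tell them from real examples of the target style. All shapes, data, and
# hyperparameters here are illustrative, not Amazon's implementation.
import torch
import torch.nn as nn

LATENT = 64
IMG = 28 * 28  # tiny flattened grayscale "garment" images, just for the sketch

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Stand-in for a batch of images sharing one style (e.g. scraped product photos).
real = torch.rand(16, IMG) * 2 - 1

# Discriminator step: real images should score as real, generated ones as fake.
fake = generator(torch.randn(16, LATENT)).detach()
d_loss = bce(discriminator(real), torch.ones(16, 1)) + bce(discriminator(fake), torch.zeros(16, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: adjust the generator so its output fools the discriminator.
fake = generator(torch.randn(16, LATENT))
g_loss = bce(discriminator(fake), torch.ones(16, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

print(round(float(d_loss), 3), round(float(g_loss), 3))
```

In the article's setting, the "real" batch would be product images of one style, and sampling the trained generator would yield new garments in that style.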


Hackers Are the Real Obstacle for Self-Driving Vehicles

MIT Technology Review

Before autonomous trucks and taxis hit the road, manufacturers will need to solve problems far more complex than collision avoidance and navigation (see "10 Breakthrough Technologies 2017: Self-Driving Trucks"). These vehicles will have to anticipate and defend against a full spectrum of malicious attackers wielding both traditional cyberattacks and a new generation of attacks based on so-called adversarial machine learning (see "AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks"). When hackers demonstrated that vehicles on the roads were vulnerable to several specific security threats, automakers responded by recalling and upgrading the firmware of millions of cars. The computer vision and collision avoidance systems under development for autonomous vehicles rely on complex machine-learning algorithms that are not well understood, even by the companies that rely on them (see "The Dark Secret at the Heart of AI").
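
One concrete, well-documented instance of adversarial machine learning is the fast gradient sign method: nudge an input image a small step in the direction that most increases a classifier's loss, so the model can misread, say, a road sign while the change stays nearly invisible to a person. The sketch below applies it to a toy, untrained classifier; the model and labels are placeholders, not any vehicle's actual perception stack.

```python
# Illustration of one well-known adversarial-machine-learning attack, the fast
# gradient sign method (FGSM): shift an input image a small step in the direction
# that most increases the classifier's loss. The tiny untrained model and the
# "stop sign" label are placeholders, not an actual perception system.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy sign classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # e.g. a camera frame of a sign
true_label = torch.tensor([0])                        # pretend class 0 means "stop sign"

loss = loss_fn(model(image), true_label)
loss.backward()  # gradient of the loss with respect to the image pixels

epsilon = 0.03  # perturbation budget, small enough to be hard for a person to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("prediction on original: ", model(image).argmax(dim=1).item())
print("prediction on perturbed:", model(adversarial).argmax(dim=1).item())
```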


Growing Up with Alexa

MIT Technology Review

When it comes to digital assistants like Amazon's Alexa, my four-year-old niece Hannah Metz is an early adopter. "Alexa, play 'It's Raining Tacos,'" she commanded on a recent sunny afternoon, and the voice-controlled helper immediately complied, blasting through its speaker a confection of a song with lines like "It's raining tacos from out of the sky" and "Yum, yum, yum, yum, yumidy yum." These things are most popular among people ages 25 to 34, which includes a ton of parents of young children and parents-to-be. Her interest in her digital assistant jibes with some findings in a recent MIT study, where researchers looked at how children ages three to 10 interacted with Alexa, Google Home, a tiny game-playing robot called Cozmo, and a smartphone app called Julie Chatbot.