Describing a decision-making system as an "algorithm" is often a way to deflect accountability for human decisions. For many, the term implies a set of rules based objectively on empirical evidence or data. It also suggests a system that is highly complex--perhaps so complex that a human would struggle to understand its inner workings or anticipate its behavior when deployed. But is this characterization accurate? Consider one example: in late December, Stanford Medical Center's misallocation of covid-19 vaccines was blamed on a distribution "algorithm" that favored high-ranking administrators over frontline doctors. The hospital claimed to have consulted with ethicists to design its "very complex algorithm," which a representative said "clearly didn't work right," as MIT Technology Review reported at the time.
Infectious diseases don't get much more personalized than covid-19. No one can explain with any certainty why seemingly similar individuals respond so differently to exactly the same pathogen. Why do some of us get a case of the sniffles, and others end up on a ventilator? Why are so-called long-haulers left with lingering problems, yet other people recover fully? Why do some never show symptoms at all?
AI systems, on the other hand, are built to do only one of these things at a time. Computer-vision and audio-recognition algorithms can sense things but cannot use language to describe them. A natural-language model can manipulate words, but the words are detached from any sensory reality. If senses and language were combined to give an AI a more human-like way to gather and process new information, could it finally develop something like an understanding of the world? The hope is that these "multimodal" systems, with access to both the sensory and linguistic "modes" of human intelligence, will give rise to a more robust kind of AI that can adapt more easily to new situations or problems.
In the hours after she shared this makeup experiment, it was shown to hundreds of thousands of people on their "For You" pages, the lifeblood of TikTok. It wasn't obvious to her why this particular post was suddenly so visible, except that TikTok's recommendation algorithms had made it so. Since TikTok launched in China in 2016, it has become one of the most engaging and fastest-growing social media platforms in the world. It's been downloaded more than 2.6 billion times globally and has 100 million users in the US. And the unique way it finds and serves up content is a big part of its appeal.
After months of social distancing, it's not surprising that many people have felt starved for human companionship. Now a study from MIT has found that to our brains, the longings we feel during isolation are indeed similar to the food cravings we feel when hungry. After subjects endured one day of total isolation, looking at pictures of people having fun together activated the same brain region that lights up when someone who hasn't eaten all day sees a picture of pasta. "People who are forced to be isolated crave social interactions similarly to the way a hungry person craves food," says cognitive sciences professor Rebecca Saxe, PhD '03, the senior author of the study. "Our finding fits the intuitive idea that positive social interactions are a basic human need."
Robot design is usually a painstaking process, but MIT researchers have developed a system that helps automate the task. Once it's told which parts you have--such as wheels, joints, and body segments--and what terrain the robot will need to navigate, RoboGrammar is on the case, generating optimized structures and control programs. To rule out "nonsensical" designs, the researchers developed an animal-inspired "graph grammar"--a set of rules for how parts can be connected, says Allan Zhao, a PhD student in the Computer Science and Artificial Intelligence Laboratory. The rules were particularly informed by the anatomy of arthropods such as insects and lobsters, which all have a central body with a variable number of segments that may have legs attached. RoboGrammar can generate thousands of potential structures based on these rules.
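The graph-grammar idea can be illustrated with a toy sketch. The rules below are hypothetical, invented for illustration rather than taken from RoboGrammar itself, but they capture the constraint the researchers describe: a central body made of a variable number of segments, each of which may carry legs.

```python
import random

# Toy graph grammar in the spirit of RoboGrammar (hypothetical rules,
# not the paper's actual grammar): a robot body is one or more segments,
# and each segment may optionally carry a pair of legs.
RULES = {
    # nonterminal: list of possible expansions
    "BODY": [["SEGMENT", "BODY"], ["SEGMENT"]],      # 1..n segments
    "SEGMENT": [["segment", "LEGS"], ["segment"]],   # legs are optional
    "LEGS": [["leg", "leg"]],                        # legs come in pairs
}

def expand(symbol, rng):
    """Recursively rewrite a nonterminal into a flat list of terminal parts."""
    if symbol not in RULES:          # terminal part: emit it as-is
        return [symbol]
    expansion = rng.choice(RULES[symbol])
    parts = []
    for s in expansion:
        parts.extend(expand(s, rng))
    return parts

# Sample a few candidate structures; every one is valid by construction,
# which is how a grammar rules out "nonsensical" designs.
rng = random.Random(0)
designs = [expand("BODY", rng) for _ in range(3)]
for d in designs:
    print(d)
```

Because every design is derived from the rules, malformed assemblies (a leg attached to nothing, say) can never be generated; the search over thousands of candidates only has to rank valid structures.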
Consider, for example, automated driving systems. Although autonomous vehicles promise to significantly improve mobility, engineers must test these systems for critical factors such as safety and potential failure modes. Toyota is one of the automakers working to make driverless systems safe. In 2016, Toyota president and CEO Akio Toyoda said more testing would be needed to complete its mission--some 8.8 billion miles of it. Fortunately, says Stefan Jockusch, vice president of strategy at Siemens Digital Industries Software, simulation can help.
For all the attention that AI audits have received, though, their ability to actually detect and protect against bias remains unproven. The term "AI audit" can mean many different things, which makes it hard to trust the results of audits in general. The most rigorous audits can still be limited in scope. And even with unfettered access to the innards of an algorithm, it can be surprisingly tough to say with certainty whether it treats applicants fairly. At best, audits give an incomplete picture, and at worst, they could help companies hide problematic or controversial practices behind an auditor's stamp of approval.
As machine-learning applications move into the mainstream, a new era of cyber threat is emerging--one that uses offensive artificial intelligence (AI) to supercharge attack campaigns. Offensive AI allows attackers to automate reconnaissance, craft tailored impersonation attacks, and even self-propagate to avoid detection. Security teams can prepare by turning to defensive AI to fight back--using autonomous cyber defense that learns on the job to detect and respond to even the most subtle indicators of an attack, no matter where it appears. MIT Technology Review recently sat down with experts from Darktrace--Marcus Fowler, director of strategic threat, and Max Heinemeyer, director of threat hunting--to discuss the current and emerging applications of offensive AI, defensive AI, and the ongoing battle of algorithms between the two.
Deborah Raji, a fellow at the nonprofit Mozilla, and Genevieve Fried, who advises members of the US Congress on algorithmic accountability, examined more than 130 facial-recognition data sets compiled over 43 years. They found that researchers, driven by the exploding data requirements of deep learning, gradually abandoned asking for people's consent. This has led more and more of people's personal photos to be incorporated into systems of surveillance without their knowledge. It has also led to far messier data sets: they may unintentionally include photos of minors, use racist and sexist labels, or have inconsistent quality and lighting. The trend could help explain the growing number of cases in which facial-recognition systems have failed with troubling consequences, such as the false arrests of two Black men in the Detroit area last year.