

What happens when an algorithm gets it wrong

MIT Technology Review

In the first of a four-part series on FaceID, host Jennifer Strong explores the false arrest of Robert Williams by police in Detroit. The odd thing about Williams's ordeal wasn't that police used face recognition to ID him--it's that the cops told him about it. There's no law saying they have to. The episode starts to unpack the complexities of this technology and introduces some thorny questions about its use. Credits: This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley and Emma Cillekens.


What happens in Vegas… is captured on camera

MIT Technology Review

The use of facial recognition by police has come under a lot of scrutiny. In part three of our four-part series on FaceID, host Jennifer Strong takes you to Sin City, which actually has one of America's most buttoned-up policies on when cops can capture your likeness. She also finds out why celebrities like Woody Harrelson are playing a starring role in conversations about this technology. Credits: This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley and Emma Cillekens. We had help from Benji Rosen and Karen Hao.


Land of a billion faces

MIT Technology Review

Clearview AI has built one of the most comprehensive databases of people's faces in the world. Your picture is probably in there (our host Jennifer Strong's was). In part two of this four-part series on facial recognition, we meet the CEO of the controversial company, who tells us our future is filled with FaceID--regardless of whether it's regulated. Credits: This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley and Emma Cillekens, with special thanks to Karen Hao and Benji Rosen. Our technical director is Jacob Gorski.


The hack that could make face recognition think someone else is you

MIT Technology Review

"If we go in front of a live camera that is using facial recognition to identify and interpret who they're looking at and compare that to a passport photo, we can realistically and repeatedly cause that kind of targeted misclassification," said the study's lead author, Steve Povolny. To misdirect the algorithm, the researchers used an image translation algorithm known as CycleGAN, which excels at morphing photographs from one style into another. For example, it can make a photo of a harbor look as if it were painted by Monet, or make a photo of mountains taken in the summer look like it was taken in the winter. The McAfee team used 1,500 photos of each of the project's two leads and fed the images into a CycleGAN to morph them into one another. At the same time, they used the facial recognition algorithm to check the CycleGAN's generated images to see who it recognized. After generating hundreds of images, the CycleGAN eventually created a faked image that looked like person A to the naked eye but fooled the face recognition into thinking it was person B. While the study raises clear concerns about the security of face recognition systems, there are some caveats.


The field of natural language processing is chasing the wrong goal

MIT Technology Review

At a typical annual meeting of the Association for Computational Linguistics (ACL), the program is a parade of titles like "A Structured Variational Autoencoder for Contextual Morphological Inflection." At this year's conference in July, though, something felt different--and it wasn't just the virtual format. Attendees' conversations were unusually introspective about the core methods and objectives of natural-language processing (NLP), the branch of AI focused on creating systems that analyze or generate human language. Papers in this year's new "Theme" track asked questions like: Are current methods really enough to achieve the field's ultimate goals? What even are those goals? My colleagues and I at Elemental Cognition, an AI research firm based in Connecticut and New York, see the angst as justified.


A neural network that spots similarities between programs could help computers code themselves

MIT Technology Review

That's why some people think we should just get machines to program themselves. Automated code generation has been a hot research topic for a number of years. Microsoft is building basic code generation into its widely used software development tools, Facebook has made a system called Aroma that autocompletes small programs, and DeepMind has developed a neural network that can come up with more efficient versions of simple algorithms than those devised by humans. Even OpenAI's GPT-3 language model can churn out simple pieces of code, such as web page layouts, from natural-language prompts. Intel's Justin Gottschlich and his colleagues call this approach machine programming.
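The system in the headline scores how similar two programs are, which is what would let a machine-programming tool find or reuse code that does the same job as yours. As a rough illustration of that comparison pipeline only, the toy sketch below uses a deliberately crude bag-of-tokens "embedding" rather than the learned neural representation such a system actually relies on.

# Toy sketch of a program-similarity check: embed two snippets as vectors and
# compare them with cosine similarity. The bag-of-tokens "embedding" is a crude
# stand-in for a learned code representation; a real model is what lets
# semantically equivalent code score as similar even when tokens barely overlap.
import math
import re
from collections import Counter

def embed(source: str) -> Counter:
    """Toy embedding: count identifier and keyword tokens in the source text."""
    return Counter(re.findall(r"[A-Za-z_]\w*", source))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse token-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two snippets that do the same thing with different names and style.
prog_a = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
prog_b = "def add_all(values):\n    r = 0\n    for v in values:\n        r = r + v\n    return r"

print(f"token-level similarity: {cosine_similarity(embed(prog_a), embed(prog_b)):.2f}")

With the token-count stand-in, these two equivalent functions score low; the whole point of a learned similarity model is to recognize that they compute the same thing despite the surface differences.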


NASA's new Mars rover is bristling with tech made to find signs of alien life

MIT Technology Review

Deep down, our drive to explore Mars has always been about figuring out the story of life in our solar system. Did Mars ever host life of its own? Or is life on Earth descended from Martian progenitors? NASA is now on the verge of launching its most ambitious effort ever to chip away at those questions, in the form of a high-tech rover called Perseverance and a scheme to bring some of the samples it collects back to Earth. If all goes well, Perseverance will lift off on July 30, and by February it will join a small fleet of Martian landers and rovers whose close-up study of Mars's surface has, in many ways, led to this moment. Each of the other three rovers NASA has launched in the 21st century has been concerned with investigating the potential for the Red Planet to harbor ancient or current biology.


Smart devices, a cohesive system, a brighter future

MIT Technology Review

If you need a reason to feel good about the direction technology is going, look up Dell Technologies CTO John Roese on Twitter. The handle he composed back in 2006 is @theICToptimist. ICT stands for information and communication technology. "The reason for that acronym was because I firmly believed that the future was not about information technology and communication technology independently," says Roese, president and chief technology officer of products and operations at Dell Technologies. "It was about them coming together." Close to two decades later, it's hard not to call him right. Organizations are looking to the massive amounts of data they're collecting and generating to become fully digital, they're using the cloud to process and store all that data, and they're turning to new wireless technologies like 5G to power data-hungry applications such as artificial intelligence (AI) and machine learning. In this episode of Business Lab, Roese walks through this confluence of technologies and its future outcomes. For example, autonomous vehicles are developing fast, but fully driverless cars aren't plying our streets yet. And they won't until they tap into a "collaborative compute model"--smart devices that plug into a combination of cloud and edge-computing infrastructure to provide "effectively infinite compute." "One of the biggest problems isn't making the device smart; it's making the device smart and efficient in a scalable system," Roese says. Big things are ahead, then, but technology is already making huge strides today, Roese says. He talks about machine intelligence, which taps AI and machine learning to mimic human intelligence and tackle complex problems, such as speeding up supply chains or, in health care, detecting tumors and types of cancer more accurately. This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review's editorial staff.


Podcast: Canada's narwhals skewer Silicon Valley's unicorns

MIT Technology Review

Toronto and the corridor that stretches west to Kitchener and Waterloo already form Canada's capital of finance and technology--and naturally, the region's leaders want to set an example for the rest of the world. That's part of the reason why in 2017, municipal organizations in Toronto tapped Google's sister company Sidewalk Labs to redevelop a disused waterfront industrial district as a high-tech prototype for the "smarter, greener, more inclusive cities" of tomorrow. But within three years the deal had collapsed, a victim of conflicting visions, public concerns over privacy and surveillance, and (to hear Sidewalk Labs tell it) pandemic-era economic change. Journalist Brian Barth, who trained in urban planning and spent seven years living and working in Toronto before returning to the US this summer, says the Sidewalk fiasco also symbolizes a larger difference: the contrast between Silicon Valley's hard-charging, individualist, libertarian ethos and a Canadian business style that emphasizes collaboration, respect, and social responsibility. In this edition of Deep Tech, Barth talks about the tensions that led to Sidewalk Labs' departure and the strategies Canadian CEOs are following to build a more open and inclusive tech sector.
Related feature: "Toronto would like to be seen as the nice person's Silicon Valley, if that's not too much trouble," June 17, 2020.
Wade Roush: Is Toronto like Silicon Valley for nice people?


The owner of WeChat thinks deepfakes could actually be good

MIT Technology Review

The news: In a new white paper about its plans for AI, translated by China scholars Jeffrey Ding and Caroline Meinhardt, Tencent, the owner of WeChat and one of China's three largest tech giants, emphasizes that deepfake technology is "not just about 'faking' and 'deceiving,' but a highly creative and groundbreaking technology." It urges regulators to "be prudent" and to avoid clamping down on its potential benefits to society. Why it matters: Tencent says it's already working to advance some of these applications. This will likely spur its competitors to do the same if they haven't yet, and influence the direction of Chinese startups eager to be acquired. As a member of China's "AI national team," which the government created as part of its overall AI strategy, the company also has significant sway among regulators who want to help foster the industry's growth.