I think, therefore I code

#artificialintelligence 

To most of us, a 3-D-printed turtle just looks like a turtle: four legs, patterned skin, and a shell. But if you show it to a particular computer in a certain way, that object isn't a turtle -- it's a gun. Objects or images that can fool artificial intelligence like this are called adversarial examples. Jessy Lin, a senior double-majoring in computer science and electrical engineering, and in philosophy, believes they're a serious problem, with the potential to trip up AI systems involved in driverless cars, facial recognition, and other applications. She and several other MIT students have formed a research group called LabSix, which creates adversarial examples in real-world settings -- such as the turtle identified as a rifle -- to show that the threat is legitimate.
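The idea behind adversarial examples can be illustrated with a toy sketch (my own illustration, not LabSix's method): take a classifier, compute the gradient of its loss with respect to the input, and nudge the input a small step in the direction that increases the loss -- the "fast gradient sign" trick. Here, a tiny hand-built logistic classifier flips its prediction after an imperceptibly simple perturbation; the weights and inputs are made up for demonstration.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def predict(w, x):
    """Classify as 1 if the linear score is positive, else 0."""
    return int(w @ x > 0)

def fgsm(w, x, eps):
    """Fast-gradient-sign perturbation for logistic loss with true label 1.

    Loss L = -log(sigmoid(w @ x)); its gradient w.r.t. x is
    -(1 - sigmoid(w @ x)) * w, so the attack steps along sign(grad).
    """
    grad = -(1.0 - sigmoid(w @ x)) * w
    return x + eps * np.sign(grad)

w = np.array([1.0, 1.0])        # toy classifier weights (assumed)
x = np.array([0.5, 0.5])        # a clean input the model labels 1

x_adv = fgsm(w, x, eps=0.6)     # small targeted nudge
print(predict(w, x))            # clean prediction
print(predict(w, x_adv))        # prediction after the perturbation
```

Real attacks like LabSix's turtle are far more involved -- the perturbation must survive changes in viewpoint and lighting on a physical 3-D object -- but the core mechanism is the same: tiny, deliberately chosen input changes that push the model across a decision boundary.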
