Researchers develop AI to fool facial recognition tech

#artificialintelligence

A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training", which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, and the second works to disrupt the facial recognition task of the first. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview with EurekAlert!.
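
The article does not give the team's exact architecture, but the two-network game it describes can be sketched in a few lines. Below is a minimal, illustrative PyTorch sketch, not the authors' actual code: a frozen stand-in "detector" is attacked by a "disruptor" that learns small additive image perturbations lowering the detector's face-confidence score. The layer sizes, the eps bound, and the dummy data are all assumptions.

```python
# Minimal sketch (not the authors' code) of the adversarial two-network game.
import torch
import torch.nn as nn

class Detector(nn.Module):                 # stand-in face detector
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1))              # logit: "a face is present"
    def forward(self, x):
        return self.net(x)

class Disruptor(nn.Module):                # learns an additive perturbation
    def __init__(self, eps=0.03):          # eps bounds how visible changes are
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return (x + self.eps * self.net(x)).clamp(0, 1)

detector, disruptor = Detector(), Disruptor()
for p in detector.parameters():            # attack a frozen detector
    p.requires_grad_(False)

opt = torch.optim.Adam(disruptor.parameters(), lr=1e-3)
faces = torch.rand(8, 3, 64, 64)           # dummy batch standing in for face photos

for step in range(100):
    adv = disruptor(faces)
    logits = detector(adv)
    # Push the detector's confidence that a face is present toward zero.
    loss = nn.functional.binary_cross_entropy_with_logits(
        logits, torch.zeros_like(logits))
    opt.zero_grad(); loss.backward(); opt.step()
```

In a fuller version of this game the detector would also keep training against the disruptor's output, which is the ongoing "arms race" the article describes.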


The AI that can STOP facial recognition:

Daily Mail - Science & tech

It could be the answer to the ever more invasive facial recognition systems used by Facebook, Google and others to try to identify you in every picture put online. Researchers at the University of Toronto have revealed AI software that can tweak your snaps so you can't be identified. They say their Instagram-like filter can alter pictures so they look the same to human eyes but disrupt the machine learning systems used by web giants to identify users. The algorithm is specifically designed to disrupt facial recognition systems, using a deep learning technique called adversarial training, which pits two artificial intelligence algorithms against each other.
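
To make concrete how a "filter" can leave a photo looking unchanged to a person while fooling a model, here is a sketch using the standard fast gradient sign method (FGSM) of Goodfellow et al., a classic adversarial-example technique and a stand-in here, not the Toronto team's learned filter. The toy model, image size, and eps value are assumptions.

```python
# FGSM sketch: a tiny, per-pixel-bounded change that can flip a model's output.
import torch

def fgsm_filter(model, image, label, eps=2/255):
    """Return the image nudged by +/- eps in the direction that raises the loss."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    filtered = image + eps * image.grad.sign()   # bounded per-pixel change
    return filtered.clamp(0, 1).detach()

# Toy stand-in for a recognition model and a photo (both assumptions).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 10))
img = torch.rand(1, 3, 64, 64)
lbl = torch.tensor([3])                          # pretend identity class
out = fgsm_filter(model, img, lbl)
print((out - img).abs().max())                   # <= eps, so visually identical
```

The per-pixel change is capped at eps, which is why the filtered picture is indistinguishable from the original to a human viewer.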


Glove-TalkII: Mapping Hand Gestures to Speech Using Neural Networks

Neural Information Processing Systems

There are many different possible schemes for converting hand gestures to speech. The choice of scheme depends on the granularity of the speech that you want to produce. Figure 1 identifies a spectrum defined by possible divisions of speech based on the duration of the sound for each granularity. What is interesting is that, in general, the coarser the division of speech, the smaller the bandwidth necessary for the user. In contrast, where the granularity of speech is on the order of articulatory muscle movements (i.e. the artificial vocal tract [AVT]), high-bandwidth control is necessary for good speech. Devices which implement this model of speech production are like musical instruments that produce speech sounds.
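
To make the bandwidth point concrete, here is an illustrative sketch, not Glove-TalkII's actual networks, of the fine-granularity end of that spectrum: a small feed-forward network mapping one frame of glove sensor readings to continuous synthesizer controls. The sensor count, control outputs, layer sizes, and frame rate are all assumptions.

```python
# Illustrative sketch of a gesture-to-speech-controls mapping (AVT end of spectrum).
import torch
import torch.nn as nn

n_sensors, n_controls = 16, 3     # e.g. flex/position sensors -> F1, F2, amplitude
gesture_to_speech = nn.Sequential(
    nn.Linear(n_sensors, 32), nn.Tanh(),
    nn.Linear(32, n_controls), nn.Sigmoid())  # normalized control values

hand = torch.rand(1, n_sensors)   # one frame of glove readings
controls = gesture_to_speech(hand)
# Evaluated at, say, ~100 frames/s, this continuous mapping supplies the
# high-bandwidth control the passage says articulatory-level speech requires.
print(controls)
```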


'Godfathers of AI' Receive Turing Award, the Nobel Prize of Computing - AI Trends

#artificialintelligence

The 2018 Turing Award, known as the "Nobel Prize of computing," has been given to a trio of researchers who laid the foundations for the current boom in artificial intelligence. Yoshua Bengio, Geoffrey Hinton, and Yann LeCun -- sometimes called the 'godfathers of AI' -- have been recognized with the $1 million annual prize for their work developing the AI subfield of deep learning. The techniques the trio developed in the 1990s and 2000s enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies, from self-driving cars to automated medical diagnoses. In fact, you probably interacted with the descendants of Bengio, Hinton, and LeCun's algorithms today -- whether that was the facial recognition system that unlocked your phone, or the AI language model that suggested what to write in your last email.