As described in our recent announcement about AI pioneer Randy Goebel joining the ROSS team as an advisor, Goebel is a professor in the Department of Computing Science at the University of Alberta and a founder and researcher with the Alberta Machine Intelligence Institute (AMII). He is also involved in developing the University of Alberta's relationship with Google DeepMind, the group behind AlphaGo. Goebel's theoretical work on abduction, hypothetical reasoning and belief revision is internationally acclaimed, and his recent application of practical belief revision and constraint programming to scheduling, layout and web mining has had widespread impact across multiple industry verticals.
A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training," which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, while the second works to disrupt the facial recognition task of the first. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview with EurekAlert.
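The alternating "battle" described above can be sketched in miniature. The following is an illustrative toy only, not the Toronto team's actual system: the "detector" is just a logistic-regression classifier on 2-D points, and the "attacker" applies a small gradient-sign perturbation to increase the detector's loss each round, after which the detector trains on the perturbed data. All variable names and constants are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two clusters standing in for "face" (label 1) and
# "non-face" (label 0) examples. Purely illustrative.
X = np.vstack([rng.normal(2.0, 0.5, (50, 2)),
               rng.normal(-2.0, 0.5, (50, 2))])
y = np.concatenate([np.ones(50), np.zeros(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(2), 0.0
lr, eps = 0.1, 0.2  # detector learning rate; attacker's perturbation budget

for step in range(200):
    # Attacker move: FGSM-style nudge of each input in the direction
    # that increases the detector's cross-entropy loss.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]   # d(loss)/d(x) per sample
    X_adv = X + eps * np.sign(grad_x)

    # Detector move: one gradient step on the perturbed batch.
    # This alternation is the adversarial-training loop.
    p = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# After training against the attacker, check accuracy on clean inputs.
clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
```

In the real system both players are deep networks and the attacker learns to generate image perturbations; here the attacker is a fixed gradient-sign rule, which keeps the two-player structure visible in a few lines.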
When Microsoft acquired deep learning startup Maluuba in January, Maluuba's highly respected advisor, the deep learning pioneer Yoshua Bengio, agreed to continue advising Microsoft on its artificial intelligence efforts. Bengio, head of the Montreal Institute for Learning Algorithms, recently visited Microsoft's Redmond, Washington, campus, and took some time for a chat. Let's start with the basics: What is deep learning? Yoshua Bengio: Deep learning is an approach to machine learning, and machine learning is a way to try to make machines intelligent by allowing computers to learn from examples about the world around us, or about some specific aspect of it. Deep learning is distinctive among machine learning methods in that it is inspired by some of the things we know about the brain.
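Bengio's one-line definition, learning from examples with brain-inspired layers of simple units, can be made concrete with a minimal sketch: a two-layer network of artificial neurons learns the XOR function purely from four input/output examples, something a single linear unit cannot do. Layer sizes, learning rate, and step count are arbitrary choices for illustration, not anything from the interview.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four examples of XOR: the network is told only inputs and targets,
# and must discover the rule itself.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer (8 units, assumed size)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(10000):
    # Forward pass: stack simple nonlinear units, layer by layer.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean(-y * np.log(p) - (1 - y) * np.log(1 - p))))

    # Backward pass: propagate the prediction error through both layers
    # and adjust every weight a little (gradient descent).
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dz1 = (dz2 @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dz1, dz1.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The "deep" in deep learning refers to stacking such layers; even this two-layer toy shows the key idea that intermediate units learn internal features the raw inputs don't expose.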
Yoshua Bengio is recognised as one of the world's leading experts in artificial intelligence and a pioneer in deep learning. Following his studies in Montreal, culminating in a PhD in computer science from McGill University in 1991, Professor Bengio did postdoctoral studies at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. In 2019, he was awarded the Killam Prize as well as the 2018 Turing Award, often described as the Nobel Prize of computing. These honours reflect the profound influence of his work on the evolution of our society. Yoshua Bengio is also known for having collected the largest number of new citations in the world in 2018.
Last November Synced ran an interview with Yoshua Bengio, in which the deep learning maverick, Université de Montréal professor and MILA scientific director discussed his research and commented on the current state of deep learning and AI. In this follow-up piece we look at the talk Bengio gave late last year at Tsinghua University in Beijing. "Challenges for Deep Learning towards Human-Level AI" addressed difficulties Bengio and his collaborators are facing, and the efforts they have made to improve deep learning for human-like AI development. Research over the last decade has given us a much improved understanding of AI, such as why certain methods are helpful for model optimization and why deep learning is so useful. Researchers are showing great interest in deep learning and its potential for application across many different fields.