Serban, Iulian V., Sankar, Chinnadhurai, Germain, Mathieu, Zhang, Saizheng, Lin, Zhouhan, Subramanian, Sandeep, Kim, Taesup, Pieper, Michael, Chandar, Sarath, Ke, Nan Rosemary, Rajeshwar, Sai, de Brebisson, Alexandre, Sotelo, Jose M. R., Suhubdy, Dendi, Michalski, Vincent, Nguyen, Alexandre, Pineau, Joelle, Bengio, Yoshua
We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural language generation and retrieval models, including template-based models, bag-of-words models, sequence-to-sequence neural network and latent variable neural network models. By applying reinforcement learning to crowdsourced data and real-world user interactions, the system has been trained to select an appropriate response from the models in its ensemble. The system has been evaluated through A/B testing with real-world users, where it performed significantly better than many competing systems. Due to its machine learning architecture, the system is likely to improve with additional data.
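The response-selection idea in the abstract — each ensemble model proposes a candidate reply, and a learned policy picks one — can be sketched as follows. This is a minimal illustration, not MILABOT's actual implementation: the feature function, weights, and epsilon-greedy selection below are all invented stand-ins for the learned scoring model the paper describes.

```python
import random

def featurize(dialogue_history, candidate):
    # Toy features: candidate length and word overlap with the history.
    # (The real system uses far richer learned representations.)
    history_words = set(" ".join(dialogue_history).lower().split())
    cand_words = candidate.lower().split()
    overlap = sum(1 for w in cand_words if w in history_words)
    return [len(cand_words), overlap]

def score(features, weights):
    # Linear scorer standing in for a learned value/policy network.
    return sum(f * w for f, w in zip(features, weights))

def select_response(dialogue_history, candidates, weights, epsilon=0.1):
    # Epsilon-greedy selection, a common exploration scheme when the
    # scoring weights are being trained from user-interaction rewards.
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates,
               key=lambda c: score(featurize(dialogue_history, c), weights))

history = ["what movies do you like?"]
candidates = [
    "I enjoy science fiction movies.",  # e.g. from a retrieval model
    "Hello!",                           # e.g. from a template model
]
weights = [0.1, 1.0]  # invented weights; learned from rewards in practice
reply = select_response(history, candidates, weights, epsilon=0.0)
```

With exploration disabled (`epsilon=0.0`), the selector deterministically returns the highest-scoring candidate, which is how such a policy would behave at deployment time.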
As described in our recent announcement about AI pioneer Randy Goebel joining the ROSS team as an advisor, Goebel is a professor in the Department of Computing Science at the University of Alberta, a founder and researcher with the Alberta Machine Intelligence Institute (AMII), and is involved in developing the University of Alberta's relationship with Google DeepMind, the group behind AlphaGo. Goebel's theoretical work on abduction, hypothetical reasoning, and belief revision is internationally acclaimed, and his recent application of practical belief revision and constraint programming to scheduling, layout, and web mining has had widespread impact across multiple industry verticals.
A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training", which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, and the second works to disrupt the facial recognition task of the first. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview with EurekAlert.
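The core adversarial idea described above can be illustrated with a deliberately simplified sketch: a "detector" (here a fixed linear scorer standing in for a face-detection network) and a "disruptor" that perturbs the input to lower the detector's confidence. Everything below is invented for illustration; the actual work trains two deep networks against each other, and this sketch shows only the attack step, not the detector learning back.

```python
def detector_score(x, w, b):
    # Linear stand-in for a detection network: higher = "face detected".
    return sum(xi * wi for xi, wi in zip(x, w)) + b

def disrupt(x, w, b, step=0.1, iters=10):
    # For a linear scorer, the gradient of the score w.r.t. x is just w,
    # so the disruptor nudges each feature against the detector
    # (a sign-of-gradient step, in the spirit of adversarial perturbations).
    x = list(x)
    for _ in range(iters):
        for i in range(len(x)):
            x[i] -= step * (1 if w[i] > 0 else -1)
    return x

w, b = [0.8, -0.3, 0.5], 0.2   # invented detector parameters
x = [1.0, 0.5, 1.2]            # invented input features
before = detector_score(x, w, b)
after = detector_score(disrupt(x, w, b), w, b)
# The perturbed input scores lower on detection (after < before).
```

In the full adversarial-training setup, the detector would then be retrained on the perturbed inputs, and the two networks would iterate — the "arms race" the article describes.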
Turing Award winners (from left to right) Yoshua Bengio, Yann LeCun, and Geoffrey Hinton at the ReWork Deep Learning Summit, Montreal, October 2017.

An AI "Sputnik moment" is at hand: China is overtaking the US not just in the sheer volume of AI research papers submitted and published, but also in the production of high-impact papers as measured by the top 50%, top 10%, and top 1% most-cited papers. "By projecting current trends, we see that China is likely to have more top-10% papers by 2020 and more top-1% papers by 2025" (Allen Institute for Artificial Intelligence).

Cisco attributes the decline to respondents' increased confidence that "migrating to the cloud will improve protection efforts, while apparently decreasing reliance on less proven technologies such as artificial intelligence" (Cisco).

Nearly 90% of IT leaders see their use of AI/ML increasing in the future, and 41% cite AI-powered technology as a top factor in their purchasing decisions.