'Godfather' of deep learning is reimagining AI

#artificialintelligence

Geoffrey Hinton may be the "godfather" of deep learning, a suddenly hot field of artificial intelligence, or AI – but that doesn't mean he's resting on his algorithms. Hinton, a University Professor Emeritus at the University of Toronto, recently released two new papers that promise to improve the way machines understand the world through images or video – a technology with applications ranging from self-driving cars to making medical diagnoses. "This is a much more robust way to detect objects than what we have at present," Hinton, who is also a fellow at Google's AI research arm, said today at a tech conference in Toronto. "If you've been in the field for a long time like I have, you know that the neural nets that we use now – there's nothing special about them. We just sort of made them up."


Should Artificial Intelligence Copy the Human Brain?

#artificialintelligence

That debate comes down to whether or not the current approaches to building AI are enough. With a few tweaks and the application of enough brute computational force, will the technology we have now be capable of true "intelligence," in the sense we imagine it exists in an animal or a human? On one side of this debate are the proponents of "deep learning," an approach that has exploded in popularity since a landmark 2012 paper by a trio of researchers at the University of Toronto. While far from the only approach to artificial intelligence, it has demonstrated abilities beyond what previous AI techniques could accomplish. The "deep" in "deep learning" refers to the number of layers of artificial neurons stacked between a network's input and its output.
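
To make the layer metaphor concrete, here is a minimal sketch in NumPy (an illustrative construction, not anything from the article): each call to layer() adds one more stage of artificial neurons between input and output, and the network's depth is simply the number of such stages. The layer sizes and ReLU activation are arbitrary choices for the example.

```python
import numpy as np

def layer(x, n_out, rng):
    """One layer of artificial neurons: a linear map followed by ReLU."""
    w = rng.standard_normal((x.shape[-1], n_out)) * 0.1
    b = np.zeros(n_out)
    return np.maximum(0.0, x @ w + b)

rng = np.random.default_rng(0)
h = rng.standard_normal(784)        # e.g., a flattened 28x28 image
for n_out in [512, 256, 128, 10]:   # four stacked layers -> a "deep" network
    h = layer(h, n_out, rng)
print(h.shape)                      # (10,) -- one score per output class
```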


Scientists slash computations for deep learning: 'Hashing' can eliminate more than 95 percent of computations

#artificialintelligence

"This applies to any deep-learning architecture, and the technique scales sublinearly, which means that the larger the deep neural network to which this is applied, the more the savings in computations there will be," said lead researcher Anshumali Shrivastava, an assistant professor of computer science at Rice. The research will be presented in August at the KDD 2017 conference in Halifax, Nova Scotia. It addresses one of the biggest issues facing tech giants like Google, Facebook and Microsoft as they race to build, train and deploy massive deep-learning networks for a growing body of products as diverse as self-driving cars, language translators and intelligent replies to emails. Shrivastava and Rice graduate student Ryan Spring have shown that techniques from "hashing," a tried-and-true data-indexing method, can be adapted to dramatically reduce the computational overhead for deep learning. Hashing involves the use of smart hash functions that convert data into manageable small numbers called hashes.


Key Trends and Takeaways from RE•WORK Deep Learning Summit Montreal – Part 1: Computer Vision

@machinelearnbot

Last week I was fortunate enough to attend the RE•WORK Deep Learning Summit Montreal (October 10 & 11), where I took in a number of quality talks and met with other attendees. The conference was split into two tracks, Research Advancements and Business Applications, and featured a wide array of top neural network researchers and academics, as well as business leaders. With its interesting mix of industry and academia, RE•WORK did more than enough to prove its professionalism and attention to detail, and that is without mentioning the calibre of speakers it secured for the event. What follows is a summary of some of my favorite talks from the conference, focusing on the visual reasoning and computer vision blocks that opened the event. A full listing of the speakers and schedule can be found here.