Apple and Its Rivals Bet Their Futures on These Men's Dreams

#artificialintelligence

Over the past five years, artificial intelligence has gone from perennial vaporware to one of the technology industry's brightest hopes. Computers have learned to recognize faces and objects, understand the spoken word, and translate scores of languages. Apple, Facebook, and Microsoft have bet their futures largely on AI, racing to see who's fastest at building smarter machines. That's fueled the perception that AI has come out of nowhere, what with Tesla's self-driving cars and Alexa chatting up your child. But this was no overnight hit, nor was it the brainchild of a single Silicon Valley entrepreneur. The ideas behind modern AI--neural networks and machine learning--have roots you can trace to the last stages of World War II. Back then, academics were beginning to build computing systems meant to store and process information in ways similar to the human brain. Over the decades, the technology had its ups and downs, but it failed to capture the attention of computer scientists broadly until around 2012, thanks to a handful of stubborn researchers who weren't afraid to look foolish. They remained convinced that neural nets would light up the world and alter humanity's destiny.


AAAI 2020 A Turning Point for Deep Learning?

#artificialintelligence

This is an updated version. The Godfathers of AI and 2018 ACM Turing Award winners Geoffrey Hinton, Yann LeCun, and Yoshua Bengio shared a stage in New York on Sunday night at an event organized by the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020). The trio of researchers have made deep neural networks a critical component of computing, and in individual talks and a panel discussion they discussed their views on current challenges facing deep learning and where it should be heading. Introduced in the mid-1980s, deep learning gained traction in the AI community in the early 2000s. The year 2012 saw the publication of the CVPR paper Multi-column Deep Neural Networks for Image Classification, which showed how max-pooling CNNs on GPUs could dramatically improve performance on many vision benchmarks, while a similar system introduced months later by Hinton and a University of Toronto team won the large-scale ImageNet competition by a significant margin over shallow machine learning methods.



Innovation Nation: AI godfathers gave Canada an early edge -- but we could end up being left in the dust

#artificialintelligence

Canada has a rich history of innovation, but in the next few decades, powerful technological forces will transform the global economy. Large multinational companies have jumped out to a head start in the race to succeed, and Canada runs the risk of falling behind. At stake is nothing less than our prosperity and economic well-being. The Financial Post set out to explore what is needed for businesses to flourish and grow. You can find all of our coverage here.


Neural Net Worth

Communications of the ACM

When Geoffrey Hinton started doing graduate student work on artificial intelligence at the University of Edinburgh in 1972, the idea that it could be achieved using neural networks that mimicked the human brain was in disrepute. Computer scientists Marvin Minsky and Seymour Papert had published a book in 1969 on perceptrons, an early attempt at building a neural net, and it left people in the field with the impression that such devices were nonsense. "It didn't actually say that, but that's how the community interpreted the book," says Hinton, who, along with Yoshua Bengio and Yann LeCun, will receive the 2018 ACM A.M. Turing Award for their work that led deep neural networks to become an important component of today's computing. "People thought I was just completely crazy to be working on neural nets." Even in the 1980s, when Bengio and LeCun entered graduate school, neural nets were not seen as promising.