Vector Institute


Is AI Riding a One-Trick Pony?

MIT Technology Review

"In 30 years we're going to look back and say Geoff is Einstein--of AI, deep learning, the thing that we're calling AI," Jacobs says. Hinton's breakthrough, in 1986, was to show that backpropagation could train a deep neural net, meaning one with more than two or three layers. A 2012 paper by Hinton and two of his Toronto students showed that deep neural nets, trained using backpropagation, beat state-of-the-art systems in image recognition. That's the bottom layer of the club sandwich: 10,000 neurons (100x100) representing the brightness of every pixel in the image.
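The "bottom layer" the excerpt describes, with one input neuron per pixel of a 100x100 image, can be sketched as follows. This is a minimal illustration, not code from the article; the hidden-layer size and weight initialization are hypothetical assumptions added for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 100x100 grayscale image: each pixel's brightness feeds one input
# neuron, so the input layer holds 100 * 100 = 10,000 values.
image = rng.random((100, 100))
inputs = image.reshape(-1)  # flatten to a 10,000-long input vector
assert inputs.shape == (10_000,)

# One hypothetical hidden layer of 128 neurons: each neuron takes a
# weighted sum over all 10,000 inputs, then applies a nonlinearity.
# These weights are what backpropagation would adjust during training.
W = rng.standard_normal((10_000, 128)) * 0.01
b = np.zeros(128)
hidden = np.maximum(0, inputs @ W + b)  # ReLU activation
print(hidden.shape)  # (128,)
```

Stacking more such layers, each fed by the one below, is what makes the net "deep"; backpropagation computes how every weight in every layer contributed to the output error.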


The rise of Canadian AI requires a few critical components

#artificialintelligence

To see the Canadian AI sector lift off, a few other critical components are surely needed. Data is, of course, the feedstock of AI: the basic raw material that tech companies need to train all forms of AI technologies. Canadian corporations shouldn't wait for AI companies to knock on their doors; they need to actively seek to do business with them, and commercial terms should be struck to fairly apportion ownership, upside, and risk.


Vector Institute for Artificial Intelligence ensures the world gets more Canada

#artificialintelligence

In his introduction, Jacobs offers a brief history of Canada's pioneering contribution to the field of artificial intelligence (AI), explaining the significance of the shift from rules-based AI to machine learning that originated in Ontario. The Canadian Institute for Advanced Research (CIFAR) that Jacobs refers to approved its first program, Artificial Intelligence & Robotics, in 1982, while operating out of an Ontario government office just a few blocks from where Jacobs is sitting; it later recruited Geoffrey Hinton to Toronto. A global race for the region's AI talent ensued: alumni of Hinton's machine learning group at the University of Toronto went on to fill top AI R&D roles at Apple, Facebook, OpenAI, Google Brain, Microsoft, and Google DeepMind, among others, while Hinton himself became renowned the world over as the "godfather of deep learning." Contact the Ontario Investment Office to learn how your company can leverage the world's leading AI talent.


Why Canada is Becoming a Hub for A.I. Research

#artificialintelligence

Led by research director Richard Zemel, a professor of computer science at the University of Toronto and a Senior Fellow of the Canadian Institute for Advanced Research, the Vector Institute will lead Ontario's efforts to build and sustain AI-based innovation, growth, and productivity in Canada by focusing on the transformative potential of deep learning and machine learning. Canada is fast becoming a global hub for artificial intelligence research as support and interest grow among academic institutions, private companies, and governments. The number and variety of companies that have signed on as sponsors of the Vector Institute indicates that industry understands the transformative potential of AI. We also have a large number of start-up companies, and several business incubators, with an interest in developing, applying, and commercializing AI technology. Recent announcements from Uber, Google Brain, and DeepMind that they will expand their research capacity in Canada are further evidence that Canada is emerging as the place both to do AI research and to apply it.