Neural Net Worth

Communications of the ACM

When Geoffrey Hinton began graduate work on artificial intelligence at the University of Edinburgh in 1972, the idea that it could be achieved using neural networks that mimicked the human brain was in disrepute. Computer scientists Marvin Minsky and Seymour Papert had published Perceptrons, a 1969 book on an early form of neural network, and it left people in the field with the impression that such devices were nonsense. "It didn't actually say that, but that's how the community interpreted the book," says Hinton, who, along with Yoshua Bengio and Yann LeCun, will receive the 2018 ACM A.M. Turing Award for their work, which led deep neural networks to become an important component of today's computing. "People thought I was just completely crazy to be working on neural nets." Even in the 1980s, when Bengio and LeCun entered graduate school, neural nets were not seen as promising.


The Godfathers of the AI Boom Win Computing's Highest Honor

#artificialintelligence

In the late 1980s, Canadian master's student Yoshua Bengio became captivated by an unfashionable idea. A handful of artificial intelligence researchers were trying to craft software that loosely mimicked how networks of neurons process data in the brain, despite scant evidence that it would work. "I fell in love with the idea that we could both understand the principles of how the brain works and also construct AI," says Bengio, now a professor at the University of Montreal. More than 20 years later, the tech industry fell in love with that idea, too. Neural networks are behind the recent boom in AI progress that has enabled projects such as self-driving cars and phone bots practically indistinguishable from people.


Apple and Its Rivals Bet Their Futures on These Men's Dreams

#artificialintelligence

Over the past five years, artificial intelligence has gone from perennial vaporware to one of the technology industry's brightest hopes. Computers have learned to recognize faces and objects, understand the spoken word, and translate scores of languages. Companies such as Apple, Facebook, and Microsoft have bet their futures largely on AI, racing to see who's fastest at building smarter machines. That's fueled the perception that AI came out of nowhere, what with Tesla's self-driving cars and Alexa chatting up your child. But this was no overnight hit, nor was it the brainchild of a single Silicon Valley entrepreneur. The ideas behind modern AI (neural networks and machine learning) have roots you can trace to the last stages of World War II. Back then, academics were beginning to build computing systems meant to store and process information in ways similar to the human brain. The technology had its ups and downs over the decades, but it failed to capture the attention of computer scientists broadly until around 2012, thanks to a handful of stubborn researchers who weren't afraid to look foolish. They remained convinced that neural nets would light up the world and alter humanity's destiny.


Welcome to the AI Conspiracy: The 'Canadian Mafia' Behind Tech's Latest Craze

#artificialintelligence

In the late '90s, Tomi Poutanen, a precocious computer whiz from Finland, hoped to do his dissertation on neural networks, a technique for teaching computers to learn in ways loosely modeled on the human brain. For a student at the University of Toronto, it was a logical choice: Geoffrey Hinton, the godfather of neural network research, taught and ran a research lab there. But instead of encouraging Poutanen, who went on to work at Yahoo and recently co-founded the media startup Milq, one of his professors issued a stern warning about taking the academic path known as deep learning. "Smart scientists," his professor cautioned, "go there to see their careers end."


Meet the Man Google Hired to Make AI a Reality

AITopics Original Links

Geoffrey Hinton was in high school when a friend convinced him that the brain worked like a hologram. To create one of those 3-D holographic images, you record how countless beams of light bounce off an object, then store those little bits of information across a vast database. Back in 1960s Britain, the young Hinton was fascinated by the idea that the brain stores memories in much the same way: rather than keeping them in a single location, it spreads them across its enormous network of neurons. This may seem like a small revelation, but it was a key moment for Hinton. "I got very excited about that idea," he remembers.
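The distributed-storage idea that captivated Hinton can be made concrete with a toy associative memory. The sketch below (not from the article; all names and sizes are illustrative) uses a Hopfield-style network in Python: each stored pattern is spread across every entry of a weight matrix rather than kept in one place, so a partly corrupted cue can still recall the whole memory.

```python
import numpy as np

# Toy Hopfield-style associative memory. A "memory" lives not in one
# location but distributed across all the weights, loosely analogous
# to the hologram idea described above. Illustrative sketch only.

rng = np.random.default_rng(0)
n = 64  # number of binary "neurons"

# Two random +1/-1 patterns to memorize.
patterns = rng.choice([-1, 1], size=(2, n))

# Hebbian storage: every pattern contributes a little to every weight.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)  # no self-connections

# Recall: corrupt 10 of the 64 bits of pattern 0, then let the
# network settle back toward the stored memory.
state = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
state[flip] *= -1

for _ in range(5):  # a few update sweeps
    state = np.where(W @ state >= 0, 1, -1)

print("recovered pattern 0:", np.array_equal(state, patterns[0]))
```

Because the pattern is smeared across the whole weight matrix, damaging a few weights or input bits degrades recall gracefully instead of erasing the memory outright, which is the property Hinton found so suggestive about brains.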