Apple and Its Rivals Bet Their Futures on These Men's Dreams

#artificialintelligence

Over the past five years, artificial intelligence has gone from perennial vaporware to one of the technology industry's brightest hopes. Computers have learned to recognize faces and objects, understand the spoken word, and translate scores of languages. Companies such as Apple, Facebook, and Microsoft have bet their futures largely on AI, racing to see who's fastest at building smarter machines. That's fueled the perception that AI came out of nowhere, what with Tesla's self-driving cars and Alexa chatting up your child. But this was no overnight hit, nor was it the brainchild of a single Silicon Valley entrepreneur. The ideas behind modern AI (neural networks and machine learning) have roots you can trace to the last stages of World War II. Back then, academics were beginning to build computing systems meant to store and process information in ways similar to the human brain. Over the decades the technology had its ups and downs, but it failed to capture the broad attention of computer scientists until around 2012, thanks to a handful of stubborn researchers who weren't afraid to look foolish. They remained convinced that neural nets would light up the world and alter humanity's destiny.


Meet the Man Google Hired to Make AI a Reality

AITopics Original Links

Geoffrey Hinton was in high school when a friend convinced him that the brain worked like a hologram. To create one of those 3-D holographic images, you record how countless beams of light bounce off an object, then store those little bits of information across a vast database. While still in high school, back in 1960s Britain, Hinton was fascinated by the idea that the brain stores memories in much the same way: rather than keeping them in a single location, it spreads them across its enormous network of neurons. This may seem like a small revelation, but it was a key moment for Hinton. "I got very excited about that idea," he remembers.
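To make the distributed-storage idea concrete, here is a minimal sketch in Python. It uses a Hopfield-style associative memory, which is an illustration of the general principle rather than any model Hinton himself built: a pattern is stored across an entire weight matrix, and the whole network cooperates to recall it from a corrupted cue.

```python
# Sketch of distributed memory: a Hopfield-style associative network.
# The stored pattern lives in the whole weight matrix, not in any single cell.
import numpy as np

def store(patterns):
    """Build a weight matrix from +/-1 patterns with the outer-product (Hebbian) rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)        # no self-connections
    return W / len(patterns)

def recall(W, cue, steps=10):
    """Iteratively clean up a noisy cue; every neuron contributes to the recall."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

memory = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])   # one toy pattern
W = store(memory)
noisy = memory[0].copy()
noisy[:2] *= -1                                      # corrupt two bits
print(recall(W, noisy))                              # recovers the stored pattern
```

No single connection holds the memory; like a hologram, each piece of the network carries a little of the whole.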


'Godfathers of AI' Receive Turing Award, the Nobel Prize of Computing - AI Trends

#artificialintelligence

The 2018 Turing Award, known as the "Nobel Prize of computing," has been given to a trio of researchers who laid the foundations for the current boom in artificial intelligence. Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, sometimes called the 'godfathers of AI,' have been recognized with the $1 million annual prize for their work developing the AI subfield of deep learning. The techniques the trio developed in the 1990s and 2000s enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies, from self-driving cars to automated medical diagnoses. In fact, you probably interacted with the descendants of Bengio, Hinton, and LeCun's algorithms today, whether through the facial recognition system that unlocked your phone or the AI language model that suggested what to write in your last email.


Neural Net Worth

Communications of the ACM

When Geoffrey Hinton started graduate work on artificial intelligence at the University of Edinburgh in 1972, the idea that it could be achieved using neural networks that mimicked the human brain was in disrepute. Computer scientists Marvin Minsky and Seymour Papert had published Perceptrons, a 1969 book about an early attempt at building a neural net, and it left people in the field with the impression that such devices were nonsense. "It didn't actually say that, but that's how the community interpreted the book," says Hinton, who, along with Yoshua Bengio and Yann LeCun, will receive the 2018 ACM A.M. Turing Award for work that made deep neural networks an important component of today's computing. "People thought I was just completely crazy to be working on neural nets." Even in the 1980s, when Bengio and LeCun entered graduate school, neural nets were not seen as promising.
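For readers who have never seen one, here is a minimal sketch in Python of the kind of single-layer perceptron Minsky and Papert analyzed: a linear threshold unit trained with a simple error-correction rule. The toy data and function names are invented for illustration.

```python
# Sketch of the classic perceptron learning rule (illustrative only).
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Learn weights for a linear threshold unit: predict 1 if w.x + b > 0, else 0."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            update = lr * (target - pred)   # zero when the prediction is already right
            w += update * xi
            b += update
    return w, b

# Linearly separable toy data (logical OR). The book's famous point was that a
# single perceptron cannot learn non-separable functions such as XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([1 if x @ w + b > 0 else 0 for x in X])   # [0, 1, 1, 1]
```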


How the artificial intelligence revolution was born in a Vancouver hotel

#artificialintelligence

Mel Silverman walked over to a whiteboard and picked up a marker, listing all the academic disciplines that the band of renegade scientists asking him for money represented. Assembled there 12 years ago at Vancouver's Metropolitan Hotel was a group of about 15 people, ranging from computer scientists to biologists to experimental engineers. What united them was their interest in a concept that was, at the time, generally perceived as the domain of the lunatic fringe. They believed it was possible to teach a machine to learn the same way a child does, through artificial neural networks that mimic the function of the human brain. In the process of teaching a machine to learn like a human, they figured there was likely a lot to discover about how humans learn as well.