If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Over the past five years, artificial intelligence has gone from perennial vaporware to one of the technology industry's brightest hopes. Computers have learned to recognize faces and objects, understand the spoken word, and translate scores of languages. Apple, Facebook, and Microsoft have bet their futures largely on AI, racing to see who's fastest at building smarter machines. That's fueled the perception that AI has come out of nowhere, what with Tesla's self-driving cars and Alexa chatting up your child. But this was no overnight hit, nor was it the brainchild of a single Silicon Valley entrepreneur. The ideas behind modern AI (neural networks and machine learning) have roots you can trace to the last stages of World War II. Back then, academics were beginning to build computing systems meant to store and process information in ways similar to the human brain. Over the decades, the technology had its ups and downs, but it failed to capture the attention of computer scientists broadly until around 2012, thanks to a handful of stubborn researchers who weren't afraid to look foolish. They remained convinced that neural nets would light up the world and alter humanity's destiny.
Nvidia has unveiled several updates to its deep-learning computing platform, including an absurdly powerful GPU and supercomputer. At this year's GPU Technology Conference in San Jose, Nvidia CEO Jensen Huang unveiled the DGX-2, a new computer for researchers who are "pushing the outer limits of deep-learning research and computing" to train artificial intelligence. The computer, which will ship later this year, is the world's first system to sport a whopping two petaflops of performance. For some perspective: a MacBook Pro might have around one teraflop. A petaflop is one thousand teraflops.
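The unit arithmetic above can be sketched in a few lines of Python. This is a hypothetical illustration of the quoted figures only (the one-teraflop MacBook Pro number is the article's rough estimate, not a measured benchmark):

```python
# 1 petaflop = 1,000 teraflops, per the conversion stated above.
TERAFLOPS_PER_PETAFLOP = 1_000

def petaflops_to_teraflops(petaflops: float) -> float:
    """Convert a performance figure from petaflops to teraflops."""
    return petaflops * TERAFLOPS_PER_PETAFLOP

dgx2_tflops = petaflops_to_teraflops(2)   # DGX-2: two petaflops
macbook_tflops = 1.0                      # rough figure cited in the text
speedup = dgx2_tflops / macbook_tflops

print(f"DGX-2: {dgx2_tflops:.0f} TFLOPS, roughly {speedup:.0f}x a MacBook Pro")
```

On these numbers the DGX-2 works out to about 2,000 times the laptop's throughput, which is the gap the "for some perspective" comparison is gesturing at.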
With Moore's Law slowing, engineers have been taking a cold hard look at what will keep computing going when it's gone. Certainly artificial intelligence will play a role. But there are stranger things in the computing universe, and some of them got an airing at the IEEE International Conference on Rebooting Computing in November.
At the GTC technology conference this year, NVIDIA launched its latest and most advanced GPU, called Volta. At the center of this chip is the Tensor Core, an artificial-intelligence accelerator that is poised to usher in the next phase of AI applications. However, our current AI algorithms are not fully utilizing this accelerator, and for us to achieve another major breakthrough in AI, we need to change our software. Fully exploiting this computing resource will advance, and even create, AI applications that might otherwise not exist. For example, by utilizing this resource, AI algorithms could better understand and synthesize human speech.
In 2013 I had a long interview with Peter Lee, corporate vice president of Microsoft Research, about advances in machine learning and neural networks and how language would be the focal point of artificial intelligence in the coming years. At the time the notion of artificial intelligence and machine learning seemed like a "blue sky" researcher's fantasy. Artificial intelligence was something coming down the road … but not soon. I wish I had taken the talk more seriously. Language is, and will continue to be, the most important tool for the advancement of artificial intelligence.
The key marketing question to ask of AI is: Does this application of artificial intelligence increase relevance and usefulness for the customer? Forty-six percent of millennials with smartphones use voice recognition software today, and over 70% of voice recognition users are happy with the experience. Gartner estimates that by 2020, 40% of mobile interactions between people and their virtual personal assistants will be powered by the data gathered from users in cloud-based neural networks. How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only 'smart,' but also socially responsible?
When Ray Kurzweil met with Google CEO Larry Page last July, he wasn't looking for a job. A respected inventor who's become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own. It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. "I could try to give you some access to it," Page told Kurzweil.
This is the fourth part in 'A Brief History of Neural Nets and Deep Learning'. In this part, we will get to the end of our story and see how deep learning emerged from the slump neural nets found themselves in by the late 90s, and the amazing state-of-the-art results it has achieved since. When you want a revolution, start with a conspiracy. With the ascent of Support Vector Machines and the failure of backpropagation, the early 2000s were a dark time for neural net research. LeCun and Hinton variously mention how, in this period, their papers or the papers of their students were routinely rejected from publication simply because their subject was neural nets.
For those out there who know me, it'll be no surprise to learn that I'm going long on the transformative power of artificial intelligence (AI). Since 2013, I've spent most of my energy studying, researching, investing (e.g. Mapillary, Numerai, Ravelin) and building AI communities (AI Summit 2015 and 2016, LondonAI meetup), with a mission to accelerate its real-world applications. I am passionate about seeking out and bringing technology advancements to markets that can enable us to solve the high-value (and often complex) problems we face in business and society. Importantly, this includes ones that were previously intractable from either a technical or commercial standpoint.
A toddler meanders unsteadily through the living room, pausing by a sleek black cylinder in the corner. "Alexa," he says in a high-pitched voice. The cylinder acknowledges the request, despite the muffled pronunciation, and the music starts. Alexa, a cloud-based speech recognition software from Amazon and the brain of its black cylindrical loudspeaker Echo, has been a big hit around the world – except for the younger ones, who take it for granted. Children will grow up alongside it, just as Alexa will evolve, as the AI powering it learns to answer more and more questions, and – perhaps – one day even converses freely with people.