If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Google is practicing what it preaches, handing control of one of the most vital components of its data center operation over to its machine-learning algorithms during the past few months. DeepMind, the Google subsidiary that is responsible for much of its advanced artificial intelligence research, announced Friday that Google has saved 30 percent on its energy bills by improving the efficiency of its cooling systems. "This first-of-its-kind cloud-based control system is now safely delivering energy savings in multiple Google data centers," Google said in a blog post. Big cloud companies love to talk about how earth-friendly their data center designs are, and while that's true in many cases, the real motivation is money: electricity usage can be the biggest expense on a data center cost sheet. Google Cloud has data centers in 17 regions around the world, and multiple availability zones within many of those regions, which is a lot of hardware to keep cool.
Over the past five years, artificial intelligence has gone from perennial vaporware to one of the technology industry's brightest hopes. Computers have learned to recognize faces and objects, understand the spoken word, and translate scores of languages. Apple, Facebook, and Microsoft have bet their futures largely on AI, racing to see who's fastest at building smarter machines. That's fueled the perception that AI has come out of nowhere, what with Tesla's self-driving cars and Alexa chatting up your child. But this was no overnight hit, nor was it the brainchild of a single Silicon Valley entrepreneur. The ideas behind modern AI, neural networks and machine learning, have roots you can trace to the last stages of World War II. Back then, academics were beginning to build computing systems meant to store and process information in ways similar to the human brain. Over the decades, the technology had its ups and downs, but it failed to capture the attention of computer scientists broadly until around 2012, thanks to a handful of stubborn researchers who weren't afraid to look foolish. They remained convinced that neural nets would light up the world and alter humanity's destiny.
Nvidia has unveiled several updates to its deep-learning computing platform, including an absurdly powerful GPU and supercomputer. At this year's GPU Technology Conference in San Jose, Nvidia CEO Jensen Huang unveiled the DGX-2, a new computer for researchers who are "pushing the outer limits of deep-learning research and computing" to train artificial intelligence. The computer, which will ship later this year, is the world's first system to sport a whopping two petaflops of performance. For some perspective: a MacBook Pro might have around one teraflop. A petaflop is one thousand teraflops.
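To make that comparison concrete, here is a back-of-the-envelope sketch of the arithmetic (the one-teraflop laptop figure is the article's rough ballpark, not a measured benchmark):

```python
# FLOPS comparison sketch. The laptop number is an approximation
# quoted in the text, not a benchmark result.
TERAFLOP = 1e12   # floating-point operations per second
PETAFLOP = 1e15   # 1 petaflop = 1,000 teraflops

dgx2_flops = 2 * PETAFLOP     # DGX-2: two petaflops
laptop_flops = 1 * TERAFLOP   # "around one teraflop"

# How many laptop-equivalents of raw throughput is that?
ratio = dgx2_flops / laptop_flops
print(f"DGX-2 is roughly {ratio:,.0f}x the laptop")  # roughly 2,000x
```

Raw FLOPS is a crude yardstick (memory bandwidth and precision matter just as much), but it conveys the scale gap the article is pointing at.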
With Moore's Law slowing, engineers have been taking a cold hard look at what will keep computing going when it's gone. Certainly artificial intelligence will play a role. But there are stranger things in the computing universe, and some of them got an airing at the IEEE International Conference on Rebooting Computing in November.
At the GTC technology conference this year, NVIDIA launched its latest and most advanced GPU, called Volta. At the center of this chip is the Tensor Core, an artificial intelligence accelerator that is poised to usher in the next phase of AI applications. However, our current AI algorithms are not fully utilizing this accelerator, and for us to achieve another major breakthrough in AI, we need to change our software. Fully exploiting this computing resource will advance, and even create, AI applications that might otherwise not exist. For example, by utilizing this resource, AI algorithms could better understand and synthesize human speech.
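The software change in question is largely about precision. Tensor Cores multiply half-precision (FP16) matrices while accumulating the products in single precision (FP32), so code has to keep its tensors in half precision to benefit. The NumPy sketch below only emulates those numerics on the CPU; on Volta, libraries such as cuBLAS and cuDNN dispatch this same pattern to the Tensor Core hardware:

```python
import numpy as np

# Emulate the mixed-precision pattern Tensor Cores accelerate:
# FP16 operands, FP32 accumulation. This is a CPU sketch of the
# numerics only, not NVIDIA's API.
rng = np.random.default_rng(0)

a = rng.standard_normal((64, 64)).astype(np.float16)  # FP16 inputs
b = rng.standard_normal((64, 64)).astype(np.float16)

# Cast up before the matmul so the sums of products happen in
# single precision; FP32 accumulation is what keeps training
# numerically stable despite half-precision storage.
c = a.astype(np.float32) @ b.astype(np.float32)

print(c.dtype)  # float32 result computed from float16 operands
```

This "store in FP16, accumulate in FP32" recipe is the core of the mixed-precision training schemes that frameworks later built around Volta.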
In 2013 I had a long interview with Peter Lee, corporate vice president of Microsoft Research, about advances in machine learning and neural networks and how language would be the focal point of artificial intelligence in the coming years. At the time the notion of artificial intelligence and machine learning seemed like a "blue sky" researcher's fantasy. Artificial intelligence was something coming down the road … but not soon. I wish I had taken the talk more seriously. Language is, and will continue to be, the most important tool for the advancement of artificial intelligence.
The key marketing question to ask of AI is: Does this application of artificial intelligence increase relevance and usefulness for the customer? Forty-six percent of millennials with smartphones use voice recognition software today, and over 70 percent of voice recognition users are happy with the experience. Gartner estimates that by 2020, 40 percent of mobile interactions between people and their virtual personal assistants will be powered by the data gathered from users in cloud-based neural networks. How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only 'smart,' but also socially responsible?
When Ray Kurzweil met with Google CEO Larry Page last July, he wasn't looking for a job. A respected inventor who's become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own. It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. "I could try to give you some access to it," Page told Kurzweil.
This is the fourth part in 'A Brief History of Neural Nets and Deep Learning'. In this part, we will get to the end of our story and see how deep learning emerged from the slump neural nets found themselves in by the late 90s, and the amazing state-of-the-art results it has achieved since. When you want a revolution, start with a conspiracy. With the ascent of Support Vector Machines and the failure of backpropagation, the early 2000s were a dark time for neural net research. LeCun and Hinton variously mention how in this period their papers, or the papers of their students, were routinely rejected from publication because their subject was neural nets.
For those out there who know me, it'll be no surprise to learn that I'm going long on the transformative power of artificial intelligence (AI). Since 2013, I've spent most of my energy studying, researching, investing (e.g. Mapillary, Numerai, Ravelin) and building AI communities (AI Summit 2015 and 2016, LondonAI meetup), with a mission to accelerate its real-world applications. I am passionate about seeking out and bringing technology advancements to markets that can enable us to solve the high-value (and often complex) problems we face in business and society. Importantly, this includes ones that were previously intractable from either a technical or commercial standpoint.