NEURAL networks, like the ones grabbing headlines for winning board games or driving cars, depend on huge amounts of computing hardware. That in turn means a colossal appetite for power: the next wave of systems may consume millions of watts each. That's one reason some researchers suggest we rethink what we want computers to be. Reducing the precision with which they analyse problems, and putting up with the odd "error", can cut zeroes off their energy consumption (see "To make computers better, let them get sloppy"). And the idea has precedent in the human brain, an unrivalled piece of hardware that runs on noisy electrical fluctuations yet requires roughly a million times less power than a conventional computer.
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. In July 2019, a group of artificial intelligence researchers showcased a self-driving bicycle that could navigate around obstacles, follow a person, and respond to voice commands. While the self-driving bike itself was of little practical use, the AI technology behind it was remarkable. Powering the bicycle was a neuromorphic chip, a special kind of AI processor whose architecture is modeled on the human brain. Neuromorphic computing is not new.
When IBM's Deep Blue computer won its first game of chess against world champion Garry Kasparov in 1996, the public got a real taste of how powerful computers had become in competing with human intelligence. Since then, not only has computing power grown exponentially, but the cost of processing power has fallen dramatically. These trends, combined with advances in artificial intelligence algorithms, have enabled the development of systems that can, in some instances, perform tasks better than human beings. Video surveillance is one such task, and it presents a large market opportunity: the ability to analyze video has barely improved despite massive growth in surveillance and in the storage of video data. According to IHS, 127 million surveillance cameras and 400,000 body-worn cameras will ship in 2017, in addition to an estimated 300 million cameras already deployed, and approximately 2.5 exabytes of data will be created every day.
Recently we have seen a slew of popular films that deal with artificial intelligence, most notably The Imitation Game, Chappie, Ex Machina, and Her. However, despite more than five decades of research into artificial intelligence, there remain many tasks that humans find simple which computers cannot do. Given the slow progress of AI, for many the prospect of computers with human-level intelligence seems further away today than it did when Isaac Asimov's classic I, Robot was published in 1950. The fact is, however, that neuromorphic chips now offer a plausible path to realizing human-level artificial intelligence within the next few decades. Starting in the early 2000s, there was a growing realization that neural network models, loosely based on how the human brain works, could solve many tasks that could not be solved by other methods.
What comes to mind when you hear the words Artificial Intelligence (AI)? Not too long ago, this phrase was reserved for talk of an imagined distant future in which humans had robot servants and self-driving cars. That is the world we live in today. We have personal assistants like Siri to answer our questions, Teslas that can get us from point A to point B while we doze, and endless filters on Snapchat that can transform our appearance instantly. The age of AI is here.