If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Automatic speech transcription, self-driving cars, a computer program beating the world champion Go player, and computers learning to play video games and achieving better results than humans: astonishing results that make you wonder what Artificial Intelligence (AI) can achieve now and in the future. Futurist Ray Kurzweil predicts that by 2029 computers will have human-level intelligence, and that by 2045 computers will be smarter than humans, the so-called "Singularity". Some of us are looking forward to that; others think of it as their worst nightmare. In 2015 several top scientists and entrepreneurs called for caution over AI, as it could be used to create something that cannot be controlled.
With Moore's Law slowing, engineers have been taking a cold hard look at what will keep computing going when it's gone. Certainly artificial intelligence will play a role. But there are stranger things in the computing universe, and some of them got an airing at the IEEE International Conference on Rebooting Computing in November. There were also some cool variations on classics such as reversible computing and neuromorphic chips. But some less-familiar ones got their time in the sun too, such as photonics chips that accelerate AI, nano-mechanical comb-shaped logic, and a "hyperdimensional" speech recognition system.
The emerging generation of AI chips need algorithms with significant locality, but not all AI algorithms are currently up to the task. Computer vision algorithms have a leg up in locality due to their heavy use of convolutional neural networks, but the recurrent neural networks used in speech and language applications will need some changes to improve locality, especially for inference. At Baidu's Silicon Valley AI Lab, we are proactively trying several approaches to change our algorithms to harness the potential of locality, and early experiments show very promising signs of overcoming this challenge. The current generation of algorithms has already enabled significant breakthroughs, creating gains in speech recognition, machine translation, and synthesis of realistic human speech.
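The locality property described above can be illustrated with a minimal sketch (not Baidu's code): in a convolution, each output element depends only on a small neighboring window of the input, so the filter weights and nearby activations can stay in fast local memory. A recurrent step, by contrast, threads a hidden state through the whole sequence.

```python
import numpy as np

def conv1d(x, w):
    """Valid-mode 1D convolution: each output depends only on a
    len(w)-sized window of the input -- the 'locality' property."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

x = np.arange(8, dtype=float)    # toy input signal
w = np.array([1.0, 0.0, -1.0])   # 3-tap filter (illustrative values)
y = conv1d(x, w)
# Each y[i] touched only the window x[i:i+3], never the full input.
```

The names and sizes here are arbitrary; the point is only that the inner `np.dot` reads a bounded window, which is what lets hardware keep the working set close to the compute units.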
The key marketing question to ask of AI is: Does this application of artificial intelligence increase relevance and usefulness for the customer? Forty-six per cent of millennials with smart phones use voice recognition software today, and over 70% of voice recognition users are happy with the experience. Gartner estimates that by 2020, 40% of mobile interactions between people and their virtual personal assistants will be powered by the data gathered from users in cloud-based neural networks. How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only 'smart,' but also socially responsible?
"We don't want to look things up in dictionaries – so I wanted to build a machine to translate speech." – Alexander Waibel

At the 1962 World's Fair, IBM showcased its "Shoebox" machine, able to understand 16 spoken English words. In 1990, Dragon released the first consumer speech recognition product, Dragon Dictate, for a whopping $9,000. "Before that time, speech recognition products were limited to discrete speech, meaning that they could only recognise one word at a time," says Peter Mahoney, senior vice president and general manager of Dragon, Nuance Communications. In the last 10 years or so, machine learning techniques loosely based on the workings of the human brain have allowed computers to be trained on huge datasets of speech, enabling excellent recognition across many people using many different accents.
This year, the Association for Computing Machinery (ACM) celebrates 50 years of the ACM Turing Award, the most prestigious technical award in the computing industry. With the recent ascent of deep neural networks, speech recognition has improved enough that it can be easily used for transcribing speech, texting, video captioning, and many other applications. The impact in the real world is visible in applications such as speech recognition, face recognition, and language translation. Another recent breakthrough is in the area of "reinforcement learning," in which machines learn to perform a task by attempting to perform it and receiving positive or negative rewards.
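The reinforcement-learning loop just described (act, receive a positive or negative reward, adjust) can be sketched with tabular Q-learning on a toy problem. The environment, states, and parameters below are entirely illustrative, not from any system mentioned in the text.

```python
import random

# Toy problem: states 0..3 in a row; action 0 moves left, 1 moves right.
# Reaching state 3 yields reward +1; every other step costs -0.01.
N_STATES, ACTIONS = 4, (0, 1)
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for episode in range(200):
    s = 0
    while s != 3:
        # Mostly act greedily, occasionally explore at random.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s2 == 3 else -0.01
        # Q-learning update: nudge the estimate toward
        # (reward + discounted best future value).
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy moves right from every state.
```

The machine is never told the rule "move right"; it discovers it purely from the reward signal, which is the essence of the breakthrough described above.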
Now it turns out that they probably did work all along, but we weren't doing things in quite the right way and we had no clear idea of the scale needed. To make neural networks fulfill their promise you need to first give them some deep structure, not relying on a random or simplistic architecture. Next you need to train big systems with big data - lots of it. Until quite recently, finding enough data in the right form, and finding the large amounts of computer power to do the training, was a difficult problem. The data problem has been eased by the growth of the web, and the computing problem by the growth of cloud computing.
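The "deep structure" point can be made concrete with a minimal sketch: a network is just a stack of layers, each a linear map followed by a nonlinearity, and depth means composing several of them. The layer sizes and random weights here are arbitrary stand-ins for what training on big data would actually learn.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """One layer: weights (which training would learn) plus a ReLU."""
    W = rng.standard_normal((n_in, n_out)) * 0.1
    b = np.zeros(n_out)
    return lambda x: np.maximum(0.0, x @ W + b)

# 'Deep structure': compose several layers rather than using one
# shallow, simplistic mapping.
net = [layer(16, 32), layer(32, 32), layer(32, 4)]

def forward(x, layers):
    for f in layers:
        x = f(x)
    return x

out = forward(rng.standard_normal(16), net)
```

Each added layer lets the network build features of features, which is what a shallow or random architecture cannot do; the training of those weights is where the big data and big compute come in.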
When I arrived at a Stanford University auditorium Tuesday night for what I thought would be a pretty nerdy panel on deep learning, a fast-growing branch of artificial intelligence, I figured I must be in the wrong place--maybe a different event for all the new Stanford students and their parents visiting the campus. Despite the highly technical nature of deep learning, some 600 people had shown up for the sold-out AI event, presented by VLAB, a Stanford-based chapter of the MIT Enterprise Forum. The turnout was a stark sign of the rising popularity of deep learning, an approach to AI that tries to mimic the activity of the brain in so-called neural networks. In just the last couple of years, deep learning software from giants like Facebook and China's Baidu, as well as a raft of startups, has led to big advances in image and speech recognition, medical diagnostics, stock trading, and more. "There's quite a bit of excitement in this area," panel moderator Steve Jurvetson, a partner with the venture firm DFJ, said with uncharacteristic understatement.
Large companies have been flocking to the "cloud" to store data and run computing applications because it gives them spending flexibility and keeps them from running unwieldy data centers of their own. But Andy Jassy, the CEO of Amazon Web Services, the world's largest cloud computing provider, said there's another, even more compelling reason: newly discovered abilities to create and deploy software. "The cloud and AWS made developers feel like they were equipped with superpowers," Jassy said in a keynote speech at the company's re:Invent conference, which has gathered 32,000 developers and cloud advocates in this gambling and entertainment mecca. "The cloud and AWS gives builders capabilities that they never had before." Jassy's comments came amidst a series of announcements that underscore how the business of renting computing power and storage is morphing into a platform that underpins the rapid deployment of machine-learning tools and voice-activated technology.
The Turing Award, generally regarded as the 'Nobel Prize of computing', is an annual prize awarded to "an individual selected for contributions of a technical nature made to the computing community". In celebration of the 50-year milestone, renowned computer scientist Melanie Mitchell spoke to CBR's Ellie Burns about artificial intelligence (AI) – the biggest breakthroughs, hurdles and myths surrounding the technology.

EB: What are the most important examples of Artificial Intelligence in mainstream society today?

MM: There are many important examples of AI in the mainstream; some very visible, others blended in so well with other methods that the AI part is nearly invisible.