If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Increasingly affordable hardware and the increased speed of calculation that GPUs provide are significant factors in the unbridled growth of AI. The astonishing results achieved by training neural networks on GPU cards made Nvidia a key player, capturing roughly 70 percent of a market that Intel failed to win. Compared with the results of conventional algorithms, and thanks to the combination of machine learning and big data, previously "unsolvable" problems are now being solved. Machine learning algorithms can directly analyze thousands of previous cases of different diseases, draw their own conclusions about what distinguishes a sick individual from a healthy one, and consequently help diagnose dangerous conditions, including cancer.
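As a purely illustrative sketch of the idea described above, here is a toy nearest-centroid classifier that "learns" from labeled examples which feature patterns go with which diagnosis. Every feature value and label below is synthetic and hypothetical, not real medical data, and real diagnostic systems are far more sophisticated.

```python
# Toy sketch: learn per-diagnosis "average cases" from labeled examples,
# then classify a new case by whichever average it sits closest to.
# All values are synthetic; this is for illustration only.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(cases):
    """cases: list of (features, label) pairs; returns per-label centroids."""
    by_label = {}
    for features, label in cases:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical training cases: (feature vector, diagnosis).
cases = [
    ([0.9, 0.8], "sick"), ([0.8, 0.9], "sick"),
    ([0.1, 0.2], "healthy"), ([0.2, 0.1], "healthy"),
]
model = train(cases)
print(predict(model, [0.85, 0.75]))  # a new case near the "sick" cluster
```

The point of the sketch is the workflow, not the algorithm: the program is never told what "sick" looks like; it infers that from the examples, which is what distinguishes machine learning from hand-written rules.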
When most people think about artificial intelligence, their minds turn to glorified fights to save the human race from rogue robots, a familiar story played out on Hollywood screens in decades gone by. While machine intelligence is still far from resembling human consciousness, an AI fight is playing out in real life, not between robots and humans, but rather among the businesses vying to lead an increasingly lucrative market. The origins of AI stretch back to 1950, when computer science pioneer Alan Turing published a paper speculating that machines could one day think like humans. Last year, research firm IDC valued the market at $8 billion, forecasting a rise to $47 billion in 2020. Between Turing's landmark paper 67 years ago and today's wild market valuations, most major AI developments have either fallen in the realms of research and academia or involved computers beating people at human games.
You may have come across the term "machine learning" while digging around for new technology ideas to invest in, or perhaps you've heard of it while researching the broader (but related) artificial intelligence space. In any case, machine learning probably isn't an idea that most investors know about, so it's worth taking the time to walk through a few basic aspects to help you get started. Let's take a quick look at where the term came from, how machine learning is being used right now, and what its market opportunity is. Of course, I'll discuss a few machine learning stocks investors should know about as well. Machine learning may sound like a new tech concept, but it's been around for decades.
It seems like even the biggest hyperscale platform developers who have long touted software-defined architectures as the key to computing nirvana are starting to learn a cardinal rule of infrastructure: No matter how much you try to abstract it, basic hardware still matters. A key example of this is Google's Tensor Processing Unit (TPU), which the company designed specifically for machine learning and other crucial workflows that were starting to push the limits of available CPUs and GPUs. In fact, the company says that without the TPU, it was looking at doubling its data center footprint in order to support applications like voice recognition and image search. The TPU is custom-designed to work with the TensorFlow software library, generating results 15 to 30 times faster than state-of-the-art Intel Haswell or Nvidia K80 devices. This may seem like a harbinger of bad times ahead for Intel and Nvidia, but the broader picture is a bit more muddled.
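To see why a chip built around a matrix unit pays off, it helps to look at the shape of the workload. The sketch below (plain NumPy, not TensorFlow or TPU code; all dimensions are made-up) shows a single dense neural-network layer, whose cost is dominated by one large matrix multiply, the operation the TPU accelerates.

```python
import numpy as np

# Illustrative only: the core of a neural-network layer is a dense matrix
# multiply followed by a nonlinearity, repeated layer after layer.
rng = np.random.default_rng(0)

batch, in_dim, out_dim = 32, 256, 128
x = rng.standard_normal((batch, in_dim))    # a batch of input activations
w = rng.standard_normal((in_dim, out_dim))  # learned layer weights
b = np.zeros(out_dim)                       # learned biases

y = np.maximum(x @ w + b, 0.0)  # one dense layer with ReLU activation

# The multiply alone costs about 2 * batch * in_dim * out_dim arithmetic
# operations, which is why dedicated matrix hardware dominates here.
print(y.shape)
```

Because nearly all the arithmetic sits in that one `x @ w` product, hardware that does many multiply-accumulates in parallel, as the TPU's matrix unit does, can outrun a general-purpose CPU on exactly this kind of work.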
Artificial Intelligence is offering the kind of revolution mankind first saw 100 years ago with electricity, with Andrew Ng, the former Chief Scientist at Baidu, claiming he can't think of an industry it won't disrupt in the next 10 years. Google's AI can now recognize user doodles and work out what they're trying to draw; IBM's Watson is quickly expanding its cognitive computing platform across multiple industries; and Alexa can tell you what's in the news, call you an Uber, book you a flight, and (in my case) incessantly wake me up every morning to finish my real analysis homework. With more than 35 acquisitions in 2017 and over 200 rounds of financing since 2012, the race for intelligence is picking up fast; Google alone has acquired 11 AI companies. Big data and processing power, long considered the limiting constraints handicapping the development of AI, were already improving in 2016, and with increased horsepower from products such as Nvidia's GPUs and Intel's AI chips, those constraints have continued to ease. Just in the past month, Elon Musk launched Neuralink, a venture to merge the human brain with AI; Nvidia's deep-learning chips showed promise of disrupting the medical industry; bots were reported to be chatting in their own language; and companies such as Forbes and Intel showed promising signs of boosting their AI efforts.
Imagine the computing power necessary to handle the activity of 1.23 billion users watching 100 million hours of video and uploading 95 million photo and video posts. Facebook, Inc. (NASDAQ:FB) processes that and more every day. Now imagine sifting through all that data to perform facial recognition, describe the contents of photos and videos, and populate your news feed with relevant content. Those tasks are all handled by Facebook's servers, which have been infused with its homegrown brand of artificial intelligence (AI). The training that takes place behind the scenes has been the job of Facebook's AI brain, named Big Sur, which has been handling the task since 2015.
What sets humans apart from machines is the speed at which we can learn from our surroundings. But scientists have successfully trained computers to use artificial intelligence to learn from experience – and one day they will be smarter than their creators. Now scientists have admitted they are already baffled by the mechanical brains they have built, raising the prospect that we could lose control of them altogether. Computers are already performing incredible feats, like driving cars and predicting diseases, but their makers say they aren't entirely in control of their creations. This could have catastrophic consequences for civilisation, tech experts have warned.
The recent TPU paper by Google draws a clear conclusion – without accelerated computing, the scale-out of AI is simply not practical. Today's economy runs in the world's data centers, and data centers are changing dramatically. Not so long ago, they served up web pages, advertising and video content. Now, they recognize voices, detect images in video streams and connect us with information we need exactly when we need it. Increasingly, those capabilities are enabled by a form of artificial intelligence called deep learning.
In 2011 Google realized they had a problem. They were getting serious about deep learning networks with computational demands that strained their resources. Google calculated they would have to have twice as many data centers as they already had if people used their deep learning speech recognition models for voice search for just three minutes a day. They needed more powerful and efficient processing chips. What kind of chip did they need?
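A back-of-the-envelope sketch shows how an estimate like that is formed. Every number marked "assumed" below is hypothetical, chosen only to illustrate the arithmetic; the article gives no figures beyond "three minutes a day".

```python
# Hypothetical capacity estimate; every "assumed" value is an illustration,
# not a figure from Google.
users = 1_000_000_000        # assumed number of voice-search users
seconds_per_day = 3 * 60     # three minutes a day, per the article
ops_per_audio_second = 1e9   # assumed inference cost per second of audio

# Total daily inference work the feature would generate.
daily_ops = users * seconds_per_day * ops_per_audio_second

# If that rivals the existing fleet's spare capacity, the choice is either
# new data centers or, as Google chose, a far more efficient chip.
existing_spare_capacity = 1.8e20  # assumed spare ops/day across the fleet
print(daily_ops / existing_spare_capacity)
```

However rough, this kind of multiplication of users by usage by per-request cost is how "three minutes a day would double our data centers" becomes a concrete engineering constraint.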
[Figure: Speed measures for the TPU (blue), GPU (red) and CPU (gold).]