Artificial intelligence is not fiction. It has been our reality for a long time; its origins date back to the 1950s. How has it changed since then? Can computers teach themselves? What does a smartphone have to do with the Moon landing, and could machines become smarter than humans?
Futurist Ray Kurzweil predicted in 1990 that a computer would beat a human world chess champion by 1998. In 1997, IBM's Deep Blue did exactly that. Since then, artificial intelligence (AI) has continued to advance rapidly, making now a good time to brush up on what is widely considered the next wave of highly disruptive technology. AI comprises many subdisciplines, such as natural language processing, computer vision, and knowledge representation and reasoning. The technology is making its way into a broad range of industries: marketing, with behavioural targeting; healthcare, with accurate early detection of complex diseases; and infrastructure, with smarter urban planning.
The key factors driving GPU market growth include rising adoption of GPUs in the healthcare sector, a rapidly evolving PC gaming landscape spurred by interactive virtual reality (VR) games, and the growing popularity of GPUs for machine learning and neural network training. In machine learning and Big Data analytics, GPUs are far more efficient than CPUs because they excel at massively parallel processing. Offering thousands of computational cores and 10-100x the application throughput of CPUs, GPUs are preferred by scientists and data analysts who need to process data at scale. From interactive model-based image registration and MRI connectivity mapping to medical imaging and image reconstruction, the healthcare industry is increasingly leveraging the computational power of GPUs. These technological advances have created a persistent expectation that healthcare providers deliver advanced visualization capabilities.
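To make the CPU-versus-GPU contrast concrete, here is a minimal sketch, assuming PyTorch and a CUDA-capable GPU are available, that times the same dense matrix multiplication on both devices. The matrix size and timing harness are illustrative choices, not figures from the market analysis; the measured speedup will vary with hardware.

```python
# Minimal sketch: time one dense matrix multiplication on CPU, then on GPU.
# Assumes PyTorch is installed; falls back to CPU-only output otherwise.
import time
import torch

N = 4096  # illustrative size, not from the source

a_cpu = torch.randn(N, N)
b_cpu = torch.randn(N, N)

start = time.perf_counter()
_ = torch.matmul(a_cpu, b_cpu)
cpu_time = time.perf_counter() - start

if torch.cuda.is_available():
    a_gpu = a_cpu.to("cuda")
    b_gpu = b_cpu.to("cuda")
    torch.cuda.synchronize()  # finish transfers before starting the clock
    start = time.perf_counter()
    _ = torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()  # GPU kernels run asynchronously; wait for completion
    gpu_time = time.perf_counter() - start
    print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s  "
          f"speedup: {cpu_time / gpu_time:.0f}x")
else:
    print(f"CPU: {cpu_time:.3f}s (no CUDA device found)")
```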
It's very easy to dismiss Google I/O as just an opportunity to show off new toys and throw a big party, and given LCD Soundsystem's appearance at this year's closing concert, the impression is understandable. But I/O 2017 was when Google's efforts in big data, search, and device ubiquity bore fruit, as the company transitioned toward a new focus on AI and machine learning. Google I/O 2016 is etched in my mind for two reasons. First, it was deadly hot, dampening the fun of an outdoor conference with sweat.
"Game developers don't like me," said Sabina Hemmi. "Before I came around, there was no insight into how balanced a game was." As the co-founder of DotaBuff, a site that provides statistics about Dota 2 gameplay, she can show with hard data if certain characters are stronger or weaker, and in what areas. As access to such data is becoming more commonplace, she says, players are starting to expect it, and are reacting to it. "If I'm focusing hours of my time into the game, I want to focus on playing the characters that are actually competitive," she said. Just as it did with traditional sports, the collection, analysis, and use of all kinds of data is beginning to change the way that competitive games are played and understood. But while many scenes like Dota 2, League of Legends, and Overwatch are coming around to the value of statistics, the breadth and sophistication of statistical coverage in general is still in its beginning stages. The numbers that Sabina Hemmi's team have pulled down from Dota 2 over the years are, in a word, expansive. You can view stats from your own match history stretching back years.