

Nvidia just unveiled a terrifying AI supercomputer

#artificialintelligence

Nvidia has unveiled several updates to its deep-learning computing platform, including an absurdly powerful GPU and supercomputer. At this year's GPU Technology Conference in San Jose, Nvidia CEO Jensen Huang unveiled the DGX-2, a new computer for researchers who are "pushing the outer limits of deep-learning research and computing" to train artificial intelligence. The computer, which will ship later this year, is the world's first system to sport a whopping two petaflops of performance. For some perspective: a MacBook Pro might manage around one teraflop, and a petaflop is one thousand teraflops.
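
To make the scale concrete, here is a quick back-of-the-envelope comparison using the article's own rough figures (the laptop number is the article's estimate, not a benchmark):

    # Rough comparison implied by the article's figures (illustrative only).
    dgx2_flops = 2e15       # DGX-2: two petaflops
    laptop_flops = 1e12     # ~one teraflop, the article's rough MacBook Pro estimate

    print(dgx2_flops / laptop_flops)  # 2000.0 -> roughly two thousand laptops' worth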


4 Strange New Ways to Compute

IEEE Spectrum Robotics Channel

With Moore's Law slowing, engineers have been taking a cold hard look at what will keep computing going when it's gone. Certainly artificial intelligence will play a role. But there are stranger things in the computing universe, and some of them got an airing at the IEEE International Conference on Rebooting Computing in November.


We Need Next Generation Algorithms To Harness The Power Of Today's AI Chips

#artificialintelligence

At this year's GTC conference, NVIDIA launched its latest and most advanced GPU, Volta. At the center of this chip is the Tensor Core, an artificial-intelligence accelerator that is poised to usher in the next phase of AI applications. However, our current AI algorithms do not fully utilize this accelerator, and for us to achieve another major breakthrough in AI, we need to change our software. Fully harnessing this computing resource will advance, and even create, AI applications that might otherwise not exist. For example, by exploiting it, AI algorithms could better understand and synthesize human speech.
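
As one hedged illustration of what "changing our software" can look like in practice (this is not a method from the article, and the model, data, and hyperparameters below are toy placeholders): mixed-precision training runs the heavy matrix multiplies in FP16, the format Tensor Core-style accelerators are built for, while keeping the master weights and loss scaling in FP32. A minimal PyTorch-style sketch, assuming a CUDA-capable GPU:

    # Illustrative sketch only: mixed-precision training with PyTorch's AMP utilities,
    # one common way to let matrix multiplies run on FP16 tensor-core hardware.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    scaler = torch.cuda.amp.GradScaler()        # scales the loss to avoid FP16 underflow

    inputs = torch.randn(64, 1024, device="cuda")          # dummy batch
    targets = torch.randint(0, 10, (64,), device="cuda")   # dummy labels

    for step in range(10):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():         # eligible ops (e.g. matmuls) run in FP16
            loss = loss_fn(model(inputs), targets)
        scaler.scale(loss).backward()           # backward pass on the scaled loss
        scaler.step(optimizer)                  # unscales gradients, then updates weights
        scaler.update()                         # adjusts the loss scale for the next step

The point is the software change, not the toy model: the numerics-sensitive parts stay in FP32 while the throughput-heavy parts are cast down to feed the accelerator.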


How Language Led To The Artificial Intelligence Revolution - ARC

#artificialintelligence

In 2013 I had a long interview with Peter Lee, corporate vice president of Microsoft Research, about advances in machine learning and neural networks and how language would be the focal point of artificial intelligence in the coming years. At the time, the notion of artificial intelligence and machine learning seemed like a blue-sky researcher's fantasy. Artificial intelligence was something coming down the road … but not soon. I wish I had taken the talk more seriously. Language is, and will continue to be, the most important tool for the advancement of artificial intelligence.


Beyond Chatbots: The Future of AI in Marketing and the Customer - Nick Reynolds

#artificialintelligence

The key marketing question to ask of AI is: does this application of artificial intelligence increase relevance and usefulness for the customer? Forty-six per cent of millennials with smartphones use voice recognition software today, and over 70% of voice recognition users are happy with the experience. Gartner estimates that by 2020, 40% of mobile interactions between people and their virtual personal assistants will be powered by the data gathered from users in cloud-based neural networks. How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and the social sciences to develop intelligent machines that are not only 'smart' but also socially responsible?


Is Artificial Intelligence Finally Coming into Its Own?

#artificialintelligence

When Ray Kurzweil met with Google CEO Larry Page last July, he wasn't looking for a job. A respected inventor who's become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own. It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. "I could try to give you some access to it," Page told Kurzweil.


A 'Brief' History of Neural Nets and Deep Learning, Part 4

#artificialintelligence

This is the fourth part of 'A Brief History of Neural Nets and Deep Learning'. In this part, we will get to the end of our story and see how deep learning emerged from the slump neural nets found themselves in by the late 90s, and the amazing state-of-the-art results it has achieved since. When you want a revolution, start with a conspiracy. With the ascent of Support Vector Machines and the perceived failure of backpropagation, the early 2000s were a dark time for neural net research. LeCun and Hinton have variously mentioned how, in this period, their papers or their students' papers were routinely rejected for publication because their subject was neural nets.


Why Go Long on Artificial Intelligence?

#artificialintelligence

For those out there who know me, it'll be no surprise to learn that I'm going long on the transformative power of artificial intelligence (AI). Since 2013, I've spent most of my energy studying, researching, investing (e.g. Mapillary, Numerai, Ravelin) and building AI communities (AI Summit 2015 and 2016, the LondonAI meetup), with a mission to accelerate its real-world applications. I am passionate about seeking out and bringing technology advancements to markets that can enable us to solve the high-value (and often complex) problems we face in business and society. Importantly, this includes ones that were previously intractable from either a technical or commercial standpoint.


The machines that learned to listen

#artificialintelligence

A toddler meanders unsteadily through the living room, pausing by a sleek black cylinder in the corner. "Alexa," he says in a high-pitched voice. The cylinder acknowledges the request, despite the muffled pronunciation, and the music starts. Alexa, Amazon's cloud-based speech-recognition software and the brain of its black cylindrical Echo loudspeaker, has been a big hit around the world – except with the youngest users, who simply take it for granted. Children will grow up alongside it, just as Alexa will evolve, as the AI powering it learns to answer more and more questions and – perhaps – one day even converses freely with people.


Demystifying artificial intelligence

#artificialintelligence

In the last several years, interest in artificial intelligence (AI) has surged. Venture capital investments in companies developing and commercializing AI-related products and technology have exceeded $2 billion since 2011.[1] Technology companies have invested billions more acquiring AI startups. Press coverage of the topic has been breathless, fueled by the huge investments and by pundits asserting that computers are starting to kill jobs, will soon be smarter than people, and could threaten the survival of humankind. IBM has committed $1 billion to commercializing Watson, its cognitive computing platform.[2] Google has made major investments in AI in recent years, including acquiring eight robotics companies and a machine-learning company.[3] Facebook hired AI luminary Yann LeCun to create an AI laboratory with the goal of bringing major advances in the field.[4] Amid all the hype, there is significant commercial activity underway in the area of AI that is affecting or will likely soon affect organizations in every sector.