Results


Moore's Law may be out of steam, but the power of artificial intelligence is accelerating

#artificialintelligence

A paper from Google's researchers says they simultaneously used as many as 800 of the powerful and expensive graphics processors that have been crucial to the recent uptick in the power of machine learning (see "10 Breakthrough Technologies 2013: Deep Learning"). Feeding data into deep-learning software to train it for a particular task is much more resource-intensive than running the system afterward, but even running it still takes significant oomph. Intel has slowed the pace at which it introduces generations of new chips with smaller, denser transistors (see "Moore's Law Is Dead. Now What?"). That slowdown also motivates the startups--and giants such as Google--that are creating new chips customized to power machine learning (see "Google Reveals a Powerful New AI Chip and Supercomputer").
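
To make the training-versus-inference point concrete, here is a minimal NumPy sketch (not from the Google paper) contrasting the cost of a training run (many full passes over the data, each computing gradients) with a single inference pass for a toy linear model; the sizes, learning rate, and epoch count are arbitrary assumptions.

```python
import numpy as np

# Toy linear model: illustrates why training (many full passes over the data,
# each with a forward and a gradient computation) costs far more compute than
# a single inference (one forward pass).
rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 256))
w_true = rng.standard_normal(256)
y = X @ w_true + 0.1 * rng.standard_normal(10_000)
w = np.zeros(256)

# Training: 50 epochs of forward passes, gradients, and parameter updates.
for epoch in range(50):
    pred = X @ w                        # forward pass over all 10,000 rows
    grad = X.T @ (pred - y) / len(y)    # gradient of the mean squared error
    w -= 0.1 * grad                     # parameter update

# Inference: a single forward pass on one new example.
x_new = rng.standard_normal(256)
print("prediction:", float(x_new @ w))
```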


Facebook's Caffe2 AI tools come to iPhone, Android, and Raspberry Pi

PCWorld

Facebook's new open-source Caffe2 deep-learning framework brings machine intelligence to mobile devices such as the iPhone and Android phones, as well as low-power computers like the Raspberry Pi. Caffe2 can be used to program artificial-intelligence features into smartphones and tablets, allowing them to recognize images, video, text, and speech and to be more situationally aware. It's important to note that Caffe2 is not an AI program itself, but a tool that allows AI to be built into smartphone apps. It takes just a few lines of code to write learning models, which can then be bundled into apps. The release of Caffe2 is significant.
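
As a rough illustration of the "few lines of code" claim, the sketch below defines and runs a tiny model with Caffe2's Python API (model_helper, brew, and workspace). The layer sizes and blob names are arbitrary choices for this example, and it is a minimal desktop sketch rather than a recipe for mobile deployment.

```python
import numpy as np
from caffe2.python import brew, model_helper, workspace

# A tiny classifier: one fully connected layer followed by a softmax.
model = model_helper.ModelHelper(name="tiny_classifier")
fc = brew.fc(model, "data", "fc1", dim_in=784, dim_out=10)
brew.softmax(model, fc, "softmax")

# Feed a dummy input, initialize the parameters, and run one forward pass.
workspace.FeedBlob("data", np.random.rand(1, 784).astype(np.float32))
workspace.RunNetOnce(model.param_init_net)   # initialize weights and biases
workspace.RunNetOnce(model.net)              # one forward pass
print(workspace.FetchBlob("softmax").shape)  # (1, 10)
```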


IBM reveals 'neurosynaptic' chip that can replicate neurons

AITopics Original Links

It is the nearest thing to a human brain in silicon form. Lawrence Livermore National Laboratory (LLNL) and IBM have revealed a new deep learning supercomputer they say could boost AI systems. Based on a 'neurosynaptic' computer chip called IBM TrueNorth, the system can replicate the equivalent of 16 million neurons and 4 billion synapses, yet it consumes just 2.5 watts, the energy equivalent of a hearing-aid battery. A single TrueNorth processor consists of 5.4 billion transistors wired together to create an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses.
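
A quick arithmetic check ties the per-chip and system-level figures together; the 16-chip count below is an inference from the numbers quoted above, not a figure stated in the article.

```python
# Per-chip figures quoted above, scaled to the 16-million-neuron system.
# The 16-chip count is inferred from the arithmetic, not taken from the article.
neurons_per_chip = 1_000_000
synapses_per_chip = 256_000_000
chips = 16

print(chips * neurons_per_chip)   # 16,000,000 neurons
print(chips * synapses_per_chip)  # 4,096,000,000, i.e. roughly 4 billion synapses
```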


This Tiny Supercomputer Is the New Wave of Artificial Intelligence (AI)

#artificialintelligence

NVIDIA Corporation (NASDAQ:NVDA) has expanded its GPU business from powering gaming computers to powering advanced machine-learning systems. The NVIDIA DGX-1 is billed as the world's first commercially available supercomputer designed specifically for deep learning. NVIDIA claims the DGX-1 delivers the computing power of 250 two-socket servers in a box. The company states on its website that its NVLink implementation delivers a massive increase in GPU memory capacity, giving you a system that can learn, see, and simulate our world--a world with an infinite appetite for computing. NVIDIA also claims the DGX-1 can be trained for tasks like image recognition and will perform significantly faster than other servers.


How blockchain can create the world's biggest supercomputer

#artificialintelligence

Even as our desktop computers, laptops, and mobile devices stand idle for a huge portion of the day, the need for computing resources is growing at a fast pace. Large IoT ecosystems, machine learning and deep learning algorithms, and other sophisticated solutions being deployed in every domain and industry are raising the demand for stronger cloud servers and more bandwidth to address the needs of enterprises and businesses. So how can we make more economical and efficient use of all the computing power that's going to waste? Blockchain, the distributed ledger that's gaining traction across various domains, might have the answer: a platform that enables participants to lend and borrow computing resources -- and make money in the process. "There is a growing demand for computing power from industries and scientific communities to run large applications and process huge volumes of data," says Gilles Fedak, co-founder of iEx.ec, a distributed cloud computing platform.
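
The lend-and-borrow idea can be pictured as an order book that matches idle machines offering compute against buyers requesting it. The sketch below is a purely hypothetical illustration of that matching step; the Offer and Request types and the pricing rule are invented for this example and do not reflect iEx.ec's actual protocol or any on-chain mechanics.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical compute marketplace: providers list idle machines, buyers post
# jobs, and the cheapest offer that covers the request wins. The types and the
# matching rule are invented for illustration; they are not iEx.ec's protocol.

@dataclass
class Offer:
    provider: str
    cpu_hours: float
    price_per_hour: float

@dataclass
class Request:
    buyer: str
    cpu_hours: float

def match(request: Request, offers: List[Offer]) -> Optional[Offer]:
    """Pick the cheapest offer with enough capacity for the request."""
    viable = [o for o in offers if o.cpu_hours >= request.cpu_hours]
    return min(viable, key=lambda o: o.price_per_hour, default=None)

offers = [Offer("alice-laptop", 8, 0.05), Offer("bob-desktop", 24, 0.03)]
deal = match(Request("ml-startup", cpu_hours=10), offers)
print(deal)  # Offer(provider='bob-desktop', cpu_hours=24, price_per_hour=0.03)
```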


The Pint-Sized Supercomputer That Companies Are Scrambling to Get

MIT Technology Review

To companies grappling with complex data projects powered by artificial intelligence, a system that Nvidia calls an "AI supercomputer in a box" is a welcome development. Early customers of Nvidia's DGX-1, which combines machine-learning software with eight of the chip maker's highest-end graphics processing units (GPUs), say the system lets them train their analytical models faster, enables greater experimentation, and could facilitate breakthroughs in science, health care, and financial services. Data scientists have been leveraging GPUs to accelerate deep learning--an AI technique that mimics the way human brains process data--since 2012, but many say that current computing systems limit their work. Faster computers such as the DGX-1 promise to make deep-learning algorithms more powerful and let data scientists run deep-learning models that previously weren't possible. The DGX-1 isn't a magical solution for every company.


Microsoft, Cray claim deep learning breakthrough on supercomputers

ZDNet

A team of researchers from Microsoft, Cray, and the Swiss National Supercomputing Centre (CSCS) has been working on a project to speed up the use of deep learning algorithms on supercomputers. The team has scaled the Microsoft Cognitive Toolkit -- an open-source suite that trains deep learning algorithms -- to more than 1,000 Nvidia Tesla P100 GPU accelerators on the Swiss centre's Cray XC50 supercomputer, which is nicknamed Piz Daint. The project could allow researchers to run larger, more complex, and multi-layered deep learning workloads at scale on supercomputers, Cray said.
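
Scaling a training run across a thousand GPUs typically relies on data parallelism: each accelerator computes gradients on its own shard of the data, and the gradients are averaged (an all-reduce) before the shared weights are updated. The NumPy sketch below illustrates that pattern in miniature; it is a generic illustration with made-up sizes, not the Cognitive Toolkit's API or the Piz Daint configuration.

```python
import numpy as np

# Generic sketch of synchronous data-parallel SGD: each "worker" (standing in
# for one GPU) computes a gradient on its shard of the data; the gradients are
# averaged (the all-reduce step) and applied to the shared weights.
rng = np.random.default_rng(0)
n_workers, dim = 4, 32
X = rng.standard_normal((4 * 64, dim))
y = X @ rng.standard_normal(dim)              # synthetic linear targets

w = np.zeros(dim)
shards = np.array_split(np.arange(len(X)), n_workers)

for step in range(200):
    grads = []
    for idx in shards:                        # in reality: one GPU per shard
        pred = X[idx] @ w
        grads.append(X[idx].T @ (pred - y[idx]) / len(idx))
    w -= 0.05 * np.mean(grads, axis=0)        # all-reduce: average, then update

print("training error:", float(np.mean((X @ w - y) ** 2)))
```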


Nvidia CEO's "Hyper-Moore's Law" Vision for Future Supercomputers

#artificialintelligence

Over the last year in particular, we have documented the merger between high performance computing and deep learning and their shared hardware and software ties. The coming year promises far more on both horizons, and while GPU maker Nvidia might not have seen it coming to this extent when it outfitted the former top-ranked "Titan" supercomputer with its GPUs, the company sensed a convergence on the horizon when the first hyperscale deep learning shops were deploying CUDA and GPUs to train neural networks. All of this portends an exciting year ahead, and for once the mighty CPU is not the subject of the keenest interest. Instead, the action is unfolding around the CPU's role alongside accelerators: everything from Intel's approach to integrating the Nervana deep learning chips with Xeons, to Pascal and future Volta GPUs, and other novel architectures that have made waves. While Moore's Law for traditional CPU-based computing is on the decline, Jen-Hsun Huang, CEO of Nvidia, told The Next Platform at SC16 that we are just at the start of a new Moore's Law-like curve of innovation--one driven by traditional CPUs with accelerator kickers, mixed-precision capabilities, new distributed frameworks for managing both AI and supercomputing applications, and an unprecedented level of data for training.
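
One ingredient Huang cites, mixed precision, means doing the bulk of the arithmetic in low-precision FP16 while keeping accumulators or master weight copies in FP32 so that small contributions are not lost to rounding. The NumPy sketch below illustrates why that matters; it is a conceptual toy, not a description of Pascal's or Volta's actual FP16 pipeline.

```python
import numpy as np

# Conceptual illustration of mixed precision: sum many small FP16 values,
# once with an FP16 accumulator and once with an FP32 accumulator.
values = np.full(100_000, 1e-4, dtype=np.float16)

acc16 = np.float16(0.0)
for v in values:          # pure FP16 accumulation stalls once the running
    acc16 = np.float16(acc16 + v)  # sum dwarfs each new contribution

acc32 = np.float32(0.0)
for v in values:          # FP16 inputs, FP32 accumulator keeps the result sane
    acc32 += np.float32(v)

print("fp16 accumulator:", acc16)   # far below the true sum of ~10.0
print("fp32 accumulator:", acc32)   # ~10.0
```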