Results


Nvidia announces $2,999 Titan V, 'the most powerful PC GPU ever created'

@machinelearnbot

It seems like Nvidia announces the fastest GPU in history multiple times a year, and that's exactly what's happened again today; the Titan V is "the most powerful PC GPU ever created," in Nvidia's words. It represents a more significant leap than most products that have made that claim, however, as it's the first consumer-grade GPU based on Nvidia's new Volta architecture. That said, a liberal definition of the word "consumer" is in order here -- the Titan V sells for $2,999 and is aimed at AI and scientific simulation workloads. Nvidia claims up to 110 teraflops of performance from its 21.1 billion transistors, with 12GB of HBM2 memory, 5,120 CUDA cores, and 640 "tensor cores" that are said to offer up to 9 times the deep-learning performance of its predecessor. It also comes in gold and black, which looks pretty cool.
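The "tensor cores" are what the deep-learning speedup claim rests on: they accelerate half-precision matrix multiplies. As a rough illustration (not from the article), here is a minimal PyTorch sketch that checks which GPU is visible and runs an FP16 matmul of the kind Volta's tensor cores accelerate; PyTorch and a CUDA-capable driver are assumed.

```python
# Illustrative only: query the visible GPU and run an FP16 matmul.
# On Volta-class parts (e.g. Titan V), half-precision GEMMs with
# suitable dimensions are routed through the tensor cores.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 2**30:.1f} GiB, "
          f"{props.multi_processor_count} SMs")

    # FP16 matrix multiply; dimensions that are multiples of 8 let cuBLAS
    # dispatch the GEMM to tensor cores on Volta and newer GPUs.
    a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    c = a @ b
    torch.cuda.synchronize()
    print("FP16 matmul result shape:", tuple(c.shape))
else:
    print("No CUDA device visible")
```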


IBM Designs a "Performance Beast" for AI

#artificialintelligence

Companies running AI applications often need as much computing muscle as researchers who use supercomputers do. IBM's latest system is aimed at both audiences. The company last week introduced its first server powered by the new Power9 processor designed for AI and high-performance computing. The powerful technologies inside have already attracted the likes of Google and the US Department of Energy as customers. The new IBM Power System AC922 is equipped with two Power9 CPUs and from two to six NVIDIA Tesla V100 GPUs.


How Nvidia Leapfrogged the AI Chip Market

#artificialintelligence

Nvidia CEO Jensen Huang showed up at a gathering of artificial intelligence researchers in Long Beach, Calif., with a pair of surprises. One was an orchestral piece inspired by music from the Star Wars movies, but composed by an AI program from Belgian startup AIVA that--of course--relies on Nvidia chips. The music went over big with the crowd of AI geeks attending the Neural Information Processing Systems Conference, known as NIPS, including some giants in the field like Nicholas Pinto, head of deep learning at Apple, and Yann LeCun, director of AI Research at Facebook. LeCun was quoted as saying the Star Wars bit was "a nice surprise." Huang's other surprise was a bit more practical, and showed just how competitive the AI chip market niche has become.


Nvidia launches Titan V desktop GPU to accelerate AI computation

#artificialintelligence

Nvidia launched a new desktop GPU today that's designed to bring massive amounts of power to people who are working on machine learning applications. The new Titan V card will provide customers with an Nvidia Volta chip that they can plug into a desktop computer. According to a press release, the Titan V promises increased performance over its predecessor, the Pascal-based Titan X, while maintaining the same power requirements. The Titan V sports 110 teraflops of raw computing capability, which is 9X that of its predecessor. It's a chip meant for machine learning researchers, developers, and data scientists who want to be able to build and test machine learning systems on desktop computers.


For HPC and Deep Learning, GPUs are here to stay - insideHPC

@machinelearnbot

In this special guest feature from Scientific Computing World, David Yip, HPC and Storage Business Development at OCF, provides his take on the place of GPU technology in HPC. There was an interesting story published earlier this week in which NVIDIA's founder and CEO, Jensen Huang, said: 'As advanced parallel-instruction architectures for CPU can be barely worked out by designers, GPUs will soon replace CPUs'. There are only so many processing cores you can fit on a single CPU chip. There are optimized applications that take advantage of multiple cores, but CPUs are typically used for sequential, serial processing (although Intel is doing an excellent job of adding more and more cores to its CPUs and getting developers to program multicore systems). By contrast, a GPU has a massively parallel architecture consisting of many thousands of smaller, more efficient cores designed to handle multiple tasks simultaneously.
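To make the contrast concrete, here is a small timing sketch (not from the article) that runs the same dense matrix multiply on the CPU and on a GPU via PyTorch; the library and a CUDA device are assumptions, and absolute numbers will vary widely with hardware.

```python
# Illustrative CPU-vs-GPU comparison: one dense matmul run on a handful of
# CPU cores and then on the GPU's thousands of cores.
import time
import torch

n = 8192
a_cpu = torch.randn(n, n)
b_cpu = torch.randn(n, n)

t0 = time.time()
_ = a_cpu @ b_cpu
cpu_s = time.time() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a_cpu.cuda(), b_cpu.cuda()
    torch.cuda.synchronize()          # make sure the transfer has finished
    t0 = time.time()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()          # wait for the kernel before timing
    gpu_s = time.time() - t0
    print(f"CPU: {cpu_s:.2f}s  GPU: {gpu_s:.2f}s")
else:
    print(f"CPU: {cpu_s:.2f}s (no GPU available for comparison)")
```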


At least 16 companies developing Deep Learning chips NextBigFuture.com

@machinelearnbot

There are many established and startup companies developing deep learning chips. Google and Wave Computing have working silicon and are conducting customer trials. Chinese AI chip startup Cambricon Technologies has received $100 million in funding; it aims to have one billion smart devices using its AI processor and to own 30% of China's high-performance AI chip market within three years. Huawei estimates Cambricon chips are six times faster than a GPU for deep-learning applications like training algorithms to identify images.


With IBM POWER9, we're all riding the AI wave - IBM Systems Blog: In the Making

#artificialintelligence

There's a big connection between my love for water sports and hardware design -- both involve observing waves and planning several moves ahead. Four years ago, when we started sketching the POWER9 chip from scratch, we saw an upsurge of modern workloads driven by artificial intelligence and massive data sets. We are now ready to ride this new tide of computing with POWER9. It is a transformational architecture and an evolutionary shift from the archaic ways of computing promoted by x86. POWER9 is loaded with industry-leading new technologies designed for AI to thrive.


New – Amazon EC2 Instances with Up to 8 NVIDIA Tesla V100 GPUs (P3) Amazon Web Services

#artificialintelligence

Driven by customer demand and made possible by ongoing advances in the state of the art, we've come a long way since the original m1.small instance that we launched in 2006, with instances that emphasize compute power, burstable performance, memory size, local storage, and accelerated computing. Today we are making the next generation of GPU-powered EC2 instances available in four AWS regions. Powered by up to eight NVIDIA Tesla V100 GPUs, the P3 instances are designed to handle compute-intensive machine learning, deep learning, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, and genomics workloads. P3 instances use customized Intel Xeon E5-2686v4 processors running at up to 2.7 GHz. Each of the NVIDIA GPUs is packed with 5,120 CUDA cores and another 640 Tensor cores and can deliver up to 125 TFLOPS of mixed-precision floating point, 15.7 TFLOPS of single-precision floating point, and 7.8 TFLOPS of double-precision floating point.
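For readers who want to try the new instances, a minimal boto3 sketch for launching a single-GPU p3.2xlarge follows; this is illustrative rather than official AWS sample code, and the AMI ID, key pair name, and region are placeholders to replace with your own.

```python
# Minimal sketch: launch one p3.2xlarge (1x Tesla V100) with boto3.
# Assumes AWS credentials are configured and the account has P3 capacity
# available in the chosen region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",  # placeholder: a Deep Learning AMI in your region
    InstanceType="p3.2xlarge",        # p3.8xlarge / p3.16xlarge scale to 4 / 8 GPUs
    KeyName="my-key-pair",            # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```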


GE Healthcare turns to Nvidia for AI boost in medical imaging

ZDNet

GE Healthcare is set to speed up the time taken to process medical images, thanks to a pair of partnerships announced on Sunday. The global giant will team up with Nvidia to update its 500,000 medical imaging devices worldwide with Revolution Frontier CT, which is claimed to be two times faster than the previous generation image processor. GE said the speedier Revolution Frontier would be better at liver lesion detection and kidney lesion characterisation, and has the potential to reduce the number of follow-up appointments and the number of non-interpretable scans. GE Healthcare is also making use of Nvidia in its new analytics platform, with sections of it to be placed in the Nvidia GPU Cloud. An average hospital generates 50 petabytes of data annually, GE said, but only 3 percent of that data is analysed, tagged, or made actionable.


Vertex.AI - Announcing PlaidML: Open Source Deep Learning for Every Platform

@machinelearnbot

We're pleased to announce the next step towards deep learning for every device and platform. Today Vertex.AI is releasing PlaidML, our open source portable deep learning engine. Our mission is to make deep learning accessible to every person on every device, and we're building PlaidML to help make that a reality. We're starting by supporting the most popular hardware and software already in the hands of developers, researchers, and students. The initial version of PlaidML runs on most existing PC hardware with OpenCL-capable GPUs from NVIDIA, AMD, or Intel.
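As a rough sketch of what deep learning on OpenCL GPUs looks like in practice, the snippet below swaps PlaidML in as the Keras backend and trains a tiny model; it assumes the plaidml-keras bridge is installed (e.g. via pip) and a device has been selected with plaidml-setup, and the model itself is purely illustrative.

```python
# Illustrative only: use PlaidML as the Keras backend on an OpenCL GPU.
import plaidml.keras
plaidml.keras.install_backend()   # route Keras ops through PlaidML instead of TensorFlow

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

# Tiny model just to confirm the backend compiles and runs on the OpenCL device.
model = Sequential([
    Dense(64, activation="relu", input_shape=(32,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

x = np.random.rand(256, 32).astype("float32")
y = to_categorical(np.random.randint(10, size=256), 10)
model.fit(x, y, epochs=1, batch_size=64)
```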