A revolution is taking place in the GPU software stack in the fields of analytics, machine learning, and deep learning, driven by NVIDIA's hardware innovation, which provides 100x more processing cores and 20x greater memory bandwidth than CPUs. However, systems and platforms are unable to harness these disruptive performance gains because they remain isolated from each other. The GPU Open Analytics Initiative (GOAI) and its first project, the GPU Data Frame (GDF), were created to allow seamless passing of data between processes. At this meetup, we'll explain how we have implemented an end-to-end machine learning pipeline powered by GOAI. We will show how GDFs break down the silos to enable interactive data exploration, model training, and model scoring that are lightning-fast by virtue of avoiding any serialization overhead.
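The core idea behind the GDF, avoiding serialization by letting every pipeline stage work on the same underlying buffer, can be sketched with nothing but the Python standard library. This is illustrative only: the real GDF shares GPU device memory between processes using an Apache Arrow-style columnar layout, and the stage functions below are hypothetical stand-ins for actual training and scoring code.

```python
# Minimal stdlib sketch of the zero-copy idea behind the GPU Data Frame (GDF).
# Instead of serializing data between pipeline stages, each stage receives a
# view of the same underlying buffer. (Hypothetical stand-in functions; the
# real GDF shares device memory between GPU processes.)
import array

def train_stage(buf):
    # Operates directly on the shared buffer: no copy, no serialization.
    return sum(buf) / len(buf)

def score_stage(buf, mean):
    # Scores against the model output, again reading the same buffer.
    return [x - mean for x in buf]

data = array.array("d", [1.0, 2.0, 3.0, 4.0])
view = memoryview(data)        # zero-copy view of the original data
mean = train_stage(view)       # "training" stage
residuals = score_stage(view, mean)  # "scoring" stage
```

Both stages touch the original bytes through `memoryview`, which is the CPU-side analogue of what GOAI enables across GPU processes.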
Quantum computers could give the machine learning algorithms at the heart of modern artificial intelligence a dramatic speed-up, but how far off are we? An international group of researchers has outlined the barriers that still need to be overcome. This year has seen a surge of interest in quantum computing, driven in part by Google's announcement that it will demonstrate "quantum supremacy" by the end of 2017. That means solving a problem beyond the capabilities of normal computers, which the company predicts will take 49 qubits, the quantum-computing equivalent of bits. As impressive as such a feat would be, the demonstration is likely to be on an esoteric problem that stacks the odds heavily in the quantum processor's favor, and getting quantum computers to carry out practically useful calculations will take a lot more work.
I don't know if you have ever seen one of the Orange Boxes from Canonical. These are really sleek machines. They contain 10 Intel NUCs, plus an 11th for management. They are used as a demonstration tool for big software stacks such as OpenStack, Hadoop, and, of course, Kubernetes. They are freely available from TranquilPC, so if you are an R&D team, or just interested in having a neat little cluster at home, I encourage you to have a look. However, despite their many qualities, they lack a critical piece of kit that deep learning geeks cherish: GPUs!