Results


IBM Designs a "Performance Beast" for AI

#artificialintelligence

Companies running AI applications often need as much computing muscle as researchers who use supercomputers do. IBM's latest system is aimed at both audiences. The company last week introduced its first server powered by the new Power9 processor, designed for AI and high-performance computing. The powerful technologies inside have already attracted the likes of Google and the US Department of Energy as customers. The new IBM Power System AC922 is equipped with two Power9 CPUs and two to six NVIDIA Tesla V100 GPUs.


Nvidia launches Titan V desktop GPU to accelerate AI computation

#artificialintelligence

Nvidia launched a new desktop GPU today that's designed to bring massive amounts of power to people working on machine learning applications. The new Titan V card will provide customers with an Nvidia Volta chip that they can plug into a desktop computer. According to a press release, the Titan V promises increased performance over its predecessor, the Pascal-based Titan X, while maintaining the same power requirements. The Titan V sports 110 teraflops of raw computing capability, nine times that of its predecessor. It's a chip meant for machine learning researchers, developers, and data scientists who want to build and test machine learning systems on desktop computers.


Getting started with TensorFlow

@machinelearnbot

In the context of machine learning, a tensor refers to the multidimensional arrays used in the mathematical models that describe neural networks. In other words, a tensor is usually a higher-dimensional generalization of a matrix or a vector. Through a simple notation that uses a rank to indicate the number of dimensions, tensors allow the representation of complex n-dimensional vectors and hyper-shapes as n-dimensional arrays. Tensors have two properties: a datatype and a shape. TensorFlow is an open source deep learning framework that was released in late 2015 under the Apache 2.0 license.
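To make those two properties concrete, here is a minimal sketch using the TensorFlow Python API, where every tensor exposes a shape and a dtype:

```python
import tensorflow as tf

# Tensors of increasing rank: scalar (rank 0), vector (rank 1), matrix (rank 2).
scalar = tf.constant(3.0)                 # shape (), dtype float32
vector = tf.constant([1.0, 2.0, 3.0])     # shape (3,)
matrix = tf.constant([[1, 2], [3, 4]])    # shape (2, 2), dtype int32

# Every tensor carries its two defining properties.
print(matrix.shape)   # (2, 2)
print(matrix.dtype)   # <dtype: 'int32'>
```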


Sequoia Backs Graphcore as the Future of Artificial Intelligence Processors

#artificialintelligence

Graphcore has today announced a $50 million Series C funding round led by Sequoia Capital as the machine intelligence company prepares to ship its first Intelligence Processing Unit (IPU) products to early access customers at the start of 2018. The Series C round enables Graphcore to significantly accelerate growth to meet the expected global demand for its machine intelligence processor. The funding will be dedicated to scaling up production, building a community of developers around the Poplar software platform, driving Graphcore's extended product roadmap, and investing in its Palo Alto-based US team to help support customers. Nigel Toon, CEO of Graphcore, said: "Efficient AI processing power is rapidly becoming the most sought-after resource in the technological world. We believe our IPU technology will become the worldwide standard for machine intelligence compute."


Phones don't need an NPU to benefit from machine learning

#artificialintelligence

Neural networks and machine learning are some of this year's biggest buzzwords in the world of smartphone processors. Huawei's HiSilicon Kirin 970, Apple's A11 Bionic, and the image processing unit (IPU) inside the Google Pixel 2 all boast dedicated hardware support for this emerging technology. The trend so far has suggested that machine learning requires a dedicated piece of hardware, like a Neural Processing Unit (NPU), IPU, or "Neural Engine", as Apple would call it. However, the reality is these are all just fancy words for custom digital signal processors (DSPs) -- that is, hardware specialized in performing complex mathematical functions quickly. Today's latest custom silicon has been specifically optimized around machine learning and neural network operations, the most common of which are dot products and matrix multiplies.
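As an illustration (not from the article), a fully connected neural-network layer boils down to exactly the operations named above, a matrix multiply plus a vector addition, sketched here in NumPy:

```python
import numpy as np

# One dense layer: the bulk of the work is a single matrix multiply,
# which is precisely the workload NPU/DSP silicon is built to accelerate.
x = np.random.rand(1, 128)    # one input sample with 128 features
W = np.random.rand(128, 64)   # weight matrix of a 64-unit layer
b = np.random.rand(64)        # bias vector

y = x @ W + b                 # matrix multiply plus bias
print(y.shape)                # (1, 64)
```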


TensorFlow (GPU) Setup for Developers – Michael Ramos – Medium

@machinelearnbot

This probably isn't for professional data scientists or anyone creating actual models -- I imagine their setups are a bit more verbose. This blog post will cover my manual process for setting up TensorFlow with GPU support. I've spent hours reading posts and going through walkthroughs… and learned a ton from them… so I pieced together this installation guide, which I've been using routinely since (should have a CloudFormation script soon). This installation guide is for simple/default configurations and settings, chosen specifically for what we want to do: run intense computations on the GPU.
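One common smoke test after an install like this is to ask TensorFlow which devices it can see; a working GPU setup lists a '/device:GPU:0' entry alongside the CPU. This sketch assumes a TensorFlow 1.x-era install with CUDA already configured:

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# Print every device TensorFlow detects; a successful GPU install
# shows a GPU entry in addition to the CPU.
print(device_lib.list_local_devices())

# Alternatively, run a trivial op and let TensorFlow log device placement.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(tf.constant("GPU setup check")))
```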


Kinect, Xbox and Windows 10: Why accessibility matters

ZDNet

Kinect is either Microsoft's biggest success or biggest failure, depending on how you look at it. Kinect brought voice control to the living room long before Alexa or Google Home. It's also just been cancelled -- at least as a separate product. The problem is perhaps that, for gamers and maybe developers, Kinect games never felt as much like the Star Trek Holodeck as we thought they would (not least because living rooms aren't that big outside a few places in the US), and somehow there was never quite the momentum behind it. So what does this mean for other novel ways of interacting with our devices?


Preliminary IPU benchmarks - Providing previously unseen performance for a range of machine learning applications

#artificialintelligence

When we announced our Series A funding back in October 2016, we made three statements about the performance of the IPU:

- it improves performance by 10x to 100x compared with other AI accelerators
- it excels at both training and inference
- it lets machine learning developers innovate with models and algorithms that just don't work on even the best alternative architectures

Since then we have been inundated with requests for more detail about our claims. Today we are delighted to share three preliminary benchmarks to corroborate these early goals. We understood from the beginning that a full solution requires more than just a new chip design. The software infrastructure needs to be comprehensive and easy to use to allow machine learning developers to quickly adapt the hardware to their needs. As a result, we have been focused on bringing up a full software stack early to ensure that the IPU can be used for real applications from the outset.


AWS Announces Availability of P3 Instances for Amazon EC2

#artificialintelligence

The first instances to include NVIDIA Tesla V100 GPUs, P3 instances are the most powerful GPU instances available in the cloud. P3 instances allow customers to build and deploy advanced applications with up to 14 times better performance than previous-generation Amazon EC2 GPU compute instances, and reduce the training time of machine learning applications from days to hours. With up to eight NVIDIA Tesla V100 GPUs, P3 instances provide up to one petaflop of mixed-precision, 125 teraflops of single-precision, and 62 teraflops of double-precision floating point performance, as well as a 300 GB/s second-generation NVIDIA NVLink interconnect that enables high-speed, low-latency GPU-to-GPU communication. P3 instances also feature up to 64 vCPUs based on custom Intel Xeon E5 (Broadwell) processors, 488 GB of DRAM, and 25 Gbps of dedicated aggregate network bandwidth using the Elastic Network Adapter (ENA). "When we launched our P2 instances last year, we couldn't believe how quickly people adopted them," said Matt Garman, Vice President of Amazon EC2.
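For illustration only, launching one of these instances programmatically might look like the following boto3 sketch; the AMI ID is a placeholder, and the instance type shown is the smallest P3 size:

```python
import boto3

# Sketch: launch a single P3 instance (p3.2xlarge carries one Tesla V100;
# p3.16xlarge carries the full eight GPUs).
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",     # placeholder: substitute a Deep Learning AMI
    InstanceType="p3.2xlarge",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```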


Sparkier, faster, more: Graph databases, and Neo4j, are moving on

ZDNet

A lot has happened in graph land in the last six months. A quick recap: a new player (TigerGraph) has emerged, Microsoft is ramping up its graph play with graph support in SQL Server and CosmosDB, and the number two graph database, OrientDB, has been acquired. The number one graph database, Neo4j, is kicking off its GraphConnect event today and announcing a new version, 3.3. This version brings extended support for querying in Spark, ETL, and analytics, as well as improved performance. We discuss these developments and what they mean for this space with Neo4j's CEO, Emil Eifrem.