Results


HPE launches upgraded high-performance systems for AI applications

ZDNet

Hewlett Packard Enterprise (HPE) has announced the launch of upgraded high-density compute and storage systems to encourage adoption of high-performance computing (HPC) and artificial intelligence (AI) among enterprises. The HPE Apollo 2000 Gen10 is a multi-server platform for enterprises that want to support HPC and deep learning applications but have limited datacentre space. The platform supports Nvidia Tesla V100 GPU accelerators to enable deep learning training and inference for use cases such as real-time video analytics for public safety. Enterprises deploying the HPE Apollo 2000 Gen10 system can start small with a single 2U shared-infrastructure chassis and scale out to 80 HPE ProLiant Gen10 servers in a 42U rack. "HPC and AI play an increasingly important role in digital transformation, enabling organisations to leverage modeling, simulation, and deep learning to drive business innovation in areas like financial trading, computer-aided design and engineering, video surveillance, and text analytics," HPE said in an announcement.
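
The scaling claim is easy to sanity-check (a back-of-the-envelope sketch, assuming up to four ProLiant server nodes per 2U Apollo chassis; that node count is an assumption, not stated above):

```python
# Rack-density arithmetic behind the "80 servers in a 42U rack" claim.
# ASSUMPTION: each 2U Apollo chassis holds up to 4 ProLiant server nodes.
NODES_PER_CHASSIS = 4
CHASSIS_HEIGHT_U = 2
TARGET_NODES = 80

chassis_needed = TARGET_NODES // NODES_PER_CHASSIS   # 20 chassis
rack_units_used = chassis_needed * CHASSIS_HEIGHT_U  # 40U

print(f"{chassis_needed} chassis occupy {rack_units_used}U of a 42U rack")
# -> 20 chassis occupy 40U, leaving 2U for top-of-rack switching
```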


Dell EMC high performance computing bundles aimed at AI, deep learning

ZDNet

Dell EMC has announced systems that aim to meld high performance computing and data analytics for mainstream enterprises. The systems are designed for workloads such as fraud detection, image processing, financial analysis, and personalized medicine, and are aimed at industries such as scientific imaging, oil and gas, and financial services.


Internship – Data Scientist and Machine Learning

@machinelearnbot

The Institute of Complex Systems in Paris is looking for an intern specialised in data science and machine learning to work on its Multivac Platform. Since 2005, the institute has facilitated access to skills, training, workspaces, and pooled research resources based on high performance computing and big data. Description: You will work on the Multivac Platform developed at ISCPIF. The Multivac platform is meant to serve as an interface between researchers and big data, especially in the domains of NLP and text mining.


Project Brainwave: Intel FPGAs Accelerate Microsoft's AI

#artificialintelligence

This week Microsoft unveiled Project Brainwave, a deep learning acceleration platform based on its collaboration with Intel on FPGA computing. Microsoft says Project Brainwave represents a "major leap forward" in cloud-based deep learning performance, and it intends to bring the technology to its Azure cloud computing platform. Microsoft says its new approach, which it calls Hardware Microservices, will allow deep neural networks (DNNs) to run in the cloud with no software in the loop, resulting in large gains in speed and efficiency. FPGA-accelerated clouds are drawing interest from other providers as well: "To continue as a leading provider of high-performance and high-value cloud services, Tencent needs to adopt the most advanced infrastructure and the chip industry's latest achievements," said Sage Zou, senior director of Tencent Cloud.


The Cloud Computing Era Could Be Nearing Its End

WIRED

Today the $247 billion cloud computing industry funnels everything through massive centralized data centers operated by giants like Amazon, Microsoft, and Google. Theoretically, data would only need to travel a few miles between customers and the nearest cell tower or central office, instead of hundreds of miles to reach a cloud data center. Austin-based Vapor IO has already begun building its own micro data centers alongside existing cell towers. Through its partnership with Crown Castle, Vapor IO can take advantage of Crown Castle's existing network of 40,000 cell towers and 60,000 miles of fiber optic lines in metropolitan areas.


ARM Targets New Processors at Machine Learning

#artificialintelligence

The company also argues that "big" cores can run faster when "little" cores handle low-level workloads. Based on its DynamIQ multicore architecture technology previewed in March, the Cortex-A75 targets emerging AI and machine learning workloads with a single-threaded performance boost of 50 percent, ARM claimed. The high-end core leaves headroom for emerging workloads, and also targets server and networking applications as ARM seeks to make inroads in x86-dominated datacenters as well as edge devices that would flesh out Internet of Things architectures. ARM's overarching IoT strategy focuses on developing and scaling its Cortex-M 32-bit microcontrollers and a device server that handles connections from IoT devices.


NVIDIA CEO: AI Workloads Will "Flood" Data Centers

#artificialintelligence

One data center provider that specializes in hosting infrastructure for Deep Learning told us most of their customers hadn't yet deployed their AI applications in production. If your on-premises Deep Learning infrastructure will do a lot of training – the computationally intensive work of teaching neural networks things like speech and image recognition – prepare for power-hungry servers with lots of GPUs on every motherboard. Inferencing servers are not particularly difficult to handle on-premises, but one big question for the data center manager is how close they have to be to where the input data originates. If your corporate data centers are in Ashburn, Virginia, but your Machine Learning application has to provide real-time suggestions to users in Dallas or Portland, chances are you'll need some inferencing servers in or near Dallas and Portland to make it actually feel close to real time.
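
To make the distance argument concrete, here is a minimal latency sketch. It assumes light travels through fiber at roughly 200,000 km/s and ignores routing, queuing, and serialization delays (real round trips run higher); the city distances are approximate figures, not from the article:

```python
# Lower-bound network round-trip time from fiber distance alone.
SPEED_IN_FIBER_KM_PER_S = 200_000  # roughly two-thirds the speed of light

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round trip in milliseconds over a straight fiber run."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1_000

# Approximate great-circle distances from Ashburn, VA (assumed figures).
for city, km in [("Dallas", 1_900), ("Portland", 3_700)]:
    print(f"Ashburn -> {city}: >= {min_rtt_ms(km):.0f} ms round trip")
# Dallas ~19 ms, Portland ~37 ms -- before any inference time is added,
# which is why latency-sensitive inferencing lands near the users.
```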


Artificial Intelligence Is Now Residing In Your Pocket

#artificialintelligence

Each TPU has four chips and delivers 180 trillion floating point operations per second; as if that were not enough, Google combined 64 of these TPUs over a patented high-speed network to create a machine learning supercomputer called a TPU pod. Remember, Google's real innovation has been in hardware patents in high-end cloud computing: chips, servers, and networking for its own data centers. Google has been unsuccessful in the social media space, but is now using machine learning to help users share photos, even suggesting whom to share them with. Google has search data, complete email conversation data, photos, and location data.
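
The pod-level number follows directly from the per-TPU figure quoted above (a quick arithmetic sketch; the per-chip share is simply the stated total divided by four):

```python
# TPU pod peak throughput, derived from the figures in the article.
TFLOPS_PER_TPU = 180   # one TPU device (four chips), as stated above
CHIPS_PER_TPU = 4
TPUS_PER_POD = 64

tflops_per_chip = TFLOPS_PER_TPU / CHIPS_PER_TPU  # 45 TFLOPS per chip
pod_pflops = TFLOPS_PER_TPU * TPUS_PER_POD / 1_000

print(f"{tflops_per_chip:.0f} TFLOPS per chip; pod peak ~{pod_pflops:.1f} PFLOPS")
# -> 45 TFLOPS per chip; a 64-TPU pod peaks around 11.5 petaflops
```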


Nvidia's new Volta-based DGX-1 supercomputer puts 400 servers in a box

PCWorld

The GPU, the first one based on the brand-new Volta architecture, was introduced at the company's GPU Technology Conference in San Jose, California, on Wednesday. The new supercomputer has 40,960 CUDA cores, which Nvidia says equals the computing power of 800 CPUs. The Tesla V100 in the DGX-1 is five times faster than the current Pascal architecture, Huang said. Nvidia has also included a cube-like Tensor Core, which will work with the regular processing cores to improve deep learning.
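
The core count is consistent with the system's shipped configuration (a sketch assuming eight Tesla V100s per DGX-1 and 5,120 CUDA cores per V100; those two figures are assumptions consistent with the total quoted above):

```python
# Where the DGX-1's 40,960 CUDA cores come from.
# ASSUMPTIONS: 8 Tesla V100 GPUs per DGX-1, 5,120 CUDA cores per V100.
GPUS_PER_DGX1 = 8
CUDA_CORES_PER_V100 = 5_120

total_cores = GPUS_PER_DGX1 * CUDA_CORES_PER_V100
print(f"{total_cores:,} CUDA cores")  # -> 40,960, matching the figure above
```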


What Is a GPU and Why Do I Care: A Businessperson's Guide

#artificialintelligence

While 2016 was the year of the GPU for a number of reasons, the truth of the matter is that outside of some core disciplines (deep learning, virtual reality, autonomous vehicles) the reasons why you would use GPUs for general purpose computing applications remain somewhat unclear. As a company whose products are tuned for this exceptional compute platform, we have a tendency to assume familiarity, often incorrectly. Our New Year's resolution is to explain, in language designed for business leaders, what a GPU is and why you should care. Let's get started by baselining on existing technology - the CPU. Most of us are familiar with a CPU.
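
One way to preview where that explanation is headed: a CPU works through data one element (or a few threads) at a time, while a GPU applies the same operation across thousands of elements at once. A minimal sketch of that data-parallel style, using NumPy's vectorized operations as a stand-in for what a GPU does in hardware (illustrative only; the function names are made up for this example):

```python
import numpy as np

# CPU-style thinking: one element at a time, a long sequential loop.
def scale_sequential(values, factor):
    out = []
    for v in values:  # each step waits for the previous one
        out.append(v * factor)
    return out

# GPU-style thinking: one operation expressed over all elements at once,
# which the hardware can spread across many simple cores in parallel.
def scale_parallel(values, factor):
    return np.asarray(values) * factor

data = list(range(100_000))
assert scale_sequential(data, 2.0) == list(scale_parallel(data, 2.0))
```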