Results


MXNet - Deep Learning Framework of Choice at AWS

#artificialintelligence

A set of programming models has emerged to help developers define and train AI models with deep learning, along with open source frameworks that put deep learning in the hands of mere mortals. These frameworks need two things: the ability to scale to multiple GPUs (across multiple hosts) to train larger, more sophisticated models on larger, more sophisticated datasets; and portability to run on a broad range of devices and platforms, because deep learning models have to run in many different places, from laptops and server farms with great networking and tons of computing power to mobile and connected devices that are often in remote locations, with less reliable networking and considerably less computing power.
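To make the scaling and portability point concrete, here is a minimal sketch of the pattern MXNet uses: the model definition stays the same, and the context argument alone decides whether training runs on a CPU, one GPU, or several. The toy data, layer sizes, and hyperparameters below are illustrative assumptions, not taken from the article.

    import mxnet as mx

    # Toy dataset: 1,000 samples, 100 features, 10 classes (illustrative only).
    data = mx.nd.random.uniform(shape=(1000, 100))
    label = mx.nd.array([i % 10 for i in range(1000)])
    train_iter = mx.io.NDArrayIter(data, label, batch_size=50)

    # A small multilayer perceptron defined symbolically.
    net = mx.sym.Variable('data')
    net = mx.sym.FullyConnected(net, num_hidden=64)
    net = mx.sym.Activation(net, act_type='relu')
    net = mx.sym.FullyConnected(net, num_hidden=10)
    net = mx.sym.SoftmaxOutput(net, name='softmax')

    # Portability in one line: swap mx.cpu() for one or many mx.gpu(i) contexts.
    mod = mx.mod.Module(net, context=[mx.gpu(0), mx.gpu(1)])
    mod.fit(train_iter, num_epoch=2, optimizer='sgd',
            optimizer_params={'learning_rate': 0.1})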


The AI-First Cloud: Can artificial intelligence power the next generation of cloud computing?

#artificialintelligence

Cloud Machine Learning (ML) Platforms: Technologies like Azure Machine Learning, AWS Machine Learning and the upcoming Google Cloud Machine Learning enable the creation of machine learning models using a specific technology. AI Cloud Services: Technologies like IBM Watson, Microsoft Cognitive Services, Google Cloud Vision or Natural Language APIs abstract complex AI or cognitive computing capabilities behind simple API calls.
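As a concrete illustration of the "simple API call" pattern, the sketch below sends an image to the Google Cloud Vision REST endpoint and prints the labels it returns. The API key, local file name, and the requests dependency are assumptions made for the example.

    import base64
    import requests

    API_KEY = 'your-api-key'  # assumed placeholder; substitute a real Cloud Vision key
    URL = 'https://vision.googleapis.com/v1/images:annotate?key=' + API_KEY

    # Cloud Vision accepts the image inline as base64.
    with open('photo.jpg', 'rb') as f:  # hypothetical local file
        image_b64 = base64.b64encode(f.read()).decode('utf-8')

    body = {'requests': [{
        'image': {'content': image_b64},
        'features': [{'type': 'LABEL_DETECTION', 'maxResults': 5}],
    }]}

    # One POST request stands in for an entire trained vision model.
    resp = requests.post(URL, json=body)
    for label in resp.json()['responses'][0].get('labelAnnotations', []):
        print(label['description'], round(label['score'], 3))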


Nvidia CEO's "Hyper-Moore's Law" Vision for Future Supercomputers

#artificialintelligence

Over the last year in particular, we have documented the merger between high performance computing and deep learning, along with their various shared hardware and software ties. This next year promises far more on both horizons, and while GPU maker Nvidia might not have seen it coming to this extent when it was outfitting its first GPUs on the former top "Titan" supercomputer, the company sensed the two fields meshing when the first hyperscale deep learning shops were deploying CUDA and GPUs to train neural networks. All of this portends an exciting year ahead, and for once, the mighty CPU is not the subject of the keenest interest. Instead, the action is unfolding around the CPU's role alongside accelerators: everything from Intel's approach to integrating the Nervana deep learning chips with Xeons, to Pascal and future Volta GPUs, and other novel architectures that have made waves. While Moore's Law for traditional CPU-based computing is on the decline, Jen-Hsun Huang, CEO of GPU maker Nvidia, told The Next Platform at SC16 that we are just on the precipice of a new Moore's Law-like curve of innovation, one driven by traditional CPUs with accelerator kickers, mixed-precision capabilities, new distributed frameworks for managing both AI and supercomputing applications, and an unprecedented level of data for training.
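The "mixed precision" Huang refers to is, broadly, storing tensors in half precision while accumulating results in single precision. A rough NumPy sketch of the idea follows; the matrix sizes and seed are arbitrary assumptions, and real GPUs perform this in hardware rather than via explicit casts.

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((256, 256)).astype(np.float16)  # fp16 storage
    b = rng.standard_normal((256, 256)).astype(np.float16)

    # Accumulate the matrix product in float32, as fp16 tensor hardware does,
    # then store the result back in fp16.
    c_mixed = (a.astype(np.float32) @ b.astype(np.float32)).astype(np.float16)

    # Compare against a float64 reference to see the rounding cost.
    c_ref = a.astype(np.float64) @ b.astype(np.float64)
    print('max abs error vs float64:', np.abs(c_mixed - c_ref).max())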


As Watson matures, IBM plans more AI hardware and software

#artificialintelligence

Just over five years ago, IBM's Watson supercomputer crushed its human opponents on the televised quiz show Jeopardy. It was hard to foresee then, but artificial intelligence is now permeating our daily lives. Since then, IBM has expanded the Watson brand into a cognitive computing package with hardware and software used to diagnose diseases, explore for oil and gas, run scientific computing models, and allow cars to drive autonomously. The company has now announced new AI hardware and software packages. The original Watson used advanced algorithms and natural language interfaces to find and narrate answers.


NVIDIA Teams with National Cancer Institute, U.S. Department of Energy to Create AI Platform for Accelerating Cancer Research

#artificialintelligence

SANTA CLARA, CA (Marketwired, Nov 14, 2016): NVIDIA (NASDAQ: NVDA) today announced that it is teaming up with the National Cancer Institute, the U.S. Department of Energy (DOE) and several national laboratories on an initiative to accelerate cancer research. Teams collaborating on CANDLE, the CANcer Distributed Learning Environment, include researchers at the National Cancer Institute (NCI), Frederick National Laboratory for Cancer Research and DOE, as well as at Argonne, Oak Ridge, Livermore and Los Alamos National Laboratories. Georgia Tourassi, Director of the Health Data Sciences Institute at Oak Ridge National Laboratory, said, "Today cancer surveillance relies on manual analysis of clinical reports to extract important biomarkers of cancer progression and outcomes."


As Watson matures, IBM plans more AI hardware and software

PCWorld

Mega data centers run by Facebook, Google, Amazon, and other companies use AI on thousands of servers to recognize images and speech and analyze loads of data. IBM is releasing more powerful hardware to make deep-learning systems faster at analyzing data or finding answers to complex questions. The new IBM hardware, along with software tools called PowerAI, is used to train systems to perform AI tasks like image and speech recognition. Drones, robots, and autonomous cars use inferencing engines for navigation, image recognition, or data analysis.
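The training/inferencing split described here is easy to see in miniature. The sketch below uses scikit-learn as a lightweight stand-in for the heavier deep learning stacks; the dataset and model size are arbitrary choices for illustration, not details from the article.

    from sklearn.datasets import load_digits
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)

    # Training: the compute-heavy phase, done once on server-class hardware.
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                          random_state=0).fit(X, y)

    # Inference: cheap per query, the part a drone or car runs onboard.
    print(model.predict(X[:5]))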


The State of Enterprise Machine Learning

#artificialintelligence

Deep learning is a machine learning framework that models high-level patterns in multi-layered networks. The machine learning outputs – patient risk scores, propensity-to-buy scores, and fraud predictions – are substantially more valuable to each organization than the raw data. Practical applications include speech recognition, image recognition, and recommendation engines, where the best item to offer can be one of many. Machine learning produces output that can be difficult for humans to interpret compared to statistical techniques, which makes machine learning less useful when the goal of the analysis is attribution or analysis of variance (illustrated in the sketch below). Machine learning algorithms require complex computation, and they need a great deal of computing power to build. Before launching his consultancy in 2015, Thomas served as an analytics expert for The Boston Consulting Group; Director of Product Management for Revolution Analytics (Microsoft); and Solution Architect for IBM Big Data (Netezza), SAS, and PricewaterhouseCoopers.
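To illustrate the attribution point, the sketch contrasts a linear model, whose coefficients attribute the outcome to individual features directly, with a gradient-boosted ensemble that typically predicts better but offers no such coefficients. The synthetic data and the scikit-learn models are assumptions chosen for the example.

    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import LinearRegression

    X, y = make_regression(n_samples=500, n_features=5, noise=10.0,
                           random_state=0)

    # Statistical-style model: coefficients give per-feature attribution.
    linear = LinearRegression().fit(X, y)
    print('per-feature attribution:', linear.coef_.round(2))

    # Machine-learning model: strong predictions, no simple attribution.
    boosted = GradientBoostingRegressor(random_state=0).fit(X, y)
    print('black-box prediction:', boosted.predict(X[:1]).round(2))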


The AI-First Cloud: Can artificial intelligence power the next generation of cloud computing?

#artificialintelligence

The cloud computing market is a race vastly dominated by four companies: Amazon, Microsoft, Google and IBM, with a few other platforms gaining traction in specific regional markets, such as AliCloud in China. In this sense, cloud platforms were not required to provide the runtime for IoT or mobile workloads, but rather services that enable the backend capabilities of those solutions. Contrasting with that model, AI applications require not only sophisticated backend services but a very specific runtime optimized for the GPU-intensive requirements of AI solutions.