Results


Deep Learning Frameworks Hands-on Review – Knowm.org

@machinelearnbot

At Knowm, we are building a new and exciting type of computer processor to accelerate machine learning (ML) and artificial intelligence applications. The goal of Thermodynamic-RAM (kT-RAM) is to offload general ML operations, traditionally run on CPUs and GPUs, onto a physically adaptive analog processor based on memristors that unites memory and processing. If you haven't heard yet, we call this new way of computing "AHaH Computing", which stands for Anti-Hebbian and Hebbian Computing, and it provides a universal computing framework for in-memory reconfigurable logic, memory, and ML. While we showed some time ago that AHaH Computing is capable of solving problems across many domains of ML, we only recently figured out how to use the kT-RAM instruction set and low-precision, noisy memristors to build supervised and unsupervised compositional (deep) ML systems. Our method does not require the backpropagation-of-error algorithm (backprop) and is straightforward to implement with realistic analog hardware, including but not limited to memristors.
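
As a purely illustrative sketch – not Knowm's kT-RAM instruction set, and with hypothetical names and learning rates – the flavor of an Anti-Hebbian and Hebbian update for a single node can be written as a local, backprop-free rule: active synapses are nudged toward the sign of the node's output (or a supervised target), while an anti-Hebbian term pulls the activation back toward zero to keep the weights bounded.

```python
import numpy as np

def ahah_node_update(weights, active_inputs, alpha=0.01, beta=0.005, target=None):
    """One update step of a simplified AHaH-style node (illustrative only).

    weights       : 1-D array of synaptic weights
    active_inputs : indices of the inputs that are active this cycle
    alpha, beta   : Hebbian / anti-Hebbian learning rates (hypothetical values)
    target        : optional supervised label (+1 / -1); if given, it overrides
                    the sign of the node's own output
    """
    y = weights[active_inputs].sum()          # node activation from the active synapses
    s = float(target) if target is not None else float(np.sign(y))
    if s == 0.0:
        s = 1.0                               # break the tie at exactly zero activation
    # Hebbian term pushes active weights toward the output sign; the anti-Hebbian
    # term subtracts a fraction of the activation, keeping the weights bounded.
    weights[active_inputs] += alpha * s - beta * y
    return y
```

The only point of the sketch is that every quantity in the update is local to the node, which is why rules of this kind map naturally onto analog, in-memory hardware.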


GPU-Accelerated Amazon Web Services

#artificialintelligence

Developers, data scientists, and researchers are solving today's complex challenges with breakthroughs in artificial intelligence, deep learning, and high performance computing (HPC). NVIDIA is working with Amazon Web Services to offer the newest and most powerful GPU-accelerated cloud service based on the latest NVIDIA Volta architecture: the Amazon EC2 P3 instance. Using up to eight NVIDIA Tesla V100 GPUs, you will be able to train your neural networks on massive data sets, using any of the major deep learning frameworks, faster than ever before. Then use GPU parallel computing, running billions of computations, to run inference and identify known patterns or objects. With over 500 HPC applications GPU-accelerated, including the top ten HPC applications and every deep learning framework, you can quickly tap into the power of the Tesla V100 GPUs on AWS to boost performance, scale out, accelerate time to results, and save money.
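
As a quick sanity check (our own example, not part of the AWS announcement), any of the major frameworks picks up the V100s through CUDA automatically; in TensorFlow, for instance, you can list the visible GPUs and place work on one of them:

```python
import tensorflow as tf

# On a P3 instance, each Tesla V100 shows up as a separate GPU device.
gpus = tf.config.list_physical_devices("GPU")
print(f"{len(gpus)} GPU(s) visible:", gpus)

# Any op placed on a GPU device runs on the V100; large matrix math, like the
# training step of a deep network, is where the speedup over CPUs comes from.
with tf.device("/GPU:0"):
    a = tf.random.normal((8192, 8192))
    b = tf.random.normal((8192, 8192))
    c = tf.matmul(a, b)
print("Result computed on:", c.device)
```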


The AI Revolution Is Eating Software: NVIDIA Is Powering It – NVIDIA Blog

#artificialintelligence

It's great to see the two leading teams in AI computing race while we collaborate deeply across the board – tuning TensorFlow performance, and accelerating the Google cloud with NVIDIA CUDA GPUs. Dennard scaling, whereby reducing transistor size and voltage allowed designers to increase transistor density and speed while maintaining power density, is now limited by device physics. Such leaps in performance have drawn innovators from every industry, with the number of startups building GPU-driven AI services growing more than 4x over the past year to 1,300. Just as convolutional neural networks gave us the computer vision breakthrough needed to tackle self-driving cars, reinforcement learning and imitation learning may be the breakthroughs we need to tackle robotics.


Qualcomm opens its mobile chip deep learning framework to all

#artificialintelligence

Mobile chip maker Qualcomm wants to enable deep learning-based software development on all kinds of devices, which is why it created the Neural Processing Engine (NPE) for its Snapdragon-series mobile processors. The NPE software development kit is now available to all via the Qualcomm Developer Network, which marks the first public release of the SDK and opens up a lot of potential for AI computing on a range of devices, including mobile phones, in-car platforms and more. Qualcomm's NPE works with the Snapdragon 600 and 800 series processor platforms, and supports a range of common deep learning frameworks, including TensorFlow and Caffe2. As more tech companies look to shift AI computing from remote servers to local devices in order to improve reliability and reduce dependence on network connectivity, this could be a huge asset for Qualcomm, and a big help in staying relevant for whatever dominant tech trend follows mobile.
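
The usual workflow with an SDK like this is to train and export a model with one of the supported frameworks on a workstation, run it through the SDK's offline converter to produce the on-device format, and then load that artifact from the runtime in the app. The sketch below covers only the framework-side export, using TensorFlow; the NPE converter's tool names and options are Qualcomm's and are deliberately left as a comment rather than guessed at.

```python
import tensorflow as tf

# Train (or fine-tune) a small network with one of the supported frameworks...
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(...) would run here with real training data.

# ...then export the trained graph in a form an offline converter can consume.
tf.saved_model.save(model, "exported_model")

# From here, the NPE SDK's converter (see Qualcomm's documentation for the exact
# tool and options) turns the exported graph into the format the on-device
# runtime loads on Snapdragon 600/800-series processors.
```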


The Era of AI Computing - Fedscoop

#artificialintelligence

Powering through the end of Moore's law: as Moore's law slows down, GPU computing performance, powered by improvements in everything from silicon to software, surges. Dennard scaling, whereby reducing transistor size and voltage allowed designers to increase transistor density and speed while maintaining power density, is now limited by device physics. The NVIDIA GPU Cloud platform gives AI developers access to our comprehensive deep learning software stack wherever they want it: on PCs, in the data center, or via the cloud. Just as convolutional neural networks gave us the computer vision breakthrough needed to tackle self-driving cars, reinforcement learning and imitation learning may be the breakthroughs we need to tackle robotics.


To Accelerate Artificial Intelligence, NVIDIA & Baidu Signed Partnership

#artificialintelligence

NVIDIA and Baidu announced a broad partnership to bring the world's leading artificial intelligence technology to cloud computing, self-driving vehicles and AI home assistants. Speaking in the keynote at Baidu's AI developer conference in Beijing, Baidu president and COO Qi Lu described his company's plans to work with NVIDIA to bring next-generation NVIDIA Volta GPUs to Baidu Cloud, providing cloud customers with the world's leading deep learning platform. The two companies will also optimize Baidu's PaddlePaddle open source deep learning framework for NVIDIA Volta GPUs and make it widely available to academics and researchers. "Our collaboration aligns our exceptional technical resources to create AI computing platforms for all developers – from academic research to startups creating breakthrough AI applications to autonomous vehicles."


IBM's New PowerAI Features Again Demonstrate Enterprise AI Leadership

#artificialintelligence

PowerAI runs on IBM's highest-performing server in its OpenPOWER LC line (the Power S822LC for High Performance Computing), and utilizes deep learning frameworks and building-block software to make it easier for enterprises to dive into AI and machine learning. Next, to aid in data preparation, IBM introduced new cluster virtualization software called Spectrum Conductor. IBM claims its version of TensorFlow will significantly cut deep learning training times (from weeks to hours) by leveraging a virtualized cluster of GPU-boosted servers. The last new feature announced was a software tool called DL Insight, which IBM says will make model development easier and more accurate.
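
IBM's distributed TensorFlow build and DL Insight are their own tooling, but the underlying idea – splitting each training batch across many GPUs so wall-clock training time drops – can be illustrated with stock TensorFlow's distribution strategies (our example, not IBM's PowerAI stack):

```python
import tensorflow as tf

# Generic data-parallel training across all GPUs visible on one node.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Everything created inside the scope is mirrored onto each GPU.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x.reshape(-1, 784).astype("float32") / 255.0

# Each replica processes a slice of every batch; gradients are averaged across GPUs.
model.fit(x, y, batch_size=1024, epochs=2)
```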


Deep Learning: What Are My Options?

#artificialintelligence

In this series, we will discuss deep learning technology, available frameworks/tools, and how to scale deep learning using big data architecture. Neural networks – or, as they are more appropriately called, artificial neural networks (ANNs) – were introduced in 1943 by McCulloch and Pitts, with Hebb's learning rule following a few years later. The difference between shallow and deep neural networks is the number of hidden layers: shallow networks have only one or a few hidden layers, while deep networks have many. Recently, TensorFrames (i.e., TensorFlow DataFrames) was proposed as a seemingly great workaround, but it is still under active development, and migrating existing TensorFlow projects to the TensorFrames framework demands significant effort.
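
To make the shallow-versus-deep distinction concrete, here is a minimal Keras sketch (our choice of framework, purely illustrative): the two models share the same input and output and differ only in how many hidden layers sit in between.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Shallow network: a single hidden layer between input and output.
shallow = keras.Sequential([
    keras.Input(shape=(100,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Deep network: the same input and output, but several stacked hidden layers.
deep = keras.Sequential([
    keras.Input(shape=(100,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

shallow.summary()   # one hidden layer   -> "shallow"
deep.summary()      # four hidden layers -> "deep"
```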


Machine learning - What Innovation Will Bring To The AI World

#artificialintelligence

There is an effort underway to standardize and improve access across all layers of the machine learning stack, including specialized chipsets, scalable computing platforms, software frameworks, tools and ML algorithms. "Just like cloud computing ushered in the current explosion in startup … machine learning platforms will likely power the next generation of consumer and business tools." This is where public cloud services such as Amazon Web Services (AWS), Google Cloud Platform, Microsoft Azure and others come in. Just like cloud computing ushered in the current explosion in startups, the ongoing build-out of machine learning platforms will likely power the next generation of consumer and business tools.