Results


AWS AI Blog

#artificialintelligence

Second, framework developers need to maintain multiple backends to guarantee performance on hardware ranging from smartphone chips to data center GPUs. Diverse AI frameworks and hardware bring huge benefits to users, but it is very challenging for AI developers to deliver consistent results to end users. Motivated by compiler technology, a group of researchers including Tianqi Chen, Thierry Moreau, Haichen Shen, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy from the Paul G. Allen School of Computer Science & Engineering, University of Washington, together with Ziheng Jiang from the AWS AI team, introduced the TVM stack to simplify this problem. Today, AWS is excited to announce, together with the research team from UW, an end-to-end compiler based on the TVM stack that compiles workloads directly from various deep learning frontends into optimized machine code.
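
As a rough illustration of what such a stack enables, here is a minimal sketch using TVM's Python API as it exists today (the Relay frontend; the original announcement used the earlier NNVM interface). It imports a model from one frontend (ONNX here) and compiles it to native code for a chosen target; the model file and input shape are placeholders.

    # Sketch: compile a model from a deep learning frontend into machine code
    # with TVM. Assumes the `tvm` and `onnx` packages; "model.onnx" and the
    # input name/shape are placeholders.
    import onnx
    import tvm
    from tvm import relay

    onnx_model = onnx.load("model.onnx")
    shape_dict = {"data": (1, 3, 224, 224)}  # frontend-specific input name/shape

    # Translate the frontend graph into TVM's intermediate representation...
    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

    # ...then compile to optimized native code for the chosen backend:
    # "llvm" targets the local CPU; "cuda", "opencl", etc. select others.
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)

Retargeting the same model to a GPU or a phone becomes, in this workflow, a change of target string rather than a new backend implementation, which is the consistency the announcement is after.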


An Introduction to Redis-ML (Part 6) - DZone AI

#artificialintelligence

Simple and powerful data management, reduced overhead, and minimal latency are just three of the major advantages of building your machine learning models with Redis. Additional statistical models can be added to an application with a simple SET command, allowing developers to maintain multiple versions of models for cases in which data needs to be reprocessed. A Redis-ML key, like any Redis key, can be maintained using the Redis key management commands. To scale up a Redis-based predictive engine, you simply deploy more Redis nodes and create a replication topology with a single master node and multiple replica nodes.
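
As a concrete sketch of that workflow (illustrative only: it assumes redis-py and a Redis server with the redis-ml module loaded, and the key names and coefficients are made up), the snippet below installs a linear-regression model under an ordinary key and queries it:

    # Sketch: storing and querying a Redis-ML model from Python via redis-py's
    # generic execute_command. ML.LINREG.SET / ML.LINREG.PREDICT follow the
    # redis-ml module's documented commands; all values here are illustrative.
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # A SET-style command installs (or versions) a model under a normal key:
    # here, y = 2.0 + 0.5*x1 + 1.5*x2.
    r.execute_command("ML.LINREG.SET", "model:price:v2", 2.0, 0.5, 1.5)

    # Prediction is a read command, so replica nodes can serve it for scale-out.
    print(r.execute_command("ML.LINREG.PREDICT", "model:price:v2", 3.0, 1.0))

    # Being a regular Redis key, the model works with key management commands.
    r.expire("model:price:v2", 86400)

Because models live in ordinary keys, versioning ("model:price:v2" alongside "model:price:v1") and master/replica replication come for free from Redis itself.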


Alison machine learning predicts mobile ad campaign results

#artificialintelligence

YellowHead has launched Alison, a machine learning technology that predicts the outcomes of mobile advertising campaigns known as paid user acquisition. The company specializes in paid user acquisition campaigns, app store optimization, and search engine optimization, and has now added Alison, which uses machine learning to predict a campaign's performance, in the hope of uncovering more insights for brands and wasting less advertising money. Mathematics professors from the Data Science Research Team at Tel Aviv University worked on Alison with the company's developers; the technology supplements human intelligence to optimize campaigns based on predicted results across multiple ad platforms such as Facebook and Google.


We are making on-device AI ubiquitous

#artificialintelligence

You may have heard this vision, or may think that AI is really about big data and the cloud, and yet Qualcomm's solutions already have the power, thermal, and processing efficiency to run powerful AI algorithms on the actual device, which brings several advantages. We've also had our own success at the ImageNet Challenge using deep learning techniques, placing as a top-3 performer in challenges for object localization, object detection, and scene classification. We have also expanded our own research, and collaborated with the external AI community, on other promising areas and applications of machine learning, such as recurrent neural networks, object tracking, natural language processing, and handwriting recognition. As an example, at this year's F8 conference, Facebook and Qualcomm Technologies announced a collaboration to support the optimization of Caffe2, Facebook's open source deep learning framework, and the NPE framework.


Intel's Myriad X AI chip is a game changer for AI

#artificialintelligence

When Intel's Movidius team released the Myriad 2 visual processing unit (VPU), it was a marvel of technology and design. The tiny chip combined AI accelerators with a performance-to-power ratio that arguably placed it without peer. The company released the Myriad X chip last week, even though the Myriad 2 was still considered cutting edge. Brown sounded even more excited when he spoke about the unseen potential of Intel's Movidius chips: "We envision its use beyond the current categories that we're using it in. VPUs will be used by developers in categories and ways that you and I can't even imagine right now."


What is hardcore data science – in practice?

@machinelearnbot

For example, for personalized recommendations, we have been working with learning-to-rank methods that learn individual rankings over item sets. Figure 1 shows the typical data science workflow: raw data is turned into features and fed into learning algorithms, resulting in a model that is applied to future data. This pipeline is iterated and improved many times, trying out different features, different forms of preprocessing, different learning methods, or maybe even going back to the source and trying to add more data sources. Probably the main difference between production systems and data science systems is that production systems are real-time systems that run continuously. Standard software engineering practices don't really apply to a data scientist's exploratory work mode because the goals are different.
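
As a schematic of that Figure 1 loop (not the article's actual system; the data, shapes, and model choice are placeholders), a scikit-learn pipeline makes the iterate-and-swap structure explicit:

    # Sketch of the raw data -> features -> learning algorithm -> model -> apply
    # workflow. Assumes scikit-learn and NumPy; all data here is synthetic.
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    # "Raw data": stand-ins for real logs or events.
    rng = np.random.default_rng(0)
    X_train = rng.random((1000, 5))
    y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)

    # One iteration of the loop: a particular preprocessing/learner combination.
    pipeline = Pipeline([
        ("features", StandardScaler()),    # try different preprocessing here
        ("model", LogisticRegression()),   # ...or a different learning method here
    ])
    pipeline.fit(X_train, y_train)

    # The resulting model is then applied to future data.
    X_future = rng.random((10, 5))
    print(pipeline.predict(X_future))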


AI Now Comes in a USB Stick

#artificialintelligence

The $79 USB stick delivers "dedicated deep neural network processing capabilities to a wide range of host devices at the edge," Intel says. With the USB stick, Intel suggests that product developers, researchers and makers will be able to add AI capabilities to their devices and develop, tune and deploy AI-based applications far more easily. Machine intelligence development basically involves training an algorithm on large sets of sample data using modern machine learning techniques, Intel notes, and then running the algorithm in an app that needs to interpret real-world data, a process known as "inference." The key is that the stick can be used as "a discrete neural network accelerator by adding dedicated deep learning inference capabilities to existing computing platforms for improved performance and power efficiency," Intel says.
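
For a sense of how the stick is driven in practice, the sketch below follows the first-generation NCSDK Python API (the mvnc module) as it was documented at launch; names may differ in later SDK releases, and the precompiled graph file and input shape are placeholders:

    # Sketch: inference on the Neural Compute Stick via the original NCSDK
    # Python API (mvnc). API names follow that SDK's v1 documentation and may
    # not match later versions; "graph" is a network precompiled by the SDK
    # tools, and the input is random stand-in data.
    import numpy as np
    from mvnc import mvncapi as mvnc

    devices = mvnc.EnumerateDevices()              # find attached sticks
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    with open("graph", "rb") as f:                 # precompiled network blob
        graph = device.AllocateGraph(f.read())

    # The stick computes in half precision, hence the float16 cast.
    image = np.random.rand(224, 224, 3).astype(np.float16)
    graph.LoadTensor(image, None)
    output, _ = graph.GetResult()
    print(output)

    graph.DeallocateGraph()
    device.CloseDevice()

Training happens elsewhere on large sample datasets; only the inference step, the trained network applied to real-world inputs, runs on the stick, which is exactly the split the article describes.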


The AI Revolution Is Eating Software: NVIDIA Is Powering It - NVIDIA Blog

#artificialintelligence

It's great to see the two leading teams in AI computing race while we collaborate deeply across the board – tuning TensorFlow performance, and accelerating the Google cloud with NVIDIA CUDA GPUs. Dennard scaling, whereby reducing transistor size and voltage allowed designers to increase transistor density and speed while maintaining power density, is now limited by device physics. Such leaps in performance have drawn innovators from every industry, with the number of startups building GPU-driven AI services growing more than 4x over the past year to 1,300. Just as convolutional neural networks gave us the computer vision breakthrough needed to tackle self-driving cars, reinforcement learning and imitation learning may be the breakthroughs we need to tackle robotics.
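
For readers unfamiliar with Dennard scaling, the standard back-of-the-envelope argument (textbook material, not from the NVIDIA post) goes as follows, with dynamic power per transistor $P$, capacitance $C$, voltage $V$, and clock frequency $f$:

    P \approx C V^{2} f, \qquad
    C \to \frac{C}{k}, \quad V \to \frac{V}{k}, \quad f \to k f
    \;\Longrightarrow\;
    P' \approx \frac{C}{k} \cdot \frac{V^{2}}{k^{2}} \cdot k f = \frac{P}{k^{2}}

Shrinking dimensions by a factor of k cuts per-transistor power by k^2, exactly offsetting the k^2 gain in transistor density, so power density stays constant; the regime breaks down once voltage can no longer scale with feature size, which is the device-physics limit the post refers to.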


Intel Democratizes Deep Learning Application Development with Launch of Movidius Neural Compute Stick - Intel Newsroom

#artificialintelligence

Today, Intel launched the Movidius Neural Compute Stick, the world's first USB-based deep learning inference kit and self-contained artificial intelligence (AI) accelerator that delivers dedicated deep neural network processing capabilities to a wide range of host devices at the edge. Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor. Whether it is training artificial neural networks on the Intel Nervana cloud, optimizing for emerging workloads such as artificial intelligence, virtual and augmented reality, and automated driving with Intel Xeon Scalable processors, or taking AI to the edge with Movidius vision processing unit (VPU) technology, Intel offers a comprehensive AI portfolio of tools, training and deployment options for the next generation of AI-powered products and services. "The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance – more than 100 gigaflops of performance within a 1W power envelope – to run real-time deep neural networks directly from the device," said Remi El-Ouazzane, vice president and general manager of Movidius, an Intel company.