Pruning AI networks without impacting performance

#artificialintelligence

In a spotlight paper from the 2017 NIPS Conference, my team and I presented an AI optimization framework we call Net-Trim, which is a layer-wise convex scheme to prune a pre-trained deep neural network. Deep learning has become a method of choice for many AI applications, ranging from image recognition to language translation. Thanks to algorithmic and computational advances, we are now able to train bigger and deeper neural networks resulting in increased AI accuracy. However, because of increased power consumption and memory usage, it is impractical to deploy such models on embedded devices with limited hardware resources and power constraints. One practical way to overcome this challenge is to reduce the model complexity without sacrificing accuracy.
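To make the pruning idea concrete: Net-Trim itself solves a convex, ℓ1-regularized problem per layer, but the flavor of layer-wise sparsification can be sketched with a much simpler magnitude-pruning stand-in. The code below is that simplified sketch, not the paper's actual algorithm; the matrix sizes and keep ratio are arbitrary examples.

```python
import random

def prune_layer(weights, keep_ratio):
    """Zero out the smallest-magnitude weights in one layer, keeping only
    the top `keep_ratio` fraction -- a crude stand-in for the layer-wise
    sparsification that Net-Trim performs via convex optimization."""
    flat = sorted((abs(w) for row in weights for w in row), reverse=True)
    k = max(1, int(len(flat) * keep_ratio))
    threshold = flat[k - 1]
    return [[w if abs(w) >= threshold else 0.0 for w in row] for row in weights]

# Example: prune a random 4x4 weight matrix down to 25% of its entries.
random.seed(0)
W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
W_pruned = prune_layer(W, 0.25)
nonzero = sum(1 for row in W_pruned for w in row if w != 0.0)
```

In the real Net-Trim scheme, the pruned layer is additionally constrained so that its responses on the training data stay close to the original layer's responses, which is what preserves accuracy.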


Intelligent Agents: An A.I. View of Optimization

#artificialintelligence

As a digital analyst or marketer, you know the importance of analytical decision making. Go to any industry conference, blog, or meetup, or even just read the popular press, and you will hear and see topics like machine learning, artificial intelligence, and predictive analytics everywhere. Because many of us don't come from a technical/statistical background, this can be both a little confusing and intimidating. But don't sweat it; in this post, I will try to clear up some of this confusion by introducing a simple yet powerful framework – the intelligent agent – which will help link these new ideas with familiar tools and concepts like A/B testing and optimization. Note: the intelligent agent framework is used as the guiding principle in Russell and Norvig's excellent text Artificial Intelligence: A Modern Approach – it's an awesome book, and I recommend anyone who wants to learn more to go get a copy or check out their online AI course.
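The link between intelligent agents and A/B testing can be made concrete with a small sketch: an agent that perceives the results of a test so far and acts by choosing which variant to show next. Everything here is hypothetical for illustration (the variant names, conversion rates, and parameter values are made up); it is an epsilon-greedy strategy, one simple instance of the agent idea, not the book's definition.

```python
import random

def epsilon_greedy_agent(conversion_rates, rounds=5000, epsilon=0.1, seed=42):
    """A minimal 'intelligent agent' for an A/B test: each round it
    perceives the results so far and acts by choosing a variant --
    usually the best performer (exploit), occasionally a random
    one (explore)."""
    rng = random.Random(seed)
    shows, wins = [0, 0], [0, 0]
    for _ in range(rounds):
        if rng.random() < epsilon or 0 in shows:
            arm = rng.randrange(2)             # explore: random variant
        else:
            rates = [wins[i] / shows[i] for i in range(2)]
            arm = rates.index(max(rates))      # exploit: best so far
        shows[arm] += 1
        if rng.random() < conversion_rates[arm]:  # simulated visitor converts
            wins[arm] += 1
    return shows

# Hypothetical variants: A converts at 3%, B at 5%. Over time the agent
# should shift most of its traffic toward the better variant.
shows = epsilon_greedy_agent([0.03, 0.05])
```

Unlike a classic fixed-split A/B test, the agent adapts its behavior as evidence accumulates, which is exactly the percept-to-action loop that Russell and Norvig's framework describes.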


JPL's AI-Powered Racing Drone Challenges Pro Human Pilot

IEEE Spectrum Robotics Channel

As drones and their components get smaller, more efficient, and more capable, we've seen an increasing amount of research towards getting these things flying by themselves in semi-structured environments without relying on external localization. The University of Pennsylvania has done some amazing work in this area, as has DARPA's Fast Lightweight Autonomy program. At NASA's Jet Propulsion Laboratory, they've been working on small drone autonomy for the past few years as part of a Google-funded project. The focus is on high-speed dynamic maneuvering, in the context of flying a drone as fast as possible around an indoor race course using only on-board hardware. For the project's final demo, JPL raced their autonomous drones through an obstacle course against a professional human racing drone pilot.


Optimizing Machine Learning with TensorFlow

#artificialintelligence

In our webinar "Optimizing Machine Learning with TensorFlow" we gave an overview of some of the impressive optimizations Intel has made to TensorFlow when using their hardware. You can find a link to the archived video here. During the webinar, Mohammad Ashraf Bhuiyan, Senior Software Engineer in Intel's Artificial Intelligence Group, and I spoke about some of the common use cases that require optimization, as well as benchmarks demonstrating order-of-magnitude speed improvements when running on Intel hardware. TensorFlow, Google's library for machine learning (ML), has become the most popular machine learning library in a fast-growing ecosystem. This library has over 77k stars on GitHub and is widely used in a growing number of business-critical applications.


Four ways AI is already being applied to sales and marketing

#artificialintelligence

Behind the scenes, artificial intelligence (AI) technology is increasingly present in sales and marketing software. And many believe that it is not just going to have an impact but that it is going to dramatically reshape how sales and marketing function in the coming years. While the phone call may seem like an ancient technology to many individuals, companies large and small still conduct a lot of their sales activity over the phone. Unfortunately, for obvious reasons, tracking, analyzing and improving the performance of salespeople on phone calls is a much more challenging task than, say, tracking, analyzing and improving the performance of email sales. But a number of companies, including Marketo, AdRoll and Qualtrics, are using "conversation intelligence" company Chorus.ai's


TensorFlow* Optimizations on Modern Intel Architecture

@machinelearnbot

TensorFlow* is a leading deep learning and machine learning framework, which makes it important for Intel and Google to ensure that it is able to extract maximum performance from Intel's hardware offering. This paper introduces the Artificial Intelligence (AI) community to TensorFlow optimizations on Intel Xeon and Intel Xeon Phi processor-based platforms. These optimizations are the fruit of a close collaboration between Intel and Google engineers, announced last year by Intel's Diane Bryant and Google's Diane Green at the first Intel AI Day. We describe the various performance challenges that we encountered during this optimization exercise and the solutions adopted. We also report performance improvements on a sample of common neural network models.
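In practice, getting the benefit of the Intel (MKL) build of TensorFlow involves a few user-facing tuning knobs: OpenMP/KMP thread-affinity environment variables plus TensorFlow's own parallelism settings. The sketch below shows commonly recommended starting values from this era; the thread counts are examples only, and the right numbers depend on your core count.

```python
import os

# Thread-affinity settings commonly recommended for the Intel (MKL)
# build of TensorFlow; tune the counts for your machine.
os.environ["KMP_BLOCKTIME"] = "0"    # don't let threads spin after parallel regions
os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0"
os.environ["OMP_NUM_THREADS"] = "16"  # example: one thread per physical core

# TensorFlow-side knobs (TF 1.x API, current when this was written):
# config = tf.ConfigProto(intra_op_parallelism_threads=16,
#                         inter_op_parallelism_threads=2)
# sess = tf.Session(config=config)
```

Setting these before the first TensorFlow import matters, since the OpenMP runtime reads the environment variables at initialization.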


New Optimizations Improve Deep Learning Frameworks For CPUs

#artificialintelligence

Since most of us need more than a "machine learning only" server, I'll focus on the reality of how Intel Xeon SP Platinum processors remain the best choice for servers, including servers needing to do machine learning as part of their workload. Here is a partial rundown of key software that accelerates deep learning on Intel Xeon Platinum processors enough that the best-case performance advantage of GPUs is closer to 2X than to 100X. There is also a good article in Parallel Universe Magazine, Issue 28, starting on page 26, titled "Solving Real-World Machine Learning Problems with Intel Data Analytics Acceleration Library." High-core-count CPUs (the Intel Xeon Phi processors – in particular the upcoming "Knights Mill" version) and FPGAs (Intel Xeon processors coupled with Intel/Altera FPGAs) offer highly flexible options with excellent price/performance and power efficiency.


Optimizing OpenCV on the Raspberry Pi - PyImageSearch

#artificialintelligence

Otherwise, if you're compiling OpenCV for Python 3, check the "Python 3" output of CMake (Figure 2), which confirms that Python 3 and NumPy are correctly set from within our cv virtualenv on the Raspberry Pi. Now that we've updated the swap size, kick off the optimized OpenCV compile using all four cores; Figure 3 shows the optimized compile of OpenCV 3.3 for the Raspberry Pi 3 completing successfully. Given that we just optimized for floating-point operations, a great test is to run a pre-trained deep neural network on the Raspberry Pi, similar to what we did last week. Let's give SqueezeNet a try: as Figure 5 shows, SqueezeNet on the Raspberry Pi 3 also achieves performance gains using our optimized install of OpenCV 3.3.
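For reference, the floating-point optimizations discussed here come down to two CMake switches, ENABLE_NEON and ENABLE_VFPV3. The configuration sketch below assumes hypothetical source paths (adjust them to wherever you unpacked OpenCV); the flag names themselves are real OpenCV build options.

```shell
# Hypothetical paths -- adjust to your OpenCV source layout.
# The key lines are the NEON/VFPV3 switches, which enable the
# ARM floating-point optimizations discussed above.
cd ~/opencv-3.3.0/build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D ENABLE_NEON=ON \
      -D ENABLE_VFPV3=ON \
      -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
      -D BUILD_EXAMPLES=OFF ..
make -j4   # use all four Raspberry Pi 3 cores, as in the article
```

Without the two ENABLE_* flags, the resulting build falls back to generic ARM code paths, which is where the unoptimized timings come from.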


Alison machine learning predicts mobile ad campaign results

#artificialintelligence

YellowHead has launched Alison, a machine learning technology that predicts the results of mobile advertising campaigns, known as paid user acquisition. The company specializes in paid user acquisition campaigns, app store optimization, and search engine optimization. And now it has added Alison, which uses machine learning to predict a campaign's performance, in the hopes of uncovering more insights for brands and wasting less advertising money. Math professors on the Data Science Research Team at Tel Aviv University worked on Alison alongside the company's developers; it supplements human intelligence to optimize campaigns based on predicted results across multiple ad platforms such as Facebook and Google.