Results


Vertex.AI - Announcing PlaidML: Open Source Deep Learning for Every Platform

@machinelearnbot

We're pleased to announce the next step towards deep learning for every device and platform. Today Vertex.AI is releasing PlaidML, our open source portable deep learning engine. Our mission is to make deep learning accessible to every person on every device, and we're building PlaidML to help make that a reality. We're starting by supporting the most popular hardware and software already in the hands of developers, researchers, and students. The initial version of PlaidML runs on most existing PC hardware with OpenCL-capable GPUs from NVIDIA, AMD, or Intel.
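
In practice, PlaidML has been positioned as a drop-in backend for Keras, so existing model code can run on OpenCL GPUs without changes. A minimal sketch, assuming the plaidml-keras package is installed and plaidml-setup has been run once to choose a device:

```python
# Route Keras through PlaidML so models run on OpenCL-capable GPUs.
# Assumes: pip install plaidml-keras keras, then run plaidml-setup once.
import plaidml.keras
plaidml.keras.install_backend()

from keras.models import Sequential
from keras.layers import Dense

# A tiny classifier; the point is only that standard Keras code runs unchanged.
model = Sequential([Dense(10, activation="softmax", input_shape=(784,))])
model.compile(optimizer="sgd", loss="categorical_crossentropy")
model.summary()
```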


Getting started with TensorFlow

@machinelearnbot

In the context of machine learning, tensor refers to the multidimensional array used in the mathematical models that describe neural networks. In other words, a tensor is usually a higher-dimensional generalization of a matrix or a vector. Through a simple notation that uses a rank to show the number of dimensions, tensors allow the representation of complex n-dimensional vectors and hyper-shapes as n-dimensional arrays. Tensors have two properties: a datatype and a shape. TensorFlow is an open source deep learning framework that was released in late 2015 under the Apache 2.0 license.
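
To make the rank, shape, and datatype distinction concrete, here is a short illustrative snippet (not from the article) using TensorFlow's Python API:

```python
import tensorflow as tf

scalar = tf.constant(3.0)               # rank 0, shape ()
vector = tf.constant([1.0, 2.0, 3.0])   # rank 1, shape (3,)
matrix = tf.constant([[1, 2], [3, 4]])  # rank 2, shape (2, 2)

# Every tensor carries the two properties named above: a datatype and a shape.
print(matrix.dtype)   # <dtype: 'int32'>
print(matrix.shape)   # (2, 2)
```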


Komatsu Helps Improve Mining Performance with Industrial Internet of Things (IIoT) Platform Powered by Cloudera

#artificialintelligence

Cloudera, Inc. (NYSE: CLDR), the modern platform for machine learning and analytics, optimized for the cloud, announced that Komatsu, a leading global heavy equipment manufacturer, has implemented a cloud-based Industrial Internet of Things (IIoT) analytics platform powered by Cloudera Enterprise and Microsoft Azure. The platform enables Komatsu teams to help mining customers around the world continuously monitor the performance of some of the largest equipment used in surface and underground mining, increase asset utilization and productivity, and deliver essential resources including energy and industrial minerals for the global economy. Komatsu's JoySmart Solutions is an IIoT-based service that helps customers optimize machine performance using machine data and analytics. The JoySmart platform ingests, stores and processes a wide variety of data collected from mining equipment operating around the globe, often at very remote locations in harsh conditions. Types of equipment monitored include longwall mining systems, electric rope shovels, continuous miners and wheel loaders.
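
The article does not detail the pipeline's internals, but the ingest-store-process pattern it describes is commonly implemented on Cloudera's stack with Spark. A hypothetical sketch of that pattern, with every path, field name, and metric invented for illustration:

```python
# Hypothetical PySpark sketch of the ingest-and-aggregate pattern described
# above; all paths and field names here are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("mining-telemetry").getOrCreate()

# Ingest raw equipment telemetry landed in HDFS (hypothetical location).
telemetry = spark.read.json("hdfs:///iiot/telemetry/")

# Asset utilization per machine per day: share of readings where it was running.
utilization = (
    telemetry
    .groupBy("machine_id", F.to_date("timestamp").alias("day"))
    .agg(F.avg(F.col("is_running").cast("double")).alias("utilization"))
)

utilization.write.mode("overwrite").parquet("hdfs:///iiot/utilization_daily/")
```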


Four ways AI is already being applied to sales and marketing

#artificialintelligence

Behind the scenes, artificial intelligence (AI) technology is increasingly present in sales and marketing software. And many believe that it is not just going to have an impact but that it is going to dramatically reshape how sales and marketing function in the coming years. While the phone call may seem like an ancient phenomenon to many individuals, companies large and small still conduct a lot of their sales activity over the phone. Unfortunately, for obvious reasons, tracking, analyzing and improving the performance of salespeople on phone calls is a much more challenging task than, say, tracking, analyzing and improving the performance of email sales. But a number of companies, including Marketo, AdRoll and Qualtrics, are using "conversation intelligence" company Chorus.ai's…


Energy Data Insights: The Missing "Smart Step" to Better Building Performance

#artificialintelligence

Following our last article on "Artificial Intelligence in Energy Management Software", we received many responses from vendors and users alike. Over the next couple of weeks we will follow them all up and present here what we believe are relevant and smart new solutions. At EEIP we have a particular focus on solutions that not only deliver a higher return but also address (and solve) a specific barrier in the market. Dexma showed us a new solution based on the insight of the "2 steps to energy efficiency": essentially, you first need to equip someone within a company with the arguments to kick off an energy audit.


HPE Introduces New Set of Artificial Intelligence Platforms and Services

#artificialintelligence

HPE Rapid Software Installation for AI: HPE introduced an integrated hardware and software solution, purpose-built for high performance computing and deep learning applications. Built on the HPE Apollo 6500 system in collaboration with Bright Computing to enable rapid deep learning application development, the solution includes pre-configured deep learning software frameworks, libraries, automated software updates, and cluster management optimized for deep learning, and it supports NVIDIA Tesla V100 GPUs. HPE Deep Learning Cookbook: Built by the AI Research team at Hewlett Packard Labs, the Deep Learning Cookbook is a set of tools to guide customers in selecting the best hardware and software environment for different deep learning tasks. These tools help enterprises estimate the performance of various hardware platforms, characterize the most popular deep learning frameworks, and select the ideal hardware and software stacks to fit their individual needs. The Deep Learning Cookbook can also be used to validate the performance and tune the configuration of already purchased hardware and software stacks.


Microsoft And Cray Form Alliance To Bring Supercomputing To The Azure Cloud

@machinelearnbot

Microsoft and Cray just announced a strategic alliance that gives enterprise users of the Microsoft Azure cloud platform access to dedicated Cray supercomputing systems. As a result of the agreement, Microsoft and Cray will both be able to offer customers access to Cray supercomputing systems in Microsoft Azure datacenters to run AI, advanced analytics, and other HPC-class workloads. "Our partnership with Microsoft will introduce Cray supercomputers to a whole new class of customers that need the most advanced computing resources to expand their problem-solving capabilities, but want this new capability available to them in the cloud," said Peter Ungaro, president and CEO of Cray. "Dedicated Cray supercomputers in Azure not only give customers all of the breadth of features and services from the leader in enterprise cloud, but also the advantages of running a wide array of workloads on a true supercomputer, the ability to scale applications to unprecedented levels, and the performance and capabilities previously only found in the largest on-premise supercomputing centers." Availability of Cray supercomputer resources in Azure allows researchers, analysts, and other professionals to use machines attached to the cloud to train AI deep learning models, perform whole-genome sequencing, conduct crash simulations, run computational fluid dynamics simulations, or tackle any other type of HPC workload that would typically require massive hardware and IT management investments.


The Role of Hadoop in Digital Transformations and Managing the IoT

@machinelearnbot

The digital transformation underway at Under Armour is erasing any stale stereotypes that athletes and techies don't mix. While hardcore runners sporting the company's latest microthread singlet can't see Hadoop, Apache Hive, Apache Spark, or Presto, these technologies are teaming up to track some serious mileage. Under Armour is working on a "connected fitness" vision that connects body, apparel, activity level, and health. By combining the data from all these sources into an app, consumers will gain a better understanding of their health and fitness, and Under Armour will be able to identify and respond to customer needs more quickly with personalized services and products. The company stores and analyzes data about food and nutrition, recipes, workout activities, music, sleep patterns, purchase histories, and more.


HPE Introduces New Set of AI Platforms and Services

#artificialintelligence

HPE announced new purpose-built platforms and services capabilities to help companies simplify the adoption of Artificial Intelligence, with an initial focus on a key subset of AI known as deep learning. Inspired by the human brain, deep learning is typically implemented for challenging tasks such as image and facial recognition, image classification and voice recognition. To take advantage of deep learning, enterprises need a high performance compute infrastructure to build and train learning models that can manage large volumes of data to recognize patterns in audio, images, videos, text and sensor data. Many organizations lack several integral requirements to implement deep learning, including expertise and resources; sophisticated and tailored hardware and software infrastructure; and the integration capabilities required to assimilate different pieces of hardware and software to scale AI systems.


How Machine Intelligence Will Help Refine Your Automated Email Campaign

#artificialintelligence

Films dating back as far as 1927 (and perhaps beyond) predicted machine intelligence would be the future. But for as many who saw the vision, there was an equal number who believed the forecast was unfounded: how could a machine possibly replicate and surpass the intelligence of the human who created it? Fast-forward to 2017, and we find two things: machine intelligence is here and rising, and it's quite capable of learning at a much faster rate than we can. Look no further than search engines, which compile the digital footprints we leave behind, analyze them to better interpret our unique and changing behaviors, then use that information to tailor individual online experiences. With every search we do, the engines grow smarter about their billions of users and how to better accommodate them.