Results


How telecom providers are embracing cognitive app development

#artificialintelligence

Mobile network operators are increasing their investment in big data analytics and machine learning technologies as they transform into digital application developers and cognitive service providers. With a long history of handling huge datasets, and with the IT ecosystem now leading the way, mobile operators will devote more than $50 billion to big data analytics and machine learning technologies through 2021, according to the latest global market study by ABI Research. Machine learning can deliver benefits across telecom provider operations, with financially oriented applications - including fraud mitigation and revenue assurance - currently making the most compelling use cases. Predictive machine learning applications for network performance optimization and real-time management will introduce more automation and more efficient resource utilization.


Model evaluation, model selection, and algorithm selection in machine learning

#artificialintelligence

In contrast to k-nearest neighbors, a simple example of a parametric method would be logistic regression, a generalized linear model with a fixed number of model parameters: a weight coefficient for each feature variable in the dataset plus a bias (or intercept) unit. While the learning algorithm optimizes an objective function on the training set (with the exception of lazy learners), hyperparameter optimization is yet another task on top of it; here, we typically want to optimize a performance metric such as classification accuracy or the area under a Receiver Operating Characteristic curve. Thinking back to our discussion about learning curves and pessimistic biases in Part II, we noted that a machine learning algorithm often benefits from more labeled data; the smaller the dataset, the higher the pessimistic bias and the variance -- the sensitivity of our model to the way we partition the data. We start by splitting our dataset into three parts: a training set for model fitting, a validation set for model selection, and a test set for the final evaluation of the selected model.
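
For concreteness, here is a minimal scikit-learn sketch of the three-way split and hyperparameter selection described above; the synthetic dataset and the candidate values of C are placeholders for illustration, not taken from the article.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# First carve off the test set, reserved for the final evaluation...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# ...then split the remainder into training (model fitting) and validation (model selection).
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

best_model, best_acc = None, 0.0
for C in (0.01, 0.1, 1.0, 10.0):                 # hyperparameter candidates
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    acc = model.score(X_val, y_val)              # validation accuracy drives selection
    if acc > best_acc:
        best_model, best_acc = model, acc

# The test set is touched exactly once, by the selected model.
print("test accuracy of selected model:", best_model.score(X_test, y_test))
```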


Deep Learning for Chatbots, Part 2 – Implementing a Retrieval-Based Model in Tensorflow

#artificialintelligence

A positive label means that an utterance was an actual response to a context, and a negative label means that the utterance wasn't – it was picked randomly from somewhere in the corpus. Each record in the test/validation set consists of a context, a ground-truth utterance (the real response), and 9 incorrect utterances called distractors. Before starting with fancy neural network models, let's build some simple baseline models to help us understand what kind of performance we can expect. The deep learning model we will build in this post is called a Dual Encoder LSTM network.
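
Retrieval models of this kind are typically scored with recall@k: did the true response land among the model's top k of the 10 candidates? A minimal sketch of the metric and its random-guessing baseline follows; the numbers are illustrative, not results from the post.

```python
import numpy as np

def recall_at_k(rankings, k):
    """Fraction of examples where the ground-truth utterance (index 0)
    appears among the top-k ranked candidates."""
    return np.mean([0 in ranking[:k] for ranking in rankings])

# Random baseline: shuffle the 10 candidates (1 true response + 9 distractors).
rng = np.random.default_rng(0)
random_rankings = [rng.permutation(10) for _ in range(10000)]

for k in (1, 2, 5, 10):
    print(f"recall@{k}: {recall_at_k(random_rankings, k):.2f}")
# Roughly 0.10, 0.20, 0.50, 1.00 -- the floor any learned model must beat.
```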


Behavior I/O: Using Machine Learning to Empower Human Learning

#artificialintelligence

You can think of this virtuous cycle as "Behavior I/O": in the consumer world, many companies like Fitbit (NYSE: FIT) and LinkedIn (NYSE: LNKD) learn from people's behavior to help train other people to behave better. Could you use machine learning to observe the behavior of analysts, ultimately using those observations to improve how their colleagues use data? Even with such a rich corpus, there are still a lot of problems to solve in implementing a behavioral learning system that actually helps drive behavior. Before Alation, Satyen spent nearly a decade at Oracle, ultimately running the Financial Services Warehousing and Performance Management business, where he helped customers get insights out of their systems.


What Is Deep Learning? A Short History Everyone Should Read

#artificialintelligence

Scientists have used deep learning algorithms with multiple processing layers (hence "deep") to make better models from large quantities of unlabeled data (such as photos with no description, voice recordings or videos on YouTube). Google's voice recognition algorithms operate with a massive training set -- yet it's not nearly big enough to predict every possible word or phrase or question you could put to it. And Google's deep learning algorithm discovers cats. Algorithms perform superior face recognition tasks using deep networks that take into account 120 million parameters.


Google developed a processor to power its AI bots

#artificialintelligence

Machine learning, which helps computers do things like understand complex voice commands and improve image search capabilities, can be taxing on traditional hardware. The Tensor Processing Unit (TPU) is built expressly for running TensorFlow, Google's in-house machine learning system that it open-sourced last year. Google says it's used TPUs in its data centers for more than a year now, and that the performance improvements they offer are roughly equivalent to fast-forwarding technology about seven years into the future, if you go by Moore's Law. However, you can be sure that these processors will be at the heart of Google's forthcoming technologies and services as AI and machine learning become more important in the future.
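
For context, TensorFlow programs build a dataflow graph of operations dominated by dense linear algebra, which is precisely the work a TPU is designed to accelerate. A minimal sketch using the 1.x-era API that was current at the time; nothing here is TPU-specific.

```python
# Minimal TensorFlow (1.x-era) sketch: define a small graph, then run it in a session.
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
product = tf.matmul(a, b)  # dense matrix multiply -- the kind of op TPUs speed up

with tf.Session() as sess:
    print(sess.run(product))  # [[11.]]
```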


Distributed Deep Learning with Caffe Using a MapR Cluster

#artificialintelligence

Google, Baidu, and Microsoft have the resources to build dedicated deep learning clusters that give deep learning algorithms a level of processing power that both accelerates training and increases model accuracy. Yahoo, however, has taken a slightly different approach, moving away from a dedicated deep learning cluster and combining Caffe with Spark. Yahoo's ML Big Data team's CaffeOnSpark software has allowed them to run the entire process of building and deploying a deep learning model on a single cluster. The MapR Converged Data Platform is the ideal platform for this project, giving you all the power of distributed Caffe on a cluster with enterprise-grade robustness and enabling you to take advantage of the MapR high-performance file system.
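
The core idea behind this kind of distributed training is synchronous data parallelism: each worker computes gradients on its own shard of the data, and the shard gradients are combined into one shared update. A deliberately simplified NumPy sketch of that pattern follows; plain array shards stand in for Spark executors, and this is not the CaffeOnSpark API.

```python
import numpy as np

# Synthetic linear-regression data; true_w is what training should recover.
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = X @ true_w + rng.normal(scale=0.1, size=4000)

n_workers, lr, w = 4, 0.1, np.zeros(5)
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

for step in range(200):
    # Each "worker" computes a least-squares gradient on its own shard...
    grads = [2 * Xs.T @ (Xs @ w - ys) / len(ys) for Xs, ys in shards]
    # ...and the driver averages them before taking one shared SGD step.
    w -= lr * np.mean(grads, axis=0)

print(np.round(w, 2))  # close to true_w
```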


Google I/O 2016 Preview: Machine Learning, Virtual Reality And Android N - ARC

#artificialintelligence

Google, as it normally does, has organized I/O around three distinct categories: development, monetization and the future. The conference will have 190 sessions for developers to learn how to make fast and efficient Web apps, optimize Android development and learn about the tools and features that will progressively make the Internet a more intelligent place. The biggest news on the machine learning front at Google I/O will be around Project Tango, a machine vision framework that allows smartphones to see what is in front of them and lets software react to it. ARC will be at Google I/O 2016 covering everything that matters to people who build software for a living and people who make a living with software.


5 Ways Machine Learning Is Reshaping Our World

#artificialintelligence

More importantly, however, Google and its competitors are moving towards keying their search algorithms to understand natural speech as well, in anticipation of more and more voice search. But new machine learning algorithms are making more accurate, real-time translations possible. You might also be interested in my new big data case study collection, which you can download for free from here: Big Data Case Study Collection: 7 Amazing Companies That Really Get Big Data. My current book is 'Big Data: Using Smart Big Data, Analytics and Metrics To Make Better Decisions and Improve Performance', and my new books (available to pre-order now) are 'Key Business Analytics: The 60 Business Analysis Tools Every Manager Needs To Know' and 'Big Data in Practice'.


AlphaGo and the Declining Advantage of Big Companies

#artificialintelligence

Last week, machine learning took a big leap forward when Google's AlphaGo, a machine learning system, beat the world champion, Lee Sedol, at the game of Go. IBM Watson beat former champions Ken Jennings and Brad Rutter in the game show Jeopardy! Even though it doesn't rely on encoded rules, IBM Watson requires close monitoring by domain experts to provide data and evaluate its performance. AlphaGo was programmed to seek positive rewards in the form of scores and continually improve its system by playing millions of games against tweaked versions of itself.
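
The self-play loop can be caricatured in a few lines: play a game against a tweaked copy of the current policy, and keep whichever version wins. The toy sketch below illustrates only that idea; AlphaGo's actual system combines deep policy and value networks with Monte Carlo tree search.

```python
# Toy illustration of self-play improvement (not AlphaGo's actual algorithm).
import random

def play(a, b, target=0.7):
    """One 'game': whichever parameter is closer to a hidden optimum wins."""
    return a if abs(a - target) < abs(b - target) else b

policy = random.random()
for _ in range(10000):
    challenger = policy + random.gauss(0.0, 0.05)  # a tweaked version of itself
    policy = play(policy, challenger)              # the winner becomes the new policy

print(round(policy, 3))  # converges toward the optimum ~0.7
```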