How telecom providers are embracing cognitive app development


Mobile network operators are increasing their investment in big data analytics and machine learning technologies as they transform into digital application developers and cognitive service providers. With a long history of handling huge datasets, and with their path now led by the IT ecosystem, mobile operators will devote more than $50 billion to big data analytics and machine learning technologies through 2021, according to the latest global market study by ABI Research. Machine learning can deliver benefits across telecom provider operations, with financially oriented applications - including fraud mitigation and revenue assurance - currently making the most compelling use cases. Predictive machine learning applications for network performance optimization and real-time management will introduce more automation and more efficient resource utilization.

Model evaluation, model selection, and algorithm selection in machine learning


In contrast to k-nearest neighbors, a simple example of a parametric method would be logistic regression, a generalized linear model with a fixed number of model parameters: a weight coefficient for each feature variable in the dataset plus a bias (or intercept) unit. While the learning algorithm optimizes an objective function on the training set (with the exception of lazy learners), hyperparameter optimization is yet another task on top of it; here, we typically want to optimize a performance metric such as classification accuracy or the area under a receiver operating characteristic (ROC) curve. Thinking back to our discussion of learning curves and pessimistic biases in Part II, we noted that a machine learning algorithm often benefits from more labeled data; the smaller the dataset, the higher the pessimistic bias and the variance -- the sensitivity of our model to the way we partition the data. We start by splitting our dataset into three parts: a training set for model fitting, a validation set for model selection, and a test set for the final evaluation of the selected model.
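The three-way split described above can be sketched in plain Python. The 60/20/20 fractions, seed, and helper name below are illustrative assumptions, not values from the article:

```python
import random

def train_val_test_split(data, val_frac=0.2, test_frac=0.2, seed=42):
    """Shuffle the dataset, then partition it into training,
    validation, and test sets (a minimal sketch)."""
    data = list(data)
    random.Random(seed).shuffle(data)  # fixed seed for reproducibility
    n = len(data)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = data[:n_test]               # final evaluation only
    val = data[n_test:n_test + n_val]  # model/hyperparameter selection
    train = data[n_test + n_val:]      # model fitting
    return train, val, test

train, val, test = train_val_test_split(range(100))
```

Crucially, the test set is touched only once, after model selection is complete, so the final performance estimate is not optimistically biased by the hyperparameter search.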

Machine Learning and Visualization in Julia – Tom Breloff


In this post, I'll introduce you to the Julia programming language and a couple of long-term projects of mine: Plots for easily building complex data visualizations, and JuliaML for machine learning and AI. Easily create strongly typed custom data manipulators. "User recipes" and "type recipes" can be defined on custom types to enable them to be "plotted" just like anything else. We believe that Julia has the potential to change the way researchers approach science, enabling algorithm designers to truly "think outside the box" (because of the difficulty of implementing non-conventional approaches in other languages).

Deep Learning for Chatbots, Part 2 – Implementing a Retrieval-Based Model in Tensorflow


A positive label means that an utterance was an actual response to a context, and a negative label means that it wasn't - it was picked randomly from somewhere in the corpus. Each record in the test/validation set consists of a context, a ground-truth utterance (the real response), and 9 incorrect utterances called distractors. Before starting with fancy neural network models, let's build some simple baseline models to help us understand what kind of performance we can expect. The deep learning model we will build in this post is called a Dual Encoder LSTM network.
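For this 1-in-10 setup (one ground-truth utterance plus 9 distractors), the simplest baseline is a random ranker, whose expected recall@k is just k/10. A minimal sketch of that baseline (the function names here are mine, not from the post):

```python
import random

def recall_at_k(ranked_candidates, ground_truth_index=0, k=1):
    """Recall@k: 1.0 if the ground-truth candidate appears among
    the top-k ranked candidates, else 0.0."""
    return 1.0 if ground_truth_index in ranked_candidates[:k] else 0.0

def random_baseline(num_examples=10000, num_candidates=10, k=1, seed=0):
    """Rank the candidates in random order and average recall@k
    over many simulated test examples."""
    rng = random.Random(seed)
    hits = 0.0
    for _ in range(num_examples):
        order = list(range(num_candidates))
        rng.shuffle(order)  # a random ranking of the 10 candidates
        hits += recall_at_k(order, ground_truth_index=0, k=k)
    return hits / num_examples
```

Running this should give roughly 0.1 for recall@1 and 0.5 for recall@5, which is the floor any learned model must beat.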

Spark Technology Center


One of the main goals of the machine learning team here at the Spark Technology Center is to continue to evolve Apache Spark as the foundation for end-to-end, continuous, intelligent enterprise applications. While working on adding multi-class logistic regression to Spark ML (part of the ongoing push towards parity between ml and mllib), STC team member Seth Hendrickson realized that, due to the way that Spark automatically serializes data when inter-node communication is required (e.g. during a reduce or aggregation operation), the aggregation step of the logistic regression training algorithm resulted in 3x more data being communicated than necessary. What does it mean when we refer to Apache Spark as the "foundation for end-to-end, continuous, intelligent enterprise applications"?


IEEE Spectrum Robotics Channel

Whitney says the device has greater torque per weight (torque density) than highly geared servos or brushless motors coupled with harmonic drives. And more significantly: to build an autonomous robot, you'd need a set of motors and a control system capable of replacing the human puppeteer who's manually driving the fluid actuators. John P. Whitney: The original motivation was the same as for the MIT WAM arm and other impedance-based systems designed for human interaction: using a lightweight, high-performance transmission allows placing the drive motors in the body, instead of suffering the cascading inertia of placing them at each joint. We are learning that many of the "analog" qualities of this system will pay dividends for autonomous "digital" operation; for example, the natural haptic properties of the system can be of equal service to an autonomous control system as they are to a human operator.



Take a sneak peek at the awesome innovative technologies built on Intel architecture featured at this year's Intel Developer Forum. Build Caffe* optimized for Intel architecture, train deep network models using one or more compute nodes, and deploy networks. Find out how the Intel Xeon processor E5 v4 family helped improve the performance of the Chinese search engine Baidu's* deep neural networks. Read about Bob Duffy's experiences getting his Microsoft* Surface Book* set up to best maximize virtual reality (VR) applications. Intel Developer Zone experts, Intel Software Innovators, and Intel Black Belt Software Developers contribute hundreds of helpful articles and blog posts every month.


The Japan Times

SoftBank Group Corp.'s former Chief Operating Officer Nikesh Arora, whose ¥8 billion package topped the list, hails from India. Higher wages in Japan were typically earned by sticking around, thanks to rigid corporate promotion systems based on tenure. In the U.S., executives have reaped the benefits of a shift from cash to equity-based compensation tied to their companies' performance -- a change that sent pay packages spiraling in recent decades as the stock market soared. Interlocking stock ownership between companies listed on the Tokyo Stock Exchange fell to 16 percent in 2015 from 50 percent in 1990, according to data from Nomura Holdings Inc. Last year's biggest pay packages for Japan executives born in the country were Fanuc Corp. CEO Yoshiharu Inaba's ¥690 million and Sony Corp. CEO Kazuo Hirai's ¥513 million, data compiled by Bloomberg show.



A research team from Beth Israel Deaconess Medical Center (BIDMC) and Harvard Medical School (HMS) recently developed artificial intelligence (AI) methods aimed at training computers to interpret pathology images, with the long-term goal of building AI-powered systems to make pathologic diagnoses more accurate. "Our AI method is based on deep learning, a machine-learning algorithm used for a range of applications including speech recognition and image recognition," explained pathologist Andrew Beck, MD, PhD, Director of Bioinformatics at the Cancer Research Institute at BIDMC and an Associate Professor at HMS. In an objective evaluation in which researchers were given slides of lymph node cells and asked to determine whether or not they contained cancer, the team's automated diagnostic method proved accurate approximately 92 percent of the time, explained Khosla, adding, "This nearly matched the success rate of a human pathologist, whose results were 96 percent accurate." "But the truly exciting thing was when we combined the pathologist's analysis with our automated computational diagnostic method, the result improved to 99.5 percent accuracy," said Beck.
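The article doesn't describe how the human and machine assessments were combined. One generic approach is to average the two independent probability estimates and threshold the result; the sketch below illustrates that idea only, and its function name, threshold, and numbers are illustrative assumptions, not the BIDMC/HMS team's actual procedure:

```python
def combine_predictions(p_model, p_human, threshold=0.5):
    """Average two independent probability estimates that a slide
    contains cancer, then threshold the combined score.
    A generic ensemble sketch, not the team's published method."""
    p_combined = (p_model + p_human) / 2.0
    return p_combined >= threshold

# A slide the model is unsure about (0.45) but the pathologist
# flags with high confidence (0.90) is classified as positive:
print(combine_predictions(0.45, 0.90))
```

The intuition behind such ensembles is that the two predictors make largely uncorrelated errors, so averaging cancels mistakes that either one makes alone, which is consistent with the combined 99.5 percent accuracy exceeding both individual scores.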

My Computer Is an Honor Student -- but How Intelligent Is It? Standardized Tests as a Measure of AI

AI Magazine

Given the well-known limitations of the Turing Test, there is a need for objective tests to both focus attention on, and measure progress towards, the goals of AI. In this paper we argue that machine performance on standardized tests should be a key component of any new measure of AI, because attaining a high level of performance requires solving significant AI problems involving language understanding and world modeling - critical skills for any machine that lays claim to intelligence. In addition, standardized tests have all the basic requirements of a practical test: they are accessible, easily comprehensible, clearly measurable, and offer a graduated progression from simple tasks to those requiring deep understanding of the world.