Results


How telecom providers are embracing cognitive app development

#artificialintelligence

Mobile network operators are increasing their investment in big data analytics and machine learning technologies as they transform into digital application developers and cognitive service providers. With a long history of handling huge datasets, and with their path now led by the IT ecosystem, mobile operators will devote more than $50 billion to big data analytics and machine learning technologies through 2021, according to the latest global market study by ABI Research. Machine learning can deliver benefits across telecom provider operations; financially oriented applications, including fraud mitigation and revenue assurance, currently make the most compelling use cases. Predictive machine learning applications for network performance optimization and real-time management will introduce more automation and more efficient resource utilization.


Model evaluation, model selection, and algorithm selection in machine learning

#artificialintelligence

In contrast to k-nearest neighbors, a simple example of a parametric method would be logistic regression, a generalized linear model with a fixed number of model parameters: a weight coefficient for each feature variable in the dataset plus a bias (or intercept) unit. While the learning algorithm optimizes an objective function on the training set (with the exception of lazy learners), hyperparameter optimization is yet another task on top of it; here, we typically want to optimize a performance metric such as classification accuracy or the area under a Receiver Operating Characteristic curve. Thinking back to our discussion of learning curves and pessimistic biases in Part II, we noted that a machine learning algorithm often benefits from more labeled data; the smaller the dataset, the higher the pessimistic bias and the variance -- the sensitivity of our model towards the way we partition the data. We start by splitting our dataset into three parts: a training set for model fitting, a validation set for model selection, and a test set for the final evaluation of the selected model.
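
To make the three-way split concrete, here is a minimal sketch of the procedure in scikit-learn. The dataset, the split proportions, and the candidate values of the regularization strength C are illustrative assumptions rather than details from the article: each candidate is fit on the training set, compared on the validation set, and only the selected model is evaluated once on the test set.

    # Minimal sketch of the three-way holdout method for model selection
    # (assumed dataset and hyperparameter grid; only the chosen model
    # ever touches the test set).
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = load_breast_cancer(return_X_y=True)

    # Split off the test set first, then carve a validation set out of the rest.
    X_trainval, X_test, y_trainval, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y)
    X_train, X_val, y_train, y_val = train_test_split(
        X_trainval, y_trainval, test_size=0.25, random_state=0, stratify=y_trainval)

    # Model selection: fit on the training set, compare candidates on the validation set.
    best_model, best_acc = None, -1.0
    for C in (0.01, 0.1, 1.0, 10.0):  # candidate regularization strengths (assumed)
        model = LogisticRegression(C=C, max_iter=5000).fit(X_train, y_train)
        acc = accuracy_score(y_val, model.predict(X_val))
        if acc > best_acc:
            best_model, best_acc = model, acc

    # Final evaluation: the untouched test set gives the least biased estimate.
    print("validation accuracy of selected model:", best_acc)
    print("test accuracy of selected model:",
          accuracy_score(y_test, best_model.predict(X_test)))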


Machine Learning and Visualization in Julia – Tom Breloff

#artificialintelligence

In this post, I'll introduce you to the Julia programming language and a couple of long-term projects of mine: Plots for easily building complex data visualizations, and JuliaML for machine learning and AI. "User recipes" and "type recipes" can be defined on custom types to enable them to be "plotted" just like anything else, making it easy to create strongly typed custom data manipulators. We believe that Julia has the potential to change the way researchers approach science, enabling algorithm designers to truly "think outside the box" (because of the difficulty of implementing non-conventional approaches in other languages).


Deep Learning for Chatbots, Part 2 – Implementing a Retrieval-Based Model in Tensorflow

#artificialintelligence

A positive label means that an utterance was an actual response to a context, and a negative label means that it wasn't: the utterance was picked randomly from somewhere in the corpus. Each record in the test/validation set consists of a context, a ground-truth utterance (the real response), and 9 incorrect utterances called distractors. Before starting with fancy neural network models, let's build some simple baseline models to help us understand what kind of performance we can expect. The Deep Learning model we will build in this post is called a Dual Encoder LSTM network.
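
To make the baseline idea concrete: since each test record has one ground-truth response and 9 distractors, a natural metric is recall@k over the 10 candidates, and random guessing gives an easy floor to beat (recall@1 of roughly 0.1, recall@10 of 1.0). The sketch below is my own illustration of that evaluation; the function names and the synthetic setup are assumptions, not code from the post.

    # Recall@k evaluation for a retrieval-based chatbot, with a random baseline.
    # By convention here, candidate index 0 is always the ground-truth response.
    import random

    def recall_at_k(ranked_candidates, k):
        """Return 1.0 if the ground-truth candidate (index 0) is in the top k."""
        return 1.0 if 0 in ranked_candidates[:k] else 0.0

    def random_baseline(num_examples=1000, num_candidates=10, k=1, seed=0):
        rng = random.Random(seed)
        hits = 0.0
        for _ in range(num_examples):
            # The random predictor ignores the context entirely: shuffle the
            # 10 candidates (1 true response + 9 distractors) and take the top k.
            ranking = list(range(num_candidates))
            rng.shuffle(ranking)
            hits += recall_at_k(ranking, k)
        return hits / num_examples

    for k in (1, 2, 5, 10):
        print(f"random baseline recall@{k}: {random_baseline(k=k):.2f}")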


Spark Technology Center

#artificialintelligence

One of the main goals of the machine learning team here at the Spark Technology Center is to continue to evolve Apache Spark as the foundation for end-to-end, continuous, intelligent enterprise applications. While working on adding multi-class logistic regression to Spark ML (part of the ongoing push towards parity between ml and mllib), STC team member Seth Hendrickson realized that, because of the way Spark automatically serializes data when inter-node communication is required (e.g., during a reduce or aggregation operation), the aggregation step of the logistic regression training algorithm communicated 3x more data than necessary. What does it mean when we refer to Apache Spark as the "foundation for end-to-end, continuous, intelligent enterprise applications"?
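
For reference, multi-class logistic regression is exposed through Spark ML's DataFrame-based API via the multinomial family. Below is a minimal usage sketch in PySpark; the input path and parameter values are placeholders for illustration, and this is not the STC team's code.

    # Minimal sketch: multinomial logistic regression with the spark.ml API.
    # The data path and parameter values below are assumptions.
    from pyspark.sql import SparkSession
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("multinomial-lr-sketch").getOrCreate()

    # Expects a libsvm-format file, yielding "label" and "features" columns.
    training = spark.read.format("libsvm").load("data/sample_multiclass_data.txt")

    lr = LogisticRegression(
        family="multinomial",  # softmax (multi-class) logistic regression
        maxIter=100,
        regParam=0.01,
        elasticNetParam=0.0)

    model = lr.fit(training)
    print("coefficient matrix:\n", model.coefficientMatrix)
    print("intercept vector:", model.interceptVector)

    spark.stop()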


Disney robot with air-water actuators

IEEE Spectrum Robotics Channel

Whitney says the device has greater torque per weight (torque density) than highly geared servos or brushless motors coupled with harmonic drives. More significant: to build an autonomous robot, you'd need a set of motors and a control system capable of replacing the human puppeteer who's manually driving the fluid actuators. John P. Whitney: The original motivation was the same as for the MIT WAM arm and other impedance-based systems designed for human interaction: using a lightweight, high-performance transmission allows placing the drive motors in the body, instead of suffering the cascading inertia if they were placed at each joint. We are learning that many of the "analog" qualities of this system will pay dividends for autonomous "digital" operation; for example, the natural haptic properties of the system can be of equal service to an autonomous control system as they are to a human operator.


Intel Developer Zone highlights

#artificialintelligence

Take a sneak peek at the innovative technologies built on Intel architecture featured at this year's Intel Developer Forum. Build Caffe* optimized for Intel architecture, train deep network models using one or more compute nodes, and deploy networks. Find out how the Intel Xeon processor E5 v4 family helped improve the performance of the Chinese search engine Baidu's* deep neural networks. Read about Bob Duffy's experiences getting his Microsoft* Surface Book* set up to make the most of virtual reality (VR) applications. Intel Developer Zone experts, Intel Software Innovators, and Intel Black Belt Software Developers contribute hundreds of helpful articles and blog posts every month.


Rule No. 1 to scoring big pay in Japan: Don't be Japanese

The Japan Times

SoftBank Group Corp.'s former Chief Operating Officer Nikesh Arora, whose ¥8 billion package topped the list, hails from India. Higher wages in Japan were typically earned by sticking around, thanks to rigid corporate promotion systems based on tenure. In the U.S., executives have reaped the benefits of a shift from cash to equity-based compensation tied to their companies' performance -- a change that sent pay packages spiraling in recent decades as the stock market soared. Interlocking stock ownership between companies listed on the Tokyo Stock Exchange fell to 16 percent in 2015 from 50 percent in 1990, according to data from Nomura Holdings Inc. Last year's biggest pay packages for executives in Japan who were born in the country were Fanuc Corp. CEO Yoshiharu Inaba's ¥690 million and Sony Corp. CEO Kazuo Hirai's ¥513 million, data compiled by Bloomberg show.


AI computers could soon be used to diagnose cancer

#artificialintelligence

Scientists have used machine learning to create an artificial intelligence system capable of diagnosing breast cancer from lymph node biopsies with 92 per cent accuracy; when combined with a human pathologist, this accuracy increased to 99.5 per cent. The system was developed by computer scientists at Harvard Medical School, who gave a machine learning algorithm slides of lymph nodes from breast cancer patients. The team then identified the specific training examples for which the computer is prone to making mistakes and re-trained the computer using greater numbers of the more difficult training examples.
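
The re-training step described above (finding the training examples the model tends to get wrong and showing it more of them) is essentially hard-example oversampling. Here is a minimal, generic sketch of that idea in scikit-learn, using an assumed dataset and model; it is not the Harvard team's pipeline.

    # Hard-example re-training sketch: locate training examples the model is
    # prone to misclassifying (via cross-validated predictions), oversample
    # them, and train again. Dataset and model choice are assumptions.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split, cross_val_predict
    from sklearn.metrics import accuracy_score

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, random_state=0, stratify=y)

    # First pass: cross-validated predictions reveal the "difficult" examples.
    base = LogisticRegression(max_iter=5000)
    hard = cross_val_predict(base, X_train, y_train, cv=5) != y_train

    # Second pass: re-train with the difficult examples repeated several times.
    X_retrain = np.concatenate([X_train] + [X_train[hard]] * 5)
    y_retrain = np.concatenate([y_train] + [y_train[hard]] * 5)

    baseline = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    retrained = LogisticRegression(max_iter=5000).fit(X_retrain, y_retrain)

    print("test accuracy, baseline: ", accuracy_score(y_test, baseline.predict(X_test)))
    print("test accuracy, retrained:", accuracy_score(y_test, retrained.predict(X_test)))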


Australian Institute of Sport taps into data to help athletes train for gold

ZDNet

The Australian Institute of Sport (AIS) has predicted, and is hopeful, that the Australian team will pick up six or seven gold medals at the upcoming Rio Olympics. The information is collected from 2,000 athletes each week, at roughly 300 data points per athlete, for a total of 600,000 data points per week. According to Nick Brown, AIS deputy director of performance science and innovation, using data and analytics means athletes are able to train and compete consistently, without losing days to recovery or illness. AIS has partnered with Microsoft and BizData to use predictive analytics and machine learning to analyse the collected data, which is uploaded each night through an Azure SQL Database to the Athlete Management System, where all of the data is stored.