Machine Learning: Overviews


Takeaways from the Google Speech Summit 2018

@machinelearnbot

Generative Text-to-Speech Synthesis, Heiga Zen, Research Scientist Abstract: Recent progress in deep generative models and their application to text-to-speech (TTS) synthesis has led to a breakthrough in the naturalness of artificially generated speech.


The Difference Between Artificial Intelligence and Machine Learning

#artificialintelligence

Confused about whether artificial intelligence and machine learning are the same thing? Anna Brown asks the experts to explain. LEARN MORE ABOUT ARTIFICIAL INTELLIGENCE https://www.sas.com/en_us/insights/an... SAS DOES AI - CHECK OUT SAS AI SOLUTIONS https://www.sas.com/en_us/solutions/a... LEARN MORE ABOUT MACHINE LEARNING https://www.sas.com/en_us/insights/an... WEBINAR: INTRODUCTION TO MACHINE LEARNING In this webinar, Wayne Thompson of SAS delves into those issues and provides an overview of machine learning, as well as key business applications of this technique, including fraud detection, model factories and recommendation systems. Through innovative analytics, business intelligence and data management software and services, SAS helps customers at more than 75,000 sites make better decisions faster. Since 1976, SAS has been giving customers around the world THE POWER TO KNOW.



Artificial Intelligence Foundations: Machine Learning

@machinelearnbot

A high-level AI course on how Machine Learning provides the foundation for AI, and how you can leverage cognitive services in your apps. Artificial Intelligence will define the next generation of software solutions. This computer science course provides an overview of AI and explains how it can be used to build smart apps that help organizations be more efficient and enrich people's lives. It uses a mix of engaging lectures and hands-on activities to help you take your first steps in the exciting field of AI. Discover how machine learning can be used to build predictive models for AI.


A Guide to Machine Learning PhDs

#artificialintelligence

A machine learning PhD not only opens up some of the highest-paying jobs around; it also sets you up to have an outsized positive impact on the world. This comprehensive guide on machine learning PhDs from 80,000 Hours (YC S15) will help you get started. The guide is based on discussions with six machine learning researchers, including two at DeepMind, one at OpenAI, and one running a robotics start-up. Check out the highlights below. Machine learning involves giving software rules to learn from experience rather than directly programming the steps it takes.
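The distinction in that last sentence can be made concrete with a minimal sketch (not from the guide): instead of hard-coding the rule "fahrenheit = celsius * 9/5 + 32", we let a model recover the rule from example data.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, written from scratch."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# "Experience": paired observations, with the conversion rule left unstated.
celsius = [0, 10, 20, 30, 40]
fahrenheit = [32, 50, 68, 86, 104]
a, b = fit_line(celsius, fahrenheit)
print(round(a, 2), round(b, 2))  # learned: 1.8 32.0
```

The program was never told the conversion formula; it recovered the slope 9/5 and intercept 32 from the examples alone.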


Valid Inference for $L_2$-Boosting

arXiv.org Machine Learning

We review several recently proposed post-selection inference frameworks and assess their transferability to the component-wise functional gradient descent algorithm (CFGD), also known as $L_2$-Boosting, under a normality assumption for model errors. The CFGD is one of the most versatile toolboxes for analyzing data, as it scales well to high-dimensional data sets, allows for a very flexible definition of additive regression models, and incorporates inbuilt variable selection. Due to its iterative nature, which can repeatedly select the same component to update, an inference framework for component-wise boosting algorithms requires adaptations of existing approaches; we propose tests and confidence intervals for linear, grouped and penalized additive model components estimated using the $L_2$-Boosting selection process. We apply our framework to the prostate cancer data set and investigate the properties of our concepts in simulation studies. The most general and promising selective inference framework for $L_2$-Boosting, as well as for more general gradient-descent boosting algorithms, is a sampling approach that constitutes an adaptation of the method recently proposed by Yang et al. (2016).
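The iterative, repeat-selection behavior the abstract refers to can be seen in a minimal sketch of component-wise $L_2$-Boosting (a simplified illustration, not the paper's inference procedure): at each step, fit every coordinate to the current residuals by univariate least squares and apply a shrunken update to only the best-fitting component.

```python
import numpy as np

def l2_boost(X, y, steps=100, nu=0.1):
    """Component-wise functional gradient descent (L2-Boosting) sketch.

    X: (n, p) design matrix; y: (n,) response; nu: shrinkage step length.
    Returns intercept, coefficient vector, and the selection path.
    """
    n, p = X.shape
    intercept = y.mean()
    beta = np.zeros(p)
    resid = y - intercept
    selected = []
    for _ in range(steps):
        # univariate OLS coefficient of each component against the residuals
        coefs = X.T @ resid / (X ** 2).sum(axis=0)
        # residual sum of squares if each component were updated alone
        sse = ((resid[:, None] - X * coefs) ** 2).sum(axis=0)
        j = int(np.argmin(sse))           # best-reducing component
        beta[j] += nu * coefs[j]          # shrunken update; same j may repeat
        resid = y - intercept - X @ beta  # refresh residuals
        selected.append(j)
    return intercept, beta, selected
```

Because the same component can win the selection step many times in a row, the final model is the sum of many small updates, which is exactly what complicates naive post-selection inference.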


Reinforcement Learning and Control as Probabilistic Inference: Tutorial and Review

arXiv.org Machine Learning

The framework of reinforcement learning or optimal control provides a mathematical formalization of intelligent decision making that is powerful and broadly applicable. While the general form of the reinforcement learning problem enables effective reasoning about uncertainty, the connection between reinforcement learning and inference in probabilistic models is not immediately obvious. However, such a connection has considerable value when it comes to algorithm design: formalizing a problem as probabilistic inference in principle allows us to bring to bear a wide array of approximate inference tools, extend the model in flexible and powerful ways, and reason about compositionality and partial observability. In this article, we will discuss how a generalization of the reinforcement learning or optimal control problem, which is sometimes termed maximum entropy reinforcement learning, is equivalent to exact probabilistic inference in the case of deterministic dynamics, and variational inference in the case of stochastic dynamics. We will present a detailed derivation of this framework, overview prior work that has drawn on this and related ideas to propose new reinforcement learning and control algorithms, and describe perspectives on future research.
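A toy numerical sketch of the deterministic-dynamics case the abstract describes (an illustration under simplified assumptions, not the paper's derivation): in maximum entropy RL, the hard max over actions in the Bellman backup is replaced by a log-sum-exp, which is precisely a message-passing (inference) computation, and the optimal policy becomes a softmax over Q-values.

```python
import numpy as np

def soft_value_iteration(R, next_state, gamma=0.9, iters=200):
    """Soft (maximum entropy) value iteration on a deterministic MDP.

    R[s, a]: reward; next_state[s, a]: deterministic successor state.
    """
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = R + gamma * V[next_state]      # Q(s,a) = r(s,a) + gamma * V(s')
        V = np.log(np.exp(Q).sum(axis=1))  # soft max: log-sum-exp over actions
    policy = np.exp(Q - V[:, None])        # softmax policy, rows sum to 1
    return V, policy
```

Scaling the rewards up (equivalently, lowering the temperature) makes the softmax policy approach the usual greedy optimal policy, recovering standard value iteration as a limit.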


Nonparametric Learning and Optimization with Covariates

arXiv.org Machine Learning

Modern decision analytics frequently involves the optimization of an objective over a finite horizon where the functional form of the objective is unknown. The decision analyst observes covariates and tries to learn and optimize the objective by experimenting with the decision variables. We present a nonparametric learning and optimization policy with covariates. The policy is based on adaptively splitting the covariate space into smaller bins (hyper-rectangles) and learning the optimal decision in each bin. We show that the algorithm achieves a regret of order $O(\log(T)^2 T^{(2+d)/(4+d)})$, where $T$ is the length of the horizon and $d$ is the dimension of the covariates, and show that no policy can achieve a regret less than $O(T^{(2+d)/(4+d)})$ and thus demonstrate the near optimality of the proposed policy. The role of $d$ in the regret is not seen in parametric learning problems: It highlights the complex interaction between the nonparametric formulation and the covariate dimension. It also suggests the decision analyst should incorporate contextual information selectively.
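The bin-and-learn idea can be sketched as follows (a simplified illustration with a fixed grid over a one-dimensional covariate; the paper's policy splits the covariate space adaptively): discretize the covariate space into bins and run an independent epsilon-greedy search over candidate decisions within each bin.

```python
import numpy as np

class BinnedPolicy:
    """Fixed-grid sketch of learning a per-bin optimal decision."""

    def __init__(self, n_bins, decisions, eps=0.1, seed=0):
        self.edges = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]  # interior edges
        self.decisions = decisions
        self.eps = eps
        self.rng = np.random.default_rng(seed)
        self.counts = np.zeros((n_bins, len(decisions)))
        self.sums = np.zeros((n_bins, len(decisions)))

    def choose(self, x):
        b = int(np.digitize(x, self.edges))  # which bin the covariate falls in
        if self.rng.random() < self.eps or self.counts[b].sum() == 0:
            a = int(self.rng.integers(len(self.decisions)))  # explore
        else:
            means = self.sums[b] / np.maximum(self.counts[b], 1)
            a = int(np.argmax(means))                        # exploit
        return b, a

    def update(self, b, a, reward):
        self.counts[b, a] += 1
        self.sums[b, a] += reward
```

With a hypothetical unknown objective such as $f(x, d) = -(d - x)^2$, each bin converges to the candidate decision closest to that bin's typical covariate, while the $T^{(2+d)/(4+d)}$ regret rate reflects the cost of estimating the objective nonparametrically within every bin.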


SURREAL: SUbgraph Robust REpresentAtion Learning

arXiv.org Machine Learning

The success of graph embeddings or node representation learning in a variety of downstream tasks, such as node classification, link prediction, and recommendation systems, has led to their popularity in recent years. Representation learning algorithms aim to preserve local and global network structure by identifying node neighborhood notions. However, many existing algorithms generate embeddings that fail to properly preserve the network structure, or lead to unstable representations due to random processes (e.g., random walks to generate context) and thus cannot generalize to multi-graph problems. In this paper, we propose SURREAL, a novel, stable graph embedding algorithmic framework that learns robust representations from connection subgraphs. SURREAL learns graph representations using connection subgraphs by employing the analogy of graphs with electrical circuits. It preserves both local and global connectivity patterns, and addresses the issue of high-degree nodes. Further, it exploits the strength of weak ties and meta-data that have been neglected by baselines. The experiments show that SURREAL outperforms state-of-the-art algorithms by up to 36.85% on the multi-label classification task. Further, in contrast to baselines, SURREAL, being deterministic, is completely stable.
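To illustrate the electrical-circuit analogy the abstract invokes (this is a generic illustration, not the authors' SURREAL procedure): treating each edge as a unit resistor, the effective resistance between two nodes is a deterministic, stable measure of connectivity, computable from the pseudoinverse of the graph Laplacian.

```python
import numpy as np

def effective_resistance(adj):
    """All-pairs effective resistance of a graph with unit-resistor edges.

    adj: symmetric (n, n) adjacency matrix.
    """
    L = np.diag(adj.sum(axis=1)) - adj       # graph Laplacian
    Lp = np.linalg.pinv(L)                   # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    # R_uv = Lp_uu + Lp_vv - 2 * Lp_uv
    return d[:, None] + d[None, :] - 2 * Lp

# Path graph 0 - 1 - 2: resistance between the ends is the series sum 1 + 1.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
R = effective_resistance(A)
print(round(float(R[0, 2]), 6))  # 2.0
```

Unlike random-walk-based context generation, this quantity involves no sampling, so repeated runs on the same graph give identical values, which is the kind of stability the abstract contrasts with baselines.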


Machine Learning Solves Data Center Problems, But Also Creates New Ones - insideBIGDATA

#artificialintelligence

In this special guest feature, Geoff Tudor, VP and GM of Cloud Data Services at Panzura, argues that AI poses both opportunities and risks in the automation of the datacenter. This article provides an overview of the impact of AI in the datacenter, and how companies can prepare their storage infrastructure for these technologies. Geoff has over 22 years of experience in storage, broadband, and networking. As Chief Cloud Strategist at Hewlett Packard Enterprise, Geoff led CxO engagements for Fortune 100 private cloud opportunities, resulting in 10X growth to over $1B in revenues while positioning HPE as the #1 private cloud infrastructure supplier globally. Geoff holds an MBA from The University of Texas at Austin, a BA from Tulane University, and is a patent-holder in satellite communications.