Machine Learning: Overviews


Recent Advances for a Better Understanding of Deep Learning Part I

#artificialintelligence

This call for a better understanding of deep learning was the core of Ali Rahimi's Test-of-Time Award presentation at NIPS in December 2017. In comparing deep learning with alchemy, Rahimi's goal was not to dismiss the entire field, but "to open a conversation". That goal has certainly been achieved: people are still debating whether our current practice of deep learning should be considered alchemy, engineering, or science. Seven months later, the machine learning community gathered again, this time in Stockholm for the International Conference on Machine Learning (ICML). With more than 5,000 participants and 629 published papers, it was one of the most important events in fundamental machine learning research.


DeepDribble: Simulating Basketball with AI

#artificialintelligence

When training physically simulated characters in basketball skills, competing demands must be held in balance. While AAA game titles like EA's NBA LIVE and NBA 2K have made drastic improvements to their graphics and character animation, basketball video games still rely heavily on canned animations. The industry is always looking for new methods of creating gripping on-court action in a more personalized, interactive way. In a recent paper by DeepMotion Chief Scientist Libin Liu and Carnegie Mellon University Professor Jessica Hodgins, virtual agents are trained to perform a range of complex ball-handling skills in real time. This blog gives an overview of their work and results, which will be presented at SIGGRAPH 2018.


AI, Machine Learning and Data Science Roundup: August 2018

#artificialintelligence

This is an eclectic collection of interesting blog posts, software announcements and data applications I've noted over the past month or so. The ONNX Model Zoo is now available, providing a library of pre-trained, state-of-the-art deep learning models in the ONNX format. In the 2018 IEEE Spectrum Top Programming Language rankings, Python takes the top spot and R ranks #7. Julia 1.0 has been released, marking the stabilization of the scientific computing language and promising forward compatibility. Google announces Cloud AutoML, a beta service to train vision, text categorization, or language translation models from provided data.


Small Sample Learning in Big Data Era

arXiv.org Machine Learning

As a promising area in artificial intelligence, a new learning paradigm called Small Sample Learning (SSL) has been attracting prominent research attention in recent years. In this paper, we aim to present a survey that comprehensively introduces the current techniques proposed on this topic. Specifically, current SSL techniques can be divided into two main categories. The first category of SSL approaches can be called "concept learning", which emphasizes learning new concepts from only a few related observations. The purpose is mainly to simulate human learning behaviors such as recognition, generation, imagination, synthesis and analysis. The second category is called "experience learning", which usually co-exists with the large-sample learning manner of conventional machine learning. This category mainly focuses on learning with insufficient samples and is also called small data learning in some of the literature. Extensive surveys of both categories of SSL techniques are presented, and some neuroscience evidence is provided to clarify the rationality of the entire SSL regime and its relationship with the human learning process. Some discussions of the main challenges and possible future research directions along this line are also presented.


Parallel Statistical and Machine Learning Methods for Estimation of Physical Load

arXiv.org Machine Learning

Several statistical and machine learning methods are proposed to estimate the type and intensity of physical load and accumulated fatigue. They are based on the statistical analysis of accumulated and moving-window data subsets with construction of a kurtosis-skewness diagram. This approach was applied to data gathered by a wearable heart monitor for various types and levels of physical activity, and for people in various physical conditions. The different levels of physical activity, load, and fitness can be distinguished on the kurtosis-skewness diagram, and their evolution can be monitored. Several metrics for estimating the instant effect and accumulated effect (physical fatigue) of physical loads are proposed. The data and results presented allow these methods to be extended to the modeling and characterization of complex human activity patterns, for example to estimate actual and accumulated physical load and fatigue, model potentially dangerous developments, and give cautions and advice in real time.
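
As a rough illustration of the moving-window statistics behind such a kurtosis-skewness diagram, the sketch below computes one (skewness, kurtosis) point per sliding window of heart-rate samples; the window length, step, and synthetic signal are assumptions for illustration, not the authors' setup.

    # Sketch: kurtosis-skewness points from moving windows of heart-rate data.
    # Window size, step, and the synthetic signal are illustrative assumptions.
    import numpy as np
    from scipy.stats import skew, kurtosis

    rng = np.random.default_rng(0)
    heart_rate = 70 + 15 * np.abs(np.sin(np.linspace(0, 6, 600))) + rng.normal(0, 3, 600)

    window = 60  # samples per moving window (assumed)
    points = []
    for start in range(0, len(heart_rate) - window, window // 2):
        segment = heart_rate[start:start + window]
        points.append((skew(segment), kurtosis(segment)))  # one diagram point

    for s, k in points[:5]:
        print(f"skewness={s:+.2f}  kurtosis={k:+.2f}")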


An Overview and a Benchmark of Active Learning for One-Class Classification

arXiv.org Machine Learning

Active learning refers to methods that increase classification quality by means of user feedback. An important subcategory is active learning for one-class classifiers, i.e., for imbalanced class distributions. While various methods in this category exist, selecting one for a given application scenario is difficult. This is because existing methods rely on different assumptions, have different objectives, and are often tailored to a specific use case. All this calls for a comprehensive comparison, which is the topic of this article. The article starts with a categorization of the various methods. We then propose ways to evaluate active learning results. Next, we run extensive experiments comparing existing methods across a broad variety of scenarios. One result is that the practicality and performance of an active learning method strongly depend on its category and on the assumptions behind it. Another observation is that existing approaches outperform random baselines in only a small subset of our experiments. Finally, we show that a well-laid-out categorization and a rigorous specification of assumptions can facilitate the selection of a good method for one-class classification.
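
To make the setting concrete, here is a minimal sketch of pool-based active learning with a one-class classifier: the unlabeled point closest to the decision boundary is queried, and the model is refit on user-confirmed inliers. The uncertainty-sampling strategy, classifier, and toy data are generic assumptions, not a method from the benchmark.

    # Sketch: active learning for one-class classification via uncertainty
    # sampling. Strategy, classifier, and data are illustrative assumptions.
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(1)
    inliers = rng.normal(0, 1, (200, 2))
    outliers = rng.uniform(-6, 6, (20, 2))
    pool = np.vstack([inliers, outliers])
    is_inlier = np.array([True] * 200 + [False] * 20)  # oracle = user feedback

    labeled = list(rng.choice(len(pool), 10, replace=False))
    for _ in range(15):  # 15 feedback rounds
        train = [i for i in labeled if is_inlier[i]]  # one-class: fit inliers only
        clf = OneClassSVM(gamma=0.5).fit(pool[train])
        scores = np.abs(clf.decision_function(pool))  # distance to boundary
        scores[labeled] = np.inf                      # never re-query a point
        labeled.append(int(np.argmin(scores)))        # query most uncertain point

    pred = clf.predict(pool) == 1
    print("agreement with oracle:", (pred == is_inlier).mean())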


Blog

#artificialintelligence

The Dragonfly Machine Learning Engine (MLE) provides the machine learning and data science capabilities included within OPNids. Data science and machine learning promise to counteract the dynamic threat environment created by growing network traffic and increasing threat-actor sophistication. This post provides an overview of the MLE itself, the reasoning for why data science and cybersecurity go together, and some insight into using the MLE as part of the OPNids system. Available as part of OPNids, the Dragonfly MLE offers a powerful framework for deploying anomaly detection algorithms, threat intelligence lookups, and machine learning predictions within a network security infrastructure.
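
The post itself contains no code, but a rolling z-score over per-interval event counts is representative of the lightweight anomaly checks such an engine can host. The sketch below is a generic Python illustration only; it is not the Dragonfly MLE's actual analyzer API, and the threshold and data are assumptions.

    # Generic sketch of a streaming anomaly check over per-interval event
    # counts. Not Dragonfly MLE code; threshold and data are assumptions.
    from collections import deque
    import math

    class RollingZScore:
        def __init__(self, window=100, threshold=3.0):
            self.values = deque(maxlen=window)
            self.threshold = threshold

        def observe(self, x):
            """Return True if x is anomalous relative to the recent window."""
            anomalous = False
            if len(self.values) >= 10:  # wait for a minimal history
                mean = sum(self.values) / len(self.values)
                var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
                std = math.sqrt(var) or 1.0
                anomalous = abs(x - mean) / std > self.threshold
            self.values.append(x)
            return anomalous

    detector = RollingZScore()
    counts = [50, 52, 49, 51, 48] * 10 + [400]  # sudden spike in connections
    flags = [detector.observe(c) for c in counts]
    print("anomaly at index:", flags.index(True))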


Fleets using AI to accelerate safety, efficiency

#artificialintelligence

"Artificial intelligence" (AI) may evoke fears of robots writing their own software code and not taking orders from humans. The real AI, at least in present form, is delivering results in the business world. Technology companies are using powerful computers and advanced statistical models to accelerate their product development. Most are not calling these efforts AI but rather machine learning. As a form of AI, machine learning is making it possible to quickly find relevant patterns in data captured by Internet of Things (IoT) devices and sensors, explains Adam Kahn, vice president of fleets for Netradyne, which has a vision-based fleet safety system called Driveri ("driver eye").


A Review of Learning with Deep Generative Models from perspective of graphical modeling

arXiv.org Machine Learning

This document aims to provide a review of learning with deep generative models (DGMs), a highly active area in machine learning and, more generally, artificial intelligence. This review is not meant to be a tutorial, but when necessary we provide self-contained derivations for completeness. The review has two features. First, though there are different perspectives from which to classify DGMs, we choose to organize this review from the perspective of graphical modeling, because the learning methods for directed DGMs and undirected DGMs are fundamentally different. Second, we differentiate model definitions from model learning algorithms, since different learning algorithms can be applied to solve the learning problem on the same model, and an algorithm can be applied to learn different models. We thus separate model definition and model learning, with more emphasis on reviewing, differentiating and connecting different learning algorithms. We also discuss promising future research directions. This review is by no means comprehensive, as the field is evolving rapidly. The authors apologize in advance for any missed papers and inaccuracies in descriptions. Corrections and comments are highly welcome.


Kernel Flows: from learning kernels from data into the abyss

arXiv.org Machine Learning

Learning can be seen as approximating an unknown function by interpolating the training data. Kriging offers a solution to this problem based on the prior specification of a kernel. We explore a numerical-approximation approach to kernel selection/construction based on the simple premise that a kernel must be good if the number of interpolation points can be halved without significant loss in accuracy (measured using the intrinsic RKHS norm $\|\cdot\|$ associated with the kernel). We first test and motivate this idea on a simple problem of recovering the Green's function of an elliptic PDE (with inhomogeneous coefficients) from the sparse observation of one of its solutions. Next we consider the problem of learning non-parametric families of deep kernels of the form $K_1(F_n(x),F_n(x'))$ with $F_{n+1}=(I_d+\epsilon G_{n+1})\circ F_n$ and $G_{n+1} \in \operatorname{Span}\{K_1(F_n(x_i),\cdot)\}$. With the proposed approach, constructing the kernel becomes equivalent to integrating a stochastic, data-driven dynamical system, which allows for the training of very deep (bottomless) networks and the exploration of their properties. These networks learn by constructing flow maps in the kernel and input spaces via incremental data-dependent deformations/perturbations (appearing as the cooperative counterpart of adversarial examples) and, at profound depths, they (1) can achieve accurate classification from only one data point per class, (2) appear to learn archetypes of each class, and (3) expand distances between points in different classes while contracting distances between points in the same class. For kernels parameterized by the weights of a Convolutional Neural Network, minimizing the approximation errors incurred by halving random subsets of interpolation points appears to outperform training the same CNN architecture with relative entropy and dropout.
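
To make the halving criterion concrete, the sketch below computes the relative RKHS-norm loss $\rho = \|u-v\|^2 / \|u\|^2 = 1 - \|v\|^2/\|u\|^2$ incurred when the kernel interpolant $u$ of the full data is replaced by the interpolant $v$ of a random half, using the standard identity $\|u\|^2 = y^\top K(X,X)^{-1} y$ for minimal-norm interpolants. The Gaussian kernel family and toy data are assumptions for illustration.

    # Sketch of the Kernel Flows halving criterion: a kernel is good if
    # dropping half the interpolation points loses little accuracy in the
    # RKHS norm. Kernel family and data are illustrative assumptions.
    import numpy as np

    def gaussian_kernel(X, Y, gamma):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def rho(X, y, gamma, rng):
        """Relative RKHS-norm loss from halving the interpolation points."""
        def rkhs_norm2(Xs, ys):  # ||u||^2 = y^T K^{-1} y (jitter for stability)
            K = gaussian_kernel(Xs, Xs, gamma) + 1e-8 * np.eye(len(Xs))
            return ys @ np.linalg.solve(K, ys)
        half = rng.choice(len(X), len(X) // 2, replace=False)
        return 1.0 - rkhs_norm2(X[half], y[half]) / rkhs_norm2(X, y)

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (100, 1))
    y = np.sin(3 * X[:, 0])
    for gamma in (0.1, 1.0, 10.0):  # smaller rho = better kernel for this data
        print(f"gamma={gamma:5.1f}  rho={rho(X, y, gamma, rng):.3f}")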