Transfer Learning


Transfer learning: the dos and don'ts

#artificialintelligence

If you have recently started doing work in deep learning, especially image recognition, you might have seen the abundance of blog posts all over the internet promising to teach you how to build a world-class image classifier in a dozen or fewer lines of code and just a few minutes on a modern GPU. What's shocking is not the promise but the fact that most of these tutorials end up delivering on it. To those trained in 'conventional' machine learning techniques, the very idea that a model developed for one data set could simply be applied to a different one sounds absurd. The answer is, of course, transfer learning, one of the most fascinating features of deep neural networks. In this post, we'll first look at what transfer learning is, when it will work, when it might work, and why it won't work in some cases, concluding with some pointers to best practices for transfer learning.
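
A minimal sketch of the recipe those tutorials follow, assuming PyTorch and torchvision are available (neither is prescribed by the article): reuse an ImageNet-pretrained backbone, freeze it, and retrain only a new final layer on the small target dataset.

import torch
import torch.nn as nn
from torchvision import models

# load a backbone pretrained on ImageNet
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():        # freeze all pretrained weights
    param.requires_grad = False

num_classes = 5                         # hypothetical number of target classes
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new, trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch from the (small) target dataset."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Because only the final layer is trained, a few minutes on a single GPU is usually enough, which is exactly why these tutorials can deliver on their promise.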


Building NLP Classifiers Cheaply With Transfer Learning and Weak Supervision

#artificialintelligence

There is a catch to training state-of-the-art NLP models: their reliance on massive hand-labeled training sets. That's why data labeling is usually the bottleneck in developing NLP applications and keeping them up-to-date. For example, imagine how much it would cost to pay medical specialists to label thousands of electronic health records. In general, having domain experts label thousands of examples is too expensive. On top of the initial labeling cost, there is another huge cost in keeping models up-to-date with changing contexts in the real world.
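
To make the weak-supervision idea concrete, here is an illustrative sketch: cheap heuristic "labeling functions" vote on each example instead of a human annotator, and their votes are combined into a noisy training label. The rules and names below are hypothetical placeholders, not taken from the article.

ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

def lf_mentions_refund(text):
    # rule of thumb: refund requests tend to be complaints
    return POSITIVE if "refund" in text.lower() else ABSTAIN

def lf_mentions_thanks(text):
    return NEGATIVE if "thank" in text.lower() else ABSTAIN

LABELING_FUNCTIONS = [lf_mentions_refund, lf_mentions_thanks]

def weak_label(text):
    """Combine labeling-function votes by simple majority; abstain if no votes."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS if lf(text) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

print(weak_label("I want a refund for this order"))   # -> 1 (POSITIVE)

Domain experts then spend their time writing and reviewing a handful of such rules rather than labeling thousands of records one by one.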


What Every NLP Engineer Needs to Know About Pre-Trained Language Models

#artificialintelligence

Practical applications of Natural Language Processing (NLP) have gotten significantly cheaper, faster, and easier due to the transfer learning capabilities enabled by pre-trained language models. Transfer learning enables engineers to pre-train an NLP model on one large dataset and then quickly fine-tune the model to adapt to other NLP tasks. This new approach enables NLP models to learn both lower-level and higher-level features of language, leading to much better model performance for virtually all standard NLP tasks and a new standard for industry best practices. To help you quickly understand the significance of this technical achievement and how it accelerates your own work in NLP, we've summarized the key lessons you should know in easy-to-read bullet-point format. We've also included summaries of the 3 most important research papers in the space that you need to be aware of.
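
A minimal sketch of the pre-train/fine-tune recipe, assuming the Hugging Face transformers library and a binary classification task (both are assumptions for illustration, not something the article prescribes):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2   # pretrained encoder + fresh classification head
)

texts = ["great service", "terrible experience"]   # hypothetical labeled examples
labels = torch.tensor([1, 0])

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

optimizer.zero_grad()
outputs = model(**inputs, labels=labels)            # one fine-tuning step
outputs.loss.backward()
optimizer.step()

The expensive pre-training on a large corpus is done once; adapting to a new task only requires a few epochs of fine-tuning like the step above.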


Transfer Learning for Performance Modeling of Configurable Systems: A Causal Analysis

arXiv.org Artificial Intelligence

Modern systems (e.g., deep neural networks, big data analytics, and compilers) are highly configurable, which means they expose different performance behavior under different configurations. The fundamental challenge is that one cannot simply measure all configurations due to the sheer size of the configuration space. Transfer learning has been used to reduce the measurement efforts by transferring knowledge about performance behavior of systems across environments. Previously, research has shown that statistical models are indeed transferable across environments. In this work, we investigate identifiability and transportability of causal effects and statistical relations in highly-configurable systems. Our causal analysis agrees with previous exploratory analysis [Jamshidi17] and confirms that the causal effects of configuration options can be carried over across environments with high confidence. We expect that the ability to carry over causal relations will enable effective performance analysis of highly-configurable systems.
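
An illustrative sketch of the general idea (not the paper's causal method): fit a performance model on plentiful, cheap measurements from a source environment, then correct it with only a handful of measurements from the target environment via a simple linear transfer function. The data here is synthetic and the approach is an assumption for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

# hypothetical data: rows = configurations, columns = configuration option values
rng = np.random.default_rng(0)
true_weights = np.array([3.0, 1.0, 0.5, 2.0, 0.1])
X_source = rng.random((200, 5))
y_source = X_source @ true_weights + 0.1 * rng.standard_normal(200)

source_model = LinearRegression().fit(X_source, y_source)

# only a few target-environment measurements are affordable
X_target = rng.random((10, 5))
y_target = 1.8 * (X_target @ true_weights) + 4.0   # shifted/scaled behavior

# learn a linear map from source-model predictions to target performance
transfer = LinearRegression().fit(
    source_model.predict(X_target).reshape(-1, 1), y_target
)

def predict_target(X_new):
    """Predict target-environment performance for unmeasured configurations."""
    return transfer.predict(source_model.predict(X_new).reshape(-1, 1))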


Smart City Development With Urban Transfer Learning

IEEE Computer

The governments of many cities just starting smart city development will face a critical cold-start problem: how to develop a new smart city service with limited data. We investigate the common process of urban transfer learning, i.e., leveraging transfer learning to accelerate smart city development, and also provide city planners and relevant practitioners with guidelines for applying this novel learning paradigm.


ML for Flood Forecasting at Scale

arXiv.org Machine Learning

Effective riverine flood forecasting at scale is hindered by a multitude of factors, most notably the need to rely on human calibration in current methodology, the limited amount of data for a specific location, and the computational difficulty of building continent/global level models that are sufficiently accurate. Machine learning (ML) is primed to be useful in this scenario: learned models often surpass human experts in complex high-dimensional scenarios, and the framework of transfer or multitask learning is an appealing solution for leveraging local signals to achieve improved global performance. We propose to build on these strengths and develop ML systems for timely and accurate riverine flood prediction. Floods are the most common and deadly natural disaster in the world. Every year, floods cause from thousands to tens of thousands of fatalities [1, 22, 2, 21, 14], affect hundreds of millions of people [14, 21, 2], and cause tens of billions of dollars' worth of damage [1, 2]. These numbers have only been increasing in recent decades [23]. Indeed, the UN charter notes floods to be one of the key motivators for formulating the sustainable development goals (SDGs), and directly challenges us: "They knew that earthquakes and floods were inevitable, but that the high death tolls were not."


Multi-Source Transfer Learning for Non-Stationary Environments

arXiv.org Machine Learning

In data stream mining, predictive models typically suffer drops in predictive performance due to concept drift. As enough data representing the new concept must be collected for the new concept to be well learnt, the predictive performance of existing models usually takes some time to recover from concept drift. To speed up recovery from concept drift and improve predictive performance in data stream mining, this work proposes a novel approach called Multi-sourcE onLine TrAnsfer learning for Non-statIonary Environments (Melanie). Melanie is the first approach able to transfer knowledge between multiple data streaming sources in non-stationary environments. It creates several sub-classifiers to learn different aspects from different source and target concepts over time. The sub-classifiers that match the current target concept well are identified and used to compose an ensemble for predicting examples from the target concept. We evaluate Melanie on several synthetic data streams containing different types of concept drift and on real-world data streams. The results indicate that Melanie can deal with a variety of drifts and improve predictive performance over existing data stream learning algorithms by making use of multiple sources. Many real-world applications produce data in a streaming fashion, i.e., as a sequence of observations that arrive over time. Examples include prediction of customer behaviour, credit card approval, fraud detection, software effort estimation, and software defect prediction. A challenge in data stream mining is how to describe a given target probability distribution accurately without knowing the whole data stream beforehand.
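
This is not the Melanie algorithm itself, only a sketch of the idea the abstract describes: keep sub-classifiers learned from several source streams, track how well each one matches the current target concept on recent examples, and combine the best ones into a weighted ensemble. The sub-classifiers are assumed to expose a scikit-learn-style predict() method.

from collections import deque

class WeightedEnsemble:
    def __init__(self, sub_classifiers, window=100):
        self.subs = sub_classifiers                      # e.g. models trained on source streams
        self.recent = [deque(maxlen=window) for _ in self.subs]

    def predict(self, x):
        # weight each sub-classifier by its accuracy on recent target examples
        weights = [(sum(r) / len(r)) if r else 1.0 for r in self.recent]
        votes = {}
        for clf, w in zip(self.subs, weights):
            label = clf.predict([x])[0]
            votes[label] = votes.get(label, 0.0) + w
        return max(votes, key=votes.get)

    def update(self, x, y_true):
        # once the true label arrives, record whether each sub-classifier was right
        for clf, r in zip(self.subs, self.recent):
            r.append(1.0 if clf.predict([x])[0] == y_true else 0.0)

Sub-classifiers from sources that still match the current target concept accumulate high recent accuracy and dominate the vote; those invalidated by drift fade out automatically.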


5 types of deep transfer learning | Packt Hub

#artificialintelligence

Transfer learning is a method of reusing a model or knowledge for another related task. Transfer learning is sometimes also considered an extension of existing ML algorithms. Extensive research and work are being done in the context of transfer learning and on understanding how knowledge can be transferred among tasks. However, the Neural Information Processing Systems (NIPS) 1995 workshop Learning to Learn: Knowledge Consolidation and Transfer in Inductive Systems is believed to have provided the initial motivations for research in this field. The literature on transfer learning has gone through a lot of iterations, and the terms associated with it have been used loosely and often interchangeably.


Adapted Deep Embeddings: A Synthesis of Methods for k-Shot Inductive Transfer Learning

Neural Information Processing Systems

The focus in machine learning has branched beyond training classifiers on a single task to investigating how previously acquired knowledge in a source domain can be leveraged to facilitate learning in a related target domain, known as inductive transfer learning. Three active lines of research have independently explored transfer learning using neural networks. In weight transfer, a model trained on the source domain is used as an initialization point for a network to be trained on the target domain. In deep metric learning, the source domain is used to construct an embedding that captures class structure in both the source and target domains. In few-shot learning, the focus is on generalizing well in the target domain based on a limited number of labeled examples. We compare state-of-the-art methods from these three paradigms and also explore hybrid adapted-embedding methods that use limited target-domain data to fine-tune embeddings constructed from source-domain data. We conduct a systematic comparison of methods in a variety of domains, varying the number of labeled instances available in the target domain (k), as well as the number of target-domain classes. We reach three principal conclusions: (1) Deep embeddings are far superior, compared to weight transfer, as a starting point for inter-domain transfer or model re-use. (2) Our hybrid methods robustly outperform every few-shot learning and every deep metric learning method previously proposed, with a mean error reduction of 34% over state-of-the-art. (3) Among loss functions for discovering embeddings, the histogram loss (Ustinova & Lempitsky, 2016) is most robust. We hope our results will motivate a unification of research in weight transfer, deep metric learning, and few-shot learning.
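
A hedged sketch of the "embedding as a starting point" idea behind these methods: use an embedding function trained on the source domain, compute one centroid per target class from its k labeled examples, and classify new target items by nearest centroid. Here embed() is a hypothetical stand-in for a source-trained network, not the paper's model.

import numpy as np

def embed(x):
    # placeholder for an embedding network trained on the source domain
    return np.asarray(x, dtype=float)

def build_centroids(support_set):
    """support_set: dict mapping class label -> list of its k labeled examples."""
    return {label: np.mean([embed(x) for x in examples], axis=0)
            for label, examples in support_set.items()}

def classify(x, centroids):
    z = embed(x)
    return min(centroids, key=lambda label: np.linalg.norm(z - centroids[label]))

support = {"cat": [[0.9, 0.1], [1.1, 0.0]], "dog": [[0.0, 1.0], [0.2, 0.9]]}
centroids = build_centroids(support)
print(classify([0.1, 0.8], centroids))   # -> "dog"

The hybrid adapted-embedding methods the abstract mentions would additionally fine-tune the embedding itself on the k target examples before computing centroids.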


Learning To Learn Around A Common Mean

Neural Information Processing Systems

The problem of learning-to-learn (LTL) or meta-learning is gaining increasing attention due to recent empirical evidence of its effectiveness in applications. The goal addressed in LTL is to select an algorithm that works well on tasks sampled from a meta-distribution. In this work, we consider the family of algorithms given by a variant of Ridge Regression, in which the regularizer is the square distance to an unknown mean vector. We show that, in this setting, the LTL problem can be reformulated as a Least Squares (LS) problem and we exploit a novel meta-algorithm to efficiently solve it. At each iteration the meta-algorithm processes only one dataset. Specifically, it first estimates the stochastic LS objective function by splitting this dataset into two subsets used to train and test the inner algorithm, respectively. Second, it performs a stochastic gradient step with the estimated value. Under specific assumptions, we present a bound for the generalization error of our meta-algorithm, which suggests the right splitting parameter to choose. When the hyper-parameters of the problem are fixed, this bound is consistent as the number of tasks grows, even if the sample size is kept constant. Preliminary experiments confirm our theoretical findings, highlighting the advantage of our approach with respect to independent task learning.
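
A toy sketch of the setup described above, not the paper's exact estimator: the inner algorithm is ridge regression biased toward a common mean vector h, and each meta-iteration splits one task's data into train/test halves and takes a gradient step on the test-split loss with respect to h.

import numpy as np

def biased_ridge(X, y, h, lam):
    """Solve min_w ||Xw - y||^2 + lam * ||w - h||^2 in closed form."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y + lam * h)

def meta_sgd_step(h, task, lam=1.0, lr=0.01):
    """One meta-iteration: split one task's data, fit the inner algorithm on the
    train split, then take a gradient step on the test-split loss w.r.t. h."""
    X, y = task
    n = len(y) // 2
    X_tr, y_tr, X_te, y_te = X[:n], y[:n], X[n:], y[n:]
    d = X.shape[1]
    A = X_tr.T @ X_tr + lam * np.eye(d)
    w = np.linalg.solve(A, X_tr.T @ y_tr + lam * h)       # inner solution w(h)
    residual = X_te @ w - y_te
    # dL/dh = 2 * lam * A^{-1} X_te^T (X_te w - y_te), since dw/dh = lam * A^{-1}
    grad = 2 * lam * np.linalg.solve(A, X_te.T @ residual)
    return h - lr * grad

Iterating meta_sgd_step over a stream of tasks moves h toward a vector that makes the biased ridge solver perform well on new tasks drawn from the same meta-distribution.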