Transfer Learning


Transfer Learning

#artificialintelligence

Transfer learning is a machine learning technique in which a model trained on one task is reused as the starting point for a related task. This article will help you master transfer learning by using pretrained models in deep learning.
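
As a minimal sketch of the idea (assuming PyTorch and torchvision; the 10-class target task is a placeholder), reusing a pretrained model amounts to loading its weights and swapping the task-specific head:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pretrained on ImageNet as the source model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the ImageNet head with one sized for the (hypothetical)
# 10-class target task; only this new layer starts untrained.
model.fc = nn.Linear(model.fc.in_features, 10)

# Freeze the pretrained backbone and fine-tune just the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc.")

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
```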


Adversarial robustness as a prior for better transfer learning - Microsoft Research

#artificialintelligence

Editor's note: This post and its research are the collaborative efforts of our team, which includes Andrew Ilyas (PhD Student, MIT), Logan Engstrom (PhD Student, MIT), Aleksander Mądry (Professor at MIT), and Ashish Kapoor (Partner Research Manager). In practical machine learning, it is desirable to be able to transfer learned knowledge from some "source" task to downstream "target" tasks. This is known as transfer learning: a simple and efficient way to obtain performant machine learning models, especially when there is little training data or compute available for solving the target task. Transfer learning is very useful in practice. For example, it allows perception models on a robot or other autonomous system to be trained on a synthetic dataset generated via a high-fidelity simulator, such as AirSim, and then refined on a small dataset collected in the real world.
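
The sim-to-real workflow described above can be sketched as two stages of the same training loop (the datasets below are random stand-ins for the synthetic and real data, and the 5-way task is hypothetical):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

def train(model, loader, lr, epochs):
    """Generic supervised loop used for both the sim and real stages."""
    model.train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss_fn(model(images), labels).backward()
            optimizer.step()

# Random stand-ins for a large synthetic set and a small real one.
synthetic = TensorDataset(torch.randn(64, 3, 64, 64), torch.randint(0, 5, (64,)))
real = TensorDataset(torch.randn(16, 3, 64, 64), torch.randint(0, 5, (16,)))

model = models.resnet18(num_classes=5)  # hypothetical 5-way perception task

# Stage 1: train on abundant simulator data (e.g., rendered via AirSim).
train(model, DataLoader(synthetic, batch_size=16), lr=1e-2, epochs=2)

# Stage 2: refine on scarce real-world data with a much lower learning
# rate, so simulator-learned features are adjusted rather than overwritten.
train(model, DataLoader(real, batch_size=8), lr=1e-4, epochs=2)
```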


A Practical Approach towards Causality Mining in Clinical Text using Active Transfer Learning

arXiv.org Artificial Intelligence

Objective: Causality mining is an active research area which requires the application of state-of-the-art natural language processing techniques. In the healthcare domain, medical experts create clinical text to overcome the limitations of well-defined, schema-driven information systems. The objective of this research work is to create a framework which can convert clinical text into causal knowledge. Methods: A practical approach based on term expansion, phrase generation, BERT-based phrase embedding and semantic matching, semantic enrichment, expert verification, and model evolution has been used to construct a comprehensive causality mining framework. This active transfer learning based framework, along with its supplementary services, is able to extract and enrich causal relationships and their corresponding entities from clinical text. Results: The multi-model transfer learning technique, when applied over multiple iterations, gains performance improvements in terms of accuracy and recall while keeping precision constant. We also present a comparative analysis of the presented techniques against their common alternatives, which demonstrates the correctness of our approach and its ability to capture most causal relationships. Conclusion: The presented framework has provided cutting-edge results in the healthcare domain. However, the framework can be tweaked to provide causality detection in other domains as well. Significance: Although the presented framework is generic enough to be utilized in any domain, healthcare services stand to gain massive benefits due to the voluminous and varied nature of their data. This causal knowledge extraction framework can be used to summarize clinical text, create personas, discover medical knowledge, and provide evidence for clinical decision making.
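
The BERT-based phrase embedding and semantic matching steps can be illustrated with a small sketch (assuming the Hugging Face transformers library; the bert-base-uncased checkpoint and the clinical phrases are illustrative choices, not the paper's exact setup):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(phrases):
    """Mean-pool the last hidden states into one vector per phrase."""
    batch = tokenizer(phrases, padding=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

# Semantic matching: cosine similarity between a candidate phrase and
# known causal cues (illustrative clinical examples, not real data).
cues = embed(["smoking causes lung damage", "hypertension leads to stroke"])
candidate = embed(["obesity results in diabetes"])
scores = torch.nn.functional.cosine_similarity(candidate, cues)
print(scores)  # high similarity suggests a causal phrasing match
```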


Pinaki Laskar on LinkedIn: #DataScientist #ArtificialIntelligence #DataAnalytics

#artificialintelligence

Transfer learning (TL) is a machine learning (ML) technique that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem. For example, knowledge gained while learning to recognize cars could apply when trying to recognize trucks. Transfer learning is the reuse of a pre-trained model on a new problem. It is currently very popular in deep learning because it can train deep neural networks with comparatively little data. Why does transfer of learning matter? The main purpose of any learning or education is that a person who acquires some knowledge or skill in a formal and controlled situation, like a classroom or a training session, will be able to transfer that knowledge and skill to real-life situations and adapt more effectively.


3 Pre-Trained Model Series to Use for NLP with Transfer Learning

#artificialintelligence

Before we start, if you are reading this article, I am sure that we share similar interests and are/will be in similar industries. So let's connect via LinkedIn! Please do not hesitate to send a contact request! If you have been trying to build machine learning models with high accuracy but have never tried transfer learning, this article will change your life. At least, it did mine!


SB-MTL: Score-based Meta Transfer-Learning for Cross-Domain Few-Shot Learning

arXiv.org Artificial Intelligence

While many deep learning methods have seen significant success in tackling domain adaptation and few-shot learning separately, far fewer methods are able to jointly tackle both problems in Cross-Domain Few-Shot Learning (CD-FSL). This problem is exacerbated under the sharp domain shifts that typify common computer vision applications. In this paper, we present a novel, flexible and effective method to address the CD-FSL problem. Our method, called Score-based Meta Transfer-Learning (SB-MTL), combines transfer learning and meta-learning by using a MAML-optimized feature encoder and a score-based Graph Neural Network. First, we have a feature encoder with specific layers designed to be fine-tuned; to find good initializations for them, we apply a first-order MAML algorithm. Second, instead of directly taking the classification scores after fine-tuning, we interpret the scores as coordinates by mapping the pre-softmax classification scores onto a metric space. Subsequently, we apply a Graph Neural Network to propagate label information from the support set to the query set in our score-based metric space. We test our model on the Broader Study of Cross-Domain Few-Shot Learning (BSCD-FSL) benchmark, which includes a range of target domains with highly varying dissimilarity to the miniImagenet source domain. We observe significant improvements in accuracy across the 5-, 20- and 50-shot settings and across the four target domains. In terms of average accuracy, our model outperforms previous transfer-learning methods by 5.93% and previous meta-learning methods by 14.28%.
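
A heavily simplified sketch of the scores-as-coordinates idea (substituting plain label propagation over an affinity graph for the paper's learned Graph Neural Network, and random tensors for a fine-tuned encoder's scores):

```python
import torch

# Pre-softmax classification scores act as coordinates in a metric space.
# Hypothetical 5-way task: 25 labelled support points, 15 query points.
support_scores = torch.randn(25, 5)
query_scores = torch.randn(15, 5)
support_labels = torch.arange(5).repeat(5)          # 5 examples per class

coords = torch.cat([support_scores, query_scores])  # (40, 5) coordinates

# Build a dense affinity graph from pairwise distances in score space.
dist = torch.cdist(coords, coords)
affinity = torch.softmax(-dist, dim=1)

# One propagation step: each query aggregates the one-hot support labels
# of its neighbours (a learned GNN would do this with trainable weights).
one_hot = torch.zeros(40, 5)
one_hot[:25] = torch.nn.functional.one_hot(support_labels, 5).float()
propagated = affinity @ one_hot
query_pred = propagated[25:].argmax(dim=1)
print(query_pred)
```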


Transfer Learning in Action: From ImageNet to Tiny-ImageNet

#artificialintelligence

Transfer learning is an important topic. As a civilization, we have been passing knowledge from one generation to the next, enabling the technological advancement that we enjoy today. Transfer learning is the edifice that supports most of the state-of-the-art models gathering steam today, empowering many services that we take for granted. Transfer learning is about having a good starting point for the downstream task we're interested in solving. In this article, we're going to discuss how to piggyback on transfer learning to get a warm start on an image classification task.
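
A minimal warm-start sketch for this setting (assuming torchvision; the only Tiny-ImageNet-specific details used are its 200 classes and 64x64 images, and the learning rates are placeholders):

```python
import torch
import torch.nn as nn
from torchvision import models

# Warm start: ImageNet-pretrained weights instead of a random init.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Tiny-ImageNet has 200 classes, so swap out the 1000-way ImageNet head.
model.fc = nn.Linear(model.fc.in_features, 200)

# Common practice: a larger learning rate for the fresh head than for
# the already-useful pretrained backbone.
optimizer = torch.optim.SGD([
    {"params": model.fc.parameters(), "lr": 1e-2},
    {"params": [p for n, p in model.named_parameters()
                if not n.startswith("fc.")], "lr": 1e-4},
], momentum=0.9)

# Tiny-ImageNet images are 64x64; torchvision's ResNet handles them via
# adaptive pooling, though upsampling to 224x224 often transfers better.
x = torch.randn(2, 3, 64, 64)
print(model(x).shape)  # torch.Size([2, 200])
```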


Transfer Learning

#artificialintelligence

The concept of transfer learning lies in imparting knowledge learned while performing one task to another task that is different but related. How is transfer learning useful to me? In the context of humans, transfer learning is crucial to our lives. As a concrete machine learning example, let us use the CIFAR-10 dataset, which contains 10 categories of images: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. Our task of interest is to classify every image into its corresponding category.
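
To make the setup concrete, here is a short sketch (assuming torchvision) that loads CIFAR-10 and prepares its 32x32 images for an ImageNet-pretrained backbone; the normalization constants are the standard ImageNet statistics:

```python
from torchvision import datasets, transforms

# The ten CIFAR-10 categories listed above.
classes = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

# Resize the 32x32 images to the input size an ImageNet-pretrained
# backbone expects, and normalize with the ImageNet channel statistics.
preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=preprocess)
print(len(train_set), train_set.classes == classes)  # 50000 True
```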


Predicting S&P500 Index direction with Transfer Learning and a Causal Graph as main Input

arXiv.org Artificial Intelligence

We propose a unified multi-tasking framework to represent the complex and uncertain causal process of financial market dynamics, and then to predict the movement of any type of index, with an application to the monthly direction of the S&P500 index. Our solution is based on three main pillars: (i) the use of transfer learning to share knowledge and feature representations between all financial markets, increase the size of the training sample, and preserve the stability between training, validation and test samples; (ii) the combination of multidisciplinary knowledge (financial economics, behavioral finance, market microstructure and portfolio construction theories) to represent the global top-down dynamics of any financial market through a graph; (iii) the integration of forward-looking unstructured data and different types of context (long, medium and short term) through latent variables/nodes, and then the use of a single VAE network (with parameter sharing) to learn their distributional representations simultaneously. We obtain accuracy, F1-score, and Matthews correlation of 74.3%, 67% and 0.42, above industry and other benchmarks, on a 12-year test period that includes three unstable and difficult-to-predict sub-periods.
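
A toy illustration of the parameter-sharing idea in pillar (iii): a single VAE whose weights are shared across markets (the architecture, dimensions and market batches below are invented for illustration, not the paper's model):

```python
import torch
import torch.nn as nn

class SharedVAE(nn.Module):
    """One encoder/decoder shared by every market (parameter sharing)."""
    def __init__(self, n_features=32, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, n_latent)
        self.to_logvar = nn.Linear(64, n_latent)
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

vae = SharedVAE()
# The same weights embed inputs from every market, so markets with
# scarce data borrow statistical strength from data-rich ones.
sp500_batch = torch.randn(16, 32)   # hypothetical S&P500 features
dax_batch = torch.randn(16, 32)     # hypothetical DAX features
recon, mu, logvar = vae(sp500_batch)
recon2, _, _ = vae(dax_batch)       # identical parameters reused
```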


Planar 3D Transfer Learning for End to End Unimodal MRI Unbalanced Data Segmentation

#artificialintelligence

We present a novel approach to 2D-to-3D transfer learning based on mapping pre-trained 2D convolutional neural network weights into planar 3D kernels. The method is validated with the proposed planar 3D res-u-net network, whose encoder is transferred from the 2D VGG-16 and which is applied to single-stage, unbalanced 3D image data segmentation. In particular, we evaluate the method on the MICCAI 2016 MS lesion segmentation challenge dataset, utilizing solely the fluid-attenuated inversion recovery (FLAIR) sequence without brain extraction for training and inference, to simulate real medical practice. The planar 3D res-u-net network performed best in both sensitivity and Dice score among end-to-end methods processing raw MRI scans, and achieved a Dice score comparable to a state-of-the-art unimodal approach that is not end-to-end. The complete source code was released under an open-source license, and this paper complies with the Machine Learning Reproducibility Checklist.
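
The core weight-mapping trick can be sketched in a few lines, copying each pretrained 2D kernel into a 1x3x3 planar 3D kernel (VGG-16's first convolution is used as the example; the input volume shape is arbitrary):

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained 2D source layer: VGG-16's first convolution (3 -> 64, 3x3).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
conv2d = vgg.features[0]

# Planar 3D target layer: 1x3x3 kernel, so it convolves each slice in-plane.
conv3d = nn.Conv3d(3, 64, kernel_size=(1, 3, 3), padding=(0, 1, 1))

with torch.no_grad():
    # (64, 3, 3, 3) 2D weights become (64, 3, 1, 3, 3) planar 3D weights.
    conv3d.weight.copy_(conv2d.weight.unsqueeze(2))
    conv3d.bias.copy_(conv2d.bias)

volume = torch.randn(1, 3, 16, 64, 64)  # hypothetical 16-slice volume
print(conv3d(volume).shape)             # torch.Size([1, 64, 16, 64, 64])
```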