Collaborating Authors

Transfer Learning

Pinaki Laskar on LinkedIn: #machinelearning #ai #algorithms


AI Researcher, Cognitive Technologist Inventor - AI Thinking, Think Chain Innovator - AIOT, XAI, Autonomous Cars, IIOT Founder Fisheyebox Spatial Computing Savant, Transformative Leader, Industry X.0 Practitioner

What is the difference between knowledge and learning in artificial and human intelligence? The primary difference is that learning is a process, whereas knowledge is its result: the outcome or product of learning through experience, study, education, or training data. Human and machine learning alike are driven by scientific activity, whether theoretical, experimental, computational, or data-analytic. At root there is one fundamental kind of causal learning, but behaviourist psychology divides learning into several groups: passive or active; non-associative; and associative learning (the process by which a person, animal, or machine learns an association between two stimuli or events). In the ML/DL view, learning is a change of behaviour caused by interacting with the environment to form new patterns of responses to stimuli or inputs, whether internal or external, conscious or subconscious, voluntary or involuntary.

Understanding Transfer Learning for Medical Imaging


Transfer learning (i.e., ImageNet pre-training) is a common practice in deep learning in which a pre-trained network is fine-tuned on a new dataset/task. The practice is implicitly justified by feature reuse: features learned from ImageNet are assumed to be beneficial for other datasets/tasks. This paper [1] evaluates that justification on medical image datasets. The paper concludes that (i) transfer learning does not significantly help performance, (ii) smaller, simpler convolutional architectures perform comparably to standard ImageNet models, and (iii) pre-training offers feature-independent, rather than feature-reuse, benefits, i.e., faster convergence. These three findings call the feature-reuse justification into question.
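The feature-independent benefit in finding (iii), faster convergence from a pre-trained initialization, can be reproduced in miniature with a linear model. The toy setup below pre-trains on a large source task and fine-tunes on a small, related target task; all dimensions, learning rates, and step counts are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

# Synthetic sketch: pre-training on a related source task gives an
# initialization that converges faster on the target task.
rng = np.random.default_rng(0)
d = 50

w_src = rng.normal(size=d)                 # source-task parameters
w_tgt = w_src + 0.1 * rng.normal(size=d)   # related target task

X_src = rng.normal(size=(2000, d)); y_src = X_src @ w_src   # big source set
X_tgt = rng.normal(size=(100, d));  y_tgt = X_tgt @ w_tgt   # small target set

def gd(X, y, w0, lr, steps):
    """Plain gradient descent on mean squared error."""
    w = w0.copy()
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

w_pre = gd(X_src, y_src, np.zeros(d), lr=0.5, steps=200)    # "pre-training"

mse = lambda w: float(np.mean((X_tgt @ w - y_tgt) ** 2))
w_finetune = gd(X_tgt, y_tgt, w_pre, lr=0.1, steps=20)      # warm start
w_scratch = gd(X_tgt, y_tgt, np.zeros(d), lr=0.1, steps=20) # cold start

# After the same number of target-task steps, the pre-trained
# initialization sits at a lower loss, i.e., it converged faster.
print(mse(w_finetune), mse(w_scratch))
```

Note that this illustrates only the speed-of-convergence effect; it says nothing about which features are reused, which is exactly the distinction the paper draws.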

Transfer learning vs federated learning: A comparative analysis


Think of the things artificial intelligence and machine learning have accomplished in the last few years: real-time translation, outperforming humans at board games, drug discovery, and more. Transfer learning, federated learning, reinforcement learning, and self-supervised learning are among the cutting-edge techniques that made these milestones possible. While transfer learning is an old machine learning technique, federated learning was introduced by Google in 2017. Deep learning models need huge amounts of labelled data to learn and work effectively, and training them is time-consuming. Transfer learning can help tackle these challenges.
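The two techniques differ in what is shared: transfer learning reuses a trained model across tasks, while federated learning keeps data on each client's device and shares only model weights. A minimal FedAvg-style sketch of the federated side (purely synthetic; client count, learning rate, and round counts are illustrative assumptions) looks like this:

```python
import numpy as np

# Synthetic FedAvg-style sketch: each client trains on its own private
# data, and the server only ever sees (and averages) model weights.
rng = np.random.default_rng(1)
d = 10
w_true = rng.normal(size=d)

# Three clients, each with a private shard of data from the same task
clients = []
for _ in range(3):
    X = rng.normal(size=(200, d))
    clients.append((X, X @ w_true))

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few local gradient steps on the client's private data."""
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

w_global = np.zeros(d)
for _ in range(20):                            # communication rounds
    local = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local, axis=0)          # server-side weight averaging

# The averaged model recovers the shared underlying parameters
print(np.linalg.norm(w_global - w_true))
```

The raw data `(X, y)` never leaves the client list; only the weight vectors are communicated, which is the privacy argument for the federated approach.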

What Is Transfer Learning?


Editor's note: The name of the NVIDIA Transfer Learning Toolkit was changed to NVIDIA TAO Toolkit in August 2021. All references to the name have been updated in this blog. You probably have a career. But hit the books for a graduate degree or take online certificate courses by night, and you could start a new career building on your past experience. Transfer learning is the same idea.

The Idea Behind Transfer Learning: Stand on the Shoulders of Giants


Training big networks on large datasets is expensive: it demands serious computational equipment and engineering effort, and trial-and-error training from scratch can be time-consuming, inefficient, and unproductive. Imagine a simple classification problem on unstructured data in the medical domain, such as sorting X-rays and training a network to identify whether a bone is broken. To reach any decent accuracy, the model has to learn what a broken bone looks like from the images in the dataset; it has to make sense of pixels, edges, and shapes. This is where transfer learning kicks in: a model trained on similar data is repurposed, its weights are frozen, and the resulting non-trainable layers are incorporated into a new model that can solve a similar problem on a smaller dataset. NLP tasks can be handled with transfer learning just like computer vision problems: for example, if we are building a model that takes descriptions of patient symptoms and predicts the conditions associated with those symptoms, the model must learn language semantics and how a sequence of words creates meaning.
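The freeze-and-fine-tune recipe described above can be sketched in a few lines. Here the "pre-trained" layer is just a random matrix standing in for weights learned on a large related dataset, and the labels are constructed so that a linear head on top of the frozen features suffices; everything below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pre-trained weights from a large related dataset;
# these are frozen and never updated during fine-tuning.
W_frozen = rng.normal(size=(64, 16)) / np.sqrt(64)

def features(X):
    return np.maximum(X @ W_frozen, 0.0)   # frozen ReLU layer

# Small labelled target dataset; labels are chosen to be predictable
# from the frozen features so a linear head is sufficient.
X = rng.normal(size=(200, 64))
w_true = rng.normal(size=16)
F = features(X)
y = (F @ w_true > 0).astype(float)

# Train ONLY the new head (logistic regression) on the frozen features
w_head = np.zeros(16)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w_head)))
    w_head -= 0.5 * F.T @ (p - y) / len(y)   # gradient step on head only

acc = float(np.mean((F @ w_head > 0) == (y == 1)))
print(acc)
```

Because only the 16 head weights are trained, the small dataset goes much further than it would if all 64 x 16 feature weights also had to be learned from scratch.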

Transfer Learning Approaches for Neuroimaging Analysis: A Scoping Review


Deep learning algorithms have been moderately successful in diagnosing diseases by analyzing medical images, especially in neuroimaging, which is rich in annotated data. Transfer learning methods have demonstrated strong performance in tackling the scarcity of annotated data: they utilize and transfer knowledge learned from a source domain to a target domain even when the target dataset is small. There are multiple approaches to transfer learning that result in a range of performance estimates in diagnosis, detection, and classification of clinical problems. Therefore, in this paper we review transfer learning approaches, their design attributes, and their applications to neuroimaging problems. We searched two main literature databases and included the most relevant studies using predefined inclusion criteria. Among the 50 reviewed studies, more than half concern transfer learning for Alzheimer's disease; brain mapping and brain tumor detection were the second and third most discussed research problems, respectively. The most common source dataset for transfer learning was ImageNet, which is not a neuroimaging dataset. This suggests that the majority of studies preferred pre-trained models over training their own model on a neuroimaging dataset. Although about one third of the studies designed their own architectures, most used existing convolutional neural network architectures. Magnetic resonance imaging was the most common imaging modality. In almost all studies, transfer learni...

Transfer-Learning Across Datasets with Different Input Dimensions: An Algorithm and Analysis for the Linear Regression Case Machine Learning

With the development of new sensors and monitoring devices, more sources of data become available to be used as inputs for machine learning models. On the one hand, these can help improve the accuracy of a model. On the other hand, combining these new inputs with historical data remains a challenge that has not yet been studied in enough detail. In this work, we propose a transfer-learning algorithm that combines the new and the historical data, and that is especially beneficial when the new data is scarce. We focus the approach on the linear regression case, which allows us to conduct a rigorous theoretical study of the benefits of the approach. We show that our approach is robust against negative transfer-learning, and we confirm this result empirically with real and simulated data.
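The setting can be made concrete with a naive two-step baseline (an illustrative sketch under our own assumptions, not the paper's algorithm): fit the shared coefficients on the plentiful historical data, then fit only a correction on the extra input dimensions of the scarce new data:

```python
import numpy as np

# Historical data observed only the first d_old features; the scarce new
# data also has d_new extra sensor features. All sizes are illustrative.
rng = np.random.default_rng(0)
d_old, d_new = 5, 2
w_old = rng.normal(size=d_old)
w_new = rng.normal(size=d_new)

X_hist = rng.normal(size=(500, d_old))               # plentiful old data
y_hist = X_hist @ w_old + 0.1 * rng.normal(size=500)

X_tgt = rng.normal(size=(20, d_old + d_new))         # scarce new data
y_tgt = (X_tgt[:, :d_old] @ w_old + X_tgt[:, d_old:] @ w_new
         + 0.1 * rng.normal(size=20))

# Step 1: estimate the shared coefficients from the large historical set
w_shared, *_ = np.linalg.lstsq(X_hist, y_hist, rcond=None)

# Step 2: regress the residual of the transferred model on the new features
resid = y_tgt - X_tgt[:, :d_old] @ w_shared
w_extra, *_ = np.linalg.lstsq(X_tgt[:, d_old:], resid, rcond=None)

# For comparison: fit all 7 coefficients on the 20 new samples alone
w_alone, *_ = np.linalg.lstsq(X_tgt, y_tgt, rcond=None)

# Estimation error of the transferred model against the true coefficients
err = float(np.linalg.norm(np.concatenate([w_shared, w_extra])
                           - np.concatenate([w_old, w_new])))
print(err)
```

The paper's contribution is precisely a principled version of this combination, with theory on when it avoids negative transfer; the sketch only shows why different input dimensions make naive pooling of the two datasets impossible.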

Generative multitask learning mitigates target-causing confounding Machine Learning

We propose a simple and scalable approach to causal representation learning for multitask learning. Our approach requires minimal modification to existing ML systems, and improves robustness to prior probability shift. The improvement comes from mitigating unobserved confounders that cause the targets, but not the input. We refer to them as target-causing confounders. These confounders induce spurious dependencies between the input and targets. This poses a problem for the conventional approach to multitask learning, due to its assumption that the targets are conditionally independent given the input. Our proposed approach takes into account the dependency between the targets in order to alleviate target-causing confounding. All that is required in addition to usual practice is to estimate the joint distribution of the targets to switch from discriminative to generative classification, and to predict all targets jointly. Our results on the Attributes of People and Taskonomy datasets reflect the conceptual improvement in robustness to prior probability shift.
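In the discrete case, the proposed switch from discriminative to generative classification amounts to estimating p(targets) and p(input | targets) and predicting all targets jointly by maximizing their product. The toy sketch below (a naive-Bayes-style model with binary features; the setup is illustrative, not the paper's experiments) shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated binary targets: y2 usually agrees with y1
p_joint = np.array([[0.4, 0.1],
                    [0.1, 0.4]])            # rows: y1, cols: y2

def sample(n):
    flat = rng.choice(4, size=n, p=p_joint.ravel())
    y1, y2 = flat // 2, flat % 2
    # 3 binary features whose "on" probability depends on the target pair
    theta = np.array([[0.1, 0.3], [0.6, 0.9]])
    X = (rng.random((n, 3)) < theta[y1, y2][:, None]).astype(int)
    return X, y1, y2

X, y1, y2 = sample(5000)

# Generative model: estimate p(y1,y2) and per-feature Bernoulli p(x_j|y1,y2)
counts = np.zeros((2, 2))
theta_hat = np.zeros((2, 2, 3))
for a in (0, 1):
    for b in (0, 1):
        mask = (y1 == a) & (y2 == b)
        counts[a, b] = mask.sum()
        theta_hat[a, b] = (X[mask].sum(0) + 1) / (mask.sum() + 2)  # Laplace
prior_hat = counts / counts.sum()

def predict_joint(x):
    """argmax over the target PAIR of log p(y1,y2) + sum_j log p(x_j|y1,y2)."""
    ll = np.log(prior_hat)
    for a in (0, 1):
        for b in (0, 1):
            t = theta_hat[a, b]
            ll[a, b] += np.sum(x * np.log(t) + (1 - x) * np.log(1 - t))
    return np.unravel_index(np.argmax(ll), (2, 2))
```

The key move matching the abstract is that `prior_hat` is the estimated joint distribution of the targets, so the prediction exploits the dependency between them instead of assuming they are conditionally independent given the input.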

Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation Artificial Intelligence

In reinforcement learning for visual navigation, it is common to develop a model for each new task, and train that model from scratch with task-specific interactions in 3D environments. However, this process is expensive; massive amounts of interactions are needed for the model to generalize well. Moreover, this process is repeated whenever there is a change in the task type or the goal modality. We present a unified approach to visual navigation using a novel modular transfer learning model. Our model can effectively leverage its experience from one source task and apply it to multiple target tasks (e.g., ObjectNav, RoomNav, ViewNav) with various goal modalities (e.g., image, sketch, audio, label). Furthermore, our model enables zero-shot experience learning, whereby it can solve the target tasks without receiving any task-specific interactive training. Our experiments on multiple photorealistic datasets and challenging tasks show that our approach learns faster, generalizes better, and outperforms SoTA models by a significant margin.

A Survey on Visual Transfer Learning using Knowledge Graphs Artificial Intelligence

Recent approaches in computer vision utilize deep learning methods, as they perform quite well when the training and testing domains follow the same underlying data distribution. However, it has been shown that minor variations in the images that occur when using these methods in the real world can lead to unpredictable errors. Transfer learning is the area of machine learning that tries to prevent these errors. In particular, approaches that augment image data using auxiliary knowledge encoded in language embeddings or knowledge graphs (KGs) have achieved promising results in recent years. This survey focuses on visual transfer learning approaches using KGs. KGs can represent auxiliary knowledge either in an underlying graph-structured schema or in a vector-based knowledge graph embedding. To enable the reader to solve visual transfer learning problems with the help of specific KG-DL configurations, we start with a description of relevant KG modeling structures and expressions, such as directed labeled graphs, hypergraphs, and hyper-relational graphs. We explain the notion of a feature extractor, referring specifically to visual and semantic features. We provide a broad overview of knowledge graph embedding methods and describe several joint training objectives suitable for combining them with high-dimensional visual embeddings. The main section introduces four different categories of how a KG can be combined with a DL pipeline: 1) Knowledge Graph as a Reviewer; 2) Knowledge Graph as a Trainee; 3) Knowledge Graph as a Trainer; and 4) Knowledge Graph as a Peer. To help researchers find evaluation benchmarks, we provide an overview of generic KGs and a set of image processing datasets and benchmarks including various types of auxiliary knowledge. Last, we summarize related surveys and give an outlook on challenges and open issues for future research.