Transfer Learning


Transfer Learning : the time savior

#artificialintelligence

The whole premise of artificial intelligence and deep learning is to imitate the human brain, and one of the most notable features of our brain is its inherent ability to transfer knowledge across tasks. In simple terms, this means using what you learnt in kindergarten, adding two numbers, to solve matrix addition in high-school mathematics. The field of machine learning makes use of the same concept: a model that has already been trained on large amounts of data can boost the accuracy of our own model. Here is my code for the transfer learning project I have implemented. I have made use of OpenCV to capture real-time images of the face and used them as training and test datasets.
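
The capture-then-split workflow described above can be sketched roughly as follows. `split_frames` and `capture_faces` are hypothetical helpers, not the author's actual code; the OpenCV import is deferred so the split logic runs even without a camera or OpenCV installed:

```python
import random

def split_frames(frames, test_fraction=0.2, seed=0):
    """Shuffle captured frames and split them into training and test sets."""
    frames = list(frames)
    random.Random(seed).shuffle(frames)
    n_test = int(len(frames) * test_fraction)
    return frames[n_test:], frames[:n_test]

def capture_faces(n_frames=100, camera_index=0):
    """Grab face images from the default webcam (requires OpenCV)."""
    import cv2  # imported lazily so split_frames works without OpenCV
    cap = cv2.VideoCapture(camera_index)
    frames = []
    while len(frames) < n_frames:
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.resize(frame, (128, 128)))
    cap.release()
    return frames
```

The captured frames would then feed a fine-tuning loop on a pre-trained face model; the split keeps a held-out set to check that the transfer actually generalises.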


4S-DT: Self Supervised Super Sample Decomposition for Transfer learning with application to COVID-19 detection

#artificialintelligence

Due to the high availability of large-scale annotated image datasets, knowledge transfer from pre-trained models has shown outstanding performance in medical image classification. However, building a robust image classification model for datasets with data irregularity or imbalanced classes can be a very challenging task, especially in the medical imaging domain. In this paper, we propose a novel deep convolutional neural network, which we call the Self Supervised Super Sample Decomposition for Transfer learning (4S-DT) model. Our main contribution is a novel self-supervised learning mechanism guided by a super sample decomposition of unlabelled chest X-ray images. We used 50,000 unlabelled chest X-ray images to achieve our coarse-to-fine transfer learning, with an application to COVID-19 detection as an exemplar.


9 Free Online Resources To Learn Transfer Learning

#artificialintelligence

Transfer learning can be seen as a shortcut to solving complex machine learning problems. In simple words, it is used to enhance a model's learning, shorten training time and speed up learning on the current task. The technique can be applied in computer vision, where a model has to learn from images or videos, as well as in NLP. In this article, we list the top 9 free resources in transfer learning that are must-reads. About: This tutorial is provided by the developers of TensorFlow, where you will learn how to classify images of cats and dogs using transfer learning from a pre-trained network.
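
The core idea that tutorials like the one above teach — freeze a pre-trained feature extractor and train only a small new head — can be shown without any framework at all. Below is a minimal NumPy sketch with a random frozen projection standing in for the pre-trained network; the data, sizes, and `train_head` helper are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: a fixed, frozen projection.
W_frozen = rng.normal(size=(64, 16)) / np.sqrt(64)

def features(x):
    """Frozen feature extractor: never updated during fine-tuning."""
    return np.maximum(x @ W_frozen, 0.0)  # ReLU

def train_head(x, y, lr=0.1, steps=300):
    """Train only a logistic-regression head on top of frozen features."""
    f = features(x)
    w = np.zeros(f.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(f @ w)))
        w -= lr * f.T @ (p - y) / len(y)
    return w

# Toy two-class problem (think "cats vs dogs"): two blobs in 64 dimensions.
x = np.vstack([rng.normal(-1, 1, (50, 64)), rng.normal(1, 1, (50, 64))])
y = np.array([0] * 50 + [1] * 50)
w = train_head(x, y)
acc = np.mean((1 / (1 + np.exp(-(features(x) @ w))) > 0.5) == y)
```

Because only the head's 16 weights are trained, very little data and compute are needed — which is exactly why transfer learning is called a shortcut.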


Dense pose for animal classes with transfer learning

#artificialintelligence

We present the most advanced framework for dense pose estimation for chimpanzees. It will help primatologists and other scientists study how chimps across Africa behave in the wild and in captive settings. The framework leverages a large-scale data set of unlabeled videos in the wild, a pretrained dense pose estimator for humans, and dense self-training techniques. This is a joint project in collaboration with our partners the Max Planck Institute for Evolutionary Anthropology (MPI EVA) and the Pan African Programme: The Cultured Chimpanzee, and their network of collaborators. We show that we can train a model to detect and recognize chimpanzees by transferring knowledge from existing detection, segmentation, and human dense pose labeling models.


Towards Knowledgeable Supervised Lifelong Learning Systems

Journal of Artificial Intelligence Research

Learning a sequence of tasks is a long-standing challenge in machine learning. This setting applies to learning systems that observe examples of a range of tasks at different points in time. A learning system should become more knowledgeable as more related tasks are learned. Although the problem of learning sequentially was acknowledged for the first time decades ago, the research in this area has been rather limited. Research in transfer learning, multitask learning, metalearning and deep learning has studied some challenges of these kinds of systems. Recent research in lifelong machine learning and continual learning has revived interest in this problem. We propose Proficiente, a full framework for long-term learning systems. Proficiente relies on knowledge transferred between hypotheses learned with Support Vector Machines. The first component of the framework is focused on transferring forward selectively from a set of existing hypotheses or functions representing knowledge acquired during previous tasks to a new target task. A second component of Proficiente is focused on transferring backward, a novel ability of long-term learning systems that aim to exploit knowledge derived from recent tasks to encourage refinement of existing knowledge. We propose a method that transfers selectively from a task learned recently to existing hypotheses representing previous tasks. The method encourages retention of existing knowledge whilst refining. We analyse the theoretical properties of the proposed framework. Proficiente is accompanied by an agnostic metric that can be used to determine if a long-term learning system is becoming more knowledgeable. We evaluate Proficiente in both synthetic and real-world datasets, and demonstrate scenarios where knowledgeable supervised learning systems can be achieved by means of transfer.
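
Proficiente's forward transfer is built on Support Vector Machines, but the selection idea — score each stored hypothesis on a small labelled sample of the target task and reuse only those that beat chance — can be sketched in a few lines. The toy threshold classifiers and the `select_source` helper below are illustrative only, not the paper's actual method:

```python
def accuracy(h, xs, ys):
    """Fraction of the labelled sample a hypothesis gets right."""
    return sum(h(x) == y for x, y in zip(xs, ys)) / len(xs)

def select_source(hypotheses, target_xs, target_ys, min_acc=0.5):
    """Selective forward transfer: keep only source hypotheses that beat
    chance on a small labelled sample of the target task, and return the
    best of them as the starting point (or None if nothing transfers)."""
    scored = [(accuracy(h, target_xs, target_ys), h) for h in hypotheses]
    scored = [(a, h) for a, h in scored if a > min_acc]
    return max(scored, default=(None, None), key=lambda t: t[0])[1]
```

Returning `None` when no source hypothesis beats chance matters: transferring from an unrelated task (negative transfer) is worse than starting fresh, which is why the selection is *selective*.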


Recycling AI Algorithms with Transfer Learning

#artificialintelligence

Algorithm developers are using transfer learning to reuse the experience gained by one algorithm as the starting point for building another one for performing related tasks. Humans can transfer their knowledge across different tasks. For instance, people who know how to ride a bike can easily transfer that knowledge to learning how to drive a car. Transfer learning is a similar concept. It is a process that enables developers to take the experience gained by one model while performing one task and apply it to a second model to solve a different but related task.


Transfer Learning in Computer Vision: A Case Study

#artificialintelligence

The conclusion to the series on computer vision talks about the benefits of transfer learning and how anyone can train networks with reasonable accuracy. Usually, articles and tutorials on the web don't include methods and hacks to improve accuracy. The aim of this article is to help you get the most information from one source. Stick on till the end to build your own classifier. The ImageNet moment was remarkable in computer vision and deep learning, as it created opportunities for people to reuse the knowledge procured through several hours or days of training with high-end GPUs.


QuantNet: Transferring Learning Across Systematic Trading Strategies

arXiv.org Machine Learning

In this work we introduce QuantNet: an architecture that is capable of transferring knowledge over systematic trading strategies in several financial markets. By having a system that is able to leverage and share knowledge across them, our aim is two-fold: to circumvent the so-called Backtest Overfitting problem, and to generate higher risk-adjusted returns and fewer drawdowns. To do that, QuantNet exploits a form of modelling called transfer learning, where two layers are market-specific and another one is market-agnostic. This ensures that the transfer occurs across trading strategies, with the market-agnostic layer acting as a vehicle to share knowledge, cross-influence each strategy's parameters, and ultimately the trading signals produced. In order to evaluate QuantNet, we compared its performance against the option of not performing transfer learning, that is, using market-specific old-fashioned machine learning. In summary, our findings suggest that QuantNet performs better than non-transfer-based trading strategies, improving the Sharpe ratio by 15% and the Calmar ratio by 41% across 3103 assets in 58 equity markets across the world. Code coming soon.
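
The layer layout the abstract describes — two market-specific layers around one market-agnostic layer — can be illustrated with a toy NumPy sketch. The `MarketStrategy` class, layer sizes, and `tanh` signal are invented for illustration and are not QuantNet's actual architecture; the point is that every market holds the *same* shared weight object:

```python
import numpy as np

class MarketStrategy:
    """Two market-specific linear layers around one shared, market-agnostic
    layer. The shared layer object is reused across markets, so an update to
    it in any one market changes the weights seen by all of them."""
    def __init__(self, n_features, shared_W, rng):
        self.enc = rng.normal(0, 0.1, (n_features, shared_W.shape[0]))  # market-specific
        self.shared_W = shared_W                                        # market-agnostic
        self.dec = rng.normal(0, 0.1, (shared_W.shape[1], 1))           # market-specific

    def signal(self, x):
        """Map market features to a trading signal in (-1, 1)."""
        return np.tanh(x @ self.enc @ self.shared_W @ self.dec)

rng = np.random.default_rng(0)
shared = rng.normal(0, 0.1, (8, 8))  # one copy, shared by every market
markets = {m: MarketStrategy(4, shared, rng) for m in ["US", "UK", "JP"]}
```

Because only the encoder and decoder are per-market, most of the capacity is shared — which is what lets knowledge learned in one market regularise the strategies in the others.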


On-Device Transfer Learning for Personalising Psychological Stress Modelling using a Convolutional Neural Network

arXiv.org Machine Learning

Stress is a growing concern in modern society, adversely impacting the wider population more than ever before. The accurate inference of stress may open the possibility for personalised interventions. However, individual differences between people limit the generalisability of machine learning models to infer emotions, as people's physiology when experiencing the same emotions varies widely. In addition, it is time consuming and extremely challenging to collect large datasets of individuals' emotions, as this relies on users labelling sensor data in real time for extended periods. We propose the development of a personalised, cross-domain 1D CNN by utilising transfer learning from an initial base model trained using data from 20 participants completing a controlled stressor experiment. By utilising physiological sensors (HR, HRV, EDA) embedded within edge computing interfaces that additionally contain a labelling technique, it is possible to collect a small real-world personal dataset that can be used for on-device transfer learning to improve model personalisation and cross-domain performance.
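
The personalisation step described above — start from a base model trained on the controlled study, then take a few gradient steps on a small on-device dataset — might look like this in a deliberately simplified linear form. The weights, data, and `personalise` helper are made up for illustration; the paper itself uses a 1D CNN:

```python
import numpy as np

rng = np.random.default_rng(1)

# Weights of a base stress model, notionally trained on the controlled
# 20-participant dataset (the values here are illustrative only).
w_base = np.array([0.5, -0.2, 0.1])  # HR, HRV, EDA coefficients

def personalise(w, x, y, lr=0.05, steps=100):
    """A few on-device gradient steps on a small personal dataset,
    starting from the base model's weights (transfer learning)."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * x.T @ (x @ w - y) / len(y)
    return w

# A small personal dataset whose true response differs from the base model,
# mimicking individual physiological differences.
x = rng.normal(size=(30, 3))
y = x @ np.array([0.9, -0.6, 0.4]) + 0.05 * rng.normal(size=30)
w_personal = personalise(w_base, x, y)

def mse(w):
    return float(np.mean((x @ w - y) ** 2))
```

Starting from `w_base` rather than from scratch is the whole point: with only 30 personal samples, the base model supplies most of the knowledge and the fine-tuning only corrects for the individual.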


Google, MIT Partner on Visual Transfer Learning to Help Robots Learn to Grasp, Manipulate Objects

#artificialintelligence

A team from the Massachusetts Institute of Technology (MIT) and Google's artificial intelligence (AI) arm has found a way to use visual transfer learning to help robots grasp and manipulate objects more accurately. "We investigate whether existing pre-trained deep learning visual feature representations can improve the efficiency of learning robotic manipulation tasks, like grasping objects," write Google's Yen-Chen Lin and Andy Zeng of the research. "By studying how we can intelligently transfer neural network weights between vision models and affordance-based manipulation models, we can evaluate how different visual feature representations benefit the exploration process and enable robots to quickly acquire manipulation skills using different grippers. We initialized our affordance-based manipulation models with backbones based on the ResNet-50 architecture and pre-trained on different vision tasks, including a classification model from ImageNet and a segmentation model from COCO." With different initialisations, the robot was then tasked with learning to grasp a diverse set of objects through trial and error.
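
Initialising a manipulation model from a pre-trained vision backbone while swapping out the task head can be sketched with plain weight dictionaries. Parameter names like `backbone.conv1` and the `init_from_backbone` helper are illustrative, not the actual ResNet-50 parameter names or the team's code:

```python
import numpy as np

def init_from_backbone(pretrained, head_shape, rng):
    """Initialise a new model from a pre-trained checkpoint: copy every
    backbone parameter, but give the task-specific head fresh weights,
    since the source vision task used a different head."""
    model = {name: w.copy() for name, w in pretrained.items()
             if not name.startswith("head.")}
    model["head.w"] = rng.normal(0, 0.01, head_shape)
    return model

rng = np.random.default_rng(0)
checkpoint = {  # stand-in for an ImageNet- or COCO-trained backbone
    "backbone.conv1": rng.normal(size=(3, 8)),
    "backbone.conv2": rng.normal(size=(8, 8)),
    "head.w": rng.normal(size=(8, 1000)),  # 1000-way classification head
}
grasp_model = init_from_backbone(checkpoint, head_shape=(8, 2), rng=rng)
```

Swapping the checkpoint (ImageNet classification vs. COCO segmentation) while keeping everything else fixed is exactly how the study compares which visual representation speeds up the robot's trial-and-error learning.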