deep multi-task learning


Modular Universal Reparameterization: Deep Multi-task Learning Across Diverse Domains

Neural Information Processing Systems

As deep learning applications continue to become more diverse, an interesting question arises: Can general problem solving arise from jointly learning several such diverse tasks? To approach this question, deep multi-task learning is extended in this paper to the setting where there is no obvious overlap between task architectures. The idea is that any set of (architecture,task) pairs can be decomposed into a set of potentially related subproblems, whose sharing is optimized by an efficient stochastic algorithm. The approach is first validated in a classic synthetic multi-task learning benchmark, and then applied to sharing across disparate architectures for vision, NLP, and genomics tasks. It discovers regularities across these domains, encodes them into sharable modules, and combines these modules systematically to improve performance in the individual tasks.
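The core idea — decomposing diverse (architecture, task) pairs into subproblems that draw parameters from a shared pool of modules — can be illustrated with a toy sketch. This is an illustration only, not the paper's algorithm: the module count, soft-selection scores, and softmax mixture below are assumptions for exposition, whereas the paper optimizes module sharing with an efficient stochastic algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared pool of parameter modules that any task location may draw from.
n_modules, d = 4, 8
modules = [rng.normal(size=(d, d)) for _ in range(n_modules)]

# Two toy "tasks", each with two linear locations; each (task, location)
# has scores over which shared module to use.
scores = rng.normal(size=(2, 2, n_modules))  # (task, location, module)

def forward(task, x):
    for loc in range(2):
        # Soft-select a module via a softmax mixture (an illustrative stand-in
        # for the paper's stochastic sharing optimization).
        p = np.exp(scores[task, loc])
        p /= p.sum()
        W = sum(pi * M for pi, M in zip(p, modules))
        x = np.tanh(x @ W)
    return x

y = forward(0, rng.normal(size=(3, d)))  # task 0, batch of 3 inputs
```

Because both tasks index into the same module pool, regularities learned on one task can be reused wherever another task's scores select the same module.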


Overcome model déjà vu by leveraging diverse datasets with deep multi-task learning - Artificial Intelligence



"When we try to pick out anything by itself, we find it hitched to everything else in the universe" – John Muir Often in real-world tasks, there isn't enough data to take full advantage of deep learning. However, it is possible to leverage other datasets to reach a critical mass. Sharing knowledge across diverse datasets leads to more general knowledge, deeper insights and more well-informed decisions. This is especially true in domains like healthcare, where data for any particular task can be expensive or dangerous to collect. Modeling datasets separately wastes useful structure that could be shared between them.


Face Alignment: Deep multi-task learning - WebSystemer.no


If you're already familiar with deep learning, you'll recognize this as a multi-output problem, since we're trying to solve multiple tasks at the same time. Since we're using Keras for the implementation, a multi-output model must be built with the Functional API rather than the Sequential API. The data gives us five tasks, of which face alignment is the main one, so we're going to train a single multi-output model on all five together. We will train the main task (face alignment) alongside the auxiliary tasks to evaluate the effectiveness of deep multi-task learning.
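A five-output model of this kind could look as follows with the Keras Functional API. This is a minimal sketch, not the article's actual model: the input size, backbone layers, and the particular auxiliary heads (smile, glasses, gender, pose) and loss weights are assumptions — the main head regresses five facial landmarks as ten (x, y) coordinates.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Shared convolutional backbone (hypothetical sizes).
inputs = keras.Input(shape=(96, 96, 1), name="face_image")
x = layers.Conv2D(16, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Main task: face alignment, 5 landmarks -> 10 (x, y) coordinates.
landmarks = layers.Dense(10, name="landmarks")(x)
# Auxiliary heads (hypothetical): binary attributes and a pose class.
smile = layers.Dense(1, activation="sigmoid", name="smile")(x)
glasses = layers.Dense(1, activation="sigmoid", name="glasses")(x)
gender = layers.Dense(1, activation="sigmoid", name="gender")(x)
pose = layers.Dense(5, activation="softmax", name="pose")(x)

model = keras.Model(inputs=inputs,
                    outputs=[landmarks, smile, glasses, gender, pose])
model.compile(
    optimizer="adam",
    loss={"landmarks": "mse",
          "smile": "binary_crossentropy",
          "glasses": "binary_crossentropy",
          "gender": "binary_crossentropy",
          "pose": "categorical_crossentropy"},
    # Down-weight the auxiliary tasks so alignment dominates training.
    loss_weights={"landmarks": 1.0, "smile": 0.2, "glasses": 0.2,
                  "gender": 0.2, "pose": 0.2},
)
```

The Sequential API cannot express this because the network branches into five heads after the shared backbone; the Functional API handles the branching naturally, and `loss_weights` controls how strongly each auxiliary task influences the shared features.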


A Deep Multi-Task Learning Approach to Skin Lesion Classification

AAAI Conferences

Skin lesion identification is a key step toward dermatological diagnosis. When describing a skin lesion, it is important to note its body site, as many skin diseases commonly affect particular parts of the body. To exploit the correlation between skin lesions and their body site distributions, in this study we investigate whether skin lesion classification can be improved using the additional context provided by body location. Specifically, we build a deep multi-task learning (MTL) framework to jointly optimize skin lesion classification and body location classification (the latter serving as an inductive bias). Our MTL framework uses a state-of-the-art ImageNet-pretrained model with specialized loss functions for the two related tasks. Our experiments show that the proposed MTL-based method performs more robustly than its standalone (single-task) counterpart.
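The two-head design described above — a shared ImageNet-style backbone with separate losses for lesion type and body location — can be sketched in Keras. This is an assumption-laden sketch, not the paper's implementation: the backbone choice (ResNet50), the class counts (7 lesion classes, 10 body sites), and the loss weighting are hypothetical, and `weights=None` is used here only to avoid a pretrained-weight download (the paper uses ImageNet-pretrained weights).

```python
from tensorflow import keras
from tensorflow.keras import layers

# Shared backbone; the paper would pass weights="imagenet" instead.
backbone = keras.applications.ResNet50(include_top=False, weights=None,
                                       input_shape=(224, 224, 3),
                                       pooling="avg")
features = backbone.output

# Two task heads sharing the backbone features (hypothetical class counts).
lesion = layers.Dense(7, activation="softmax", name="lesion")(features)
body_site = layers.Dense(10, activation="softmax", name="body_site")(features)

model = keras.Model(backbone.input, [lesion, body_site])
model.compile(
    optimizer="adam",
    loss={"lesion": "categorical_crossentropy",
          "body_site": "categorical_crossentropy"},
    # Body location acts as an inductive bias, so it gets a smaller weight.
    loss_weights={"lesion": 1.0, "body_site": 0.3},
)
```

Training the body-site head alongside the lesion head pushes the shared features to encode location-dependent structure, which is the inductive bias the abstract describes.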