Unleashing the Power of Multi-Task Learning: A Comprehensive Survey Spanning Traditional, Deep, and Pretrained Foundation Model Eras
Yu, Jun, Dai, Yutong, Liu, Xiaokang, Huang, Jin, Shen, Yishan, Zhang, Ke, Zhou, Rong, Adhikarla, Eashan, Ye, Wenxuan, Liu, Yixin, Kong, Zhaoming, Zhang, Kai, Yin, Yilong, Namboodiri, Vinod, Davison, Brian D., Moore, Jason H., Chen, Yong
–arXiv.org Artificial Intelligence
Multi-task learning (MTL) is a learning paradigm that effectively leverages both task-specific and shared information to address multiple related tasks simultaneously. In contrast to single-task learning (STL), MTL offers a suite of benefits that enhance both the training process and inference efficiency. MTL's key advantages encompass a streamlined model architecture, performance enhancement, and cross-domain generalizability. Over the past twenty years, MTL has become widely recognized as a flexible and effective approach in various fields, including computer vision (CV), natural language processing (NLP), recommendation systems, disease prognosis and diagnosis, and robotics. This survey provides a comprehensive overview of the evolution of MTL, encompassing the technical aspects of cutting-edge methods from traditional approaches through deep learning to the latest trend of pretrained foundation models. Our survey methodically categorizes MTL techniques into five key areas: regularization, relationship learning, feature propagation, optimization, and pre-training. This categorization not only chronologically outlines the development of MTL but also dives into various specialized strategies within each category. Furthermore, the survey reveals how MTL evolves from handling a fixed set of tasks to embracing a more flexible approach free from task or modality constraints. It explores the concepts of task-promptable and task-agnostic training, along with the capacity for zero-shot learning (ZSL), which unleashes the untapped potential of this historically coveted learning paradigm. Overall, we hope this survey provides the research community with a comprehensive overview of the advancements in MTL from its inception in 1997 to the present in 2023. We address present challenges and look ahead to future possibilities, shedding light on the opportunities and potential avenues for MTL research in a broad manner. This project is publicly available at https://github.com/junfish/Awesome-Multitask-Learning.
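The idea of leveraging both shared and task-specific information can be illustrated with the classic hard-parameter-sharing architecture: a shared encoder feeds multiple task-specific heads. The sketch below is purely illustrative (not from the paper), with hypothetical toy weights and task names:

```python
# Minimal sketch of hard parameter sharing in MTL (illustrative only).
# A shared encoder is reused across tasks; each task keeps its own head.

def shared_encoder(x, w_shared):
    """Shared layer: weighted sum followed by ReLU. Parameters are common to all tasks."""
    h = sum(wi * xi for wi, xi in zip(w_shared, x))
    return max(0.0, h)

def task_head(h, w_task, b_task):
    """Task-specific linear head operating on the shared representation."""
    return w_task * h + b_task

# Hypothetical input and parameters.
x = [1.0, 2.0]
w_shared = [0.5, 0.25]                      # shared across every task
heads = {"task_a": (2.0, 0.1),              # (weight, bias) per task
         "task_b": (-1.0, 0.3)}

h = shared_encoder(x, w_shared)             # shared representation, computed once
outputs = {name: task_head(h, w, b) for name, (w, b) in heads.items()}
```

During joint training, gradients from every task's loss flow into `w_shared`, which is how the shared representation comes to encode information useful across tasks.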
Apr-29-2024
- Country:
- Asia (1.00)
- Europe > United Kingdom
- England (0.27)
- North America > United States (1.00)
- Genre:
- Overview (1.00)
- Research Report
- New Finding (0.92)
- Promising Solution (1.00)
- Industry:
- Automobiles & Trucks (0.67)
- Education (1.00)
- Health & Medicine
- Diagnostic Medicine > Imaging (1.00)
- Health Care Technology (0.67)
- Pharmaceuticals & Biotechnology (1.00)
- Therapeutic Area > Neurology
- Alzheimer's Disease (0.45)
- Information Technology > Security & Privacy (0.67)
- Leisure & Entertainment (0.67)
- Media (0.67)
- Transportation > Ground (0.67)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning
- Learning Graphical Models > Directed Networks
- Bayesian Learning (0.67)
- Neural Networks > Deep Learning (1.00)
- Performance Analysis > Accuracy (1.00)
- Statistical Learning (1.00)
- Natural Language
- Large Language Model (1.00)
- Text Processing (0.92)
- Representation & Reasoning
- Object-Oriented Architecture (0.87)
- Optimization (1.00)
- Uncertainty (1.00)
- Vision > Face Recognition (1.00)