Disentangling Transfer and Interference in Multi-Domain Learning
Zhang, Yipeng, Hayes, Tyler L., Kanan, Christopher
arXiv.org Artificial Intelligence
Convolutional Neural Networks (CNNs) have achieved great success in a variety of computer vision tasks, including image classification, object detection, and semantic segmentation [56]. Although the inputs for a particular task can come from many domains, most studies develop models that solve only one task on a single domain. In contrast, humans and animals learn multiple tasks at the same time and exploit task similarities to make better task-level decisions. Inspired by this, multi-task learning (MTL) seeks to jointly learn a single model for several tasks, typically on the same input domain [51]. Multi-domain learning (MDL) takes this a step further and requires models to learn multiple tasks spanning several domains [4]. By jointly learning feature representations, MTL and MDL models can achieve better per-task performance than models trained on a single task in isolation; this improvement is the result of positive knowledge transfer [21]. Unfortunately, jointly training a model on multiple tasks does not guarantee performance gains [54, 58].
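As a concrete illustration of the shared-representation idea described above, the following is a minimal sketch of a multi-domain model: a single CNN backbone is shared across domains, while each domain keeps its own classification head. The layer sizes, domain names, and two-domain training step are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MultiDomainNet(nn.Module):
    """Minimal sketch (not the paper's architecture): a shared CNN backbone
    feeds separate classification heads, one per task/domain."""

    def __init__(self, num_classes_per_task):
        super().__init__()
        # Shared feature extractor reused by every task/domain.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One lightweight head per task; only the head is task-specific.
        self.heads = nn.ModuleDict({
            task: nn.Linear(64, n_cls)
            for task, n_cls in num_classes_per_task.items()
        })

    def forward(self, x, task):
        return self.heads[task](self.backbone(x))

# Joint training step (hypothetical domains "domain_a" and "domain_b"):
# losses from different tasks share gradients through the common backbone,
# which is where transfer or interference between tasks can arise.
model = MultiDomainNet({"domain_a": 10, "domain_b": 100})
x_a, y_a = torch.randn(8, 3, 64, 64), torch.randint(0, 10, (8,))
x_b, y_b = torch.randn(8, 3, 64, 64), torch.randint(0, 100, (8,))
loss = nn.functional.cross_entropy(model(x_a, "domain_a"), y_a) \
     + nn.functional.cross_entropy(model(x_b, "domain_b"), y_b)
loss.backward()
```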
Jul-15-2021