Enabling Asymmetric Knowledge Transfer in Multi-Task Learning with Self-Auxiliaries
Olivier Graffeuille, Yun Sing Koh, Joerg Wicker, Moritz Lehmann
arXiv.org Artificial Intelligence
Knowledge transfer in multi-task learning is typically viewed as a dichotomy: positive transfer, which improves the performance of all tasks, or negative transfer, which hinders the performance of all tasks. In this paper, we investigate the understudied problem of asymmetric task relationships, where knowledge transfer aids the learning of certain tasks while hindering the learning of others. We propose an optimisation strategy that introduces additional cloned tasks, named self-auxiliaries, into the learning process to flexibly transfer knowledge between tasks asymmetrically. Our method can exploit asymmetric task relationships, benefiting from the positive transfer component while avoiding the negative transfer component. We demonstrate that asymmetric knowledge transfer provides substantial performance improvements over existing multi-task optimisation strategies on benchmark computer vision problems.

Multi-Task Learning (MTL) models learn multiple tasks jointly to exploit shared knowledge between tasks and improve the performance of all tasks. Knowledge is transferred between tasks in deep MTL systems by sharing neural network parameters [1, 2] or feature representations [3, 4]. Generally, it is assumed that if the tasks being learnt are related, knowledge transfer will be beneficial for learning, while dissimilar tasks may cause negative transfer, where the tasks' performance decreases. This view of knowledge transfer in multi-task learning implicitly assumes that task relationships are symmetric.
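To make the self-auxiliary idea concrete, below is a minimal numpy sketch of asymmetric transfer under hard parameter sharing. It is an illustration under stated assumptions, not the paper's implementation: the two-branch layout, the `lam` weighting, and all names are hypothetical. Task A's branch is trained on task A alone (so it is shielded from task B), while task B's branch is trained on task B plus a cloned copy of task A (the "self-auxiliary"), so knowledge flows from A into B's shared representation but not the reverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two linear regression tasks over the same input features.
X = rng.normal(size=(64, 8))
yA = X @ rng.normal(size=8)
yB = X @ rng.normal(size=8)

# Two branches of a hard parameter-sharing model (illustrative layout):
#   branch A: encoder + head for task A, trained on task A only.
#   branch B: encoder + head for task B, plus a *clone* of task A
#             (the self-auxiliary) whose loss also shapes branch B's encoder.
d_hidden = 8
enc_A = rng.normal(scale=0.1, size=(8, d_hidden))
enc_B = rng.normal(scale=0.1, size=(8, d_hidden))
head_A = np.zeros(d_hidden)
head_B = np.zeros(d_hidden)
head_auxA = np.zeros(d_hidden)   # self-auxiliary: cloned head for task A
lam, lr = 0.5, 0.01              # auxiliary weight and learning rate (assumed)

def mse_grad(H, head, y):
    """Return (loss, dL/dhead, dL/dH) for mean squared error."""
    r = H @ head - y
    return np.mean(r**2), 2 * (H.T @ r) / len(y), 2 * np.outer(r, head) / len(y)

for _ in range(500):
    HA, HB = X @ enc_A, X @ enc_B
    # Branch A sees only task A: protected from negative transfer from B.
    lA, gA_head, gA_H = mse_grad(HA, head_A, yA)
    # Branch B sees task B *and* the self-auxiliary clone of task A:
    # task A's signal transfers into branch B, asymmetrically.
    lB, gB_head, gB_H = mse_grad(HB, head_B, yB)
    lX, gX_head, gX_H = mse_grad(HB, head_auxA, yA)
    head_A -= lr * gA_head
    head_B -= lr * gB_head
    head_auxA -= lr * lam * gX_head
    enc_A -= lr * (X.T @ gA_H)
    enc_B -= lr * (X.T @ (gB_H + lam * gX_H))
```

At inference, task A's predictions come from branch A and task B's from branch B; the cloned head `head_auxA` exists only to route task A's training signal into branch B's shared encoder during optimisation.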
Oct-21-2024