Do Text-to-Text Multi-Task Learners Suffer from Task Conflict?
Mueller, David, Andrews, Nicholas, Dredze, Mark
Traditional multi-task learning architectures train a single model across multiple tasks through a shared encoder followed by task-specific decoders. Learning these models often requires specialized training algorithms that address task conflict in the shared parameter updates, which can otherwise lead to negative transfer. A newer style of multi-task learning in NLP homogenizes the architecture into a shared encoder and a language-model decoder, and performs surprisingly well across a range of diverse tasks. Does this new architecture suffer from task conflict that requires specialized training algorithms? We study how certain factors in the shift towards text-to-text models affect multi-task conflict and negative transfer, finding that both directional conflict and transfer are surprisingly constant across architectures.
arXiv.org Artificial Intelligence
Dec-13-2022
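
The task conflict discussed in the abstract is commonly made concrete as disagreement between per-task gradients on the shared parameters. The sketch below is a minimal, hypothetical illustration (not the authors' code): it builds a toy shared encoder with two task-specific heads and measures conflict as the cosine similarity between the two tasks' gradients on the shared encoder, where a negative value indicates directional conflict.

```python
import torch
import torch.nn as nn

# Minimal sketch of gradient-based task conflict (assumed setup, not the paper's code).
torch.manual_seed(0)

shared_encoder = nn.Linear(16, 32)                   # stand-in for a shared encoder
task_heads = [nn.Linear(32, 4) for _ in range(2)]    # task-specific decoders/heads
loss_fn = nn.CrossEntropyLoss()

def shared_gradient(task_id, x, y):
    """Flattened gradient of one task's loss w.r.t. the shared encoder's parameters."""
    shared_encoder.zero_grad()
    task_heads[task_id].zero_grad()
    logits = task_heads[task_id](shared_encoder(x))
    loss_fn(logits, y).backward()
    return torch.cat([p.grad.flatten() for p in shared_encoder.parameters()])

# Hypothetical batches for two tasks.
x0, y0 = torch.randn(8, 16), torch.randint(0, 4, (8,))
x1, y1 = torch.randn(8, 16), torch.randint(0, 4, (8,))

g0 = shared_gradient(0, x0, y0)
g1 = shared_gradient(1, x1, y1)

# Cosine similarity between per-task gradients; values below zero signal conflict.
conflict = torch.nn.functional.cosine_similarity(g0, g1, dim=0)
print(f"gradient cosine similarity between tasks: {conflict.item():.3f}")
```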