Inductive biases of multi-task learning and finetuning: multiple regimes of feature reuse
Neural Information Processing Systems
Neural networks are often trained on multiple tasks, either simultaneously (multi-task learning, MTL) or sequentially (pretraining and subsequent finetuning, PT+FT). In particular, it is common practice to pretrain neural networks on a large auxiliary task before finetuning on a downstream task with fewer samples. Despite the prevalence of this approach, the inductive biases that arise from learning multiple tasks are poorly characterized. In this work, we address this gap.