Auxiliary Learning by Implicit Differentiation
Aviv Navon, Idan Achituve, Haggai Maron, Gal Chechik, Ethan Fetaya
Training with multiple auxiliary tasks is a common practice in deep learning for improving performance on the main task of interest. Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss. We propose a novel framework, AuxiLearn, that targets both challenges, based on implicit differentiation. First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function. This network can learn non-linear interactions between auxiliary tasks. Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task. We evaluate AuxiLearn in a series of tasks and domains, including image segmentation and learning with attributes, and find that it consistently improves accuracy compared with competing methods.
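To illustrate the first idea, the sketch below shows what a learned non-linear combination of task losses could look like. This is a minimal illustration in plain numpy, not the paper's implementation: the one-hidden-layer form, the softplus activation, and all parameter names here are assumptions chosen for simplicity.

```python
import numpy as np

def combine_losses(losses, W1, b1, w2):
    """Hypothetical auxiliary network g(losses) -> scalar training objective.

    Instead of a fixed weighted sum of the main and auxiliary losses,
    a small network maps the vector of per-task losses to a single
    scalar, so it can represent non-linear interactions between tasks.
    """
    # One hidden layer with a softplus non-linearity (illustrative choice).
    h = np.log1p(np.exp(W1 @ losses + b1))
    # Linear read-out to a single scalar objective.
    return float(w2 @ h)

# Example: two task losses combined by an identity-initialized network.
losses = np.array([1.0, 2.0])      # [main-task loss, auxiliary loss]
W1 = np.eye(2)
b1 = np.zeros(2)
w2 = np.ones(2)
total = combine_losses(losses, W1, b1, w2)
```

In the paper's setup, the parameters of such a combining network would themselves be optimized on a separate (auxiliary) objective, with gradients obtained via implicit differentiation rather than by unrolling the inner training loop.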
Oct-5-2020