Multi-Task Learning as Multi-Objective Optimization

Ozan Sener, Vladlen Koltun

Neural Information Processing Systems 

To this end, we use algorithms developed in the gradient-based multi-objective optimization literature. These algorithms are not directly applicable to large-scale learning problems since they scale poorly with the dimensionality of the gradients and the number of tasks. We therefore propose an upper bound for the multi-objective loss and show that it can be optimized efficiently. We further prove that optimizing this upper bound yields a Pareto optimal solution under realistic assumptions.
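The upper-bound optimization in the paper builds on a min-norm subproblem over task gradients. As a hedged illustration only (not the paper's full Frank-Wolfe solver), the two-task case admits a closed-form solution: choose weights α and 1−α minimizing ‖α g₁ + (1−α) g₂‖² over α ∈ [0, 1]. The function name and interface below are assumptions of this sketch.

```python
import numpy as np

def min_norm_two_tasks(g1: np.ndarray, g2: np.ndarray) -> float:
    """Closed-form minimizer of ||a*g1 + (1-a)*g2||^2 over a in [0, 1].

    Illustrative two-task special case of the min-norm subproblem that
    appears in gradient-based multi-objective optimization; the general
    multi-task version requires an iterative solver.
    """
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        # Gradients coincide; any convex combination gives the same vector.
        return 0.5
    # Unconstrained minimizer, then projected onto [0, 1].
    alpha = float((g2 - g1) @ g2) / denom
    return float(np.clip(alpha, 0.0, 1.0))

# Usage: orthogonal gradients are balanced equally.
a = min_norm_two_tasks(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
print(a)  # 0.5
```

When one gradient dominates (e.g. g₂ = 2·g₁), the weight clips to the shorter gradient, which is the min-norm point of the segment between them.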
