NeurIPS 2020

Nan Du

Neural Information Processing Systems 

The paradigm of 'pretraining' on a set of relevant auxiliary tasks and then 'finetuning' on a target task has been applied successfully in many domains. However, when the auxiliary tasks are abundant and have complex relationships to the target task, relying on domain knowledge or searching over all possible pretraining setups is inefficient and suboptimal. To address this challenge, we propose a method that automatically selects, from a large set of auxiliary tasks, those that yield a representation most useful to the target task. In particular, we develop an efficient algorithm that performs automatic auxiliary task selection within a nested-loop meta-learning process. We apply this algorithm to clinical outcome prediction from electronic medical records, learning from a large number of self-supervised tasks that forecast patient trajectories. Experiments on a real clinical dataset demonstrate the superior predictive performance of our method compared to direct supervised learning, naive pretraining, and simple multi-task learning, particularly in low-data scenarios where the target task has very few examples. Through detailed ablation analysis, we further show that the learned selection rules are interpretable and generalize to unseen target tasks with new data.
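The nested-loop structure described above can be illustrated with a toy sketch. This is an assumed, simplified rendering of the idea rather than the paper's actual algorithm: each auxiliary task receives a learnable weight, the inner loop fits shared parameters on the weighted auxiliary losses, and the outer loop adjusts the weights (here by finite differences, for simplicity) to reduce the target task's loss. All task definitions, sizes, and learning rates below are illustrative.

```python
# Toy sketch of nested-loop auxiliary-task selection (assumed structure,
# not the paper's exact algorithm). Auxiliary and target tasks are linear
# regressions sharing a ground-truth parameter; one auxiliary task is
# dominated by noise and should receive a lower weight.
import numpy as np

rng = np.random.default_rng(0)
d = 5
theta_true = rng.normal(size=d)

def make_task(noise):
    X = rng.normal(size=(50, d))
    y = X @ theta_true + noise * rng.normal(size=50)
    return X, y

aux_tasks = [make_task(n) for n in (0.1, 0.1, 5.0)]  # third task is mostly noise
target_X, target_y = make_task(0.1)

def inner_loop(weights, steps=100, lr=0.01):
    """Inner loop: fit shared parameters on the weighted auxiliary losses."""
    theta = np.zeros(d)
    for _ in range(steps):
        grad = np.zeros(d)
        for w, (X, y) in zip(weights, aux_tasks):
            grad += w * 2 * X.T @ (X @ theta - y) / len(y)
        theta -= lr * grad
    return theta

def target_loss(theta):
    return float(np.mean((target_X @ theta - target_y) ** 2))

# Outer loop: update task weights to lower target loss (finite differences).
weights = np.ones(len(aux_tasks)) / len(aux_tasks)
for _ in range(20):
    eps = 1e-2
    base = target_loss(inner_loop(weights))
    grad_w = np.zeros_like(weights)
    for i in range(len(weights)):
        w_eps = weights.copy()
        w_eps[i] += eps
        grad_w[i] = (target_loss(inner_loop(w_eps)) - base) / eps
    weights = np.clip(weights - 0.5 * grad_w, 0.0, None)
    weights /= weights.sum()  # keep a valid task distribution
```

After the outer loop, the weight on the noisy auxiliary task should shrink relative to the informative tasks, mimicking the selection behavior the abstract describes.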
