meta-learning
Rapid Model Architecture Adaption for Meta-Learning

Neural Information Processing Systems

Most NAS methods today focus on a single task with a fixed hardware system, yet real-life model deployments covering multiple tasks and various hardware platforms will significantly prolong this process.



How Fine-Tuning Allows for Effective Meta-Learning

Neural Information Processing Systems

We illustrate these bounds in the logistic regression and neural network settings. In contrast, we establish settings where learning one representation for all tasks (i.e. using a "frozen representation" objective) fails. Notably, any such algorithm cannot outperform directly learning the target task with no other information, in the worst case.
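The contrast between a "frozen representation" objective and fine-tuning can be made concrete with a toy linear example. The sketch below is hypothetical and not from the paper: source tasks all lie along one input direction, so a representation learned on them collapses onto that direction; a target task depending only on the orthogonal direction then cannot be solved by adapting the head alone, while fine-tuning the representation solves it exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumption, not the paper's construction):
# inputs x in R^2, a 1-D representation phi(x) = x @ r, and a scalar
# head a, so predictions are a * (x @ r).
def fit_head(X, y, r):
    """Least-squares head on top of a fixed representation direction r."""
    z = X @ r
    return (z @ y) / (z @ z)

# Source tasks all align with e1, so representation learning
# would collapse onto r = e1 (assumed here, not re-derived).
r_frozen = np.array([1.0, 0.0])

# Target task depends only on e2 -- orthogonal to the frozen rep.
X = rng.normal(size=(200, 2))
y = X @ np.array([0.0, 1.0])

# Frozen-representation objective: only the head adapts.
a = fit_head(X, y, r_frozen)
frozen_err = np.mean((a * (X @ r_frozen) - y) ** 2)

# Fine-tuning: also adapt the representation, here via plain
# least squares over the full linear map.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
finetune_err = np.mean((X @ w - y) ** 2)

print(f"frozen rep MSE:  {frozen_err:.3f}")   # close to the variance of y
print(f"fine-tuned MSE:  {finetune_err:.3f}") # close to zero
```

The frozen model does no better than predicting from noise, matching the worst-case claim that such an algorithm cannot beat learning the target task from scratch, while fine-tuning recovers the target exactly.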