Distributionally robust minimization in meta-learning for system identification

Rufolo, Matteo, Piga, Dario, Forgione, Marco

arXiv.org Artificial Intelligence 

-- Meta learning aims to learn how to solve tasks, enabling the estimation of models that can be quickly adapted to new scenarios. This work explores distributionally robust minimization in meta learning for system identification. Standard meta learning approaches optimize the expected loss over tasks, overlooking task variability. We adopt an alternative, distributionally robust optimization paradigm that prioritizes high-loss tasks, enhancing performance in worst-case scenarios. Evaluated on a meta model trained on a class of synthetic dynamical systems and tested in both in-distribution and out-of-distribution settings, the proposed approach reduces failures in safety-critical applications.

Conventional system identification, which estimates a model from scratch for each new system, often leads to suboptimal results when data is scarce. A promising alternative is to integrate meta learning, a framework introduced in the 1980s [1] and recently revitalized for its ability to enable rapid adaptation across related tasks. By training a meta-learner on a distribution of similar systems, the model can generalize efficiently to unseen dynamics with minimal additional data [2].
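To make the contrast with expected-loss minimization concrete, the sketch below implements one common distributionally robust surrogate: averaging only the worst α-fraction of per-task losses (a CVaR-style objective). This is an illustrative assumption, not necessarily the exact formulation used in the paper; the function name and parameters are hypothetical.

```python
import numpy as np

def cvar_task_loss(task_losses, alpha=0.5):
    """CVaR-style robust objective: mean of the worst alpha-fraction of tasks.

    Standard meta learning would return np.mean(task_losses); focusing on
    the highest-loss tasks is one common distributionally robust surrogate.
    (Illustrative sketch; the paper's exact objective may differ.)
    """
    losses = np.asarray(task_losses, dtype=float)
    k = max(1, int(np.ceil(alpha * losses.size)))  # number of worst tasks kept
    worst = np.sort(losses)[-k:]                   # the k largest losses
    return worst.mean()

# With alpha=1.0 this reduces to the standard expected (mean) loss;
# smaller alpha concentrates training pressure on hard tasks.
per_task = [0.1, 0.2, 0.9, 1.5]
print(cvar_task_loss(per_task, alpha=0.5))  # mean of {0.9, 1.5} = 1.2
print(cvar_task_loss(per_task, alpha=1.0))  # plain mean = 0.675
```

In a training loop, this scalar would be minimized over the meta-parameters instead of the plain task average, so gradient updates are dominated by the tasks on which the meta model currently performs worst.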