Physics-informed features in supervised machine learning

Lampani, Margherita, Guastavino, Sabrina, Piana, Michele, Benvenuto, Federico

arXiv.org Machine Learning 

The intrinsic ill-posedness of this problem can be addressed within the framework of regularization theory (Kaipio & Somersalo 2006), i.e., as the problem of minimizing a non-linear functional made of the sum of two terms: a fitting term, in which the empirical risk is assessed by means of a loss function, and a penalty term that allows generalization while controlling the complexity of the solution. Finally, a real positive regularization parameter that balances the trade-off between the two terms has to be chosen by means of some regularization algorithm (Engl et al. 1996). When described in a Hilbert space setting, a representer theorem (Schölkopf et al. 2001; De Vito et al. 2004) provides an analytical solution of the minimum problem, given by the action of a feature-dependent kernel operator on a vector whose components can be determined analytically by means of classical Tikhonov regularization (Tikhonov 1963). From an operational perspective, a feature-based supervised machine learning process works as follows. Given an archive of annotated descriptors of the physical phenomenon, named features:

1. A standardization procedure generates a corresponding archive of annotated standardized features that are re-scaled and made dimensionless.
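The standardization step and the Tikhonov-regularized kernel solution implied by the representer theorem can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the synthetic data, the Gaussian kernel choice, and the value of the regularization parameter `lam` are all assumptions made for the example.

```python
import numpy as np

# Hypothetical annotated archive: rows are samples, columns are
# physical features with different scales and units (assumed data).
rng = np.random.default_rng(0)
X = rng.normal(loc=[5.0, 100.0], scale=[2.0, 30.0], size=(50, 2))
y = X @ np.array([1.0, 0.02]) + rng.normal(scale=0.1, size=50)

# Step 1: standardization -- re-scale each feature to zero mean and
# unit variance so the descriptors become dimensionless.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma

def gaussian_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Representer theorem: the minimizer is f(x) = sum_i alpha_i k(x_i, x),
# and classical Tikhonov regularization gives the coefficients in
# closed form, alpha = (K + lam * I)^{-1} y, where lam > 0 balances
# the fitting term against the penalty term.
lam = 1e-2
K = gaussian_kernel(Xs, Xs)
alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)

# Fitted values on the training inputs.
y_hat = K @ alpha
```

In practice `lam` would be chosen by a regularization algorithm (e.g. cross-validation or a discrepancy principle) rather than fixed by hand as here.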
