A note on the physical interpretation of neural PDE's

Succi, Sauro

arXiv.org Artificial Intelligence 

Machine Learning (ML) has taken science (and society) by storm in the last decade, with numerous applications which seem to defy our best theoretical and modeling tools [11]. Leaving aside a significant amount of hype, ML raises a number of genuine hopes of countering some of the most vexing challenges for the scientific method, particularly the curse of dimensionality [2]. This, however, does not come for free; in particular, the current trend towards an astronomical number of parameters (trillions in the case of recent chatbots), none of which lends itself to a direct physical interpretation, together with an unsustainable power demand, begs for a change of strategy, namely fewer weights and more insight [17]. In this paper, we present an attempt along this line. In particular, by highlighting the one-to-one mapping between ML procedures and discrete dynamical systems, we suggest that ML could be conducted by means of a restricted and more economical class of weight matrices, each of which can be interpreted as a specific information-propagation process.
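As one minimal illustration of the kind of mapping alluded to here (a standard observation from the neural-ODE literature, not the paper's specific construction): a residual-network layer update x_{k+1} = x_k + h f(x_k) is formally a forward-Euler discretization of the continuous dynamical system dx/dt = f(x), so a stack of such layers propagates information like a time-discretized flow. The weights W and b below are hypothetical placeholders.

```python
import numpy as np

def f(x, W, b):
    # generic layer nonlinearity; W, b are hypothetical weights
    return np.tanh(W @ x + b)

def resnet_step(x, W, b, h=0.1):
    # one residual block == one forward-Euler step of size h
    # for the ODE dx/dt = f(x)
    return x + h * f(x, W, b)

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3)) * 0.1
b = np.zeros(3)

x = np.ones(3)
for _ in range(10):  # 10 stacked blocks == 10 Euler steps
    x = resnet_step(x, W, b)
```

Under this reading, restricting the admissible weight matrices corresponds to restricting the class of discrete flows the network is allowed to realize.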
