
Collaborating Authors: Parisi


Underwater robotics expert reveals 'shipwreck city' hiding beneath major urban lake

FOX News

ROV specialist Phil Parisi is documenting nearly 100 underwater targets in Seattle's Lake Union, calling the urban lake a "shipwreck city" hiding a century of maritime history.


A CLuP algorithm to practically achieve $\sim 0.76$ SK--model ground state free energy

Stojnic, Mihailo

arXiv.org Machine Learning

We consider algorithmic determination of the $n$-dimensional Sherrington-Kirkpatrick (SK) spin glass model ground state free energy. It corresponds to a binary maximization of an indefinite quadratic form and, under the \emph{worst case} principles of classical NP complexity theory, it is hard to approximate within a $\log(n)^{const.}$ factor. On the other hand, the SK model's random nature allows (polynomial) spectral methods to \emph{typically} approach the optimum within a constant factor. Naturally, one is left with the fundamental question: can the residual (constant) \emph{computational gap} be erased? Following the success of \emph{Controlled Loosening-up} (CLuP) algorithms in planted models, we here devise a simple practical CLuP-SK algorithmic procedure for (non-planted) SK models. To analyze the \emph{typical} success of the algorithm, we associate to it (random) CLuP-SK models. Further connecting to recent random processes studies [94,97], we characterize the models and the CLuP-SK algorithm via fully lifted random duality theory (fl RDT) [98]. Moreover, running the algorithm, we demonstrate that its performance is in excellent agreement with theoretical predictions. In particular, already for $n$ on the order of a few thousand, CLuP-SK achieves $\sim 0.76$ ground state free energy and remarkably closely approaches the theoretical $n\rightarrow\infty$ limit $\approx 0.763$. For all practical purposes, this renders computing the SK model's near ground state free energy a \emph{typically} easy problem.
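The problem the abstract describes can be sketched concretely. The Python below maximizes the binary indefinite quadratic form $f(x) = x^T G x / (2 n^{3/2})$, whose $n\rightarrow\infty$ maximum is the Parisi value $\approx 0.763$, using the spectral baseline the abstract mentions plus greedy single-spin flips. This is a minimal illustration, not CLuP-SK itself; erasing the gap between what such a baseline reaches and $\approx 0.763$ is precisely what the paper's algorithm is about.

import numpy as np

def sk_energy(x, G, n):
    # normalized SK objective f(x) = x^T G x / (2 n^(3/2)); its max -> ~0.763 as n -> inf
    return x @ G @ x / (2 * n ** 1.5)

rng = np.random.default_rng(0)
n = 1500
A = rng.normal(size=(n, n))
G = (A + A.T) / np.sqrt(2)          # symmetric Gaussian couplings, off-diagonal ~ N(0, 1)
np.fill_diagonal(G, 0)

# spectral initialization: sign pattern of the leading eigenvector
w, V = np.linalg.eigh(G)
x = np.sign(V[:, -1])
x[x == 0] = 1

# greedy refinement: flip the single spin that most increases the objective
h = G @ x                           # local fields h_i = sum_j G_ij x_j
while True:
    delta = -4 * x * h              # change in x^T G x from flipping spin i (diag(G) = 0)
    i = int(np.argmax(delta))
    if delta[i] <= 1e-12:
        break                       # no single flip improves the objective
    h -= 2 * x[i] * G[:, i]         # update local fields for the flipped spin
    x[i] = -x[i]

print(f"n={n}, energy per spin ~= {sk_energy(x, G, n):.4f} (Parisi limit ~ 0.763)")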


Japan-born Syukuro Manabe among three winners of Nobel Prize in physics

The Japan Times

Japanese-American scientist Syukuro Manabe, Klaus Hasselmann of Germany and Giorgio Parisi of Italy on Tuesday won the Nobel Physics Prize for climate models and the understanding of physical systems. The Nobel committee said it was sending a message with its prize announcement just weeks before the COP26 climate summit in Glasgow, as the rate of global warming sets off alarm bells around the world. "The world leaders that haven't got the message yet, I'm not sure they will get it because we are saying it," said Thor Hans Hansson, chair of the Nobel Committee for Physics. "But … what we are saying is that the modeling of climate is solidly based in physics theory." Manabe, 90, and Hasselmann, 89, will share half of the 10 million kronor ($1.1 million) prize for their research on climate models.


M-ar-K-Fast Independent Component Analysis

Parisi, Luca

arXiv.org Artificial Intelligence

This study presents the m-arcsinh Kernel ('m-ar-K') Fast Independent Component Analysis ('FastICA') method ('m-ar-K-FastICA') for feature extraction. The kernel trick has enabled dimensionality reduction techniques to capture a higher extent of non-linearity in the data; however, reproducible, open-source kernels to aid feature extraction are still limited and may not be reliable when projecting features from entropic data. The m-ar-K function, freely available in Python and compatible with its open-source library 'scikit-learn', is hereby coupled with FastICA to achieve more reliable feature extraction in the presence of a high extent of randomness in the data, reducing the need for pre-whitening. Different classification tasks were considered, related to five (N = 5) open-access datasets of varying degrees of information entropy, available from scikit-learn and the University of California, Irvine (UCI) Machine Learning Repository. Experimental results demonstrate improvements in classification performance brought by the proposed feature extraction. The novel m-ar-K-FastICA dimensionality reduction approach is compared to the 'FastICA' gold standard method, supporting its higher reliability and computational efficiency, regardless of the underlying uncertainty in the data.
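As a hedged illustration of the kind of coupling the abstract describes, the sketch below applies an elementwise m-arcsinh transform before scikit-learn's FastICA inside a standard classification pipeline. The m-arcsinh form used here, f(x) = (1/3)·arcsinh(x) · (1/4)·√|x|, follows the author's earlier m-arcsinh work, but both this form and its exact placement in the pipeline are assumptions, not the paper's published code; the wine dataset stands in for the scikit-learn datasets mentioned.

import numpy as np
from sklearn.datasets import load_wine
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

def m_arcsinh(X):
    # assumed m-arcsinh form from the author's earlier work:
    # f(x) = (1/3)*arcsinh(x) * (1/4)*sqrt(|x|)
    return (np.arcsinh(X) / 3) * (np.sqrt(np.abs(X)) / 4)

X, y = load_wine(return_X_y=True)
pipe = make_pipeline(
    StandardScaler(),
    FunctionTransformer(m_arcsinh),   # kernel-style nonlinearity applied before ICA
    FastICA(n_components=5, whiten="unit-variance", random_state=0),
    LogisticRegression(max_iter=1000),
)
print("mean 5-fold accuracy:", cross_val_score(pipe, X, y, cv=5).mean())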


Why Does Deep Learning Not Have a Local Minimum?

@machinelearnbot

Editor's note: This post originally appeared as an answer to a Quora question, which also included the following: "As I understand it, the chance of having a zero derivative in each of the thousands of directions is low. Is there some other reason besides this?" Yes, there is a 'theoretical justification', and it has taken a couple of decades to flesh it out. I will first point out, however, that it has been observed in practice. This was pointed out by LeCun in his early work on LeNet, and is actually discussed in the 'orange book', "Pattern Classification" by Richard O. Duda, Peter E. Hart, and David G. Stork. The problem was addressed in condensed matter physics 20 years ago in the study of spin glasses.
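The spin-glass intuition can be made concrete with a toy numpy experiment: if the Hessian at a high-dimensional critical point behaves like a random symmetric (GOE-like) matrix, the probability that every eigenvalue is positive, i.e. that the critical point is a genuine local minimum rather than a saddle, collapses rapidly with dimension. The sketch below is a random-matrix caricature of that argument, not a real network Hessian.

import numpy as np

rng = np.random.default_rng(1)

def frac_minima(d, trials=2000):
    # fraction of random GOE-like "Hessians" whose eigenvalues are all positive
    hits = 0
    for _ in range(trials):
        A = rng.normal(size=(d, d))
        H = (A + A.T) / 2                    # symmetric random matrix, eigenvalues centered at 0
        if np.linalg.eigvalsh(H)[0] > 0:     # smallest eigenvalue positive => local minimum
            hits += 1
    return hits / trials

for d in (2, 4, 8, 16):
    print(d, frac_minima(d))
# the fraction decays extremely fast with d: in thousands of dimensions,
# a random critical point is essentially never a minimum, only a saddle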


Multi-objective Reinforcement Learning through Continuous Pareto Manifold Approximation

Parisi, Simone, Pirotta, Matteo, Restelli, Marcello

Journal of Artificial Intelligence Research

Many real-world control applications, from economics to robotics, are characterized by the presence of multiple conflicting objectives. In these problems, the standard concept of optimality is replaced by Pareto-optimality, and the goal is to find the Pareto frontier, a set of solutions representing different compromises among the objectives. Despite recent advances in multi-objective optimization, achieving an accurate representation of the Pareto frontier is still an important challenge. In this paper, we propose a reinforcement learning policy gradient approach to learn a continuous approximation of the Pareto frontier in multi-objective Markov Decision Problems (MOMDPs). Unlike previous policy gradient algorithms, where n optimization routines are executed to obtain n solutions, our approach performs a single gradient ascent run, generating at each step an improved continuous approximation of the Pareto frontier. The idea is to optimize the parameters of a function defining a manifold in the policy parameter space, so that the corresponding image in the objective space gets as close as possible to the true Pareto frontier. Besides deriving how to compute and estimate such a gradient, we also discuss the non-trivial issue of defining a metric to assess the quality of candidate Pareto frontiers. Finally, the properties of the proposed approach are empirically evaluated on two problems: a linear-quadratic Gaussian regulator and a water reservoir control task.
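A hedged toy sketch of the single-run idea follows: parameterize a curve θ(t; ρ) in policy-parameter space, evaluate the objective vectors of policies sampled along it, score the resulting set with a crude hypervolume-style metric, and ascend that score in ρ by finite differences. The two conflicting quadratic "returns" and the metric below are stand-ins invented for illustration; they are not the paper's estimator or its frontier-quality metric.

import numpy as np

def J(theta):
    # toy two-objective stand-in for MOMDP returns: two conflicting quadratics
    return np.array([-np.sum((theta - 1.0) ** 2), -np.sum((theta + 1.0) ** 2)])

def manifold(rho, ts):
    # linear manifold in policy space: theta(t) = rho[0] + t * rho[1], t in [0, 1]
    return [rho[0] + t * rho[1] for t in ts]

def score(rho, ts, ref=np.array([-20.0, -20.0])):
    # crude hypervolume-style frontier metric w.r.t. a fixed reference point
    pts = np.array([J(th) for th in manifold(rho, ts)])
    return np.mean(np.prod(np.maximum(pts - ref, 0.0), axis=1))

rng = np.random.default_rng(0)
rho = rng.normal(size=(2, 3)) * 0.1        # manifold parameters: base point and direction
ts = np.linspace(0.0, 1.0, 11)
lr, eps = 0.05, 1e-4

for step in range(300):                    # single gradient ascent run on rho
    grad = np.zeros_like(rho)
    for idx in np.ndindex(*rho.shape):     # finite-difference gradient of the frontier metric
        d = np.zeros_like(rho)
        d[idx] = eps
        grad[idx] = (score(rho + d, ts) - score(rho - d, ts)) / (2 * eps)
    rho += lr * grad

print("frontier points:\n", np.array([J(th) for th in manifold(rho, ts)]).round(2))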