Collaborating Authors

Barrera


The Lattice Overparametrization Paradigm for the Machine Learning of Lattice Operators

Marcondes, Diego, Barrera, Junior

arXiv.org Artificial Intelligence

The machine learning of lattice operators has three possible bottlenecks. From a statistical standpoint, it is necessary to design a constrained class of operators based on prior information, with low bias and low complexity relative to the sample size. From a computational perspective, there should be an efficient algorithm to minimize an empirical error over the class. From an understanding point of view, the properties of the learned operator need to be derived so that its behavior can be theoretically understood. The statistical bottleneck can be overcome thanks to the rich literature on the representation of lattice operators, but there is no general learning algorithm for them. In this paper, we discuss a learning paradigm in which, by overparametrizing a class via elements in a lattice, an algorithm for minimizing functions over a lattice can be applied to learn the operators. We present the stochastic lattice descent algorithm as a general algorithm for learning on constrained classes of operators, as long as a lattice overparametrization of the class is fixed, and we discuss previous works that serve as proofs of concept. Moreover, if there are algorithms to compute the basis of an operator from its overparametrization, then its properties can be deduced and the understanding bottleneck is also overcome. This learning paradigm has three properties that modern methods based on neural networks lack: control, transparency and interpretability. There is an increasing demand for methods with these characteristics, and we believe that mathematical morphology is in a unique position to supply them. The lattice overparametrization paradigm could be a missing piece for it to achieve its full potential within modern machine learning.
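As a rough illustration of the idea (not the paper's implementation), the sketch below does a stochastic descent on the Boolean lattice: a toy W-operator is represented by its output bit on each local window pattern, lattice neighbours differ by flipping a single bit, and moves are accepted only when they reduce the empirical error on a random minibatch. All names and the majority-operator toy problem are ours, chosen for the example.

```python
import random

def empirical_error(op, sample):
    """Fraction of (pattern, label) pairs the operator misclassifies."""
    return sum(op[x] != y for x, y in sample) / len(sample)

def stochastic_lattice_descent(sample, n_patterns, epochs=200, batch_size=8, seed=0):
    """Greedy stochastic descent on the Boolean lattice {0,1}^n_patterns."""
    rng = random.Random(seed)
    op = [0] * n_patterns              # start at the least element of the lattice
    for _ in range(epochs):
        batch = rng.sample(sample, min(batch_size, len(sample)))
        base = empirical_error(op, batch)
        # visit the lattice neighbours (single-bit flips) in random order
        for i in rng.sample(range(n_patterns), n_patterns):
            op[i] ^= 1                 # move to a neighbour
            err = empirical_error(op, batch)
            if err < base:
                base = err             # keep the improving move
            else:
                op[i] ^= 1             # revert
    return op

# Toy problem: learn the majority operator on 3-pixel windows (patterns 0..7).
data = [(p, int(bin(p).count("1") >= 2)) for p in range(8)]
learned = stochastic_lattice_descent(data * 4, n_patterns=8)
```

Because an accepted move must strictly decrease the minibatch error, bits that already agree with the data never flip back, so the descent settles on the target operator here.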


Learning Value-at-Risk and Expected Shortfall

Barrera, D, Crépey, S, Gobet, E, Nguyen, Hoang-Dung, Saadeddine, B

arXiv.org Machine Learning

We propose a non-asymptotic convergence analysis of a two-step approach to learning a conditional value-at-risk (VaR) and expected shortfall (ES) in a nonparametric setting, using Rademacher and Vapnik-Chervonenkis bounds. Our approach for the VaR is extended to the problem of learning multiple VaRs at once, corresponding to different quantile levels. This results in efficient learning schemes based on neural-network quantile and least-squares regressions. An a posteriori (non-nested) Monte Carlo procedure is introduced to estimate distances to the ground-truth VaR and ES without access to the latter. This is illustrated by numerical experiments in a Gaussian toy model and a financial case study where the objective is to learn a dynamic initial margin.
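A minimal unconditional sketch of the two-step idea, with a grid search standing in for the paper's neural-network regressions (the names and the grid approach are ours): step 1 estimates VaR at level alpha by minimising the pinball (quantile) loss, and step 2 estimates ES as the least-squares fit, i.e. the mean, of the losses beyond the learned VaR.

```python
import numpy as np

def pinball_loss(q, x, alpha):
    """Average quantile (pinball) loss of a candidate quantile q on sample x."""
    u = x - q
    return np.mean(np.maximum(alpha * u, (alpha - 1) * u))

rng = np.random.default_rng(0)
x = rng.normal(size=50_000)            # toy Gaussian model
alpha = 0.99

# Step 1: minimise the pinball loss over a grid of candidate VaR values.
grid = np.linspace(x.min(), x.max(), 1001)
var_hat = grid[np.argmin([pinball_loss(q, x, alpha) for q in grid])]

# Step 2: ES at level alpha is the mean loss on the tail beyond the VaR.
es_hat = x[x >= var_hat].mean()
```

For a standard normal, the 99% VaR is about 2.33 and the corresponding ES about 2.67, which the estimates above recover up to Monte Carlo error.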


Future Robots In The Workplace Are Coming For Retail Jobs

International Business Times

This article originally appeared on the Motley Fool. Robots will take jobs formerly done by people, and that will hit the retail space pretty hard. That does not mean an army of robotic workers will eliminate the need for humans entirely. Instead, jobs that can be easily automated will be, according to ZipRecruiter's Chief Economic Adviser Cathy Barrera in an email interview with The Motley Fool. ZipRecruiter, which was launched in 2010, started as a tool to help small businesses distribute job postings affordably.