
 Friedman, Jerome H.


Function Trees: Transparent Machine Learning

arXiv.org Machine Learning

A fundamental exercise in machine learning is the approximation of a function of several to many variables given values of the function, often contaminated with noise, at observed joint values of the input variables. The result can then be used to estimate unknown function values given corresponding inputs. The goal is to accurately estimate the underlying (non-noisy) outcome values, since the noise is by definition unpredictable. To the extent that this is successful, the estimated function may, in addition, be used to try to understand the underlying phenomena giving rise to the data. Even when prediction accuracy is the dominant concern, being able to comprehend the way in which the input variables jointly combine to produce predictions may lead to important sanity checks on the validity of the function estimate. Besides accuracy, the success of this latter exercise requires that the structure of the function estimate be represented in a comprehensible form.
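
As a concrete illustration of this setup (not the paper's Function Tree method itself), the following minimal sketch fits a flexible regression model to noisy observations of a known test function and then predicts at new inputs. It assumes scikit-learn is available; the data generator and the choice of model are illustrative only.

    # Minimal sketch of noisy function approximation, assuming scikit-learn.
    # make_friedman1 draws y_i = F(x_i) + noise at random input points x_i.
    from sklearn.datasets import make_friedman1
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    X, y = make_friedman1(n_samples=2000, n_features=10, noise=1.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fit an estimate of the underlying (non-noisy) function from the noisy data.
    model = GradientBoostingRegressor(random_state=0)
    model.fit(X_train, y_train)

    # Use the estimate to predict function values at previously unseen inputs.
    y_pred = model.predict(X_test)
    print("test MSE:", mean_squared_error(y_test, y_pred))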


Lockout: Sparse Regularization of Neural Networks

arXiv.org Machine Learning

Many regression and classification procedures fit a parameterized function $f(x;w)$ of predictor variables $x$ to data $\{x_{i},y_{i}\}_1^N$ based on some loss criterion $L(y,f)$. Often, regularization is applied to improve accuracy by placing a constraint $P(w)\leq t$ on the values of the parameters $w$. Although efficient methods exist for finding solutions to these constrained optimization problems for all values of $t\geq 0$ in the special case when $f$ is a linear function, none are available when $f$ is non-linear (e.g., Neural Networks). Here we present a fast algorithm that provides all such solutions for any differentiable function $f$ and loss $L$, and any constraint $P$ that is an increasing monotone function of the absolute value of each parameter. Applications involving sparsity-inducing regularization of arbitrary Neural Networks are discussed. Empirical results indicate that these sparse solutions are usually superior to their dense counterparts in both accuracy and interpretability. This improvement in accuracy can often make Neural Networks competitive with, and sometimes superior to, state-of-the-art methods in the analysis of tabular data.
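
A minimal sketch of the kind of constrained problem described above, assuming PyTorch is available: a small neural network $f(x;w)$ is fit under the sparsity-inducing constraint $P(w)=\sum_j |w_j| \leq t$ by projected gradient descent (a gradient step on the loss followed by Euclidean projection onto the L1 ball). This illustrates the problem setting for a single value of $t$ only; it is not the paper's algorithm, which traces solutions for all $t\geq 0$.

    # Sketch: fit f(x; w) subject to sum_j |w_j| <= t via projected gradient descent.
    # Assumes PyTorch; illustrates the constrained setting, not the paper's method.
    import torch
    from torch import nn
    from torch.nn.utils import parameters_to_vector, vector_to_parameters

    def project_l1_ball(w, t):
        """Euclidean projection of a flat parameter vector w onto {v : sum |v_j| <= t}."""
        if w.abs().sum() <= t:
            return w
        u, _ = torch.sort(w.abs(), descending=True)
        cssv = torch.cumsum(u, dim=0)
        k = torch.arange(1, len(w) + 1, dtype=w.dtype)
        rho = torch.nonzero(u * k > cssv - t).max()
        theta = (cssv[rho] - t) / (rho + 1.0)
        return torch.sign(w) * torch.clamp(w.abs() - theta, min=0.0)

    # Toy regression data and a small network f(x; w).
    torch.manual_seed(0)
    X = torch.randn(200, 10)
    y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * torch.randn(200)
    f = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
    loss_fn = nn.MSELoss()

    t = 5.0  # constraint level; smaller t yields sparser solutions
    opt = torch.optim.SGD(f.parameters(), lr=0.05)
    for step in range(500):
        opt.zero_grad()
        loss = loss_fn(f(X).squeeze(-1), y)
        loss.backward()
        opt.step()
        # Project the updated parameters back onto the constraint set P(w) <= t.
        with torch.no_grad():
            w = parameters_to_vector(f.parameters())
            vector_to_parameters(project_l1_ball(w, t), f.parameters())

    w = parameters_to_vector(f.parameters()).detach()
    print("final loss:", float(loss), " nonzero weights:", int((w.abs() > 1e-6).sum()))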