
### Theoretical foundations of the potential function method in pattern recognition learning

This article presents a design principle of a neural network using Gaussian activation functions, referred to as a Gaussian Potential Function Network (GPFN), and explores the capability of a GPFN in learning a continuous input-output mapping from a given set of teaching patterns. The design principle is highlighted by a Hierarchically Self-Organizing Learning (HSOL) algorithm featuring the automatic recruitment of hidden units under the paradigm of hierarchical learning. A GPFN generates an arbitrary shape of a potential field over the domain of the input space, as an input-output mapping, by synthesizing a number of Gaussian potential functions provided by individual hidden units referred to as Gaussian Potential Function Units (GPFUs). The construction of a GPFN is carried out by the HSOL algorithm which incrementally recruits the minimum necessary number of GPFUs based on the control of the effective radii of individual GPFUs, and trains the locations (mean vectors) and shapes (variances) of individual Gaussian potential functions, as well as their summation weights, based on the Backpropagation algorithm. Simulations were conducted for the demonstration and evaluation of the GPFNs constructed based on the HSOL algorithm for several sets of teaching patterns.
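The core synthesis step described above — an output field built from a weighted sum of Gaussian potential functions — can be sketched as follows. This is a minimal illustration assuming isotropic per-unit variances; the paper itself trains the mean vectors, shapes, and weights via backpropagation, and the function name `gpfn_output` is ours, not the paper's.

```python
import numpy as np

def gpfn_output(x, means, variances, weights):
    """Evaluate a GPFN at input x: a weighted sum of Gaussian potential
    functions, one per hidden unit (GPFU).

    Each GPFU contributes w_i * exp(-||x - m_i||^2 / (2 * s_i)),
    where m_i is its mean vector and s_i its (isotropic) variance.
    """
    x = np.asarray(x, dtype=float)
    out = 0.0
    for m, s, w in zip(means, variances, weights):
        d2 = np.sum((x - np.asarray(m, dtype=float)) ** 2)
        out += w * np.exp(-d2 / (2.0 * s))
    return out

# Two GPFUs shaping a simple 1-D potential field.
y = gpfn_output([0.5],
                means=[[0.0], [2.0]],
                variances=[1.0, 0.5],
                weights=[1.0, -0.5])
```

The HSOL algorithm would add further GPFUs to this sum only where the current field fails to fit the teaching patterns within each unit's effective radius.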

### Regression Modeling in Practice Coursera

Multiple regression analysis is a tool that allows you to expand on your research question and conduct a more rigorous test of the association between your explanatory and response variables by adding additional quantitative and/or categorical explanatory variables to your linear regression model. In this session, you will apply and interpret a multiple regression analysis for a quantitative response variable, and will learn how to use confidence intervals to take into account error in estimating a population parameter. You will also learn how to account for nonlinear associations in a linear regression model. Finally, you will develop experience using regression diagnostic techniques to evaluate how well your multiple regression model predicts your observed response variable. Note that if you have not yet identified additional explanatory variables, you should choose at least one additional explanatory variable from your data set.
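The two ingredients mentioned above — fitting a multiple regression and attaching confidence intervals to the coefficient estimates — can be combined in a short sketch. The function name `ols_with_ci` and the simulated variables are illustrative, not part of the course materials.

```python
import numpy as np
from scipy import stats

def ols_with_ci(X, y, alpha=0.05):
    """Fit y = b0 + X b by least squares; return estimates with
    (1 - alpha) confidence intervals for each coefficient."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)                # residual variance estimate
    cov = s2 * np.linalg.inv(X.T @ X)           # covariance of the estimates
    se = np.sqrt(np.diag(cov))
    t = stats.t.ppf(1 - alpha / 2, df=n - p)    # t critical value
    return beta, beta - t * se, beta + t * se

# Simulated data: two quantitative explanatory variables.
rng = np.random.default_rng(42)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + 0.1 * rng.normal(size=200)
beta, lo, hi = ols_with_ci(np.column_stack([x1, x2]), y)
```

A coefficient whose interval excludes zero is evidence of an association with the response after adjusting for the other explanatory variables.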

### Spatial Analysis Made Easy with Linear Regression and Kernels

Kernel methods are an incredibly popular technique for extending linear models to non-linear problems via a mapping to an implicit, high-dimensional feature space. While kernel methods are computationally cheaper than an explicit feature mapping, they are still subject to cubic cost in the number of points. Given only a few thousand locations, this computational cost rapidly outstrips the currently available computational power. This paper aims to provide an overview of kernel methods from first principles (with a focus on ridge regression), before progressing to a review of random Fourier features (RFF), a set of methods that enable the scaling of kernel methods to big datasets. At each stage, the associated R code is provided. We begin by illustrating how the dual representation of ridge regression relies solely on inner products and permits the use of kernels to map the data into high-dimensional spaces. We progress to RFFs, showing how only a few lines of code provide a significant computational speed-up for a negligible cost to accuracy. We provide an example of the implementation of RFFs on a simulated spatial data set to illustrate these properties. Lastly, we summarise the main issues with RFFs and highlight some of the advanced techniques aimed at alleviating them.
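The paper provides its examples in R; the key RFF construction (Rahimi and Recht's random features for the RBF kernel) can equally be sketched in a few lines of Python. The function name `rff_features` and the parameter choices are ours.

```python
import numpy as np

def rff_features(X, n_features, gamma, seed=0):
    """Map rows of X to random Fourier features whose inner products
    approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density, N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Two spatial locations: their feature inner product approximates the kernel.
X = np.array([[0.0, 0.0],
              [1.0, 0.5]])
Z = rff_features(X, n_features=5000, gamma=0.5)
approx = Z[0] @ Z[1]
exact = np.exp(-0.5 * np.sum((X[0] - X[1]) ** 2))
```

Once the data are mapped through `rff_features`, ridge regression reduces to an ordinary linear solve in the feature space, which is where the computational speed-up over the cubic-cost dual formulation comes from.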

### Exhaustive search for sparse variable selection in linear regression

We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search (AES-K) method for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of the exhaustive ES-K computation, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage.
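The exhaustive enumeration at the heart of ES-K can be sketched directly: fit every K-subset of explanatory variables by least squares and record its residual, from which a density of states over subsets could be tabulated. This is a minimal illustration without the replica-exchange machinery of AES-K; the function name `es_k` is ours.

```python
import numpy as np
from itertools import combinations

def es_k(X, y, K):
    """Exhaustively fit every K-subset of the columns of X by least
    squares and return (subset, residual sum of squares) pairs,
    sorted from best to worst fit."""
    results = []
    for cols in combinations(range(X.shape[1]), K):
        Xs = X[:, cols]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
        results.append((cols, rss))
    return sorted(results, key=lambda pair: pair[1])

# Small example: y truly depends on columns 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.01 * rng.normal(size=50)
ranked = es_k(X, y, K=2)
```

The combinatorial explosion is visible in the loop: the number of subsets grows as C(p, K), which is what motivates the approximate AES-K reconstruction for large p.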