polynomial regression
05311655a15b75fab86956663e1819cd-Supplemental.pdf
In what follows, we refer to each experiment by its corresponding figure or table number for convenience. For the rotated/shifted MNIST images (Figures 8, 9), we use the affine transformation function in the TorchVision library. In the experiments of Tables 2, 3, 4, and 5, we use either or both of the Large (L) and Small (S) versions of the standard benchmark vision datasets: MNIST, FMNIST, KMNIST, Omniglot, SVHN, CIFAR10, CIFAR100, and CelebA. For Figure 10 and Table 3, the regularization coefficients for CAE and WAE are searched around 0.01 and 0.001, the noise level used in DAE is searched around 0.1 and 0.01, and the regularization coefficient and λ for SPAE and NRAE are searched around 0.001. On the other hand, the runtimes of our algorithms are comparable with those of other existing methods.
Loss-Complexity Landscape and Model Structure Functions
We develop a framework for dualizing the Kolmogorov structure function $h_x(\alpha)$, which allows the use of computable complexity proxies. We establish a mathematical analogy between information-theoretic constructs and statistical mechanics, introducing a suitable partition function and free energy functional. We explicitly prove the Legendre-Fenchel duality between the structure function and the free energy, show detailed balance of the Metropolis kernel, and interpret acceptance probabilities as information-theoretic scattering amplitudes. A susceptibility-like variance of model complexity is shown to peak precisely at loss-complexity trade-offs interpreted as phase transitions. Practical experiments with linear and tree-based regression models verify these theoretical predictions, explicitly demonstrating the interplay between model complexity, generalization, and the overfitting threshold.
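As a deliberately simplified illustration of the statistical-mechanics reading in this abstract, the sketch below runs a Metropolis walk over polynomial degree with an assumed energy of the form loss + β·complexity; the energy, temperature, and complexity proxy are our own choices, not the paper's construction. The variance of the sampled complexity then plays the role of the susceptibility-like quantity mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(50)

def loss(degree):
    """In-sample MSE of a least-squares polynomial fit (degree = complexity proxy)."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

def metropolis_degrees(beta, temperature=1e-3, steps=4000, max_degree=12):
    """Metropolis walk over degree with assumed energy = loss + beta * degree.

    Symmetric +/-1 proposals plus the standard acceptance rule give detailed
    balance with respect to exp(-energy / temperature).
    """
    degree, trace = 1, []
    for _ in range(steps):
        proposal = degree + int(rng.choice([-1, 1]))
        if 0 <= proposal <= max_degree:
            delta = (loss(proposal) + beta * proposal) - (loss(degree) + beta * degree)
            if delta <= 0 or rng.random() < np.exp(-delta / temperature):
                degree = proposal
        trace.append(degree)
    return np.array(trace[steps // 2:])  # discard burn-in

for beta in (1e-4, 1e-3, 1e-2):
    t = metropolis_degrees(beta)
    print(f"beta={beta:g}  mean degree={t.mean():.2f}  variance={t.var():.2f}")
```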
$p$-Adic Polynomial Regression as Alternative to Neural Network for Approximating $p$-Adic Functions of Many Variables
A method for approximating continuous functions $\mathbb{Z}_{p}^{n}\rightarrow\mathbb{Z}_{p}$ by a linear superposition of continuous functions $\mathbb{Z}_{p}\rightarrow\mathbb{Z}_{p}$ is presented, and a polynomial regression model is constructed that allows such functions to be approximated to any degree of accuracy. A physical interpretation of the model is given, and possible methods for its training are discussed. The proposed model can be considered a simple alternative to possible $p$-adic models based on neural network architectures.
- Asia > Russia (0.05)
- Europe > Russia > Volga Federal District > Samara Oblast > Samara (0.04)
- North America > United States > New York (0.04)
- (3 more...)
Data-Driven Approximation of Binary-State Network Reliability Function: Algorithm Selection and Reliability Thresholds for Large-Scale Systems
While exact reliability computation for binary-state networks is NP-hard/#P-hard, existing approximation methods face critical trade-offs between accuracy, scalability, and data efficiency. This study evaluates 20 machine learning methods across three reliability regimes--full range (0.0-1.0), high reliability (0.9-1.0), and ultra-high reliability (0.99-1.0)--to address these gaps. We demonstrate that large-scale networks with arc reliability ≥ 0.9 exhibit near-unity system reliability, enabling computational simplifications. Further, we establish a dataset-scale-driven paradigm for algorithm selection: artificial neural networks (ANN) excel with limited data (sample size < m), while polynomial regression (PR) achieves superior accuracy in data-rich environments (sample size ≥ m). Our findings reveal ANN's test MSE of 7.24E-05 at 30,000 samples and PR's optimal performance (5.61E-05) at 40,000 samples, outperforming traditional Monte Carlo simulations. These insights provide actionable guidelines for balancing accuracy, interpretability, and computational efficiency in reliability engineering, with implications for infrastructure resilience and system optimization.

Keywords: Binary-State Networks; Network Reliability Approximated Function; Reliability Thresholds; Dataset Scalability; Artificial Neural Networks (ANN); Polynomial Regression; Monte Carlo Simulation (MCS); Binary-Addition-Tree Algorithm (BAT); BAT-MCS

1. INTRODUCTION

Modern infrastructure systems--from power grids and communication networks to IoT ecosystems--demand rigorous reliability analysis to ensure operational resilience. These systems are often modeled as binary-state networks, where components (arcs/nodes) operate in either functional (1) or failed (0) states [1, 2, 3]. Within this paradigm, network reliability--the probability of maintaining connectivity between specified nodes under given conditions--serves as a critical performance metric [4, 5-7].
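As a toy illustration of the surrogate-modeling setting described above (not the paper's BAT-MCS pipeline or its 20-method benchmark), the sketch below fits a polynomial-regression surrogate to Monte Carlo reliability estimates of a small two-path network whose exact reliability is known, then reports the surrogate's test MSE. The network topology and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_reliability(p, n_samples=2000):
    """Monte Carlo estimate of two-terminal reliability for a toy network:
    two parallel paths, each a series of two arcs with reliability p
    (exact value: 1 - (1 - p**2) ** 2)."""
    arcs = rng.random((n_samples, 4)) < p          # 4 arc states per sample
    path1 = arcs[:, 0] & arcs[:, 1]
    path2 = arcs[:, 2] & arcs[:, 3]
    return float(np.mean(path1 | path2))

# Training data: (arc reliability, MC estimate of system reliability).
p_train = np.linspace(0.5, 1.0, 40)
r_train = np.array([mc_reliability(p) for p in p_train])

# Polynomial-regression surrogate of the reliability function.
coeffs = np.polyfit(p_train, r_train, deg=4)

p_test = np.linspace(0.5, 1.0, 200)
r_exact = 1 - (1 - p_test**2) ** 2
test_mse = np.mean((np.polyval(coeffs, p_test) - r_exact) ** 2)
print(f"surrogate test MSE: {test_mse:.2e}")
```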
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Asia > Taiwan (0.04)
- Transportation (0.67)
- Information Technology (0.67)
- Energy > Power Industry (0.34)
Robust Local Polynomial Regression with Similarity Kernels
The polynomial is fitted using weighted ordinary least squares, giving more weight to nearby points and less weight to points farther away. The value of the regression function at a point is then obtained by evaluating the fitted local polynomial at the predictor value of that point. LPR has good accuracy near the boundary and performs better than all other linear smoothers in a minimax sense [2]. The biggest advantage of this class of methods is that it does not require prior specification of a function, i.e., a parameterized model. Instead, only a small number of hyperparameters need to be specified, such as the type of kernel, a smoothing parameter, and the degree of the local polynomial. The method is therefore suitable for modeling complex processes such as non-linear relationships, or complex dependencies for which no theoretical models exist. These two advantages, combined with the simplicity of the method, make it one of the most attractive modern regression methods for applications that fit the general framework of least-squares regression but have a complex deterministic structure. Local polynomial regression incorporates the notion of proximity in two ways.
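A minimal sketch of plain local polynomial regression with a Gaussian kernel is given below; it illustrates the weighted least-squares step described above, not the similarity kernels proposed in the paper, and the bandwidth and degree values are illustrative assumptions.

```python
import numpy as np

def local_polynomial_fit(x0, x, y, bandwidth=0.1, degree=2):
    """Estimate m(x0) by a weighted least-squares polynomial fit around x0.

    Weights come from a Gaussian kernel, so nearby points dominate the fit;
    the estimate is the fitted polynomial evaluated at x0 itself.
    """
    weights = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    # Design matrix in centred coordinates: [1, (x - x0), (x - x0)^2, ...]
    X = np.vander(x - x0, degree + 1, increasing=True)
    W = np.diag(weights)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]  # value of the local polynomial at x0

# Toy usage: recover a nonlinear signal from noisy observations.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(200)
grid = np.linspace(0.05, 0.95, 10)
fit = [local_polynomial_fit(x0, x, y) for x0 in grid]
print(np.round(fit, 2))
```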
LASER: A new method for locally adaptive nonparametric regression
Chatterjee, Sabyasachi, Goswami, Subhajit, Mukherjee, Soumendu Sundar
In this article, we introduce \textsf{LASER} (Locally Adaptive Smoothing Estimator for Regression), a computationally efficient locally adaptive nonparametric regression method that performs variable bandwidth local polynomial regression. We prove that it adapts (near-)optimally to the local H\"{o}lder exponent of the underlying regression function simultaneously at all points in its domain. Furthermore, we show that there is a single ideal choice of a global tuning parameter under which the above-mentioned local adaptivity holds. Despite the vast literature on nonparametric regression, instances of practicable methods with provable guarantees of such a strong notion of local adaptivity are rare. The proposed method achieves excellent performance across a broad range of numerical experiments in comparison to popular alternative locally adaptive methods.
- North America > United States > New York (0.04)
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (2 more...)
Improved identification of breakpoints in piecewise regression and its applications
Kim, Taehyeong, Lee, Hyungu, Choi, Hayoung
Identifying breakpoints in piecewise regression is critical for enhancing the reliability and interpretability of data fitting. In this paper, we propose novel greedy algorithms to accurately and efficiently identify breakpoints in piecewise polynomial regression. The algorithm updates the breakpoints to minimize the error by exploring the neighborhood of each breakpoint. It has a fast convergence rate and is stable in locating optimal breakpoints. Moreover, it can determine the optimal number of breakpoints. Computational results for real and synthetic data show that its accuracy is better than that of existing methods. The real-world datasets demonstrate that the breakpoints identified by the proposed algorithm provide valuable information about the data.
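The sketch below is a minimal greedy neighborhood search in the spirit described above, not the authors' algorithm: starting from initial breakpoints, each breakpoint is nudged within a small neighborhood and the move is kept whenever the piecewise fitting error decreases. Segment degree, step size, and iteration budget are illustrative assumptions.

```python
import numpy as np

def piecewise_sse(x, y, breakpoints, degree=1):
    """Sum of squared errors of independent polynomial fits on each segment."""
    edges = np.concatenate(([x.min()], np.sort(breakpoints), [x.max() + 1e-9]))
    sse = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x < hi)
        if mask.sum() > degree:
            c = np.polyfit(x[mask], y[mask], degree)
            sse += float(np.sum((np.polyval(c, x[mask]) - y[mask]) ** 2))
    return sse

def refine_breakpoints(x, y, breakpoints, step=0.02, iters=50):
    """Greedy refinement: nudge each breakpoint within its neighborhood and
    keep the move whenever the piecewise fitting error decreases."""
    bp = np.array(breakpoints, dtype=float)
    for _ in range(iters):
        improved = False
        for i in range(len(bp)):
            for delta in (-step, step):
                cand = bp.copy()
                cand[i] += delta
                if piecewise_sse(x, y, cand) < piecewise_sse(x, y, bp):
                    bp, improved = cand, True
        if not improved:
            break
    return bp

# Toy usage: true breakpoint at 0.6, initial guess at 0.4.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 300))
y = np.where(x < 0.6, 2 * x, 3 - 3 * x) + 0.05 * rng.standard_normal(300)
print(refine_breakpoints(x, y, breakpoints=[0.4]))
```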
- Asia > South Korea > Daegu > Daegu (0.04)
- North America > United States > New York (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Energy (0.68)
- Banking & Finance > Trading (0.46)
- Health & Medicine > Therapeutic Area (0.30)
Recent advances in Meta-model of Optimal Prognosis
In real-world applications within the virtual prototyping process, it is not always possible to reduce the complexity of the physical models and obtain numerical models which can be solved quickly. Usually, every single numerical simulation takes hours or even days. Despite the progress in numerical methods and high-performance computing, in such cases it is not possible to explore various model configurations; hence efficient surrogate models are required. Generally, the available meta-model techniques show several advantages and disadvantages depending on the investigated problem. In this paper we present an automatic approach for selecting the meta-model best suited to the problem at hand. Together with an automatic reduction of the variable space using advanced filter techniques, this enables an efficient approximation even for high-dimensional problems.
- Europe > Germany (0.05)
- North America > United States > Missouri > St. Louis County > St. Louis (0.04)
Detecting Car Speed using Object Detection and Depth Estimation: A Deep Learning Framework
Dasgupta, Subhasis, Naaz, Arshi, Choudhury, Jayeeta, Lahiri, Nancy
Road accidents are common in almost every part of the world, and the majority of fatal accidents are attributed to over-speeding. Over-speeding is usually controlled with checkpoints at various parts of the road, but not all traffic police are equipped with existing speed-estimating devices such as LIDAR-based or radar-based guns. The current project addresses vehicle speed estimation with handheld devices, such as mobile phones or wearable cameras with a network connection, using deep learning frameworks.
Polynomial Regression as a Task for Understanding In-context Learning Through Finetuning and Alignment
Wilcoxson, Max, Svendgård, Morten, Doshi, Ria, Davis, Dylan, Vir, Reya, Sahai, Anant
Simple function classes have emerged as toy problems for better understanding in-context learning in the transformer-based architectures used for large language models. But previously proposed simple function classes like linear regression or multi-layer perceptrons lack the structure required to explore things like prompting and alignment within models capable of in-context learning. We propose univariate polynomial regression as a function class that is just rich enough to study prompting and alignment, while allowing us to clearly visualize and understand what is going on.
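A minimal sketch of how such a function class can generate in-context examples is given below; the coefficient distribution, input range, and prompt layout are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

def sample_icl_example(n_points=10, degree=3, rng=None):
    """Build one in-context example: (x_i, f(x_i)) pairs from a random
    univariate polynomial, plus a held-out query point the model must predict.
    """
    rng = rng or np.random.default_rng()
    coeffs = rng.normal(0, 1, degree + 1)       # random polynomial f
    xs = rng.uniform(-1, 1, n_points + 1)
    ys = np.polyval(coeffs, xs)
    context = list(zip(xs[:-1], ys[:-1]))       # shown to the model in the prompt
    query = (xs[-1], ys[-1])                    # target to be predicted in context
    return context, query

context, (x_q, y_q) = sample_icl_example(rng=np.random.default_rng(0))
print(len(context), x_q, y_q)
```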
- Europe > Austria > Vienna (0.14)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Perceptrons (0.54)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.35)