Convex Regression with a Penalty
A common way to estimate an unknown convex regression function $f_0: \Omega \subset \mathbb{R}^d \rightarrow \mathbb{R}$ from a set of $n$ noisy observations is to fit a convex function that minimizes the sum of squared errors. However, this estimator is known for its tendency to overfit near the boundary of $\Omega$, posing significant challenges in real-world applications. In this paper, we introduce a new estimator of $f_0$ that avoids this overfitting by minimizing a penalty on the subgradient while enforcing an upper bound $s_n$ on the sum of squared errors. The key advantage of this method is that $s_n$ can be directly estimated from the data. We establish the uniform almost sure consistency of the proposed estimator and its subgradient over $\Omega$ as $n \rightarrow \infty$ and derive convergence rates. The effectiveness of our estimator is illustrated through its application to estimating waiting times in a single-server queue.
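To make the baseline concrete, the classical least-squares convex regression estimator that the abstract contrasts with can be sketched in one dimension: minimize the sum of squared errors over all convex functions, which for sorted design points reduces to requiring nondecreasing slopes between consecutive fitted values. The sketch below is a minimal illustration of that unconstrained-penalty-free estimator (not the paper's penalized method), assuming a 1-D design and using SciPy's SLSQP solver for the quadratic program.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data from a known convex function (assumed example, not the
# paper's queueing application): y_i = x_i^2 + noise.
rng = np.random.default_rng(0)
n = 30
x = np.sort(rng.uniform(-1.0, 1.0, n))
y = x ** 2 + rng.normal(0.0, 0.05, n)

def convexity_gaps(g):
    """Differences of consecutive slopes; all must be >= 0 so that the
    piecewise-linear interpolant of (x_i, g_i) is convex."""
    slopes = np.diff(g) / np.diff(x)
    return np.diff(slopes)

# Least-squares convex regression: minimize sum of squared errors
# subject to the convexity (nondecreasing-slope) constraints.
res = minimize(
    lambda g: np.sum((y - g) ** 2),
    x0=y.copy(),
    constraints=[{"type": "ineq", "fun": convexity_gaps}],
    method="SLSQP",
)
g_hat = res.x  # fitted values of the convex LSE at the design points
```

The fitted values `g_hat` define the convex least-squares fit at the design points; the boundary overfitting the abstract describes would show up as overly steep slopes near `x[0]` and `x[-1]`, which is exactly what the proposed subgradient penalty is designed to suppress.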
Sep-25-2025