Convex Regression with a Penalty

2025-09-24 19:00 GMT

arXiv:2509.19788v1 Announce Type: new
Abstract: A common way to estimate an unknown convex regression function $f_0: \Omega \subset \mathbb{R}^d \rightarrow \mathbb{R}$ from a set of $n$ noisy observations is to fit a convex function that minimizes the sum of squared errors. However, this estimator is known for its tendency to overfit near the boundary of $\Omega$, posing significant challenges in real-world applications. In this paper, we introduce a new estimator of $f_0$ that avoids this overfitting by minimizing a penalty on the subgradient while enforcing an upper bound $s_n$ on the sum of squared errors. The key advantage of this method is that $s_n$ can be directly estimated from the data. We establish the uniform almost sure consistency of the proposed estimator and its subgradient over $\Omega$ as $n \rightarrow \infty$ and derive convergence rates. The effectiveness of our estimator is illustrated through its application to estimating waiting times in a single-server queue.
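The constrained formulation described in the abstract can be sketched numerically in one dimension. This is a minimal illustration, not the paper's method: the data-generating function, the noise level, the choice of $s_n$, and the use of the sum of squared slopes as the subgradient penalty are all assumptions made here for demonstration. The fitted values $\theta_i$ are the estimator at the design points, convexity is imposed through nondecreasing divided differences, and the fit constraint caps the sum of squared errors at $s_n$.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 20
x = np.linspace(0.0, 1.0, n)
f0 = x**2                        # illustrative true convex function
y = f0 + rng.normal(0.0, 0.05, n)

# SSE budget s_n; here set from the known noise level for illustration
# (the paper's point is that s_n can be estimated from the data).
s_n = 1.2 * n * 0.05**2

dx = np.diff(x)

def slopes(theta):
    """Divided differences: subgradient values between design points."""
    return np.diff(theta) / dx

def penalty(theta):
    """One simple subgradient penalty: sum of squared slopes."""
    return np.sum(slopes(theta)**2)

constraints = [
    # convexity: slopes must be nondecreasing
    {"type": "ineq", "fun": lambda t: np.diff(slopes(t))},
    # data fit: sum of squared errors at most s_n
    {"type": "ineq", "fun": lambda t: s_n - np.sum((y - t)**2)},
]

res = minimize(penalty, y.copy(), constraints=constraints, method="SLSQP")
theta_hat = res.x  # estimated function values at the design points
```

Minimizing the slope penalty, rather than the squared errors, keeps the fit from steepening arbitrarily near the endpoints of the interval, which is exactly the boundary overfitting the abstract describes for the unpenalized least-squares estimator.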