Efficient and Minimax-optimal In-context Nonparametric Regression with Transformers
Michelle Ching, Ioana Popescu, Nico Smith, Tianyi Ma, William G. Underwood, Richard J. Samworth
We study in-context learning for nonparametric regression with $\alpha$-Hölder smooth regression functions, for some $\alpha>0$. We prove that, with $n$ in-context examples and $d$-dimensional regression covariates, a pretrained transformer with $\Theta(\log n)$ parameters and $\Omega\bigl(n^{2\alpha/(2\alpha+d)}\log^3 n\bigr)$ pretraining sequences can achieve the minimax-optimal rate of convergence $O\bigl(n^{-2\alpha/(2\alpha+d)}\bigr)$ in mean squared error. Our result requires substantially fewer transformer parameters and pretraining sequences than previous results in the literature. This is achieved by showing that transformers are able to approximate local polynomial estimators efficiently by implementing a kernel-weighted polynomial basis and then running gradient descent.
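To make the construction in the last sentence concrete, here is a minimal one-dimensional sketch of a local polynomial estimator: a kernel-weighted polynomial basis centred at the query point, with the weighted least-squares fit obtained by plain gradient descent. This is an illustration of the classical estimator, not the paper's transformer construction; the Gaussian kernel, bandwidth, step size, and function `local_polynomial_predict` are all choices made here for the example.

```python
import numpy as np

def local_polynomial_predict(x0, X, y, h=0.15, degree=1, steps=2000, lr=0.5):
    """Estimate f(x0) by local polynomial regression in one dimension.

    Builds a kernel-weighted polynomial basis centred at the query point
    and minimises the weighted squared error by gradient descent (a
    closed-form weighted least-squares solve would normally be used;
    gradient descent mirrors the mechanism described in the abstract).
    """
    w = np.exp(-((X - x0) ** 2) / (2.0 * h ** 2))         # Gaussian kernel weights
    B = np.vander(X - x0, N=degree + 1, increasing=True)  # basis [1, (x-x0), ...]
    theta = np.zeros(degree + 1)
    for _ in range(steps):
        resid = B @ theta - y
        theta -= lr * B.T @ (w * resid) / w.sum()         # weighted-MSE gradient step
    return theta[0]                                       # intercept = estimate of f(x0)

# Toy data from a smooth (hence Hoelder) regression function.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 200)
y = np.sin(np.pi * X) + 0.1 * rng.normal(size=200)
print(local_polynomial_predict(0.5, X, y))  # close to sin(pi * 0.5) = 1
```

With `degree=0` this reduces to the Nadaraya–Watson estimator; the intercept of the fitted local polynomial is the regression estimate at the query point.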
Jan-22-2026