Looking for the Holy Grail of nonparametric regression

Unfortunately, to state the question precisely, I need some formal preliminaries. Let $(X,Y), (X_1,Y_1), (X_2,Y_2), \dots$ be a family of $[0,1]^d \times [0,1]$-valued random variables. The second requirement basically asks that the data-driven regressor determined by the sequence $(A_t)_{t \in \mathbb{N}}$ be minimax-optimal in the mean-square sense (the reason why I suspect these should be the dependencies on the dimension and on the Lipschitz constant can be found, for example, in Theorem 3.2 of the book by Györfi, Kohler, Krzyzak, and Walk, A Distribution-Free Theory of Nonparametric Regression), adapting automatically to the actual Lipschitz constant $L$ of the actual regression function $\eta$ and to the effective dimension $d^*$ of the manifold on which the actual distribution $\mu$ of the features lives, without knowing these parameters in advance. I didn't manage to find anything in the literature, and I strongly suspect that something like this is too good to exist (maybe by some kind of no-free-lunch theorem?). Does anyone know whether this problem has been tackled anywhere, and what the answer might be?
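
For concreteness, the benchmark I have in mind is the classical minimax lower bound for Lipschitz regression (this is my reading of results of the type of Theorem 3.2 cited above, so the exact constants may differ): if $\eta$ is Lipschitz with constant $L$ on $[0,1]^d$, no estimator built from $n$ samples can beat, in expected squared $L_2$ error, the order

$$L^{\frac{2d}{2+d}} \, n^{-\frac{2}{2+d}},$$

and the adaptive procedure I am asking about should attain this rate with $d$ replaced by the effective dimension $d^*$, without being told $L$ or $d^*$.

To make "data-driven regressor" concrete, here is a minimal toy sketch of my own (not taken from any reference, and carrying no claim of minimax adaptivity to $L$ or $d^*$): a $k$-nearest-neighbour regressor whose smoothing parameter $k$ is selected from the data by leave-one-out cross-validation.

```python
# Toy sketch: k-NN regression with k chosen by leave-one-out cross-validation.
# Purely illustrative of a "data-driven" rule; no adaptivity guarantee is claimed.
import numpy as np

def knn_predict(X_train, y_train, X_query, k):
    """Plain k-nearest-neighbour regression estimate at the query points."""
    dists = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    nn_idx = np.argsort(dists, axis=1)[:, :k]
    return y_train[nn_idx].mean(axis=1)

def select_k_loocv(X_train, y_train, k_grid):
    """Choose k by leave-one-out cross-validation (the data-driven part)."""
    n = len(X_train)
    best_k, best_err = k_grid[0], np.inf
    for k in k_grid:
        errs = []
        for i in range(n):
            mask = np.arange(n) != i
            pred = knn_predict(X_train[mask], y_train[mask], X_train[i:i + 1], k)
            errs.append((pred[0] - y_train[i]) ** 2)
        err = float(np.mean(errs))
        if err < best_err:
            best_k, best_err = k, err
    return best_k

# Example: noisy samples of a Lipschitz regression function on [0,1]^2 with Y in [0,1].
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = np.clip(0.5 + 0.4 * np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200), 0.0, 1.0)
k_hat = select_k_loocv(X, y, k_grid=[1, 2, 4, 8, 16, 32])
y_hat = knn_predict(X, y, np.array([[0.5, 0.5]]), k_hat)
print(k_hat, y_hat)
```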
