A Linear regression with Gaussian features
Neural Information Processing Systems
In the setting of Section 2.1, we assume

The proof is based on the following lemma, which we state separately for later use below.

Lemma 2. Let θ ∈ H. Then for all β

This lemma follows from Hölder's inequality with

Applying Hölder's inequality, we get E

We start with a few preliminary remarks. By summing for k = 1, ..., n and using the bound (17), ϕ

We continue the proof of Theorem 1 to prove Theorem 3. By the log-convexity Property 1, for all

This proves conclusion 1 of the theorem. Both terms of the equality can be infinite: here we are using the convention stated in Section 2.1 that

We can assume that (a) is satisfied, i.e.,

Thus the theorem below extends Theorem 5.

Theorem 6.

This theorem is proved at the end of this section.
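Since the proofs above repeatedly apply Hölder's inequality for expectations, E|XY| ≤ E[|X|^p]^{1/p} · E[|Y|^q]^{1/q} with 1/p + 1/q = 1, the following sketch checks that inequality numerically on a pair of correlated Gaussian samples. The specific variables, correlation, and exponents here are illustrative assumptions, not the ones used in the paper's lemma; the inequality itself holds for any choice of conjugate exponents.

```python
import numpy as np

# Numerical sanity check of Hölder's inequality for expectations:
#   E|XY| <= E[|X|^p]^(1/p) * E[|Y|^q]^(1/q),  with 1/p + 1/q = 1.
# The Gaussian pair and exponents below are an assumed toy setup.
rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
y = 0.5 * x + rng.standard_normal(n)  # correlated Gaussian pair

p, q = 3.0, 1.5  # conjugate exponents: 1/p + 1/q = 1
lhs = np.mean(np.abs(x * y))
rhs = np.mean(np.abs(x) ** p) ** (1 / p) * np.mean(np.abs(y) ** q) ** (1 / q)
print(lhs <= rhs)  # always True: Hölder also holds for empirical means
```

Note that the check succeeds for every sample, not just in the limit: the empirical mean is an expectation under the empirical distribution, so Hölder's inequality applies to it exactly.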