On the Generalization Properties of Learning the Random Feature Models with Learnable Activation Functions
Zailin Ma, Jiansheng Yang, Yaodong Yang
arXiv.org Artificial Intelligence
This paper studies the generalization properties of a recently proposed kernel method, the Random Feature models with Learnable Activation Functions (RFLAF). By applying a data-dependent sampling scheme for generating features, we provide by far the sharpest bounds on the number of features required to learn RFLAF in both regression and classification tasks. We provide a unified theorem that describes the complexity of the feature number $s$, and discuss the results for the plain sampling scheme and the data-dependent leverage weighted scheme. Through weighted sampling, the bound on $s$ in the MSE loss case is improved from $\Omega(1/\varepsilon^2)$ to $\tilde{\Omega}((1/\varepsilon)^{1/t})$ in general ($t \geq 1$), and even to $\Omega(1)$ when the Gram matrix has finite rank. For the Lipschitz loss case, the bound is improved from $\Omega(1/\varepsilon^2)$ to $\tilde{\Omega}((1/\varepsilon^2)^{1/t})$. To learn the weighted RFLAF, we also propose an algorithm that finds an approximate kernel and then applies leverage weighted sampling. Empirical results show that the weighted RFLAF matches the performance of the plainly sampled RFLAF with significantly fewer features, validating our theory and the effectiveness of the method.
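The leverage weighted sampling step described above can be illustrated on plain random Fourier features rather than the RFLAF model itself. The sketch below is an assumption-laden toy: synthetic data, a Gaussian kernel, and an arbitrary ridge parameter `lam` are all illustrative choices, not the paper's setup. It draws a large pool of features, scores each feature column by a ridge leverage quantity $z_j^\top (K + n\lambda I)^{-1} z_j$, resamples a small feature set proportionally to those scores, and reweights the surviving columns so the resulting kernel estimate stays unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative assumption, not the paper's experiments)
n, d = 200, 5
X = rng.standard_normal((n, d))

# 1) Draw a large pool of random Fourier features from the plain density
#    (frequencies ~ N(0, I) correspond to a Gaussian kernel).
S_pool = 2000
W = rng.standard_normal((S_pool, d))
b = rng.uniform(0.0, 2.0 * np.pi, S_pool)
Phi = np.sqrt(2.0) * np.cos(X @ W.T + b)   # n x S_pool, unnormalized features
Z = Phi / np.sqrt(S_pool)                  # plain random-feature map

# 2) Build the pool's kernel approximation and compute a ridge leverage
#    score for each feature column: q_j ∝ z_j^T (K + n*lam*I)^{-1} z_j.
lam = 1e-2                                 # illustrative ridge parameter
K_approx = Z @ Z.T
R = np.linalg.solve(K_approx + n * lam * np.eye(n), Z)
scores = np.einsum('ij,ij->j', Z, R)       # diagonal of Z^T (K+nλI)^{-1} Z
q = scores / scores.sum()                  # leverage-weighted distribution

# 3) Resample a small weighted feature set; dividing each kept column by
#    sqrt(s * S_pool * q_j) keeps E[Z_w Z_w^T] equal to the pool kernel.
s = 50
idx = rng.choice(S_pool, size=s, replace=True, p=q)
Z_w = Phi[:, idx] / np.sqrt(s * S_pool * q[idx])   # n x s weighted features
```

A downstream regression or classification model would then be fit on the small matrix `Z_w` instead of the full pool, which is the source of the feature-count savings the abstract reports.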
Oct-20-2025