Hypothesis Spaces for Deep Learning
Rui Wang, Yuesheng Xu, Mingsong Yan
Deep learning has been a huge success in applications. Mathematically, its success is due to the use of deep neural networks (DNNs), neural networks with multiple layers, to describe decision functions. Various mathematical aspects of DNNs as an approximation tool were investigated recently in a number of studies [9, 11, 13, 16, 20, 27, 28, 31]. As pointed out in [8], learning processes do not take place in a vacuum. Classical learning methods take place in a reproducing kernel Hilbert space (RKHS) [1], which leads to a representation of learning solutions as a combination of a finite number of kernel sections [19] of a universal kernel [17]. Reproducing kernel Hilbert spaces, as appropriate hypothesis spaces for classical learning methods, provide a foundation for mathematical analysis of those methods. A natural and imperative question is what the appropriate hypothesis spaces for deep learning are. Although hypothesis spaces for learning with shallow neural networks (networks with one hidden layer) were investigated recently in a number of studies, (e.g.
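For context, the representer-theorem form alluded to above can be stated as follows (a standard textbook formulation, not quoted from this paper): given training inputs $x_1, \dots, x_n$ and a kernel $K$, the minimizer of a regularized empirical risk over the RKHS $\mathcal{H}_K$ admits the finite representation
$$
f^\star(x) \;=\; \sum_{i=1}^{n} \alpha_i \, K(x, x_i), \qquad \alpha_1, \dots, \alpha_n \in \mathbb{R},
$$
so that learning in an infinite-dimensional RKHS reduces to determining the $n$ coefficients $\alpha_i$; the kernel sections $K(\cdot, x_i)$ are exactly the building blocks the abstract refers to.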