A Generalization Bound of Deep Neural Networks for Dependent Data
Do, Quan Huu, Nguyen, Binh T., Ho, Lam Si Tung
Explaining the generalization ability of machine learning methods (that is, their ability to fit new, unseen data well) lies at the heart of theoretical machine learning. The main direction for this research topic is to bound the difference between the expected loss (population loss) and the empirical loss (training loss). This difference is known as the generalization bound, which has been studied extensively in various settings (Freund et al., 2004; Zou et al., 2009; Agarwal and Duchi, 2012; Cuong et al., 2013; Bartlett et al., 2017; Golowich et al., 2018; Lugosi and Neu, 2022). In the last decade, deep neural networks have become the center of attention of the machine learning community due to their remarkable success in solving complex tasks that are considered challenging for existing machine learning methods. For example, in computer vision, tasks like image classification, facial recognition, and object detection have made significant progress through the application of deep neural networks (Krizhevsky et al., 2012). In natural language processing, deep learning models have become state-of-the-art in language translation, sentiment analysis, and chatbots (Vaswani et al., 2017).
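The quantity being bounded can be written explicitly. The following is an illustrative formulation (the notation is generic, not taken from the paper itself): for a predictor $f$, loss $\ell$, data distribution $\mathcal{D}$, and a training sample of size $n$,

```latex
% Generalization gap: expected (population) loss minus empirical (training) loss.
% Notation is illustrative; a generalization bound controls this quantity.
\[
  \mathrm{gap}(f)
  \;=\;
  \underbrace{\mathbb{E}_{(X,Y)\sim \mathcal{D}}\bigl[\ell\bigl(f(X), Y\bigr)\bigr]}_{\text{expected loss}}
  \;-\;
  \underbrace{\frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f(X_i), Y_i\bigr)}_{\text{empirical loss}}
\]
```

A generalization bound is an upper bound on this gap, typically in terms of sample size, model complexity, and (as in this paper's setting) the dependence structure of the data.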
October 9, 2023