Regularized deep learning with a non-convex penalty
Regularization methods are often employed in deep neural networks (DNNs) to prevent overfitting. Among penalty-based methods for DNN regularization, typically only convex penalties are considered because of their optimization guarantees. Recent theoretical work has shown that non-convex penalties satisfying certain regularity conditions are also guaranteed to perform well with standard optimization algorithms. In this paper, we examine new and existing non-convex penalties for DNN regularization. We provide theoretical justification for the new penalties and assess the performance of all penalties in DNN analyses of real datasets.

Corresponding author, address: 350 Community Drive, Manhasset, NY 11030.

Introduction

The success of DNNs in learning complex relationships between inputs and outputs may be attributed mainly to their multiple nonlinear hidden layers [1,2]. The resulting large number of parameters gives the method a great deal of flexibility. On the downside, however, it may lead to overfitting, especially if the training sample is not large enough.
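The abstract does not yet name the specific penalties examined. As one illustration of a non-convex penalty that satisfies the kind of regularity conditions mentioned above, the SCAD penalty of Fan and Li (2001) can be computed elementwise on the network weights; the sketch below is illustrative and is not the paper's own implementation (the function name and default hyperparameters are our choices).

```python
import numpy as np

def scad_penalty(theta, lam=1.0, a=3.7):
    """SCAD penalty (Fan & Li, 2001), applied elementwise to weights theta.

    Quadratic spline with three regions controlled by lam > 0 and a > 2:
    it behaves like the L1 penalty near zero but flattens out for large
    |theta|, so large weights are not over-shrunk (the non-convex part).
    """
    t = np.abs(theta)
    small = lam * t                                        # |theta| <= lam
    mid = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))  # lam < |theta| <= a*lam
    large = lam**2 * (a + 1) / 2                           # |theta| > a*lam (constant)
    return np.where(t <= lam, small, np.where(t <= a * lam, mid, large))

# In training, such a penalty would be added to the loss, e.g.
# loss = data_loss + scad_penalty(weights).sum()
```

Because the penalty is constant beyond a*lam, its gradient there is zero, which is what leaves large weights unpenalized at the margin.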
Sep-11-2019