Regularized deep learning with a non-convex penalty

Vettam, Sujit, John, Majnu

arXiv.org Machine Learning 

Regularization methods are often employed in deep learning neural networks (DNNs) to prevent overfitting. For penalty-based methods of DNN regularization, typically only convex penalties are considered because of their optimization guarantees. Recent theoretical work has shown that non-convex penalties satisfying certain regularity conditions are also guaranteed to perform well with standard optimization algorithms. In this paper, we examine new and currently existing non-convex penalties for DNN regularization. We provide theoretical justifications for the new penalties and also assess the performance of all penalties on DNN analyses of real datasets.

Corresponding author, address: 350 Community Drive, Manhasset, NY 11030.

Introduction

The success of DNNs in learning complex relationships between inputs and outputs may be attributed mainly to their multiple nonlinear hidden layers [1,2]. Such a large number of parameters gives the method an enormous amount of flexibility. On the downside, however, this may lead to overfitting the data, especially if the training sample is not large enough.
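To make the penalty-based setup concrete, here is a minimal sketch of how a penalty term is added to a training loss. The log penalty shown is one common illustrative choice of non-convex penalty; it is an assumption for this sketch, not necessarily one of the penalties examined in the paper, and the weight values and loss are placeholders.

```python
import numpy as np

def l1_penalty(w, lam=0.01):
    """Convex L1 penalty: lam * sum(|w_i|)."""
    return lam * np.sum(np.abs(w))

def log_penalty(w, lam=0.01, gamma=1.0):
    """Illustrative non-convex penalty: lam * sum(log(1 + |w_i|/gamma)).
    It grows more slowly than L1 for large |w_i|, so large weights
    incur less shrinkage bias than under a convex L1 penalty."""
    return lam * np.sum(np.log1p(np.abs(w) / gamma))

# Placeholder network weights and data-fit loss for illustration only.
w = np.array([0.0, 0.5, -2.0, 10.0])
data_fit_loss = 1.23

# Regularized objective: data-fit loss plus the penalty term.
regularized_loss = data_fit_loss + log_penalty(w)
```

In practice the same structure applies inside a DNN training loop: the penalty is evaluated on all layer weights and added to the minibatch loss before each gradient step.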
