Searching for lottery tickets inside large neural networks

#artificialintelligence 

What if a "lottery ticket" were hidden inside every modern deep neural network? In 2019, a paper by Frankle and Carbin[1] appeared with a very intriguing conjecture: based on experimental observations of current large neural networks, it seemed that one could take a small portion of the network and train it in isolation to reach not only the same accuracy as the original network, but sometimes even better. "The Lottery Ticket Hypothesis: A randomly-initialized, dense neural network contains a sub-network that is initialized such that -- when trained in isolation -- it can match the test accuracy of the original network after training for at most the same number of iterations."[1]

As incredible as this conjecture appeared to be, it was only the beginning. The team of researchers started to realize that some of the modern oversized networks they were working with not only contained a "lottery ticket", but that the sub-network itself could already match the accuracy of other trained networks with nothing more than its random initialization, no training involved.
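The procedure the hypothesis suggests can be sketched in a few lines: train the dense network, keep only the largest-magnitude weights, and rewind the survivors to their original initial values; the result is the candidate "winning ticket". The sketch below, a toy in numpy, illustrates just the masking and rewinding step; the "trained" weights are a stand-in (no actual training happens here), and all names are illustrative rather than taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# theta_0: the random initialization the ticket will be rewound to.
init_weights = rng.normal(size=(4, 4))
# Stand-in for weights after training (real code would run SGD here).
trained_weights = init_weights + rng.normal(scale=0.5, size=(4, 4))

def winning_ticket_mask(trained, prune_fraction):
    """Mask that keeps the largest-magnitude trained weights,
    zeroing out the smallest `prune_fraction` of them."""
    k = int(prune_fraction * trained.size)
    threshold = np.sort(np.abs(trained), axis=None)[k]
    return (np.abs(trained) >= threshold).astype(trained.dtype)

mask = winning_ticket_mask(trained_weights, prune_fraction=0.75)

# The "lottery ticket": original initialization restricted to the mask.
# This sparse sub-network is what gets retrained in isolation.
ticket = mask * init_weights

print("fraction of weights kept:", mask.mean())
```

In the full iterative version of the paper's method, this prune-and-rewind step is repeated over several rounds, removing a fixed percentage of the surviving weights each time.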
