[D] Machine Learning - WAYR (What Are You Reading) - Week 93

#artificialintelligence 

Deep Ensembles: A Loss Landscape Perspective: This paper digs into why an ensemble of deep networks works better than a single deep network. The authors did a qualitative investigation that demystifies some of the inner workings of deep neural nets. Some of the observations: the same model trained from different random initializations is functionally dissimilar. A neural network maps inputs to outputs and thus acts as a function (which is what we learn). If we start with init1 we end up with function1, which is not similar to the function the same model learns starting from init2. However, if we take snapshots of one model at different epochs along the same training run, those snapshots are functionally similar.
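One way to make "functionally similar" concrete is to compare prediction disagreement: snapshots from the same run vs. converged models from different initializations. Below is a minimal toy sketch of that idea (a tiny numpy MLP on a synthetic XOR-like task, not the paper's actual experiment; `train_mlp`, the task, and all hyperparameters here are my own illustrative choices):

```python
import numpy as np

def train_mlp(seed, X, y, epochs=200, lr=0.5, snapshots=(100, 200)):
    """Train a tiny 1-hidden-layer MLP from a given random seed;
    return hard predictions captured at the snapshot epochs."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (2, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
    preds = {}
    for epoch in range(1, epochs + 1):
        # forward pass: tanh hidden layer, sigmoid output
        h = np.tanh(X @ W1 + b1)
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))
        # backprop for binary cross-entropy (dL/dlogits = p - y)
        g = (p - y[:, None]) / len(X)
        W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
        gh = (g @ W2.T) * (1 - h**2)
        W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
        if epoch in snapshots:
            preds[epoch] = (p[:, 0] > 0.5)
    return preds

def disagreement(a, b):
    """Fraction of inputs on which two models predict different labels."""
    return float(np.mean(a != b))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)  # XOR-like toy task

run_a = train_mlp(seed=1, X=X, y=y)  # one initialization
run_b = train_mlp(seed=2, X=X, y=y)  # a different initialization

within = disagreement(run_a[100], run_a[200])  # same run, two epochs
across = disagreement(run_a[200], run_b[200])  # two different inits
print(f"within-run disagreement:  {within:.3f}")
print(f"across-init disagreement: {across:.3f}")
```

If the paper's observation holds on a given task, the across-init disagreement tends to be the larger number: the two runs land in different modes of the loss landscape and implement genuinely different functions, which is exactly the diversity a deep ensemble exploits.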
