On original and latent space connectivity in deep neural networks

Boyang Gu, Anastasia Borovykh

arXiv.org Artificial Intelligence 

The manifold hypothesis states that high-dimensional real-world data typically lies on a lower-dimensional submanifold, with the axes of this dimensionality-reduced space representing factors of variation [26, 5]. Relatedly, the flattening hypothesis [2] and work on disentanglement [22] state that throughout learning, successive layers of a deep neural network (DNN) disentangle the data so that, in the end, a linear model can separate the classes. Understanding how a DNN itself views its input space can be related to explainability (e.g.
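The flattening/disentanglement idea above can be illustrated with a minimal NumPy sketch (an assumption-laden toy, not the paper's method): train a small one-hidden-layer network on an XOR-style problem that no single hyperplane can separate, then fit a linear probe on the raw inputs and on the hidden-layer activations. The hidden representation should be (at least) as linearly separable as the input space. All names, sizes, and hyperparameters here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR-style data: two classes that no single hyperplane can separate.
n = 400
X = rng.uniform(-1, 1, size=(n, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float)  # XOR of sign bits

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer MLP (ReLU hidden, sigmoid output) trained with
# full-batch gradient descent on binary cross-entropy.
W1 = rng.normal(0, 0.5, size=(2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(4000):
    H = np.maximum(0.0, X @ W1 + b1)      # hidden activations
    p = sigmoid(H @ W2 + b2).ravel()      # predicted class probability
    g = (p - y)[:, None] / n              # dL/dlogit (averaged)
    gW2 = H.T @ g; gb2 = g.sum(0)
    gH = (g @ W2.T) * (H > 0)             # backprop through ReLU
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def linear_probe_acc(Z, y, steps=2000, lr=0.5):
    """Fit logistic regression on features Z; return training accuracy."""
    w = np.zeros(Z.shape[1]); b = 0.0
    for _ in range(steps):
        p = sigmoid(Z @ w + b)
        g = (p - y) / len(y)
        w -= lr * (Z.T @ g); b -= lr * g.sum()
    return float(((sigmoid(Z @ w + b) > 0.5) == (y > 0.5)).mean())

acc_input = linear_probe_acc(X, y)                              # raw input space
acc_hidden = linear_probe_acc(np.maximum(0.0, X @ W1 + b1), y)  # hidden layer
print(f"linear probe on input:  {acc_input:.2f}")
print(f"linear probe on hidden: {acc_hidden:.2f}")
```

On this toy problem the input-space probe stays near chance while the hidden-layer probe approaches perfect accuracy, which is the sense in which layers "flatten" the data toward linear separability.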
