Dimensionality reduction and width of deep neural networks based on topological degree theory

Yang, Xiao-Song

arXiv.org Artificial Intelligence 

Dimensionality reduction (DR) and deep neural networks (DNNs) are two important topics in data analysis. In data analysis and deep learning, datasets are often high-dimensional and exhibit complicated topological structures, arising from diverse backgrounds in science and engineering [1, 2, 4-7]. Traditional approaches to data analysis and visualization, in particular for image recognition, often fail in the high-dimensional setting, and a common practice is to perform dimensionality reduction [2, 6, 11] in order to make data analysis tractable and economical; DNNs are a powerful tool for nonlinear dimensionality reduction problems. It is now recognized that practical datasets often consist of features of low intrinsic dimension with nontrivial topological structures [1, 2, 6], and that the geometric structure of a dataset heavily affects the appropriate architecture of a deep neural network. Nonetheless, how and to what extent the geometric (topological) structure of a dataset is connected with the architecture of a deep neural network remains unclear and has been an active research area in deep learning in recent years.
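One common way a DNN performs nonlinear dimensionality reduction is an autoencoder with a low-dimensional bottleneck. The following is a minimal sketch, assuming PyTorch; the dimensions d, k, and hidden are illustrative choices, not values or constructions from the paper.

```python
import torch
import torch.nn as nn

# Illustrative autoencoder: maps R^d data through a width-k bottleneck (k << d),
# a standard example of nonlinear dimensionality reduction with a DNN.
# d, k, and hidden are hypothetical choices, not taken from the paper.
d, k, hidden = 64, 2, 32

encoder = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, k))
decoder = nn.Sequential(nn.Linear(k, hidden), nn.ReLU(), nn.Linear(hidden, d))
model = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, d)  # placeholder data; replace with a real dataset

for step in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)  # reconstruction error
    loss.backward()
    optimizer.step()

z = encoder(x)  # k-dimensional representation of the d-dimensional data
```

Whether such a continuous encoder can preserve the topology of the data while reducing dimension is exactly the kind of question that degree-theoretic arguments address.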