Identifying Weights and Architectures of Unknown ReLU Networks

David Rolnick and Konrad P. Kording

arXiv.org Machine Learning 

The relation of a network's individual parameters to its output is highly nonlinear and is generally opaque to an external observer. Consequently, it has been widely supposed in the field that it is impossible to recover the parameters of a network merely by observing its output on different inputs.

Beyond informing our understanding of deep learning, going from function to parameters could have serious implications for security and privacy. In many deployed deep learning systems, the output is freely available, but the network used to generate that output is not disclosed. The ability to uncover a confidential network would not only make it available for public use but could even expose the data used to train the network, if such data could be reconstructed from the network's weights.

This topic also has implications for the study of biological neural networks. Experimental neuroscientists can record some variables within the brain (e.g. the output of a complex cell in primary visual cortex) but not others (e.g. the presynaptic simple cells), and many biological neurons appear to be well modeled as the ReLU of a linear combination of their inputs (Chance et al., 2002). It would be highly useful if we could reverse engineer the internal components of a neural circuit from recordings of its output under our choice of input stimuli.
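To make the observational setting concrete, here is a minimal sketch in Python with NumPy. The names relu_neuron and black_box and the specific weight values are illustrative assumptions, not the paper's method: an observer may query the network on inputs of their choosing and record the outputs, while the weights and bias inside remain hidden.

```python
import numpy as np

def relu_neuron(x, w, b):
    """One unit modeled as the ReLU of a linear combination of its inputs."""
    return np.maximum(0.0, w @ x + b)

# Hidden parameters: the observer never sees these directly.
rng = np.random.default_rng(0)
w_hidden = rng.normal(size=3)
b_hidden = 0.5

def black_box(x):
    """The observer's only access: input in, output out."""
    return relu_neuron(x, w_hidden, b_hidden)

# The observer probes the black box with chosen input stimuli
# and records the responses, hoping to infer w_hidden and b_hidden.
for _ in range(3):
    x = rng.normal(size=3)
    print(x, "->", black_box(x))
```

The question the paper addresses is whether queries like these, chosen adaptively, suffice to recover the hidden parameters of a full ReLU network rather than a single unit.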
