Deep supervised feature selection using Stochastic Gates
Yutaro Yamada, Ofir Lindenbaum, Sahand Negahban, Yuval Kluger
Our approach relies on a continuous relaxation of Bernoulli distributions, which allows the model to learn the parameters of the approximate Bernoulli distributions via tractable gradient-based methods. Using these tools, we present a general neural network that minimizes a loss function while simultaneously selecting relevant features. We also provide an information-theoretic justification for incorporating Bernoulli distributions into our approach. Finally, we demonstrate the potential of the approach on synthetic and real-life applications.
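The stochastic-gate idea in the abstract can be illustrated with a small sketch. This is a hedged example, not the paper's exact formulation: it assumes a Gaussian-noise relaxation of the Bernoulli gate (noise scale `sigma`, gate parameters `mu`, and the feature vector `x` are all illustrative values chosen for demonstration).

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def stochastic_gate(mu, sigma=0.5, training=True):
    # Continuous relaxation of a Bernoulli gate: perturb the learnable
    # parameter mu with Gaussian noise during training, then clip the
    # result into [0, 1] so it acts as a soft feature mask.
    eps = rng.normal(0.0, sigma, size=mu.shape) if training else 0.0
    return np.clip(mu + eps, 0.0, 1.0)

def expected_open_gates(mu, sigma=0.5):
    # Sparsity regularizer: the expected number of non-zero gates,
    # P(z_d > 0) = Phi(mu_d / sigma) under the Gaussian noise model,
    # computed with the error function.
    return sum(0.5 * (1.0 + erf(m / (sigma * sqrt(2.0)))) for m in mu)

# Gating a feature vector (mu values are illustrative, not learned).
x = np.array([1.0, -2.0, 3.0, 0.5])
mu = np.array([2.0, -2.0, 2.0, 0.0])
x_selected = x * stochastic_gate(mu)
```

In a full model, `mu` would be trained jointly with the network weights, with `expected_open_gates(mu)` added to the loss to push irrelevant gates toward zero; at test time the deterministic gate (`training=False`) selects the surviving features.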
Oct-9-2018