Reviews: Greedy Hash: Towards Fast Optimization for Accurate Hash Coding in CNN
–Neural Information Processing Systems
The paper presents a greedy approach to training a deep neural network to directly produce binary codes, building on the straight-through estimator. During forward propagation the model uses the sgn output, whereas during back-propagation it passes derivatives as if the output were a simple linear (identity) function.

There are relevant papers that already proposed such an approach and are not cited as earlier work: [1] Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation, Bengio Y. et al. [2] Techniques for Learning Binary Stochastic Feedforward Neural Networks, Raiko T. et al.

The experimental setting is not very clear, and I would suggest the authors better explain the supervised setting: do they produce a binary code of length k and then classify it with a single final output layer?
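For clarity, the straight-through trick described above can be sketched as follows (a minimal NumPy illustration, not the authors' implementation; function names are mine):

```python
import numpy as np

def sign_ste_forward(x):
    # Forward pass: binarize activations with sgn, mapping to {-1, +1}.
    return np.where(x >= 0, 1.0, -1.0)

def sign_ste_backward(grad_output):
    # Backward pass (straight-through): treat sgn as if it were the
    # identity, so incoming gradients pass through unchanged.
    return grad_output

x = np.array([-0.7, 0.2, 1.5])
codes = sign_ste_forward(x)                  # binary codes in {-1, +1}
grads = sign_ste_backward(np.ones_like(x))   # gradients pass through
print(codes)  # [-1.  1.  1.]
print(grads)  # [1. 1. 1.]
```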