One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective
Neural Information Processing Systems
A deep hashing model typically has two main learning objectives: to make the learned binary hash codes discriminative and to minimize a quantization error. With further constraints such as bit balance and code orthogonality, it is not uncommon for existing models to employ a large number (>4) of losses. This leads to difficulties in model training and subsequently impedes their effectiveness. In this work, we propose a novel deep hashing model with only a single learning objective. Specifically, we show that maximizing the cosine similarity between the continuous codes and their corresponding binary orthogonal codes can ensure both hash code discriminativeness and quantization error minimization.
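The single objective described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes the binary orthogonal codes are rows of a Hadamard matrix (one row assigned per class, a common way to obtain mutually orthogonal {+1, -1} targets), and the loss is one minus the cosine similarity between a continuous code and its class target. When the continuous code's signs already match the target, the loss is near zero and binarization by `sign` incurs no quantization error.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2).
    Its rows are mutually orthogonal binary {+1, -1} codes."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def cosine_loss(z, target):
    """One minus the cosine similarity between a continuous code z
    and its assigned binary orthogonal target."""
    cos = z @ target / (np.linalg.norm(z) * np.linalg.norm(target))
    return 1.0 - cos

# Hypothetical setup: 8-bit codes, one Hadamard row per class.
targets = hadamard(8)

# A continuous code whose signs agree with the class-1 target row
# [+1, -1, +1, -1, +1, -1, +1, -1]; binarizing it with sign() is lossless.
z = np.array([0.9, -1.1, 1.0, -0.8, 1.2, -1.0, 0.7, -0.9])
loss = cosine_loss(z, targets[1])  # small, since z is well aligned
```

Minimizing this one loss pushes each continuous code toward its class's orthogonal corner of the Hamming hypercube, which simultaneously separates classes (discriminativeness) and shrinks the gap between the continuous code and its binarized version (quantization error).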