Reviews: Transferable Normalization: Towards Improving Transferability of Deep Neural Networks
Unlike most works that focus on reducing domain shift through the design of loss functions, this paper contributes to network design by developing a novel transferable normalization (TransNorm) layer. TransNorm is well motivated: it separately normalizes source and target features in a minibatch while weighting each channel according to its transferability. It is clearly different from, and significantly outperforms, related methods such as AdaBN [15] and AutoDIAL [21]. The TransNorm layer is simple and parameter-free, so it can be conveniently plugged into mainstream networks. I think this work will have a non-trivial impact: the proposed TransNorm can serve as a backbone layer that improves other state-of-the-art methods.
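To make the mechanism concrete, here is a minimal NumPy sketch of the idea the review describes: normalize source and target features with their own minibatch statistics, then reweight channels by a transferability score derived from the discrepancy between the two domains' statistics. The function name, the distance measure, and the exact weighting formula are illustrative assumptions, not the paper's definitive implementation.

```python
import numpy as np

def transnorm_forward(x_src, x_tgt, eps=1e-5):
    """Illustrative sketch of domain-separate normalization with
    channel transferability weighting. Inputs have shape (N, C).
    Hypothetical helper, not the authors' reference code."""
    # Normalize each domain with its own minibatch statistics
    mu_s, var_s = x_src.mean(0), x_src.var(0)
    mu_t, var_t = x_tgt.mean(0), x_tgt.var(0)
    xs = (x_src - mu_s) / np.sqrt(var_s + eps)
    xt = (x_tgt - mu_t) / np.sqrt(var_t + eps)

    # Per-channel discrepancy between source and target statistics
    # (assumed distance; the paper defines its own measure)
    d = np.abs(mu_s / np.sqrt(var_s + eps) - mu_t / np.sqrt(var_t + eps))

    # Channels with smaller discrepancy are deemed more transferable
    # and receive larger weights; weights are normalized over channels
    C = x_src.shape[1]
    a = 1.0 / (1.0 + d)
    alpha = C * a / a.sum()

    # Residual reweighting: amplify transferable channels while
    # keeping the zero-mean property of the normalized features
    return xs * (1 + alpha), xt * (1 + alpha)
```

Note that the layer introduces no learned parameters beyond those of standard batch normalization: the weights `alpha` are computed directly from minibatch statistics, which is what makes the design easy to drop into existing networks.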