Information-Theoretic Domain Adaptation Under Severe Noise Conditions
Wang, Wei (Institute of Software, Chinese Academy of Sciences) | Wang, Hao (360 Search Lab, Qihoo 360) | Ran, Zhi-Yong (Chongqing University of Posts and Telecommunications) | He, Ran (Institute of Automation, Chinese Academy of Sciences)
Cross-domain data reconstruction methods learn a shared transformation between source and target domains. These methods usually make a specific assumption about the noise, which limits their ability when the target data are contaminated by several kinds of complex noise in practice. To enhance the robustness of domain adaptation under severe noise conditions, this paper proposes a novel reconstruction-based algorithm in an information-theoretic setting. Specifically, benefiting from the theoretical properties of correntropy, the proposed algorithm detects contaminated target samples without making any specific assumption about the noise and greatly suppresses the negative influence of noise on the cross-domain transformation. Moreover, a relative-entropy-based regularization of the transformation is incorporated to avoid trivial solutions, bringing the theoretical advantages of non-negativity and scale invariance. For optimization, a half-quadratic technique is developed to minimize the non-convex information-theoretic objective with an explicit convergence guarantee. Experiments on two real-world domain adaptation tasks demonstrate the superiority of our method.
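To make the robustness mechanism concrete, the following is a minimal Python sketch of correntropy-based reconstruction optimized by half-quadratic iterative reweighting. It assumes a simple linear model X_tgt ≈ X_src W with a Gaussian kernel width sigma; the function name, the ridge term, and the omission of the relative-entropy regularizer are illustrative assumptions, not the paper's actual formulation.

import numpy as np

def half_quadratic_correntropy_fit(X_src, X_tgt, sigma=1.0, n_iters=20):
    """Hypothetical sketch: learn a linear transformation W that reconstructs
    target samples from source samples under a correntropy (Welsch) loss,
    optimized by half-quadratic iterative reweighting. Samples with large
    reconstruction error receive exponentially small weights, so contaminated
    target samples barely influence W, whatever the noise type."""
    n, d = X_tgt.shape
    W = np.eye(d)                               # initial transformation (assumption)
    for _ in range(n_iters):
        residuals = X_tgt - X_src @ W           # per-sample reconstruction error
        e2 = np.sum(residuals ** 2, axis=1)
        w = np.exp(-e2 / (2 * sigma ** 2))      # half-quadratic auxiliary weights
        # weighted least-squares update: argmin_W sum_i w_i ||x_tgt_i - W^T x_src_i||^2
        Xw = X_src * w[:, None]
        W = np.linalg.solve(X_src.T @ Xw + 1e-6 * np.eye(d), Xw.T @ X_tgt)
    return W, w                                 # small w_i flags likely-contaminated samples

The auxiliary weights w shrink toward zero for samples with large reconstruction error, which is how contaminated target samples are down-weighted without any explicit noise model.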
Two-Stream Contextualized CNN for Fine-Grained Image Classification
Liu, Jiang (Chongqing University of Posts and Telecommunications) | Gao, Chenqiang (Chongqing University of Posts and Telecommunications) | Meng, Deyu (Xi'an Jiaotong University) | Zuo, Wangmeng (Harbin Institute of Technology)
The human cognitive system suggests that context provides a potentially powerful clue for recognizing objects. For fine-grained image classification, however, the contribution of context may vary across images, and sometimes the context even confuses the classification result. To alleviate this problem, we develop a novel approach, the two-stream contextualized Convolutional Neural Network, which provides a simple but efficient joint context-content classification model within a deep learning framework. The network requires only the raw image and a coarse segmentation as input to extract both content and context features, with no need for human interaction. Moreover, our network adopts a weighted fusion scheme to combine the content and context classifiers, and a subnetwork is introduced to adaptively determine the fusion weight for each image. In experiments on public datasets, our approach achieves considerably high recognition accuracy compared with state-of-the-art approaches, without any tedious human involvement.
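As a rough illustration of the architecture described above, here is a minimal PyTorch sketch (PyTorch is an assumed framework, not stated in the abstract) with a content stream, a context stream, per-stream classifiers, and a small subnetwork that predicts a per-image fusion weight. The layer sizes and the sigmoid weighting are placeholder choices, not the authors' network.

import torch
import torch.nn as nn

class TwoStreamContextualizedCNN(nn.Module):
    """Hypothetical sketch: one CNN stream for the content (foreground crop from a
    coarse segmentation), one for the context (full image), and a weight subnetwork
    that adaptively fuses the two classifiers per image."""
    def __init__(self, num_classes, feat_dim=256):
        super().__init__()
        def stream():
            # tiny placeholder backbone; any CNN feature extractor could be used here
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
        self.content_stream = stream()
        self.context_stream = stream()
        self.content_cls = nn.Linear(feat_dim, num_classes)
        self.context_cls = nn.Linear(feat_dim, num_classes)
        # weight subnetwork: maps both features to a per-image fusion weight in (0, 1)
        self.weight_net = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, content_img, context_img):
        f_content = self.content_stream(content_img)
        f_context = self.context_stream(context_img)
        alpha = self.weight_net(torch.cat([f_content, f_context], dim=1))
        # weighted fusion of the content and context classifiers
        logits = alpha * self.content_cls(f_content) + (1 - alpha) * self.context_cls(f_context)
        return logits, alpha

Because the fusion weight alpha is predicted per image, the model can rely more on content when the surrounding context is misleading and more on context when it is informative.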