Deep Subspace Clustering with Data Augmentation
The idea behind data augmentation is that slight changes to a percept do not change what the brain perceives. In classification, neural networks exploit this by learning to predict the same label for transformed versions of an input. In deep subspace clustering (DSC), however, ground-truth labels are unavailable, so data augmentation cannot be applied in the usual way. We propose a technique that brings the benefits of data augmentation to DSC algorithms: we learn representations whose subspaces are consistent across slightly transformed inputs.
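The subspace-consistency idea in this abstract can be illustrated with a small toy sketch. This is not the authors' method: the function name, the fixed orthonormal basis, and the equal weighting of the loss terms are all assumptions for exposition; real DSC methods learn the subspaces end to end.

```python
import numpy as np

def subspace_consistency_loss(z, z_aug, basis):
    """Toy objective: a representation z and the representation z_aug of a
    slightly transformed view should lie on, and project identically onto,
    the subspace spanned by the orthonormal columns of `basis`.
    Hypothetical sketch, not the paper's actual loss."""
    P = basis @ basis.T                 # projector onto the subspace
    r = z - P @ z                       # off-subspace residual, original view
    r_aug = z_aug - P @ z_aug           # off-subspace residual, augmented view
    drift = P @ z - P @ z_aug           # disagreement within the subspace
    return float(r @ r + r_aug @ r_aug + drift @ drift)

rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.standard_normal((10, 2)))  # a 2-D subspace of R^10
z = basis @ rng.standard_normal(2)          # a point on the subspace
z_aug = z + 0.01 * rng.standard_normal(10)  # a "slightly transformed" view
loss = subspace_consistency_loss(z, z_aug, basis)
```

The loss is near zero when both views share the subspace and grows when a view drifts off it, which is the invariance the abstract asks the learned representations to satisfy.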
Exploring a Principled Framework for Deep Subspace Clustering
Xianghan Meng, Zhiyuan Huang, Wei He, Xianbiao Qi, Rong Xiao, Chun-Guang Li
Subspace clustering is a classical unsupervised learning task built on the assumption that high-dimensional data can be approximated by a union of subspaces (UoS). Real-world data, however, often deviate from the UoS assumption. To address this challenge, state-of-the-art deep subspace clustering algorithms attempt to jointly learn UoS representations and self-expressive coefficients. However, the general framework underlying existing algorithms suffers from catastrophic feature collapse and lacks a theoretical guarantee of learning the desired UoS representation. In this paper, we present a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC), designed to learn structured representations and self-expressive coefficients in a unified manner. Specifically, PRO-DSC incorporates an effective regularization on the learned representations into the self-expressive model; we prove that the regularized self-expressive model prevents feature-space collapse, and we show that, under certain conditions, the learned optimal representations lie on a union of orthogonal subspaces. Moreover, we provide a scalable and efficient implementation of PRO-DSC and conduct extensive experiments that verify our theoretical findings and demonstrate the superior performance of the proposed approach. The code is available at https://github.com/mengxianghan123/PRO-DSC.
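For readers unfamiliar with self-expressiveness, the base model that PRO-DSC builds on can be sketched with a generic ridge-penalized closed form. This is an assumed, simplified illustration, not PRO-DSC's regularizer or implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def self_expressive_coefficients(X, lam=1e-2):
    """Express each column of X as a combination of the other columns:
    approximately solve min_C ||X - X C||_F^2 + lam ||C||_F^2 in closed
    form, then zero the diagonal so no point represents itself.
    Generic sketch; PRO-DSC adds a representation-level regularizer
    on top of such a self-expressive model."""
    n = X.shape[1]
    G = X.T @ X                                   # (n, n) Gram matrix
    C = np.linalg.solve(G + lam * np.eye(n), G)   # ridge closed form
    np.fill_diagonal(C, 0.0)
    return C

rng = np.random.default_rng(1)
# 40 points drawn from two random 2-D subspaces of R^10
X = np.hstack([rng.standard_normal((10, 2)) @ rng.standard_normal((2, 20)),
               rng.standard_normal((10, 2)) @ rng.standard_normal((2, 20))])
C = self_expressive_coefficients(X)
A = np.abs(C) + np.abs(C).T   # symmetric affinity for spectral clustering
```

When the UoS assumption holds, the coefficient matrix is approximately block diagonal, with large affinities between points in the same subspace; spectral clustering on `A` then recovers the clusters.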
Review for NeurIPS paper: Deep Subspace Clustering with Data Augmentation
The policy found via the proposed greedy search strategy outperforms policies found in the fully supervised setting of ImageNet classification (by AutoAugment and by practitioners). However, it is hard to tell whether a different search method would yield a better policy; it would be good to include baselines for the search method and to discuss this. It would also be good to discuss related work on searching for data augmentation policies. Finally, it would be nice to show results on using the learnt features for downstream tasks.