Coupled Dictionary Learning for Unsupervised Feature Selection

Zhu, Pengfei (Tianjin University) | Hu, Qinghua (Tianjin University) | Zhang, Changqing (Tianjin University) | Zuo, Wangmeng (Harbin Institute of Technology)

AAAI Conferences 

…terminals and social networks, mountains of high-dimensional data explosively emerge and grow. The curse of dimensionality leads to great storage burden, high time complexity, and failure of the classic learning machines (Wolf and Shashua 2005). Feature selection searches for the most representative and discriminative features by keeping the data properties and removing the redundancy. According to the availability of label information, feature selection can be categorized into unsupervised (He, Cai, and Niyogi 2005), semi-supervised (Benabdeslem and Hindawi 2014), …

Hence, manifold regularization is used in unsupervised feature selection algorithms to preserve sample similarity (Li et al. 2012; Tang and Liu 2012; Wang, Tang, and Liu 2015). Similar to the class labels in supervised cases, cluster structure indicates the affiliation relations of samples, and it can be discovered by spectral clustering (SPEC (Zhao and Liu 2007), MCFS (Cai, Zhang, and He 2010)), matrix factorization (NDFS (Li et al. 2012), RUFS (Qian and Zhai 2013), EUFS (Wang, Tang, and Liu 2015)), or linear predictors (UDFS (Yang et al. 2011), JELSR (Hou et al. 2011)).
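To make the "preserve sample similarity" idea concrete, here is a minimal sketch of Laplacian-score feature ranking in the spirit of He, Cai, and Niyogi (2005) — one classic graph-based criterion in the family this passage surveys, not the coupled dictionary learning method of this paper. It assumes a dense RBF similarity graph over all sample pairs (real implementations typically use a kNN graph); the bandwidth `sigma` and the toy two-cluster data are illustrative choices.

```python
import numpy as np

def laplacian_scores(X, sigma=1.0):
    """Score each feature; lower = better preserves graph similarity structure."""
    n, d = X.shape
    # Pairwise squared distances -> RBF similarity matrix W (full graph, for brevity).
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    D = W.sum(axis=1)                  # degree vector of the similarity graph
    L = np.diag(D) - W                 # unnormalized graph Laplacian
    scores = np.empty(d)
    for j in range(d):
        f = X[:, j]
        f = f - (f @ D) / D.sum()      # remove the trivial constant component
        num = f @ L @ f                # variation of the feature along graph edges
        den = f @ (D * f)              # degree-weighted variance of the feature
        scores[j] = num / den if den > 1e-12 else np.inf
    return scores

rng = np.random.default_rng(0)
# Toy data: two clusters separated along feature 0; feature 1 is pure noise.
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(0.0, 0.1, (20, 2)) + np.array([3.0, 0.0])])
X[:, 1] = rng.normal(0.0, 1.0, 40)
s = laplacian_scores(X)
print(s.argmin())  # the cluster-separating feature gets the lowest score
```

A feature that is nearly constant within each tightly connected cluster (like feature 0 above) incurs little penalty from `f @ L @ f`, while a noisy feature varies across strongly connected pairs and scores poorly — the same intuition that manifold-regularized selection methods build into their objectives.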
