Solving Interpretable Kernel Dimensionality Reduction
Wu, Chieh, Miller, Jared, Chang, Yale, Sznaier, Mario, Dy, Jennifer
Kernel dimensionality reduction (KDR) algorithms find a low-dimensional representation of the original data by optimizing kernel dependency measures that are capable of capturing nonlinear relationships. The standard strategy is to first map the data into a high-dimensional feature space using kernels and then project onto a low-dimensional space. While KDR methods can be solved easily by keeping the most dominant eigenvectors of the kernel matrix, the resulting features are no longer easy to interpret. Interpretable KDR (IKDR) differs in that it projects onto a subspace \textit{before} the kernel feature mapping, so the projection matrix indicates how the original features combine linearly to form the new features. Unfortunately, the IKDR objective requires a non-convex optimization over a manifold; it is difficult to solve and can no longer be handled by a simple eigendecomposition.
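To make the distinction concrete, here is a minimal Python sketch (not from the paper) contrasting the two orderings, assuming a Gaussian kernel: kdr_embed follows the standard KDR / kernel-PCA route of kernelizing first and keeping the dominant eigenvectors, while ikdr_features applies an interpretable projection W to the raw data before the kernel mapping. The function names and the centering and scaling conventions are illustrative assumptions.

    import numpy as np

    def gaussian_kernel(Z, sigma=1.0):
        # Pairwise Gaussian kernel on the rows of Z.
        sq = np.sum(Z ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def kdr_embed(X, q, sigma=1.0):
        # KDR route: eigendecompose the centered kernel matrix of the raw data
        # and keep the q most dominant eigenvectors. The resulting nonlinear
        # features are hard to relate back to the original coordinates.
        n = X.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        K = H @ gaussian_kernel(X, sigma) @ H
        vals, vecs = np.linalg.eigh(K)
        return vecs[:, -q:] * np.sqrt(np.maximum(vals[-q:], 0.0))

    def ikdr_features(X, W, sigma=1.0):
        # IKDR route: project onto a subspace first. Each column of W says how
        # the original features combine linearly, so the representation stays
        # interpretable; finding a good W is the hard non-convex problem.
        return gaussian_kernel(X @ W, sigma)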
Reviews: Solving Interpretable Kernel Dimensionality Reduction
Summary: [19] recently proposed an efficient iterative spectral (eigendecomposition) method (ISM) for the non-convex interpretable kernel dimensionality reduction (IKDR) objective in the context of alternative clustering, and established theoretical guarantees of ISM for the Gaussian kernel. This paper extends those guarantees to a family of kernels [Definition 1]. Each kernel in the ISM family has an associated surrogate matrix \Phi, and the optimal projection is formed by the most dominant eigenvectors of \Phi [Theorems 1 and 2]. The authors show that any conic combination of ISM kernels is still an ISM kernel [Proposition 1], and therefore ISM can be extended to conic combinations of kernels.
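For intuition, the following is a minimal, hypothetical Python sketch of the fixed-point loop such an iterative spectral method performs, assuming a Gaussian kernel and an HSIC-style dependence target Gamma. The Laplacian-type surrogate X^T (D_Q - Q) X used below is an illustrative stand-in for the paper's \Phi, and which end of the spectrum to keep is governed by the sign conventions of Theorems 1 and 2; neither detail is reproduced exactly here.

    import numpy as np

    def gaussian_kernel(Z, sigma=1.0):
        # Pairwise Gaussian kernel on the rows of Z (same as in the sketch above).
        sq = np.sum(Z ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def ism_sketch(X, Gamma, q, sigma=1.0, iters=100, tol=1e-6):
        # Iterative spectral method, schematically: build a surrogate matrix from
        # the kernel on the current projection X @ W, replace W with q of its
        # eigenvectors, and repeat until the projected subspace stops changing.
        n, d = X.shape
        W = np.linalg.qr(np.random.randn(d, q))[0]      # random orthonormal start
        for _ in range(iters):
            K = gaussian_kernel(X @ W, sigma)           # kernel in the projected space
            Q = Gamma * K                               # pairwise weights from the dependence target
            L = np.diag(Q.sum(axis=1)) - Q              # Laplacian-style weighting (illustrative surrogate)
            Phi = X.T @ L @ X
            vals, vecs = np.linalg.eigh(Phi)
            W_new = vecs[:, :q]                         # eigenvectors at one end of the spectrum;
                                                        # the paper's Theorems 1-2 fix the correct choice
            if np.linalg.norm(W_new @ W_new.T - W @ W.T) < tol:
                return W_new
            W = W_new
        return W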