eigenspace



A Canonicalization Perspective on Invariant and Equivariant Learning

George Ma

Neural Information Processing Systems

In many applications, we desire neural networks to exhibit invariance or equivariance to certain groups due to symmetries inherent in the data. Recently, frame-averaging methods have emerged as a unified framework for attaining symmetries efficiently by averaging over input-dependent subsets of the group, i.e., frames. What we currently lack is a principled understanding of the design of frames.
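
To make the averaging idea concrete, here is a minimal sketch (our illustration, not the paper's construction) for the permutation group acting on vectors: the frame picks out the sorting permutation(s), so averaging over it yields an $S_n$-invariant function at the cost of one forward pass rather than $n!$. In the generic no-ties case the frame has a single element, i.e., a canonicalization.

```python
import numpy as np

def frame(x):
    """Frame for the permutation group S_n acting on R^n:
    the (generically unique) permutation that sorts x."""
    return [np.argsort(x, kind="stable")]  # one element when x has no ties

def frame_average(f, x):
    """Make an arbitrary function f invariant under S_n by averaging
    over the frame instead of over all n! permutations."""
    outs = [f(x[perm]) for perm in frame(x)]
    return np.mean(outs, axis=0)

# a stand-in for a non-invariant network
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
f = lambda x: np.tanh(W @ x)

x = rng.normal(size=4)
perm = rng.permutation(4)
print(np.allclose(frame_average(f, x), frame_average(f, x[perm])))  # True
```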


Spectral Superposition: A Theory of Feature Geometry

Ivanov, Georgi, Oozeer, Narmeen, Raval, Shivam, Pejovic, Tasana, Upadhyay, Shriyash, Abdullah, Amir

arXiv.org Machine Learning

Neural networks represent more features than they have dimensions via superposition, forcing features to share representational space. Current methods decompose activations into sparse linear features but discard geometric structure. We develop a theory for studying the geometric structure of features by analyzing the spectra (eigenvalues, eigenspaces, etc.) of weight-derived matrices. In particular, we introduce the frame operator $F = WW^\top$, which gives us a spectral measure that describes how each feature allocates norm across eigenspaces. While previous tools could describe the pairwise interactions between features, spectral methods capture the global geometry (``how do all features interact?''). In toy models of superposition, we use this theory to prove that capacity saturation forces spectral localization: features collapse onto single eigenspaces, organize into tight frames, and admit discrete classification via association schemes, classifying all geometries from prior work (simplices, polygons, antiprisms). The spectral measure formalism applies to arbitrary weight matrices, enabling diagnosis of feature localization beyond toy settings. These results point toward a broader program: applying operator theory to interpretability.
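
The spectral measure itself is straightforward to compute. Below is a minimal sketch under the assumption that the columns of $W$ are the feature directions (the paper's conventions may differ): eigendecompose the frame operator $F = WW^\top$, group eigenvalues into eigenspaces, and record how much of each feature's squared norm each eigenspace projection captures. The pentagon is one of the tight-frame geometries mentioned above.

```python
import numpy as np

def spectral_measures(W, tol=1e-8):
    """Spectral measure of each feature (column of W) with respect to the
    frame operator F = W W^T: how the feature's squared norm is
    distributed across the eigenspaces of F."""
    F = W @ W.T
    eigvals, eigvecs = np.linalg.eigh(F)
    # group (numerically) equal eigenvalues into eigenspaces
    groups, current = [], [0]
    for k in range(1, len(eigvals)):
        if eigvals[k] - eigvals[current[-1]] < tol:
            current.append(k)
        else:
            groups.append(current); current = [k]
    groups.append(current)
    measures = []
    for g in groups:
        P = eigvecs[:, g] @ eigvecs[:, g].T           # projector onto eigenspace
        measures.append(np.sum((P @ W) * W, axis=0))  # ||P w_i||^2 per feature
    mu = np.array(measures)                           # (n_eigenspaces, n_features)
    return eigvals, groups, mu / np.sum(W * W, axis=0)

# pentagon: 5 unit features in 2 dimensions, a tight frame with F = (5/2) I
theta = 2 * np.pi * np.arange(5) / 5
W = np.stack([np.cos(theta), np.sin(theta)])          # shape (2, 5)
eigvals, groups, mu = spectral_measures(W)
print(mu)  # each column sums to 1; a single eigenspace carries all the norm
```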


Physics-informed Gaussian Process Regression in Solving Eigenvalue Problem of Linear Operators

Bai, Tianming, Yang, Jiannan

arXiv.org Machine Learning

Applying Physics-Informed Gaussian Process Regression to the eigenvalue problem $(\mathcal{L}-\lambda)u = 0$ poses a fundamental challenge: the null source term results in a trivial predictive mean and a degenerate marginal likelihood. Drawing inspiration from system identification, we construct a transfer-function-type indicator for the unknown eigenvalue/eigenfunction using the physics-informed Gaussian Process posterior. We show that the posterior covariance is non-trivial only when $\lambda$ corresponds to an eigenvalue of the partial differential operator $\mathcal{L}$, reflecting the existence of a non-trivial eigenspace, and that any sample from the posterior lies in the eigenspace of the linear operator. We demonstrate the effectiveness of the proposed approach through several numerical examples with both linear and non-linear eigenvalue problems.
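
As a rough illustration of the indicator idea (our sketch, not the paper's exact construction), consider $\mathcal{L} = -\mathrm{d}^2/\mathrm{d}x^2$ on $[0, \pi]$ with Dirichlet boundary conditions, whose eigenvalues are $\lambda_n = n^2$. Placing an RBF prior on $u$, conditioning on $(\mathcal{L}-\lambda)u = 0$ at collocation points and $u = 0$ at the boundary, and sweeping $\lambda$, the trace of the posterior covariance serves as one possible scalar indicator; it should peak near the true eigenvalues.

```python
import numpy as np

# RBF kernel k(r) = exp(-r^2 / (2 l^2)) and the even derivatives
# needed for covariances involving L = -d^2/dx^2
l = 0.5
def k0(r): return np.exp(-r**2 / (2 * l**2))
def k2(r): return k0(r) * (r**2 - l**2) / l**4                    # d^2 k / dr^2
def k4(r): return k0(r) * (3*l**4 - 6*l**2*r**2 + r**4) / l**8    # d^4 k / dr^4

def diff(a, b):
    return a[:, None] - b[None, :]

def posterior_cov_trace(lam, Xc, Xb, Xs, jitter=1e-6):
    """Trace of the GP posterior covariance of u at test points Xs,
    after conditioning on (L - lam) u = 0 at collocation points Xc
    and u = 0 at boundary points Xb, with L = -d^2/dx^2."""
    # covariance blocks for the observed vector [ (L-lam)u(Xc), u(Xb) ]
    Kaa = k4(diff(Xc, Xc)) + 2*lam*k2(diff(Xc, Xc)) + lam**2 * k0(diff(Xc, Xc))
    Kab = -k2(diff(Xc, Xb)) - lam * k0(diff(Xc, Xb))
    Kbb = k0(diff(Xb, Xb))
    Kobs = np.block([[Kaa, Kab], [Kab.T, Kbb]]) + jitter * np.eye(len(Xc) + len(Xb))
    # cross-covariance with u(Xs) and prior covariance at Xs
    Ksa = -k2(diff(Xs, Xc)) - lam * k0(diff(Xs, Xc))
    Kso = np.hstack([Ksa, k0(diff(Xs, Xb))])
    post = k0(diff(Xs, Xs)) - Kso @ np.linalg.solve(Kobs, Kso.T)
    return np.trace(post)

Xc = np.linspace(0, np.pi, 25)       # collocation points
Xb = np.array([0.0, np.pi])          # Dirichlet boundary
Xs = np.linspace(0, np.pi, 50)       # test points
for lam in np.linspace(0.25, 10.0, 40):
    print(f"lambda = {lam:5.2f}  indicator = {posterior_cov_trace(lam, Xc, Xb, Xs):.4f}")
# the indicator should peak near the true eigenvalues lambda = 1, 4, 9
```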


Summary

Neural Information Processing Systems

We would like to thank the entire review team for their efforts and insightful comments. The convergence rates in [DZPS18] ([DZPS18] refers to arXiv:1810.02054) approach zero as the number of training samples grows (e.g., the ImageNet dataset has 14 million images); for such applications, a non-diminishing convergence rate is more desirable.

Response to the concern on the fixed second layer. Specifically, the same assumption is made in [ADH+19] and [ZCZG18] (arXiv:1811.08888).