Limits on Testing Structural Changes in Ising Models

Neural Information Processing Systems

We present novel information-theoretic limits on detecting sparse changes in Ising models, a problem that arises in many applications where network changes can occur due to some external stimuli. We show that the sample complexity for detecting sparse changes, in a minimax sense, is no better than learning the entire model even in settings with local sparsity. This is a surprising fact in light of prior work rooted in sparse recovery methods, which suggests that sample complexity in this context scales only with the number of network changes. To shed light on when change detection is easier than structure learning, we consider testing of edge deletion in forest-structured graphs, and high-temperature ferromagnets, as case studies. We show for these that testing of small changes is similarly hard, but testing of large changes is well-separated from structure learning. These results imply that testing of graphical models may not be amenable to concepts such as restricted strong convexity leveraged for sparsity pattern recovery, and algorithm development instead should be directed towards detection of large changes.
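To make the setting concrete, here is a minimal illustrative sketch (not from the paper) of the change-detection problem: we Gibbs-sample from a small chain-structured ferromagnet and from the same model with one edge deleted, then look at the empirical correlation on the changed edge. The model, coupling strength, and the naive edge-correlation detector are all assumptions chosen for illustration.

```python
import numpy as np

def gibbs_sample_ising(J, n_samples, n_burn=200, seed=None):
    """Draw samples from an Ising model p(x) ~ exp(sum_{i<j} J_ij x_i x_j),
    x in {-1,+1}^p, via single-site Gibbs sampling (one full sweep per sample)."""
    rng = np.random.default_rng(seed)
    p = J.shape[0]
    x = rng.choice([-1, 1], size=p)
    samples = np.empty((n_samples, p))
    for t in range(n_burn + n_samples):
        for i in range(p):
            field = J[i] @ x - J[i, i] * x[i]          # local field on site i
            prob_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[i] = 1 if rng.random() < prob_plus else -1
        if t >= n_burn:
            samples[t - n_burn] = x
    return samples

# Hypothetical 4-node chain ferromagnet; the alternative deletes one edge.
p, beta = 4, 0.4
J0 = np.zeros((p, p))
for i in range(p - 1):
    J0[i, i + 1] = J0[i + 1, i] = beta
J1 = J0.copy()
J1[1, 2] = J1[2, 1] = 0.0   # the "structural change": edge (1, 2) removed

X0 = gibbs_sample_ising(J0, 2000, seed=0)
X1 = gibbs_sample_ising(J1, 2000, seed=1)

# Naive detector: compare the empirical correlation on the changed edge.
# Under J0 it concentrates near tanh(beta); under J1 the chain splits and
# nodes 1 and 2 become independent, so it concentrates near zero.
c0 = np.mean(X0[:, 1] * X0[:, 2])
c1 = np.mean(X1[:, 1] * X1[:, 2])
print(f"edge (1,2) correlation: null={c0:.2f}, alternative={c1:.2f}")
```

The point of the paper's lower bounds is that this kind of per-edge statistic cannot, in the worst case over sparse changes, beat the sample complexity of learning the whole graph.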


Review for NeurIPS paper: Limits on Testing Structural Changes in Ising Models

Neural Information Processing Systems

Strengths: The problems studied in this paper are well motivated, as the statistical limits on the sample complexity of functional inference over two or more graphical models have not been established for a wide class of problems. This paper contributes in that direction by considering the detection and learning of changes between Ising models. The analysis primarily uses standard information-theoretic workhorses, such as Le Cam's method and chi-squared-based bounds, with novel and non-trivial ensemble constructions to derive the results. The paper establishes theoretically that the lower bounds on the detection and learning of changes in Ising models have approximately the same scaling behavior as structure learning over a wide range of regimes. This is in contrast to the claims of several existing algorithmic approaches, which imply that recovery of sparse changes is possible with smaller sample complexity than structure learning of the complete graphs.
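For reference, the Le Cam two-point reduction the review alludes to is the standard way such lower bounds are proved (this is the generic statement, not the paper's specific construction): any test $\hat{\psi}$ between two hypotheses $P_0$ and $P_1$ built from $n$ i.i.d. samples satisfies

```latex
\[
\inf_{\hat{\psi}} \max_{j \in \{0,1\}} \mathbb{P}_{j}\big(\hat{\psi} \neq j\big)
\;\ge\; \frac{1}{2}\Big(1 - \mathrm{TV}\big(P_0^{\otimes n}, P_1^{\otimes n}\big)\Big),
\]
and the chi-squared divergence enters through the Cauchy--Schwarz bound
\[
\mathrm{TV}(P_0, P_1) \;\le\; \tfrac{1}{2}\sqrt{\chi^2(P_1 \,\|\, P_0)}.
\]
```

The "ensemble constructions" the review praises amount to choosing $P_1$ as a mixture over many sparsely changed models so that the chi-squared divergence from $P_0$ stays small unless $n$ is as large as structure learning requires.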
