Cameron Musco
Toward a Characterization of Loss Functions for Distribution Learning
Nika Haghtalab, Cameron Musco, Bo Waggoner
In this work we study loss functions for learning and evaluating probability distributions over large discrete domains. Unlike in classification or regression, where a wide variety of loss functions are used, the distribution learning and density estimation literature applies very few losses outside the dominant log loss. We aim to understand this fact, taking an axiomatic approach to the design of loss functions for distributions. We start by proposing a set of desirable criteria that any good loss function should satisfy. Intuitively, these criteria require that the loss function faithfully evaluate a candidate distribution, both in expectation and when estimated on a few samples.
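As a point of reference for the criteria above, the dominant log loss evaluates a candidate distribution by the average negative log-probability it assigns to observed samples. A minimal sketch (the function name and the toy distribution below are illustrative, not from the paper):

```python
import math

def log_loss(candidate, samples):
    """Empirical log loss: average negative log-probability that the
    candidate distribution assigns to the observed samples."""
    return sum(-math.log(candidate[x]) for x in samples) / len(samples)

# A candidate distribution over a small discrete domain.
q = {"a": 0.5, "b": 0.25, "c": 0.25}
samples = ["a", "a", "b", "c"]
loss = log_loss(q, samples)  # estimates the expected log loss under the true distribution
```

The empirical average converges to the expected log loss as the number of samples grows, which is the sense in which a loss can be "estimated on a few samples."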
Recursive Sampling for the Nystrom Method
Cameron Musco, Christopher Musco
Inferring Networks From Random Walk-Based Node Similarities
Jeremy Hoskins, Cameron Musco, Christopher Musco, Babis Tsourakakis
Digital presence in the world of online social media entails significant privacy risks [31, 56]. In this work we consider a privacy threat to a social network in which an attacker has access to a subset of random walk-based node similarities, such as effective resistances (i.e., commute times) or personalized PageRank scores. Using these similarities, the attacker seeks to infer as much information as possible about the network, including unknown pairwise node similarities and edges. For the effective resistance metric, we show that with just a small subset of measurements, one can learn a large fraction of edges in a social network. We also show that it is possible to learn a graph which accurately matches the underlying network on all other effective resistances.
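The effective resistance similarities the attacker observes can be computed from the graph's Laplacian pseudoinverse, via the standard identity R(u, v) = L⁺[u,u] + L⁺[v,v] − 2·L⁺[u,v]. A minimal sketch of this computation for an unweighted graph (the function name is illustrative, not from the paper):

```python
import numpy as np

def effective_resistances(edges, n):
    """All-pairs effective resistances of an unweighted graph on n nodes,
    computed from the Moore-Penrose pseudoinverse of the graph Laplacian."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    # R[u, v] = Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
    return d[:, None] + d[None, :] - 2.0 * Lp

# Path graph 0-1-2: two unit resistors in series, so R(0, 2) = 2.
R = effective_resistances([(0, 1), (1, 2)], 3)
```

Effective resistance is proportional to commute time, so the same quantities (up to scaling) appear whether the attacker measures resistances or random walk commute times.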