


Learning Social Welfare Functions

Neural Information Processing Systems

Consider a standard decision-making setting that includes a set of possible actions (decisions or policies) and a set of individuals who assign utilities to the actions.
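As a minimal illustration of this setting (not taken from the paper), the sketch below scores actions under three classical social welfare functions applied to a small utility matrix; the utility values and the choice of the three welfare functions are illustrative assumptions:

```python
import numpy as np

# Utility matrix: rows = individuals, columns = actions (illustrative numbers).
U = np.array([[3.0, 1.0, 2.0],
              [1.0, 4.0, 2.0],
              [2.0, 2.0, 2.0]])

def utilitarian(U):
    return U.sum(axis=0)   # total utility per action

def egalitarian(U):
    return U.min(axis=0)   # worst-off individual's utility per action

def nash(U):
    return U.prod(axis=0)  # product of utilities per action

for f in (utilitarian, egalitarian, nash):
    scores = f(U)
    print(f"{f.__name__}: scores {scores}, chosen action {int(np.argmax(scores))}")
```

Different welfare functions can select different actions on the same utility matrix, which is exactly why which function to use is itself a learning target.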



Neural Information Processing Systems

For this reason, it is important to design algorithms that can maintain a stable, high-quality solution while processing updates efficiently.


Supplementary Material: Matrix Completion with Hierarchical Graph Side Information

Neural Information Processing Systems

This implies that M(δ) = T(δ); i.e., the constraint (13) made in T(δ) does not lose any generality in the matrix representation. One technical distinction relative to the previous works [2, 3] arises from the fact that, in our setting, the Hamming distances (d_{x1}(ℓ), d_{x2}(ℓ), d_{x3}(ℓ)) are defined w.r.t. We focus on the family of rating matrices {M_{hci} : c ∈ T_ℓ}. First, we present the following lemma, which guarantees the existence of two subsets of users with certain properties. The proof of this case follows the same structure as that of the grouping-limited regime. It is shown that the groups within each cluster are recovered with a vanishing fraction of errors if I_g = ω(1/n).




Neural Information Processing Systems

In addition, the underlying signal x is assumed to lie in the range of an L-Lipschitz continuous generative model with bounded k-dimensional inputs. We propose a two-step approach, in which the first step plays the role of spectral initialization and the second step iteratively refines the estimate produced by the first step. We show that both steps enjoy a statistical rate of order $\sqrt{(k \log L)(\log m)/m}$ under suitable conditions.
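A generic sketch of such a two-step scheme, shown here for noiseless phaseless measurements without a generative prior (a simplifying assumption; the paper additionally constrains the signal to a generative model's range): step one takes the top eigenvector of a data-weighted matrix as the spectral initialization, and step two refines it by projected gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 200
x_true = rng.normal(size=n)
x_true /= np.linalg.norm(x_true)            # unit-norm ground truth
A = rng.normal(size=(m, n))                  # Gaussian measurement vectors
y = (A @ x_true) ** 2                        # phaseless measurements

# Step 1: spectral initialization -- top eigenvector of (1/m) sum_i y_i a_i a_i^T.
D = (A.T * y) @ A / m
x = np.linalg.eigh(D)[1][:, -1]              # eigh sorts ascending; take last

# Step 2: iterative refinement -- gradient descent on the quadratic loss,
# projected onto the unit ball (the true signal has norm 1 here).
for _ in range(500):
    residual = (A @ x) ** 2 - y
    grad = A.T @ (residual * (A @ x)) / m
    x = x - 0.05 * grad
    x /= max(np.linalg.norm(x), 1.0)

# The sign of x is unidentifiable from squared measurements.
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print(f"estimation error: {err:.3f}")
```

The step size, iteration count, and measurement model are illustrative; the point is only the division of labor between the spectral and the refinement step.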


Supplementary File for "Stochastic Gradient Descent in Correlated Settings: A Study on Gaussian Processes"

Neural Information Processing Systems

The supplementary file is organized as follows: Section 1 restates the assumptions and the main theorems on the convergence of the parameter iterates and the full gradient; Section 2 is devoted to the proofs of the two main theorems, while Section 3 contains the proofs of the supporting lemmas; Section 4 includes additional figures from the numerical study. Under Assumptions 1.1 to 1.3, when m > C for some constant C > 0, we have the following results under two corresponding conditions on s_l(m). First, we present the following lemma, showing that the loss function has a property similar to strong convexity. For the first case discussed in Lemma 2.1, define $\tilde g(\theta^{(k)}) = (g(\theta^{(k)}))^2$, and for the second case define $\tilde g(\theta^{(k)}) = g(\theta^{(k)})$. Therefore, combining Lemma 2.1, Lemma 2.2, and (7) leads to the following conclusion. Applying (15) in Lemma 2.3 with the parameter set to 1/2, for any 0 < α < 1, with probability at least $1 - 2m^{-\alpha}$ we have $A_{11} \le 1$. In this case, we can still apply (15) in Lemma 2.3.
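As a rough illustration of SGD in a correlated (Gaussian-process) setting, the sketch below runs minibatch gradient descent on the GP negative log marginal likelihood to recover a kernel lengthscale from a single correlated sample; the RBF kernel, step size, gradient clipping, and minibatch size are illustrative assumptions, not the paper's algorithm or constants.

```python
import numpy as np

rng = np.random.default_rng(1)
n, noise, true_ls = 300, 0.1, 1.5
X = np.sort(rng.uniform(0, 10, n))
K_true = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / true_ls ** 2)
y = rng.multivariate_normal(np.zeros(n), K_true + noise ** 2 * np.eye(n))

def nll_grad(log_ls, idx):
    """d/d(log ls) of the GP negative log marginal likelihood on a minibatch."""
    ls = np.exp(log_ls)
    D2 = (X[idx][:, None] - X[idx][None, :]) ** 2
    K = np.exp(-0.5 * D2 / ls ** 2) + noise ** 2 * np.eye(len(idx))
    Kinv = np.linalg.inv(K)
    a = Kinv @ y[idx]
    dK = np.exp(-0.5 * D2 / ls ** 2) * (D2 / ls ** 2)  # dK / d(log ls)
    return 0.5 * np.trace((Kinv - np.outer(a, a)) @ dK)

log_ls = np.log(0.5)                         # deliberately misspecified start
for _ in range(600):
    idx = rng.choice(n, size=32, replace=False)
    g = float(np.clip(nll_grad(log_ls, idx), -5.0, 5.0))
    log_ls -= 0.01 * g

ls_hat = np.exp(log_ls)
print(f"estimated lengthscale: {ls_hat:.2f}")  # true value is 1.5
```

Each minibatch is a subset of one correlated sample rather than an independent draw, which is precisely the departure from the standard SGD setting that the theorems address.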


Supplementary for Neural Methods for Point-wise Dependency Estimation

Neural Information Processing Systems

Four approaches are discussed: Variational Bounds of Mutual Information, Density Matching, Probabilistic Classifier, and Density-Ratio Fitting. Proposition 3 ($I_{JS}$ and its neural estimation, restating the Jensen-Shannon bound with the f-GAN objective [22]). We adopt the "concatenate critic" design [20, 22, 23] for our neural-network-parametrized function. Note that the Probabilistic Classifier method applies a sigmoid function to the outputs to ensure probabilistic outputs. To proceed, it suffices to provide an upper bound for $\Pr_S(|l_S(\theta_k)| \ge \epsilon/2)$.
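The Probabilistic Classifier idea can be sketched in a few lines: train a logistic classifier to distinguish joint samples from shuffled (product-of-marginals) samples; its log-odds then estimate the pointwise dependency log p(x, y) / (p(x) p(y)). The Gaussian data and the hand-picked quadratic feature map below are illustrative assumptions standing in for the paper's neural critic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 4000, 0.8
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho ** 2) * rng.normal(size=n)

# Positives: true joint pairs. Negatives: shuffled pairs ~ product of marginals.
pos = np.stack([x, y], axis=1)
neg = np.stack([x, rng.permutation(y)], axis=1)

def feats(p):
    # Quadratic features are exact for jointly Gaussian (x, y) log-ratios.
    a, b = p[:, 0], p[:, 1]
    return np.stack([a, b, a * b, a ** 2, b ** 2, np.ones_like(a)], axis=1)

Z = np.vstack([feats(pos), feats(neg)])
t = np.concatenate([np.ones(n), np.zeros(n)])

w = np.zeros(Z.shape[1])
for _ in range(2000):                   # plain gradient descent on logistic loss
    p = 1 / (1 + np.exp(-Z @ w))        # sigmoid gives probabilistic outputs
    w -= 0.1 * Z.T @ (p - t) / len(t)

def log_ratio(pair):
    """Classifier log-odds = estimated log p(x, y) / (p(x) p(y))."""
    return float((feats(np.array([pair])) @ w)[0])

print(log_ratio([1.0, 1.0]), log_ratio([1.0, -1.0]))
```

Concordant pairs get a positive score and discordant pairs a negative one, matching the sign of the true pointwise dependency for positively correlated Gaussians.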


Minimax Rates for Hyperbolic Hierarchical Learning

Rawal, Divit, Vishwanath, Sriram

arXiv.org Machine Learning

We prove an exponential separation in sample complexity between Euclidean and hyperbolic representations for learning on hierarchical data under standard Lipschitz regularization. For depth-$R$ hierarchies with branching factor $m$, we first establish a geometric obstruction for Euclidean space: any bounded-radius embedding forces volumetric collapse, mapping exponentially many tree-distant points to nearby locations. This necessitates Lipschitz constants scaling as $\exp(Ω(R))$ to realize even simple hierarchical targets, yielding exponential sample complexity under capacity control. We then show this obstruction vanishes in hyperbolic space: constant-distortion hyperbolic embeddings admit $O(1)$-Lipschitz realizability, enabling learning with $n = O(mR \log m)$ samples. A matching $Ω(mR \log m)$ lower bound via Fano's inequality establishes that hyperbolic representations achieve the information-theoretic optimum. We also show a geometry-independent bottleneck: any rank-$k$ prediction space captures only $O(k)$ canonical hierarchical contrasts.
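The geometric contrast can be seen numerically: in the Poincaré ball model, points with bounded Euclidean coordinates are pushed arbitrarily far apart hyperbolically as they approach the boundary, which is the room that bounded-radius Euclidean embeddings lack. A minimal sketch (the radii and point placement are illustrative, not the paper's construction):

```python
import numpy as np

def poincare_dist(u, v):
    """Geodesic distance in the Poincaré ball model of hyperbolic space."""
    duv = np.dot(u - v, u - v)
    denom = (1 - np.dot(u, u)) * (1 - np.dot(v, v))
    return np.arccosh(1 + 2 * duv / denom)

# m points on a circle of radius r < 1: their Euclidean coordinates stay
# bounded, yet the hyperbolic distance between neighbors grows as r -> 1.
m = 8
dists = {}
for r in (0.5, 0.9, 0.99):
    p0 = r * np.array([1.0, 0.0])
    p1 = r * np.array([np.cos(2 * np.pi / m), np.sin(2 * np.pi / m)])
    dists[r] = poincare_dist(p0, p1)
    print(f"r={r}: Euclidean gap {np.linalg.norm(p0 - p1):.2f}, "
          f"hyperbolic distance {dists[r]:.2f}")
```

Exponentially many tree leaves can therefore be kept mutually far apart inside the unit ball, so an O(1)-Lipschitz target on hyperbolic representations can separate them without the exp(Ω(R)) Lipschitz blow-up forced in Euclidean space.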


On the Capacity of Self-Attention

Adler, Micah

arXiv.org Artificial Intelligence

While self-attention is known to learn relations among tokens, we lack a formal understanding of its capacity: how many distinct relations can a single layer reliably recover for a given budget? To formalize this, we introduce Relational Graph Recognition (RGR), where the key-query channel represents a graph on $m$ items with $m'$ directed edges, and, given a context of items, must recover the neighbors of each item. We measure resources by the total key dimension $D_K = h\,d_k$. Within this framework, we analytically derive a capacity scaling law and validate it empirically. We show that $D_K = Θ(m' \log m' / d_{\text{model}})$ is both necessary (information-theoretic lower bound) and sufficient (explicit construction) in a broad class of graphs to recover $m'$ relations. This scaling law directly leads to a new, capacity-based rationale for multi-head attention that applies even when each item only attends to a single target. When embeddings are uncompressed ($m = d_{\text{model}}$) and the graph is a permutation, a single head suffices. However, compression ($m > d_{\text{model}}$) forces relations into overlapping subspaces, creating interference that a single large head cannot disentangle. Our analysis shows that allocating a fixed $D_K$ across many small heads mitigates this interference, increasing the number of recoverable relations. Controlled single-layer experiments mirror the theory, revealing a sharp performance threshold that matches the predicted capacity scaling and confirms the benefit of distributing $D_K$ across multiple heads. Altogether, these results provide a concrete scaling law for self-attention capacity and a principled design rule for allocating key-query budget across heads.
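The uncompressed permutation case can be checked directly: with one-hot embeddings ($m = d_{\text{model}}$) and $W_K$ chosen as the transpose of the permutation's adjacency matrix, a single head's attention argmax recovers every relation. The specific permutation and inverse temperature below are illustrative choices, not values from the paper.

```python
import numpy as np

m = 6                                   # uncompressed: d_model = m
perm = np.array([2, 0, 3, 5, 1, 4])     # item i's single neighbor is perm[i]
E = np.eye(m)                           # one-hot item embeddings

P = np.eye(m)[perm]                     # adjacency matrix of the permutation
W_Q, W_K = np.eye(m), P.T               # one head realizes the whole graph

beta = 10.0                             # inverse temperature sharpens softmax
S = beta * (E @ W_Q) @ (E @ W_K).T      # logits: S[i, j] = beta * P[i, j]
A = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)

recovered = A.argmax(axis=1)
print(recovered)                        # -> [2 0 3 5 1 4], i.e. perm
```

Once $m > d_{\text{model}}$ the embeddings can no longer be orthogonal, the logits of distinct relations interfere, and this single-head construction breaks down, which is where the multi-head capacity argument takes over.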