Major leap towards reanimation after death as mammal's brain preserved

New Scientist

A pig's brain has been frozen with its cellular activity locked in place and minimal damage. Could our brains one day be preserved in a way that locks in our thoughts, feelings and perceptions? An entire mammalian brain has been successfully preserved using a technique that will now be offered to people who are terminally ill. The intention is to preserve all the neural information thought necessary to one day reconstruct the mind of the person it once belonged to. "They would need to donate their brain and body for scientific research," says Borys Wróbel at Nectome in San Francisco, California, a research company focused on memory preservation.


Communication-Efficient Distributed Learning of Discrete Distributions

Neural Information Processing Systems

We initiate a systematic investigation of distribution learning (density estimation) when the data is distributed across multiple servers. The servers must communicate with a referee and the goal is to estimate the underlying distribution with as few bits of communication as possible. We focus on non-parametric density estimation of discrete distributions with respect to the l1 and l2 norms. We provide the first non-trivial upper and lower bounds on the communication complexity of this basic estimation task in various settings of interest. Specifically, our results include the following: 1. When the unknown discrete distribution is unstructured and each server has only one sample, we show that any blackboard protocol (i.e., any protocol in which servers interact arbitrarily using public messages) that learns the distribution must essentially communicate the entire sample.
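A minimal sketch of the simplest protocol in this setting: each server holds one sample and communicates it in full to the referee, who forms the empirical distribution as the estimate. The `referee_estimate` function and the sample values are illustrative assumptions, not from the paper; the abstract's lower bound says that for unstructured distributions, blackboard protocols cannot do essentially better than this.

```python
import collections

def referee_estimate(server_samples, domain_size):
    """Each server sends its single sample (about log2(domain_size) bits);
    the referee returns the empirical distribution as the estimate."""
    counts = collections.Counter(server_samples)
    n = len(server_samples)
    return [counts.get(i, 0) / n for i in range(domain_size)]

# Hypothetical run: 8 servers, one sample each, over the domain {0, 1, 2}
samples = [0, 1, 1, 2, 0, 1, 2, 1]
est = referee_estimate(samples, 3)  # → [0.25, 0.5, 0.25]
```

The communication cost here is n * log2(k) bits for n servers and domain size k; the interesting regimes in the paper are the structured settings where less suffices.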


Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols

Neural Information Processing Systems

Learning to communicate through interaction, rather than relying on explicit supervision, is often considered a prerequisite for developing a general AI. We study a setting where two agents engage in playing a referential game and, from scratch, develop a communication protocol necessary to succeed in this game. Unlike previous work, we require that messages they exchange, both at train and test time, are in the form of a language (i.e.
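To make the referential-game setting concrete, here is a toy sketch with a hand-coded (not learned) protocol: a sender maps a target object to a sequence of discrete symbols, and a receiver must pick that target out of a candidate set. The base-`vocab_size` encoding and the `sender`/`receiver` names are assumptions for illustration; in the paper, agents learn such a protocol from scratch rather than being given one.

```python
def sender(target, vocab_size=4, msg_len=3):
    """Encode the target's identity as the digits of its index
    in base `vocab_size` (a fixed stand-in for a learned policy)."""
    msg, t = [], target
    for _ in range(msg_len):
        msg.append(t % vocab_size)
        t //= vocab_size
    return msg

def receiver(message, candidates, vocab_size=4):
    """Decode the symbol sequence and point at the matching candidate."""
    idx = sum(s * vocab_size ** i for i, s in enumerate(message))
    return candidates.index(idx) if idx in candidates else -1

candidates = [7, 2, 9]
msg = sender(9)                      # a sequence of discrete symbols
choice = receiver(msg, candidates)   # → 2 (the position of target 9)
```

The game succeeds when `choice` indexes the intended target; the learning problem is to acquire a `sender`/`receiver` pair like this from the success signal alone.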


Reservoir Subspace Injection for Online ICA under Top-n Whitening

Xiao, Wenjun, Bi, Yuda, Calhoun, Vince D

arXiv.org Machine Learning

Reservoir expansion can improve online independent component analysis (ICA) under nonlinear mixing, yet top-$n$ whitening may discard injected features. We formalize this bottleneck as \emph{reservoir subspace injection} (RSI): injected features help only if they enter the retained eigenspace without displacing passthrough directions. RSI diagnostics (IER, SSO, $\rho_x$) identify a failure mode in our top-$n$ setting: stronger injection increases IER but crowds out passthrough energy ($\rho_x: 1.00\!\rightarrow\!0.77$), degrading SI-SDR by up to $2.2$\,dB. A guarded RSI controller preserves passthrough retention and recovers mean performance to within $0.1$\,dB of baseline $1/N$ scaling. With passthrough preserved, RE-OICA improves over vanilla online ICA by $+1.7$\,dB under nonlinear mixing and achieves positive SI-SDR$_{\mathrm{sc}}$ on the tested super-Gaussian benchmark ($+0.6$\,dB).
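The bottleneck the abstract describes can be seen in a toy version of top-$n$ whitening. This sketch assumes a diagonal covariance so that the eigenvectors are the coordinate axes; it keeps only the $n$ highest-variance coordinates and rescales them to unit variance, so a weakly scaled injected feature simply falls outside the retained eigenspace. The function name and data are illustrative assumptions, not the paper's RE-OICA pipeline.

```python
def topn_whiten(X, n):
    """Toy top-n whitening for diagonal covariance: keep the n
    highest-variance coordinates and rescale each to unit variance.
    Low-variance (e.g. weakly injected) coordinates are discarded."""
    d, m = len(X[0]), len(X)
    means = [sum(x[j] for x in X) / m for j in range(d)]
    vars_ = [sum((x[j] - means[j]) ** 2 for x in X) / m for j in range(d)]
    keep = sorted(range(d), key=lambda j: -vars_[j])[:n]
    return [[(x[j] - means[j]) / vars_[j] ** 0.5 for j in keep] for x in X]

# The weakly scaled third column (an "injected" feature) is dropped by top-2
X = [[1.0, 10.0, 0.01], [3.0, 30.0, 0.02], [5.0, 20.0, 0.03]]
Y = topn_whiten(X, 2)  # only the two high-variance columns survive
```

This is exactly the failure mode RSI guards against: an injected direction helps only if it carries enough energy to enter the retained eigenspace, and boosting it too hard instead crowds out the passthrough directions.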






e13a3071bd0aeb97ce41b2da921dfdb6-Paper-Datasets_and_Benchmarks_Track.pdf

Neural Information Processing Systems

Significant progress has been made in the past decade thanks to the availability of pedestrian trajectory datasets, which enable trajectory prediction methods to learn from pedestrians' past movements and predict future trajectories. However, these datasets and methods typically assume that the observed trajectory sequence is complete, ignoring real-world issues such as sensor failure, occlusion, and limited fields of view that can result in missing values in observed trajectories.
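A minimal sketch of the problem: an observed trajectory with gaps (here, `None` entries from occlusion or sensor dropout) cannot be fed directly to a predictor that assumes complete input. A naive baseline is linear interpolation over the gaps, assuming the first and last observations are present; the function name and data are illustrative, not from the benchmark.

```python
def interpolate_gaps(traj):
    """Fill None gaps in an observed 1-D trajectory by linear
    interpolation, a naive baseline before running a predictor.
    Assumes the first and last entries are observed."""
    out = list(traj)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while out[j] is None:  # find the next observed point
                j += 1
            left, right = out[i - 1], out[j]
            for k in range(i, j):
                frac = (k - i + 1) / (j - i + 1)
                out[k] = left + frac * (right - left)
            i = j
        i += 1
    return out

obs = [0.0, None, None, 3.0, 4.0]      # two frames lost to occlusion
filled = interpolate_gaps(obs)          # → [0.0, 1.0, 2.0, 3.0, 4.0]
```

Benchmarks like the one described evaluate how much better learned imputation-plus-prediction can do than such straight-line fills.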


Reranking Laws for Language Generation: A Communication-Theoretic Perspective

Neural Information Processing Systems

To ensure large language models (LLMs) are used safely, one must reduce their propensity to hallucinate or to generate unacceptable answers. A simple and often used strategy is to first let the LLM generate multiple hypotheses and then employ a reranker to choose the best one.
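The generate-then-rerank strategy can be sketched in a few lines. The `generate` and `score` callables here are hypothetical stand-ins (a cycling toy "LLM" and a length-based reranker), not any real model API; the point is only the control flow of sampling several hypotheses and keeping the highest-scoring one.

```python
import itertools

def generate_then_rerank(prompt, generate, score, n=4):
    """Sample n hypotheses from the generator and return the one
    the reranker scores highest."""
    hypotheses = [generate(prompt) for _ in range(n)]
    return max(hypotheses, key=score)

# Toy stand-ins: a cycling "generator" and a length-based "reranker"
outputs = itertools.cycle(["maybe", "certainly correct", "no"])
best = generate_then_rerank("q", lambda p: next(outputs), score=len, n=3)
# → "certainly correct"
```

The paper's communication-theoretic view treats the generator as a noisy channel and asks how the failure probability of this scheme decays as `n` grows.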