Decentralized Randomly Distributed Multi-agent Multi-armed Bandit with Heterogeneous Rewards
We study a decentralized multi-agent multi-armed bandit problem in which multiple clients are connected by time-dependent random graphs provided by an environment. The reward distributions of each arm vary across clients, and rewards are generated independently over time by an environment based on distributions that include both sub-exponential and sub-Gaussian distributions. Each client pulls an arm and communicates with neighbors based on the graph provided by the environment. The goal is to minimize the overall regret of the entire system through collaboration. To this end, we introduce a novel algorithmic framework, which first provides robust simulation methods for generating random graphs using rapidly mixing Markov chains or the random graph model, and then combines an averaging-based consensus approach with a newly proposed weighting technique and the upper confidence bound to deliver a UCB-type solution. Our algorithms account for the randomness in the graphs, removing the conventional double-stochasticity assumption, and require only knowledge of the number of clients at initialization. We derive optimal instance-dependent regret upper bounds of order log T in both sub-Gaussian and sub-exponential environments, and a nearly optimal mean-gap-independent regret upper bound of order √T log T, i.e., optimal up to a log T factor. Importantly, our regret bounds hold with high probability and capture graph randomness, whereas prior works consider expected regret under stronger assumptions and more stringent reward distributions.
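The framework described above combines UCB exploration with averaging-based consensus over environment-generated random graphs. Below is a minimal Python sketch under simplifying assumptions: Bernoulli rewards, Erdős–Rényi-style graphs, and plain row-stochastic neighbor averaging standing in for the paper's weighting technique; all names and parameters here are illustrative, not the authors' implementation.

```python
# Toy decentralized UCB with consensus averaging over random graphs.
# Assumptions (ours, not the paper's): Bernoulli rewards, Erdos-Renyi
# communication graphs, row-stochastic (not doubly stochastic) weights.
import numpy as np

rng = np.random.default_rng(0)
N, K, T, p_edge = 4, 3, 2000, 0.5           # clients, arms, horizon, edge prob.
means = rng.uniform(0.1, 0.9, size=(N, K))  # heterogeneous per-client arm means

est = np.zeros((N, K))   # each client's running estimate of the arm means
cnt = np.ones((N, K))    # pull counts (start at 1 to avoid division by zero)

for t in range(1, T + 1):
    # every client pulls its UCB-maximizing arm and observes a local reward
    ucb = est + np.sqrt(2.0 * np.log(t) / cnt)
    for i, a in enumerate(ucb.argmax(axis=1)):
        r = float(rng.random() < means[i, a])       # Bernoulli reward draw
        cnt[i, a] += 1
        est[i, a] += (r - est[i, a]) / cnt[i, a]    # incremental average

    # the environment draws a fresh random graph; neighbors average estimates
    A = np.triu(rng.random((N, N)) < p_edge, 1)
    A = A | A.T | np.eye(N, dtype=bool)             # undirected, with self-loops
    W = A / A.sum(axis=1, keepdims=True)            # row-stochastic weights
    est = W @ est                                   # one consensus step

print("per-client arm choices after consensus:", est.argmax(axis=1))
```

Row-stochastic averaging suffices in this sketch precisely because the double-stochasticity requirement is dropped, mirroring in spirit the relaxation the abstract claims.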
Trial matching: capturing variability with data-constrained spiking neural networks
Simultaneous behavioral and electrophysiological recordings call for new methods to reveal the interactions between neural activity and behavior. A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials. Here, we model a mouse cortical sensory-motor pathway in a tactile detection task, reported by licking, with a large recurrent spiking neural network (RSNN) fitted to the recordings via gradient-based optimization. We focus specifically on the difficulty of matching the trial-to-trial variability in the data. Our solution relies on optimal transport to define a distance between the distributions of generated and recorded trials. The technique is applied to artificial data and to neural recordings covering six cortical areas. We find that the resulting RSNN can generate realistic cortical activity and predict jaw movements across the main modes of trial-to-trial variability. Our analysis also identifies an unexpected mode of variability in the data, corresponding to task-irrelevant movements of the mouse.
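The core ingredient is an optimal-transport distance between the set of generated trials and the set of recorded trials. A hedged Python sketch for the equal-count, uniform-weight case, where optimal transport reduces to a hard one-to-one matching solved by the Hungarian algorithm (the Poisson trial features below are synthetic placeholders, not the paper's data pipeline):

```python
# Trial-matching distance between recorded and generated trials via optimal
# transport; with equal trial counts and uniform weights this is an optimal
# one-to-one assignment. Feature choice and data here are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n_trials, n_bins = 50, 100
recorded = rng.poisson(3.0, size=(n_trials, n_bins)).astype(float)
generated = rng.poisson(3.2, size=(n_trials, n_bins)).astype(float)

cost = cdist(recorded, generated, metric="sqeuclidean")  # pairwise trial costs
rows, cols = linear_sum_assignment(cost)                 # optimal hard matching
distance = cost[rows, cols].mean()                       # trial-matching loss
print(f"trial-matching distance: {distance:.2f}")
```

In a gradient-based fitting loop, one option is to hold the assignment fixed, backpropagate through the matched costs into the generator, then re-solve the matching, alternating the two steps.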
A Proofs
Let (x, y) be a binary classification task that admits a smooth separator as in Assumption 1. Since sgn is nonpolynomial, we can use it as the nonlinearity of this network [21]. Note that this means we can achieve zero error in classification, N(x) = y(x) for all x ∈ X. We further take the usual assumption that X is compact. Now let (x, y) be a G-invariant task with a smooth separator as in Assumption 1. A network satisfying Assumption 2 is then both probabilistically G-invariant and universal on (x, y). For the reader more familiar with group representation theory, the result follows immediately from noting that compact groups admit unitary representations; see [28] for a good resource on the matter.
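The zero-error step is a standard margin argument on the compact domain; a brief sketch in our notation (the separator f, margin m, and approximant N_f are not named in the original): continuity of f on compact X gives a positive margin, and any uniform approximation within that margin preserves the sign everywhere.

```latex
% Margin argument (our notation): y(x) = sgn(f(x)) for a smooth separator f.
\[
  m := \min_{x \in X} |f(x)| > 0, \qquad
  \sup_{x \in X} \bigl| N_f(x) - f(x) \bigr| < m
  \;\Longrightarrow\;
  \operatorname{sgn}\bigl(N_f(x)\bigr) = \operatorname{sgn}\bigl(f(x)\bigr) = y(x)
  \quad \text{for all } x \in X.
\]
```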
Replicability in Reinforcement Learning
We initiate the mathematical study of replicability as an algorithmic property in the context of reinforcement learning (RL). We focus on the fundamental setting of discounted tabular MDPs with access to a generative model. Inspired by Impagliazzo et al. [2022], we say that an RL algorithm is replicable if, with high probability, it outputs the exact same policy after two executions on i.i.d. samples drawn from the generator when its internal randomness is the same.
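The definition is operational: two runs share internal coins but see independent sample draws. A minimal Python sketch of the property under toy assumptions (the synthetic model, the rounding-based algorithm, and all names are ours, not the paper's):

```python
# Toy illustration of replicability: two executions on independent i.i.d.
# sample draws, with identical internal randomness, should output the exact
# same policy. The model and algorithm below are illustrative placeholders.
import numpy as np

def sample_model(rng, n, n_states=3, n_actions=2):
    # i.i.d. samples from a synthetic generative model
    return rng.normal(size=(n, n_states, n_actions))

def rl_algorithm(samples, internal_seed):
    # shared internal coins pick a random rounding offset; coarse rounding
    # absorbs sampling noise so both runs land on the same discretized output
    coins = np.random.default_rng(internal_seed)
    offset = coins.uniform(0.0, 1.0)
    q_hat = samples.mean(axis=0)                 # empirical Q-style estimates
    q_rounded = np.floor(q_hat / 0.5 + offset)   # randomized coarse rounding
    return q_rounded.argmax(axis=1)              # deterministic greedy policy

data_rng = np.random.default_rng()
policy1 = rl_algorithm(sample_model(data_rng, 10_000), internal_seed=7)
policy2 = rl_algorithm(sample_model(data_rng, 10_000), internal_seed=7)
print("identical policies on this draw:", np.array_equal(policy1, policy2))
```

Randomized rounding on a shared grid is one standard device for achieving replicability; it appears here only to make the definition concrete, not as the paper's construction.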