Plotting


Neural Information Processing Systems

Deep neural networks have achieved impressive performance in many areas. Designing a fast and provable method for training neural networks is a fundamental question in machine learning. The classical training method requires paying Ω(mnd) cost for both forward computation and backward computation, where m is the width of the neural network, and we are given n training points in d-dimensional space.
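The Ω(mnd) term can be seen directly in a minimal numpy sketch, assuming a one-hidden-layer ReLU network of width m evaluated on n points in d dimensions (all names and sizes here are illustrative, not from the paper):

```python
import numpy as np

# One-hidden-layer ReLU network: the forward pass multiplies an (n, d)
# data matrix by a (d, m) weight matrix, costing on the order of
# n * m * d multiply-adds -- the Omega(mnd) term from the abstract.
# Backpropagation pays a cost of the same order again.
n, d, m = 8, 4, 16
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))   # n training points in d-dim space
W = rng.standard_normal((d, m))   # hidden-layer weights, width m
a = rng.standard_normal(m)        # output-layer weights

hidden = np.maximum(X @ W, 0.0)   # ReLU activations, ~n*m*d flops
output = hidden @ a               # one scalar prediction per example

print(output.shape)               # (n,)
```
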


Q-Distribution guided Q-learning for offline reinforcement learning: Uncertainty penalized Q-value via consistency model

Neural Information Processing Systems

"Distribution shift" is the main obstacle to the success of offline reinforcement learning. A learning policy may take actions beyond the behavior policy's knowledge, referred to as Out-of-Distribution (OOD) actions. The Q-values for these OOD actions can easily be overestimated, and as a result the learning policy is biased by incorrect Q-value estimates. One common approach to avoiding Q-value overestimation is to make a pessimistic adjustment.
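One common form of pessimistic adjustment penalizes each Q-estimate by an uncertainty term. A minimal sketch, using ensemble disagreement as the uncertainty proxy (the function name, `beta`, and the ensemble values are illustrative, not the paper's method):

```python
import numpy as np

# Pessimistic Q-value adjustment: subtract an uncertainty penalty (here,
# the std-dev across an ensemble of Q-estimates) from the ensemble mean,
# so OOD actions on which the ensemble disagrees are downweighted.
def pessimistic_q(q_ensemble, beta=1.0):
    q_ensemble = np.asarray(q_ensemble, dtype=float)  # (ensemble, actions)
    mean = q_ensemble.mean(axis=0)
    uncertainty = q_ensemble.std(axis=0)
    return mean - beta * uncertainty

# Action 0 is in-distribution (ensemble agrees); action 1 looks OOD
# (ensemble disagrees wildly despite a higher mean estimate).
q = [[1.0, 5.0],
     [1.1, 0.0],
     [0.9, 4.0]]
print(pessimistic_q(q))   # the penalized score of action 1 drops below action 0
```
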




Neural Information Processing Systems

When evaluating stimuli reconstruction results it is tempting to assume that higher fidelity text and image generation is due to an improved understanding of the brain or more powerful signal extraction from neural recordings. However, in practice, new reconstruction methods could improve performance for at least three other reasons: learning more about the distribution of stimuli, becoming better at reconstructing text or images in general, or exploiting weaknesses in current image and/or text evaluation metrics. Here we disentangle how much of the reconstruction is due to these other factors vs. productively using the neural recordings. We introduce BrainBits, a method that uses a bottleneck to quantify the amount of signal extracted from neural recordings that is actually necessary to reproduce a method's reconstruction fidelity. We find that it takes surprisingly little information from the brain to produce reconstructions with high fidelity. In these cases, it is clear that the priors of the methods' generative models are so powerful that the outputs they produce extrapolate far beyond the neural signal they decode. Given that reconstructing stimuli can be improved independently by either improving signal extraction from the brain or by building more powerful generative models, improving the latter may fool us into thinking we are improving the former. We propose that methods should report a method-specific random baseline, a reconstruction ceiling, and a curve of performance as a function of bottleneck size, with the ultimate goal of using more of the neural recordings.
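The bottleneck idea can be sketched in a few lines: squeeze the neural recordings through a rank-k linear bottleneck before decoding, and trace reconstruction fidelity as a function of k. This is an illustrative stand-in (synthetic data, PCA bottleneck, linear decoder, R² as "fidelity"), not the authors' pipeline:

```python
import numpy as np

# BrainBits-style bottleneck sweep on synthetic data: recordings carry
# low-dimensional stimuli through a random mixing matrix plus noise.
rng = np.random.default_rng(1)
n, voxels, stim_dim = 200, 50, 5
stimuli = rng.standard_normal((n, stim_dim))
recordings = stimuli @ rng.standard_normal((stim_dim, voxels)) \
    + 0.1 * rng.standard_normal((n, voxels))

def r2_at_bottleneck(k):
    # Rank-k bottleneck: keep the top-k principal directions of the
    # recordings, then fit a linear "decoder" from those k features.
    U, S, Vt = np.linalg.svd(recordings - recordings.mean(0),
                             full_matrices=False)
    Z = U[:, :k] * S[:k]                             # k-dim bottleneck
    W, *_ = np.linalg.lstsq(Z, stimuli, rcond=None)  # linear decoder
    pred = Z @ W
    ss_res = ((stimuli - pred) ** 2).sum()
    ss_tot = ((stimuli - stimuli.mean(0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Fidelity as a function of bottleneck size: it saturates once k covers
# the true signal dimension, mirroring the curve the paper asks for.
curve = [r2_at_bottleneck(k) for k in (1, 2, 5, 10)]
print(curve)
```
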


Online Active Learning with Surrogate Loss Functions

Neural Information Processing Systems

We derive a novel active learning algorithm in the streaming setting for binary classification tasks. The algorithm leverages weak labels to minimize the number of label requests, and trains a model to optimize a surrogate loss on the resulting set of labeled and weak-labeled points. Our algorithm jointly admits two crucial properties: theoretical guarantees in the general agnostic setting and strong empirical performance. Our theoretical analysis shows that the algorithm attains favorable generalization and label complexity bounds, while our empirical study on 18 real-world datasets demonstrates that the algorithm outperforms standard baselines, including the Margin Algorithm (also known as Uncertainty Sampling), a high-performing active learning algorithm favored by practitioners.
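For context, the Margin/Uncertainty Sampling baseline mentioned above can be sketched as follows (this is the baseline, not the paper's algorithm; the threshold and probabilities are illustrative):

```python
# Uncertainty Sampling in a stream: request a true label only when the
# current model is uncertain, i.e. its predicted probability of the
# positive class falls within `margin` of the 0.5 decision boundary.
def should_query(prob_positive, margin=0.2):
    return abs(prob_positive - 0.5) < margin

# Predicted probabilities for points arriving in the stream.
stream_probs = [0.95, 0.55, 0.40, 0.05, 0.62]
queries = [p for p in stream_probs if should_query(p)]
print(queries)   # only the uncertain points trigger a label request
```
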



Improving Sparse Vector Technique with Renyi Differential Privacy (Yu-Xiang Wang, Department of Computer Science, UC Santa Barbara)

Neural Information Processing Systems

The Sparse Vector Technique (SVT) is one of the most fundamental algorithmic tools in differential privacy (DP). It also plays a central role in the state-of-the-art algorithms for adaptive data analysis and model-agnostic private learning. In this paper, we revisit SVT through the lens of Renyi differential privacy, which results in new privacy bounds, new theoretical insights and new variants of SVT algorithms. A notable example is a Gaussian mechanism version of SVT, which provides better utility over the standard (Laplace-mechanism-based) version thanks to its more concentrated noise. Extensive empirical evaluation demonstrates the merits of Gaussian SVT over the Laplace SVT and other alternatives, which encouragingly suggests that using Gaussian SVT as a drop-in replacement could make SVT-based algorithms more practical in downstream tasks.
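The mechanics of a Gaussian-noise SVT can be sketched as follows: compare each noisy query to a noisy threshold, release only above/below bits, and halt after the first "above". This is an illustrative sketch with uncalibrated noise scales, not a privacy-accounted DP implementation from the paper:

```python
import numpy as np

# Gaussian SVT sketch (c = 1 variant): Gaussian noise on both the
# threshold and each query; only the above/below answers are released,
# and the mechanism halts at the first "above".
def gaussian_svt(queries, threshold, sigma=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    noisy_threshold = threshold + rng.normal(0, sigma)
    answers = []
    for q in queries:
        if q + rng.normal(0, 2 * sigma) > noisy_threshold:
            answers.append(True)   # "above": stop here
            break
        answers.append(False)      # "below": keep scanning
    return answers

print(gaussian_svt([0.1, 0.2, 5.0, 0.3], threshold=2.0,
                   rng=np.random.default_rng(0)))
```

With small noise the third query (5.0) clears the threshold of 2.0 and the mechanism stops, never touching the fourth query.
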


Robot Talk Episode 108 – Giving robots the sense of touch, with Anuradha Ranasinghe

Robohub

Anuradha Ranasinghe earned her PhD in robotics from King's College London in 2015, focusing on haptic-based human control in low-visibility conditions. She is now a senior lecturer in robotics at Liverpool Hope University, researching haptics, miniaturized sensors, and perception. Her work has received national and international media attention, including features by EPSRC, CBS Radio, Liverpool Echo, and Techxplore. She has published in leading robotics conferences and journals, and she has presented her findings at various international conferences.


Robot Talk Episode 107 – Animal-inspired robot movement, with Robert Siddall

Robohub

Claire chatted to Robert Siddall from the University of Surrey about novel robot designs inspired by the way real animals move. Robert Siddall is an aerospace engineer with an enthusiasm for unconventional robotics. He is interested in understanding animal locomotion for the benefit of synthetic locomotion, particularly flight. Before becoming a Lecturer at the University of Surrey, he worked at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany, where he studied the arboreal acrobatics of rainforest-dwelling reptiles. His work focuses on the design of novel robots that can tackle important environmental problems.


Robot Talk Episode 109 – Building robots at home, with Dan Nicholson

Robohub

Claire chatted to Dan Nicholson from Maker Forge about creating open source robotics projects you can do at home. Dan Nicholson is a seasoned Software Engineering Manager with over 20 years of experience as a software engineer and architect. Four years ago, he began exploring robotics as a hobby, which quickly evolved into a large-scale bipedal robotics project that has inspired a wide audience. After making the project open-source and 3D printable, Dan built a vibrant community around it, with over 25k followers. Dan shares insights and project details while collaborating with partners and fellow makers to continue expanding the project's impact.