Sparse Deep Learning: A New Framework Immune to Local Traps and Miscalibration

Neural Information Processing Systems

We first define the equivalence class of neural network parameters. Remark on the notation: ν() is similar to ν() defined in Section 2.1 of the main text. In what follows, we will use ν(β) and ν(γ) to denote the connection weight and network structure of ν(β, γ), respectively. The proof of Theorem 2.2 can be done using the same strategy as that used in proving Theorem 2.1. Here we provide a simpler proof using the result of Theorem 2.1.
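
Since the excerpt references the equivalence class without restating its definition, one plausible form (an assumption modeled on the notation above, not a quote from the paper; μ denotes the assumed network output function) is:

```latex
% Hedged sketch: one plausible definition of the equivalence relation on
% network parameters (beta = weights, gamma = structure); the exact
% statement is given in Section 2.1 of the main text.
\nu(\beta,\gamma) \sim \nu(\tilde\beta,\tilde\gamma)
\quad\Longleftrightarrow\quad
\mu(\beta,\gamma,x) = \mu(\tilde\beta,\tilde\gamma,x)
\ \ \text{for all inputs } x,
% i.e., two (weight, structure) pairs are equivalent when they induce the
% same output function, so Theorem 2.2 can reuse Theorem 2.1 on any
% representative of the class.
```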


Sparse Deep Learning: A New Framework Immune to Local Traps and Miscalibration

Neural Information Processing Systems

Deep learning has powered recent successes of artificial intelligence (AI). However, the deep neural network, as the basic model of deep learning, has suffered from issues such as local traps and miscalibration. In this paper, we provide a new framework for sparse deep learning, which addresses the above issues in a coherent way. In particular, we lay down a theoretical foundation for sparse deep learning and propose prior annealing algorithms for learning sparse neural networks. The former has successfully tamed the sparse deep neural network into the framework of statistical modeling, enabling prediction uncertainty to be correctly quantified. The latter is asymptotically guaranteed to converge to the global optimum, ensuring the validity of the downstream statistical inference. Numerical results indicate the superiority of the proposed method over existing ones.
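
As a rough illustration of the prior-annealing idea (a minimal sketch under assumed details such as the mixture-Gaussian prior form, the annealing schedule, and all hyperparameters; not the paper's actual algorithm), the prior on each weight is gradually sharpened toward a sparsity-inducing limit during training, after which small weights can be pruned:

```python
import numpy as np

def mixture_log_prior(w, sigma0, sigma1, lam):
    """Log density of a spike-and-slab style Gaussian mixture prior on a weight w.
    sigma0 (small) pulls weights toward zero; sigma1 (large) is the slab;
    lam is the prior inclusion probability. All values here are illustrative."""
    spike = (1.0 - lam) * np.exp(-w ** 2 / (2 * sigma0 ** 2)) / (np.sqrt(2 * np.pi) * sigma0)
    slab = lam * np.exp(-w ** 2 / (2 * sigma1 ** 2)) / (np.sqrt(2 * np.pi) * sigma1)
    return np.log(spike + slab)

def annealed_sigma0(step, total_steps, start=1e-1, end=1e-3):
    """Geometrically anneal the spike standard deviation over training."""
    return start * (end / start) ** (step / total_steps)

def prior_annealing_step(w, data_grad, step, total_steps, lr=1e-3,
                         sigma1=1.0, lam=1e-2, eps=1e-4):
    """One SGD step on (data-fit loss minus log prior), with the prior
    sharpened according to the annealing schedule. data_grad is the gradient
    of the data-fit loss at w."""
    sigma0 = annealed_sigma0(step, total_steps)
    # Numerical gradient of the log prior (sketch only; a real implementation
    # would use the closed-form gradient or automatic differentiation).
    prior_grad = (mixture_log_prior(w + eps, sigma0, sigma1, lam)
                  - mixture_log_prior(w - eps, sigma0, sigma1, lam)) / (2 * eps)
    return w - lr * (data_grad - prior_grad)
```

After training, weights whose magnitude falls below a chosen threshold would be set to zero to obtain the sparse network structure.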


HourVideo: 1-Hour Video-Language Understanding

Neural Information Processing Systems

Our dataset consists of a novel task suite comprising summarization, perception (recall, tracking), visual reasoning (spatial, temporal, predictive, causal, counterfactual), and navigation (room-to-room, object retrieval) tasks. HourVideo includes 500 manually curated egocentric videos from the Ego4D dataset, spanning durations of 20 to 120 minutes, and features 12,976 high-quality, five-way multiple-choice questions. Benchmarking results reveal that multimodal models, including GPT-4 and LLaVA-NeXT, achieve marginal improvements over random chance. In stark contrast, human experts significantly outperform the state-of-the-art long-context multimodal model, Gemini Pro 1.5 (85.0% vs. 37.3%), highlighting a substantial gap in multimodal capabilities. Our benchmark, evaluation toolkit, prompts, and documentation are available at hourvideo.stanford.edu.
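
For context on "marginal improvements over random chance," a minimal sketch of five-way multiple-choice scoring against the 20% chance baseline (the prediction/answer encoding here is an assumption, not the benchmark's actual schema or evaluation toolkit):

```python
import random

def score_mcq(predictions, answers, num_options=5):
    """Accuracy on multiple-choice questions, plus the random-chance baseline."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    accuracy = correct / len(answers)
    chance = 1.0 / num_options  # 20% for five-way questions
    return accuracy, chance

# Usage sketch: random guessing over 12,976 five-way questions hovers near 20%.
answers = [random.randrange(5) for _ in range(12976)]
guesses = [random.randrange(5) for _ in range(12976)]
acc, chance = score_mcq(guesses, answers)
print(f"accuracy={acc:.3f} vs chance={chance:.3f}")
```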


Understanding Emergent Abilities of Language Models from the Loss Perspective

Neural Information Processing Systems

Recent studies have called into question the belief that emergent abilities [58] in language models are exclusive to large models. This skepticism arises from two observations: 1) smaller models can also exhibit high performance on emergent abilities, and 2) there is doubt about the discontinuous metrics used to measure these abilities. In this paper, we propose to study emergent abilities through the lens of pre-training loss, instead of model size or training compute. We demonstrate that Transformer models with the same pre-training loss, but different model and data sizes, achieve the same performance on various downstream tasks, given a fixed data corpus, tokenization, and model architecture. We also discover that a model exhibits emergent abilities on certain tasks--regardless of the continuity of metrics--when its pre-training loss falls below a specific threshold. Before reaching this threshold, its performance remains at the level of random guessing. This inspires us to redefine emergent abilities as those that manifest in models with lower pre-training losses, highlighting that these abilities cannot be predicted by merely extrapolating the performance trends of models with higher pre-training losses.
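
A minimal sketch of the loss-threshold view (an illustration of the redefinition with invented numbers and a hypothetical helper, not the paper's analysis): group runs by pre-training loss and find the loss below which task performance first departs from random guessing.

```python
def emergence_threshold(runs, chance_level, margin=0.05):
    """Given (pretraining_loss, task_accuracy) pairs from models of different
    sizes, return the largest loss at which accuracy exceeds chance by `margin`,
    i.e., an empirical estimate of the threshold below which the ability emerges.
    Returns None if performance never departs from random guessing."""
    above = [loss for loss, acc in runs if acc > chance_level + margin]
    return max(above) if above else None

# Usage sketch with made-up numbers: accuracy stays near 25% (four-way chance)
# until the pre-training loss drops below roughly 2.0.
runs = [(2.8, 0.25), (2.4, 0.26), (2.1, 0.27), (1.9, 0.41), (1.7, 0.55)]
print(emergence_threshold(runs, chance_level=0.25))  # -> 1.9
```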


Nearly Minimax Optimal Reinforcement Learning for Discounted MDPs

Neural Information Processing Systems

We study the reinforcement learning problem for discounted Markov Decision Processes (MDPs) under the tabular setting. We propose a model-based algorithm named UCBVI-γ, which is based on the optimism in the face of uncertainty principle and the Bernstein-type bonus.
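
As a rough illustration (a sketch of a generic Bernstein-type exploration bonus, not the exact bonus, constants, or log terms used by UCBVI-γ): the bonus combines an empirical-variance term that shrinks like 1/sqrt(n) with a lower-order 1/n correction scaled by the value range.

```python
import math

def bernstein_bonus(empirical_variance, n_visits, horizon_scale,
                    delta=0.05, c1=2.0, c2=7.0):
    """Generic Bernstein-style bonus for a (state, action) pair visited n_visits times.
    empirical_variance: sample variance of next-state values;
    horizon_scale: an upper bound on the value range, e.g. 1/(1 - gamma) for
    discount factor gamma. Constants c1, c2 and the log term are illustrative."""
    if n_visits == 0:
        return horizon_scale
    log_term = math.log(1.0 / delta)
    variance_term = math.sqrt(c1 * empirical_variance * log_term / n_visits)
    correction_term = c2 * horizon_scale * log_term / n_visits
    return variance_term + correction_term

# Usage sketch: gamma = 0.99, so the value range is about 1/(1 - gamma) = 100.
print(bernstein_bonus(empirical_variance=4.0, n_visits=500, horizon_scale=100.0))
```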


A List of Notations (Table 1: Notations and their meanings)

Neural Information Processing Systems

Based on Minkowski's inequality for sums [2] with order 2, and using Eqs. 1 and 3, Eq. 4 can be proved. Using Eqs. 3, 10, and 2, we obtain a bound on the distance, from which Eq. 6 can be proved. Similar to the proof in C, Theorem 4 can be proved. Theorems 1, 2 and Theorems 3, 4 can be generalized to the Minkowski distance with order q, q > 1. Using Eqs. 11 and 3, Eq. 4 can be proved.
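
For reference, Minkowski's inequality for sums of order q >= 1, whose q = 2 case is the one invoked above:

```latex
% Minkowski's inequality for sums of order q >= 1:
\left( \sum_{i=1}^{n} |a_i + b_i|^{q} \right)^{1/q}
\le
\left( \sum_{i=1}^{n} |a_i|^{q} \right)^{1/q}
+
\left( \sum_{i=1}^{n} |b_i|^{q} \right)^{1/q}.
```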


Wisdom of the Ensemble: Improving Consistency of Deep Learning Models

Neural Information Processing Systems

Deep learning classifiers are assisting humans in making decisions, and hence the user's trust in these models is of paramount importance. Trust is often a function of constant behavior. From an AI model perspective, this means that, given the same input, the user would expect the same output, especially for correct outputs, or in other words consistently correct outputs. This paper studies model behavior in the context of periodic retraining of deployed models, where the outputs from successive generations of the models might not agree on the correct labels assigned to the same input. We formally define consistency and correct-consistency of a learning model. We prove that the consistency and correct-consistency of an ensemble learner are not less than the average consistency and correct-consistency of individual learners, and that correct-consistency can be improved with a certain probability by combining learners whose accuracy is not less than the average accuracy of the ensemble's component learners. To validate the theory, we also propose an efficient dynamic snapshot ensemble method and demonstrate its value using three datasets and two state-of-the-art deep learning classifiers.
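
A minimal sketch of the quantities involved (assumed definitions for illustration; the paper's formal definitions may differ in detail): consistency counts agreements between two model generations on the same inputs, correct-consistency counts agreements on the true label, and an ensemble predicts by majority vote over its members.

```python
from collections import Counter

def consistency(preds_old, preds_new):
    """Fraction of inputs on which two model generations agree."""
    return sum(a == b for a, b in zip(preds_old, preds_new)) / len(preds_old)

def correct_consistency(preds_old, preds_new, labels):
    """Fraction of inputs on which both generations predict the true label."""
    return sum(a == b == y for a, b, y in zip(preds_old, preds_new, labels)) / len(labels)

def ensemble_predict(member_preds):
    """Majority vote over the predictions of ensemble members for each input."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*member_preds)]

# Usage sketch: two generations of a 3-member ensemble on 5 inputs.
labels = [0, 1, 1, 0, 1]
gen1 = [[0, 1, 1, 0, 0], [0, 1, 0, 0, 1], [1, 1, 1, 0, 1]]
gen2 = [[0, 1, 1, 1, 1], [0, 0, 1, 0, 1], [0, 1, 1, 0, 1]]
e1, e2 = ensemble_predict(gen1), ensemble_predict(gen2)
print(consistency(e1, e2), correct_consistency(e1, e2, labels))
```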