Plotting

Splitting Steepest Descent for Growing Neural Architectures

Neural Information Processing Systems

We develop a progressive training approach for neural networks that adaptively grows the network structure by splitting existing neurons into multiple offsprings. By leveraging a functional steepest descent idea, we derive a simple criterion for deciding the best subset of neurons to split and a splitting gradient for optimally updating the offsprings. Theoretically, our splitting strategy is a second-order functional steepest descent for escaping saddle points in an ∞-Wasserstein metric space, on which the standard parametric gradient descent is a first-order steepest descent. Our method provides a new practical approach for optimizing neural network structures, especially for learning lightweight neural architectures in resource-constrained settings.
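To make the splitting step concrete, below is a minimal NumPy sketch (not the authors' released code) of how neurons could be selected and split once their per-neuron splitting matrices are available; the names `grow_step`, `neurons`, and `splitting_matrices`, the step size `eps`, and the splitting-matrix computation itself are illustrative assumptions.

```python
import numpy as np

def grow_step(neurons, splitting_matrices, budget, eps=1e-2):
    """Split the `budget` neurons whose splitting matrices have the most
    negative minimum eigenvalue; each is replaced by two offsprings moved
    in opposite directions along the minimum eigenvector."""
    scored = []
    for theta, S in zip(neurons, splitting_matrices):
        lam, vecs = np.linalg.eigh(S)            # eigenvalues in ascending order
        scored.append((lam[0], vecs[:, 0], theta))
    # most negative minimum eigenvalue = largest expected gain from splitting
    scored.sort(key=lambda t: t[0])

    new_neurons = []
    for rank, (lam_min, v_min, theta) in enumerate(scored):
        if rank < budget and lam_min < 0:
            # two offsprings offset along the splitting (minimum-eigenvector) direction
            new_neurons.append(theta + eps * v_min)
            new_neurons.append(theta - eps * v_min)
        else:
            new_neurons.append(theta)
    return new_neurons
```

In the paper's formulation the offsprings also inherit a rescaled share of the parent's output weight so that the network function is preserved at the moment of splitting; that bookkeeping is omitted in this sketch.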


3a01fc0853ebeba94fde4d1cc6fb842a-AuthorFeedback.pdf

Neural Information Processing Systems

"Training efficiency comparison": 1) Because splitting GD and pruning methods work in a very different fashion, it is We will release our implementation. We will add more discussion on the time efficiency in the revision. We will release our code to demonstrate this after acceptance. For example, a 3 3 Conv-filter with 64 input channels has d = 64 3 3 = 576. We will release our implementation.


Improving Equivariant Model Training via Constraint Relaxation

Neural Information Processing Systems

Equivariant neural networks have been widely used in a variety of applications due to their ability to generalize well in tasks where the underlying data symmetries are known. Despite their successes, such networks can be difficult to optimize and require careful hyperparameter tuning to train successfully. In this work, we propose a novel framework for improving the optimization of such models by relaxing the hard equivariance constraint during training: We relax the equivariance constraint of the network's intermediate layers by introducing an additional non-equivariant term that we progressively constrain until we arrive at an equivariant solution. By controlling the magnitude of the activation of the additional relaxation term, we allow the model to optimize over a larger hypothesis space containing approximately equivariant networks and converge back to an equivariant solution at the end of training. We provide experimental results on different state-of-the-art network architectures, demonstrating how this training framework can result in equivariant models with improved generalization performance.
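As a rough illustration of the relaxation described above, the following PyTorch-style sketch adds an unconstrained linear term to an otherwise equivariant layer and anneals its strength to zero over training; `RelaxedEquivariantLayer`, `anneal_relaxation`, the linear schedule, and the choice of a plain `nn.Linear` for the relaxation term are our assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class RelaxedEquivariantLayer(nn.Module):
    def __init__(self, equivariant_layer: nn.Module, dim: int):
        super().__init__()
        self.equivariant = equivariant_layer                # hard-constrained part
        self.relaxation = nn.Linear(dim, dim, bias=False)   # unconstrained extra term
        self.alpha = 1.0                                     # relaxation strength (scheduled)

    def forward(self, x):
        # approximately equivariant output: equivariant part + scaled free term
        return self.equivariant(x) + self.alpha * self.relaxation(x)

def anneal_relaxation(model: nn.Module, step: int, total_steps: int):
    # drive alpha -> 0 so the network converges back to an exactly equivariant
    # solution by the end of training (a linear schedule here; the paper may
    # instead use an explicit penalty or a different schedule)
    for m in model.modules():
        if isinstance(m, RelaxedEquivariantLayer):
            m.alpha = max(0.0, 1.0 - step / total_steps)
```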



A Further Results on the Existence of Matching Subnetworks in BERT

Neural Information Processing Systems

In Table 2 in Section 3, we show the highest sparsities for which IMP subnetwork performance is within one standard deviation of the unpruned BERT model on each task. In Table 4 below, we show the same information for the highest sparsities at which IMP subnetworks match or exceed the performance of the unpruned BERT model on each task. The sparsest winning tickets are in many cases larger under this stricter criterion: QQP goes from 90% to 70% sparsity, STS-B from 50% to 40%, QNLI from 70% to 50%, MRPC from 50% to 40%, RTE from 60% to 50%, SST-2 from 60% to 50%, CoLA from 50% to 40%, SQuAD from 40% to 20%, and MLM from 70% to 50%. As broader context for the relationship between sparsity and accuracy, Figure 11 shows the performance of IMP subnetworks across all sparsities on each task.


The Lottery Ticket Hypothesis for Pre-trained BERT Networks

Neural Information Processing Systems

In natural language processing (NLP), enormous pre-trained models like BERT have become the standard starting point for training on a range of downstream tasks, and similar trends are emerging in other areas of deep learning. In parallel, work on the lottery ticket hypothesis has shown that models for NLP and computer vision contain smaller matching subnetworks capable of training in isolation to full accuracy and transferring to other tasks. In this work, we combine these observations to assess whether such trainable, transferrable subnetworks exist in pre-trained BERT models. For a range of downstream tasks, we indeed find matching subnetworks at 40% to 90% sparsity. We find these subnetworks at (pre-trained) initialization, a deviation from prior NLP research where they emerge only after some amount of training. Subnetworks found on the masked language modeling task (the same task used to pre-train the model) transfer universally; those found on other tasks transfer in a limited fashion if at all. As large-scale pre-training becomes an increasingly central paradigm in deep learning, our results demonstrate that the main lottery ticket observations remain relevant in this context.
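For context, a hedged sketch of the iterative magnitude pruning (IMP) loop with rewinding to the pre-trained initialization is shown below; `train`, the global 10%-per-round pruning schedule, and the helper names are illustrative assumptions rather than the paper's exact procedure.

```python
import copy
import torch

def imp_find_subnetwork(model, train, rounds=10, prune_frac=0.1):
    """Iteratively fine-tune, prune the smallest-magnitude surviving weights,
    and rewind the remaining weights to the pre-trained initialization."""
    theta0 = copy.deepcopy(model.state_dict())            # pre-trained BERT weights
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}

    for _ in range(rounds):
        train(model, masks)                                # fine-tune on the downstream task
        # gather magnitudes of the weights that are still unpruned
        surviving = torch.cat([(p.abs() * masks[n]).flatten()
                               for n, p in model.named_parameters()])
        surviving = surviving[surviving > 0]
        k = max(1, int(prune_frac * surviving.numel()))
        threshold = surviving.kthvalue(k).values           # cut the lowest prune_frac
        for n, p in model.named_parameters():
            masks[n] = masks[n] * (p.abs() > threshold).float()
        model.load_state_dict(theta0)                      # rewind to pre-trained init
    return masks                                           # the candidate "winning ticket"
```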


Equipping Experts/Bandits with Long-term Memory

Neural Information Processing Systems

We propose the first reduction-based approach to obtaining long-term memory guarantees for online learning in the sense of Bousquet and Warmuth [8], by reducing the problem to achieving typical switching regret. Specifically, for the classical expert problem with $K$ actions and $T$ rounds, using our framework we develop various algorithms with a regret bound of order $O(\sqrt{T(S\ln T + n\ln K)})$ compared to any sequence of experts with $S - 1$ switches among $n \leq \min\{S, K\}$ distinct experts. In addition, by plugging specific adaptive algorithms into our framework, we also achieve the best of both stochastic and adversarial environments simultaneously.
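For readers new to the setting, here is a minimal NumPy sketch of the classical exponential-weights (Hedge) learner for the $K$-expert problem, the kind of base algorithm such reductions build on; it is purely illustrative and is not the paper's long-term-memory algorithm.

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Classical exponential-weights (Hedge) learner for the K-expert problem.
    loss_matrix: T x K array of per-round expert losses in [0, 1].
    Returns the learner's expected loss in each round."""
    T, K = loss_matrix.shape
    log_w = np.zeros(K)                       # log-weights, start uniform
    learner_loss = np.empty(T)
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                          # play a distribution over experts
        learner_loss[t] = p @ loss_matrix[t]
        log_w -= eta * loss_matrix[t]         # multiplicative-weights update
    return learner_loss
```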


39ae2ed11b14a4ccb41d35e9d1ba5d11-AuthorFeedback.pdf

Neural Information Processing Systems

We thank all reviewers for their valuable comments. This is admittedly true from a theoretical viewpoint. Therefore, we believe that the significance of our results goes beyond the theoretical improvement of regret bounds. We will add more discussion on this in the next version of our paper, as suggested by the reviewer. For the bandit setting, again there is no known lower bound.


97fe251c25b6f99a2a23b330a75b11d4-Paper-Conference.pdf

Neural Information Processing Systems

Despite the effectiveness of data selection for pretraining and instruction fine-tuning large language models (LLMs), improving data efficiency in supervised fine-tuning (SFT) for specialized domains poses significant challenges due to the complexity of fine-tuning data.


TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks

Neural Information Processing Systems

While tabular classification has traditionally relied on from-scratch training, a recent breakthrough called prior-data fitted networks (PFNs) challenges this approach. Similar to large language models, PFNs make use of pretraining and in-context learning to achieve strong performance on new tasks in a single forward pass. However, current PFNs have limitations that prohibit their widespread adoption. Notably, TabPFN achieves very strong performance on small tabular datasets but is not designed to make predictions for datasets of size larger than 1000. In this work, we overcome these limitations and substantially improve the performance of PFNs via context optimization. We introduce TuneTables, a parameter-efficient fine-tuning strategy for PFNs that compresses large datasets into a smaller learned context. We conduct extensive experiments on 19 algorithms over 98 datasets and find that TuneTables achieves the best performance on average, outperforming boosted trees such as CatBoost, while optimizing fewer than 5% of TabPFN's parameters. Furthermore, we show that TuneTables can be used as an interpretability tool and can even be used to mitigate biases by optimizing a fairness objective.
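A hedged sketch of the learned-context idea follows: a small set of trainable context rows (with soft labels) is optimized against the real training data while the pretrained PFN stays frozen. The `pfn(context_x, context_y, query_x)` interface, the class and function names, and the optimizer settings are our assumptions and differ from the actual TuneTables implementation.

```python
import torch
import torch.nn as nn

class LearnedContext(nn.Module):
    """A compressed, trainable in-context dataset for a frozen PFN-style model."""
    def __init__(self, context_size, num_features, num_classes):
        super().__init__()
        self.ctx_x = nn.Parameter(torch.randn(context_size, num_features))
        self.ctx_y = nn.Parameter(torch.zeros(context_size, num_classes))

    def forward(self, pfn, query_x):
        # soft labels for the learned context; the frozen PFN does the rest
        return pfn(self.ctx_x, self.ctx_y.softmax(-1), query_x)

def tune(pfn, ctx, loader, epochs=10, lr=1e-2):
    for p in pfn.parameters():
        p.requires_grad_(False)                      # freeze the pretrained PFN
    opt = torch.optim.Adam(ctx.parameters(), lr=lr)  # optimize only the context
    for _ in range(epochs):
        for x, y in loader:
            loss = nn.functional.cross_entropy(ctx(pfn, x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
```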