Plotting

Labour's open door to big tech leaves critics crying foul

The Guardian

The problem with the UK, according to the former Google boss Eric Schmidt, is that it has "so many ways that people can say no". However, for some critics of the Labour government, it has a glaring issue with saying yes: to big tech. Schmidt made his comment in a Q&A conversation with Keir Starmer at a big investment summit in October last year. The prominent position of a tech bigwig at the event underlined the importance of the sector to a government that has made growth a priority and believes the sector is crucial to achieving it. Top US tech firms have a big presence in the UK, including Google, Mark Zuckerberg's Meta, Amazon, Apple, Microsoft and Palantir, the data intelligence firm co-founded by the Maga movement backer Peter Thiel.


Rank Diminishing in Deep Neural Networks

Neural Information Processing Systems

The rank of a neural network is an instance of a key structural condition that applies across broad domains of machine learning. In particular, the assumption of low-rank feature representations has led to algorithmic developments in many architectures. For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear. To fill this gap, we perform a rigorous study of the behavior of network rank, focusing particularly on the notion of rank deficiency. We theoretically establish a universal monotone decreasing property of network rank from the basic rules of differential and algebraic composition, and we uncover rank deficiency of network blocks and deep function coupling.
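
As a quick illustration of why composition alone can only preserve or shrink rank, here is a minimal numpy sketch (not the paper's code, and ignoring nonlinearities) that tracks the rank of feature matrices pushed through two linear maps, one of them deliberately rank-deficient.

```python
# Minimal sketch: rank cannot increase under composition, since
# rank(W @ H) <= min(rank(W), rank(H)).
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(64, 128))                             # 128 feature vectors of dimension 64
W1 = rng.normal(size=(32, 64))                             # first linear map: 64 -> 32
W2 = np.outer(rng.normal(size=16), rng.normal(size=32))    # rank-1 map: 32 -> 16

H0 = X
H1 = W1 @ H0        # features after layer 1
H2 = W2 @ H1        # features after the rank-deficient layer 2

for name, H in [("input", H0), ("layer 1", H1), ("layer 2", H2)]:
    print(name, "rank =", np.linalg.matrix_rank(H))
# Ranks are non-increasing: 64 -> 32 -> 1 in this example.
```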


Approximate Value Equivalence

Neural Information Processing Systems

Model-based reinforcement learning agents must make compromises about which aspects of the environment their models should capture. The value equivalence (VE) principle posits that these compromises should be made considering the model's eventual use in value-based planning. Given sets of functions and policies, a model is said to be order-k VE to the environment if k applications of the Bellman operators induced by the policies produce the correct result when applied to the functions. Prior work investigated the classes of models induced by VE as k and the sets of policies and functions vary, giving rise to a rich collection of topological relationships and conditions under which VE models are optimal for planning.
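
To make the order-k condition concrete, the following numpy sketch (a toy stand-in, not the paper's implementation) checks whether a candidate tabular model is order-k VE to an environment for given sets of policies and value functions, with each policy represented by its induced transition matrix and reward vector.

```python
import numpy as np

def bellman(P_pi, r_pi, gamma, v):
    # Policy Bellman operator: (T_pi v)(s) = r_pi(s) + gamma * sum_s' P_pi(s, s') v(s')
    return r_pi + gamma * P_pi @ v

def is_order_k_ve(env, model, policies, values, gamma, k, tol=1e-8):
    """env/model: dicts mapping a policy id to (P_pi, r_pi).
    Returns True if k applications of each induced Bellman operator agree,
    between environment and model, on every value function in `values`."""
    for pi in policies:
        P_e, r_e = env[pi]
        P_m, r_m = model[pi]
        for v in values:
            v_e, v_m = v.copy(), v.copy()
            for _ in range(k):
                v_e = bellman(P_e, r_e, gamma, v_e)
                v_m = bellman(P_m, r_m, gamma, v_m)
            if not np.allclose(v_e, v_m, atol=tol):
                return False
    return True
```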


Multi-Objective Deep Learning with Adaptive Reference Vectors

Neural Information Processing Systems

Many deep learning models involve optimizing multiple objectives. Since the objectives are often conflicting, we aim to obtain diverse and representative trade-off solutions among them. Gradient-based multi-objective optimization (MOO) algorithms using reference vectors have shown promising performance. However, they may still produce undesirable solutions due to a mismatch between the pre-specified reference vectors and the problem's underlying Pareto front. In this paper, we propose a novel gradient-based MOO algorithm with adaptive reference vectors. We formulate reference vector adaptation as a bilevel optimization problem and solve it with an efficient solver.
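
For intuition about how a reference vector steers a gradient-based MOO solver toward a particular trade-off, here is an illustrative Python sketch on a toy two-objective problem. It uses a fixed reference vector with a weighted-Chebyshev scalarization as a simplified stand-in; the paper's contribution of adapting the reference vectors via bilevel optimization is not reproduced here.

```python
# Illustrative only: reference-vector-guided scalarization on a toy problem.
import numpy as np

def objectives(theta):
    # Two conflicting objectives with a simple Pareto front between -1 and 1.
    return np.array([np.sum((theta - 1.0) ** 2), np.sum((theta + 1.0) ** 2)])

def grad_objectives(theta):
    return np.stack([2.0 * (theta - 1.0), 2.0 * (theta + 1.0)])

def chebyshev_step(theta, ref, lr=0.05):
    """One subgradient step on max_i ref_i * f_i(theta), which steers the
    solution toward the trade-off encoded by the reference vector `ref`."""
    f = objectives(theta)
    g = grad_objectives(theta)
    i = int(np.argmax(ref * f))          # active objective under the reference
    return theta - lr * ref[i] * g[i]

theta = np.zeros(2)
for _ in range(200):
    theta = chebyshev_step(theta, ref=np.array([0.8, 0.2]))
print(theta, objectives(theta))          # skewed toward the first objective
```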


On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias

Neural Information Processing Systems

We study the dynamics and implicit bias of gradient flow (GF) on univariate ReLU neural networks with a single hidden layer in a binary classification setting. We show that when the labels are determined by the sign of a target network with r neurons, with high probability over the initialization of the network and the sampling of the dataset, GF converges in direction (suitably defined) to a network achieving perfect training accuracy and having at most O(r) linear regions, implying a generalization bound. Unlike many other results in the literature, under an additional assumption on the distribution of the data, our result holds even for mild over-parameterization, where the width is Õ(r) and independent of the sample size.
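
Since the result concerns the number of linear regions of a one-hidden-layer univariate ReLU network, a short sketch may help: each hidden unit v_i * relu(w_i * x + b_i) contributes at most one breakpoint at x = -b_i / w_i, so counting distinct breakpoints gives an upper bound on the effective number of regions (adjacent pieces with equal slope would reduce it further). This is illustrative code, not the paper's.

```python
import numpy as np

def linear_regions(w, b, v, tol=1e-9):
    """Upper bound on the number of linear regions of
    f(x) = sum_i v_i * relu(w_i * x + b_i) + c."""
    # Only units with nonzero input weight and nonzero output weight create a kink.
    mask = (np.abs(w) > tol) & (np.abs(v) > tol)
    kinks = np.unique(np.round(-b[mask] / w[mask], 9))
    return len(kinks) + 1

rng = np.random.default_rng(0)
w, b, v = rng.normal(size=100), rng.normal(size=100), rng.normal(size=100)
print(linear_regions(w, b, v))   # at most (#distinct breakpoints) + 1 regions
```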


Improved Fine-Tuning by Better Leveraging Pre-Training Data

Neural Information Processing Systems

As a dominant paradigm, fine-tuning a pre-trained model on the target data is widely used in many deep learning applications, especially for small data sets. However, recent studies have empirically shown that, for some vision tasks, training from scratch achieves final performance no worse than this pre-training strategy once the number of training samples is increased. In this work, we revisit this phenomenon from the perspective of generalization analysis, using the excess risk bound, a tool popular in learning theory. The analysis reveals that the excess risk bound may have a weak dependency on the pre-trained model. This observation inspires us to leverage the pre-training data for fine-tuning, since this data is also available during fine-tuning.
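
One simple way to "leverage pre-training data for fine-tuning" is to mix an auxiliary loss on sampled pre-training data with the target-task loss during fine-tuning. The PyTorch sketch below shows that idea; the separate heads, the auxiliary weight, and the sampling scheme are illustrative assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def finetune_step(backbone, target_head, pretrain_head, optimizer,
                  target_batch, pretrain_batch, aux_weight=0.3):
    """One fine-tuning step mixing the target loss with an auxiliary loss
    computed on a batch of pre-training data (illustrative weighting)."""
    x_t, y_t = target_batch          # labelled target-task data
    x_p, y_p = pretrain_batch        # sampled batch of pre-training data

    loss_target = F.cross_entropy(target_head(backbone(x_t)), y_t)
    loss_pretrain = F.cross_entropy(pretrain_head(backbone(x_p)), y_p)

    loss = loss_target + aux_weight * loss_pretrain
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Tiny smoke test with random data and linear layers.
backbone = torch.nn.Linear(16, 32)
target_head, pretrain_head = torch.nn.Linear(32, 5), torch.nn.Linear(32, 100)
opt = torch.optim.SGD(list(backbone.parameters())
                      + list(target_head.parameters())
                      + list(pretrain_head.parameters()), lr=0.1)
tb = (torch.randn(8, 16), torch.randint(0, 5, (8,)))
pb = (torch.randn(8, 16), torch.randint(0, 100, (8,)))
print(finetune_step(backbone, target_head, pretrain_head, opt, tb, pb))
```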


New Winxvideo AI – One-stop Video/Image Enhancer & Toolkit

PCWorld

We seem to have more video footage and still images than ever before, thanks to smartphones, GoPro cameras and the backlog of older ones collected across a lifetime. Managing all these formats, as well as making sure they look their best, can be a frightening proposition. Thankfully, Winxvideo AI is a powerful all-in-one solution that not only uses advanced Artificial Intelligence software to upgrade the quality of your content but can rescue old photos and footage too. The newly updated version 4.0 also brings huge improvements to speed, plus a special price offer, so you can save both time and money while you upgrade your photo and video library. Winxvideo AI comes with an impressive array of features that can turn tired, old, blurry videos into something far more professional.


Model-based Lifelong Reinforcement Learning with Bayesian Exploration

Neural Information Processing Systems

We propose a model-based lifelong reinforcement-learning approach that estimates a hierarchical Bayesian posterior distilling the common structure shared across different tasks. The learned posterior, combined with a sample-based Bayesian exploration procedure, increases the sample efficiency of learning across a family of related tasks. We first analyze the relationship between the sample complexity and the quality of the posterior initialization in the finite MDP setting. We next scale the approach to continuous-state domains by introducing a Variational Bayesian Lifelong Reinforcement Learning algorithm that can be combined with recent model-based deep RL methods and that exhibits backward transfer. Experimental results on several challenging domains show that our algorithms achieve better forward and backward transfer performance than state-of-the-art lifelong RL methods.
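
A schematic sketch of the exploration side of such an approach, under simplifying assumptions that are ours rather than the paper's: maintain Dirichlet posteriors over tabular transition dynamics, initialize a new task's posterior from pseudo-counts pooled across earlier tasks, and explore by planning with a model drawn from the posterior (posterior/Thompson sampling).

```python
import numpy as np

n_states, n_actions = 5, 2
rng = np.random.default_rng(0)

# "Shared" prior: Dirichlet pseudo-counts pooled across previously solved tasks.
shared_counts = np.ones((n_states, n_actions, n_states))

def new_task_posterior():
    # Initialize the new task's posterior from the shared prior.
    return shared_counts.copy()

def sample_model(counts):
    # Draw one transition model from the Dirichlet posterior, per state-action pair.
    P = np.zeros_like(counts)
    for s in range(n_states):
        for a in range(n_actions):
            P[s, a] = rng.dirichlet(counts[s, a])
    return P

def update(counts, s, a, s_next):
    counts[s, a, s_next] += 1.0          # Bayesian update from an observed transition

counts = new_task_posterior()
P_sampled = sample_model(counts)          # plan against this sample, act, then update counts
```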


Data augmentation for efficient learning from parametric experts

Neural Information Processing Systems

We present a simple yet powerful data-augmentation technique to enable data-efficient learning from parametric experts for reinforcement and imitation learning. We focus on what we call the policy cloning setting, in which we use online or offline queries of an expert or expert policy to inform the behavior of a student policy. This setting arises naturally in a number of problems, for instance as variants of behavior cloning or as a component of other algorithms such as DAgger, policy distillation, or KL-regularized RL. Our approach, augmented policy cloning (APC), uses synthetic states to induce feedback-sensitivity in a region around sampled trajectories, thus dramatically reducing the environment interactions required for successful cloning of the expert. We achieve highly data-efficient transfer of behavior from an expert to a student policy for high-degree-of-freedom control problems.
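
The core augmentation described in the abstract can be sketched in a few lines: perturb states visited along sampled trajectories to create synthetic states, and relabel them by querying the expert, so the student sees how the expert reacts in a neighborhood of the trajectory. The Gaussian noise model and the expert interface below are illustrative assumptions.

```python
import numpy as np

def augment_with_synthetic_states(states, expert_policy, n_aug=4, sigma=0.05, seed=0):
    """states: (N, d) array of states visited along sampled trajectories.
    expert_policy: callable mapping a batch of states to expert actions.
    Returns synthetic states near the trajectory and the expert's actions there."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(len(states), n_aug, states.shape[1]))
    synthetic = (states[:, None, :] + noise).reshape(-1, states.shape[1])
    actions = expert_policy(synthetic)     # relabel synthetic states with the expert
    return synthetic, actions

# Toy usage: a linear "expert" queried on random trajectory states.
expert = lambda s: s @ np.array([[0.5], [-0.2], [0.1]])
s_aug, a_aug = augment_with_synthetic_states(np.random.randn(100, 3), expert)
print(s_aug.shape, a_aug.shape)            # (400, 3) (400, 1)
```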


PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization

Neural Information Processing Systems

While there has been progress in developing non-vacuous generalization bounds for deep neural networks, these bounds tend to be uninformative about why deep learning works. In this paper, we develop a compression approach based on quantizing neural network parameters in a linear subspace, profoundly improving on previous results to provide state-of-the-art generalization bounds on a variety of tasks, including transfer learning. We use these tight bounds to better understand the role of model size, equivariance, and the implicit biases of optimization, for generalization in deep learning. Notably, we find large models can be compressed to a much greater extent than previously known, encapsulating Occam's razor.
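
A hedged sketch of the compression idea named in the abstract, with a random Gaussian subspace and a uniform quantizer standing in for the paper's learned choices: represent the parameter vector by coefficients in a low-dimensional linear subspace, quantize those coefficients, and note that the resulting description length (roughly k * log2(levels) bits) is the kind of quantity that enters an Occam-style PAC-Bayes bound.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, levels = 10_000, 50, 16             # parameter dim, subspace dim, quantization levels
theta = rng.normal(size=d)                # stand-in for trained network parameters
P = rng.normal(size=(d, k)) / np.sqrt(d)  # basis of the linear subspace

# Project onto the subspace (least-squares coefficients), then quantize on a uniform grid.
coeffs, *_ = np.linalg.lstsq(P, theta, rcond=None)
grid = np.linspace(coeffs.min(), coeffs.max(), levels)
q = grid[np.abs(coeffs[:, None] - grid[None, :]).argmin(axis=1)]

theta_compressed = P @ q                  # decompressed parameters
print(k * np.log2(levels), "bits for the quantized coefficients")
```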