

American tennis star Danielle Collins accuses cameraman of 'wildly inappropriate' behavior

FOX News

American tennis player Danielle Collins had some choice words for a cameraman during her Internationaux de Strasbourg match against Emma Raducanu on Wednesday afternoon. Collins was in the middle of a changeover when she felt the cameraman was hovering a bit too close for comfort in the middle of the third and deciding set. She got off the bench and made her point clear.

Danielle Collins celebrates during her match against Madison Keys in the third round of the women's singles at the 2025 Australian Open at Melbourne Park in Melbourne, Australia, on Jan. 18, 2025.


Semialgebraic Optimization for Lipschitz Constants of ReLU Networks

Neural Information Processing Systems

The Lipschitz constant of a network plays an important role in many applications of deep learning, such as robustness certification and Wasserstein Generative Adversarial Networks. We introduce a semidefinite programming hierarchy to estimate the global and local Lipschitz constants of a multi-layer deep neural network. The novelty is to combine a polynomial lifting of the derivatives of ReLU functions with a weak generalization of Putinar's positivity certificate. This idea could also apply to other nearly sparse polynomial optimization problems in machine learning. We empirically demonstrate that our method offers a trade-off with respect to the state-of-the-art linear programming approach, and in some cases we obtain better bounds in less time.
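For context, a minimal sketch of the naive spectral-norm product bound that semidefinite and linear programming approaches aim to tighten; this is the standard cheap baseline, not the paper's method, and the layer shapes below are purely illustrative:

```python
import numpy as np

def naive_lipschitz_bound(weights):
    """Product of layer spectral norms: a cheap upper bound on the
    Lipschitz constant of a ReLU network f(x) = W_L ... relu(W_1 x),
    valid because ReLU is 1-Lipschitz. SDP hierarchies like the one
    described above compute much tighter certified bounds."""
    bound = 1.0
    for W in weights:
        # Largest singular value = spectral norm of the linear layer.
        bound *= np.linalg.norm(W, ord=2)
    return bound

# Example on a random two-layer network (hypothetical sizes).
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(64, 32)), rng.normal(size=(10, 64))
print(naive_lipschitz_bound([W1, W2]))
```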


Improved Coresets and Sublinear Algorithms for Power Means in Euclidean Spaces

Vincent Cohen-Addad, David Saulpic, Chris Schwiegelshohn

Neural Information Processing Systems

Special cases of this problem include the well-known Fermat-Weber problem, or geometric median problem, where z = 1; the mean, or centroid, where z = 2; and the Minimum Enclosing Ball problem, where z = ∞. We consider these problems in the big-data regime. Here, we are interested in sampling as few points as possible such that we can accurately estimate m. More specifically, we consider sublinear algorithms as well as coresets for these problems. Sublinear algorithms have random query access to the set A, and the goal is to minimize the number of queries.
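To make the z = 1 special case concrete, here is a sketch of Weiszfeld's classical algorithm for the geometric median; it is a standard full-data baseline, not the coreset or sublinear method the abstract describes, and the data below is synthetic:

```python
import numpy as np

def weiszfeld(points, iters=100, eps=1e-9):
    """Weiszfeld's algorithm for the z = 1 power mean (geometric median):
    repeatedly re-center at the inverse-distance-weighted average.
    Each step uses all n points, which is exactly the cost that
    coresets and sublinear sampling aim to avoid."""
    x = points.mean(axis=0)  # start at the z = 2 solution (centroid)
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        w = 1.0 / np.maximum(d, eps)   # guard against division by zero
        x = (points * w[:, None]).sum(axis=0) / w.sum()
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(1000, 3))
print(weiszfeld(A))
```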


ColdGANs: Taming Language GANs with Cautious Sampling Strategies

Thomas Scialom, Paul-Alexis Dray

Neural Information Processing Systems

Training regimes based on Maximum Likelihood Estimation (MLE) suffer from known limitations, often leading to poorly generated text sequences. At the root of these limitations is the mismatch between training and inference, i.e., the so-called exposure bias, exacerbated by considering only the reference texts as correct, while in practice several alternative formulations could be just as good. Generative Adversarial Networks (GANs) can mitigate those limitations, but the discrete nature of text has hindered their application to language generation: the approaches proposed so far, based on Reinforcement Learning, have been shown to underperform MLE. Departing from previous works, we analyze the exploration step in GANs applied to text generation and show how classical sampling results in unstable training. We propose to consider alternative exploration strategies in a GAN framework that we name ColdGANs, where we force the sampling to be close to the distribution modes to obtain smoother learning dynamics. For the first time, to the best of our knowledge, the proposed language GANs compare favorably to MLE, and obtain improvements over the state-of-the-art on three generative tasks, namely unconditional text generation, question generation, and abstractive summarization.
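The "cold" intuition, sampling close to the distribution modes, can be illustrated with plain temperature-scaled sampling; this is a minimal sketch of that general mechanism with toy logits, not the authors' exact sampling strategy:

```python
import numpy as np

def cold_sample(logits, temperature=0.5, rng=None):
    """Temperature-scaled sampling: temperatures below 1 sharpen the
    softmax and concentrate probability mass on the modes, giving the
    kind of cautious exploration the abstract refers to."""
    if rng is None:
        rng = np.random.default_rng()
    z = logits / temperature
    z = z - z.max()                    # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(logits), p=p)

logits = np.array([2.0, 1.0, 0.2, -1.0])  # toy next-token scores
rng = np.random.default_rng(7)
print([cold_sample(logits, temperature=0.5, rng=rng) for _ in range(10)])
```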


Dogs can fulfill our need to nurture

Popular Science

Just as birth rates decline in many wealthy, developed nations, dog parenting is holding steady and even gaining in popularity. Up to half of households in Europe and 66 percent of homes in the United States have at least one dog, and these pets are often regarded as a family member or "fur baby." To dig into what this shift says about our society, researchers from Eötvös Loránd University in Budapest, Hungary, conducted a literature review to analyze the data. They propose that while dogs do not replace children, they can offer a chance to fulfill an innate nurturing drive similar to parenting, but with fewer demands than raising biological children.


A Tight Lower Bound and Efficient Reduction for Swap Regret

Neural Information Processing Systems

Swap regret, a generic performance measure of online decision-making algorithms, plays an important role in the theory of repeated games, along with a close connection to correlated equilibria in strategic games. This paper shows an Ω(√(TN log N)) lower bound for swap regret, where T and N denote the numbers of time steps and available actions, respectively. Our lower bound is tight up to a constant and resolves an open problem mentioned, e.g., in the book by Nisan et al. [28]. In addition, we present a computationally efficient reduction method that converts no-external-regret algorithms to no-swap-regret algorithms. This method can be applied not only to the full-information setting but also to the bandit setting, and it provides a better regret bound than previous results.
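For readers unfamiliar with external-to-swap-regret reductions, here is a sketch of the classical construction in the spirit of Blum and Mansour, which the paper's more efficient reduction improves upon; the step size and loop below are illustrative choices, not the paper's algorithm:

```python
import numpy as np

def stationary_distribution(Q, iters=200):
    """Approximate a stationary distribution p with p @ Q = p by power
    iteration; Q is row-stochastic with strictly positive entries here."""
    p = np.full(Q.shape[0], 1.0 / Q.shape[0])
    for _ in range(iters):
        p = p @ Q
    return p / p.sum()

class SwapRegretReduction:
    """Classical external-to-swap reduction: run one multiplicative-weights
    copy per action i, play the stationary distribution of the matrix whose
    i-th row is copy i's distribution, and charge copy i the loss vector
    scaled by the probability of playing i. A generic sketch, not the
    refined reduction presented in the paper."""

    def __init__(self, n_actions, eta=0.1):
        self.n = n_actions
        self.eta = eta
        self.weights = np.ones((n_actions, n_actions))  # one MW copy per row

    def play(self):
        Q = self.weights / self.weights.sum(axis=1, keepdims=True)
        self.p = stationary_distribution(Q)
        return self.p

    def update(self, losses):
        scaled = np.outer(self.p, losses)  # copy i sees p[i] * losses
        self.weights *= np.exp(-self.eta * scaled)

alg = SwapRegretReduction(n_actions=3)
rng = np.random.default_rng(2)
for _ in range(100):
    alg.play()
    alg.update(rng.random(3))  # random loss vector each round
print(alg.play())
```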


Russia-Ukraine war: List of key events, day 1,183

Al Jazeera

Russia's Defence Ministry said air defences shot down 105 Ukrainian drones over Russian regions, including 35 over the Moscow region, a day after the ministry said it had downed more than 300 Ukrainian drones. Kherson Governor Oleksandr Prokudin said one person was killed in a Russian artillery attack on the region. He said that over the past day, 35 areas in Kherson, including Kherson city, came under artillery shelling and air attacks, wounding 11 people. Ukrainian President Zelenskyy said the "most intense situation" is in the Donetsk region, and the army is continuing "active operations in the Kursk and Belgorod regions".



Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity

Neural Information Processing Systems

Two of the most prominent algorithms for solving unconstrained smooth games are the classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic consensus optimization (SCO) [Mescheder et al., 2017]. SGDA is known to converge to a stationary point for specific classes of games, but current convergence analyses require a bounded variance assumption. SCO is used successfully for solving large-scale adversarial problems, but its convergence guarantees are limited to its deterministic variant. In this work, we introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO under this condition for solving a class of stochastic variational inequality problems that are potentially non-monotone. We prove linear convergence of both methods to a neighborhood of the solution when they use a constant step size, and we propose insightful step-size-switching rules to guarantee convergence to the exact solution. In addition, our convergence guarantees hold under the arbitrary sampling paradigm, and as such, we give insights into the complexity of minibatching.
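A minimal sketch of constant step-size SGDA on a toy strongly-convex-strongly-concave saddle problem, illustrating the "converges to a neighborhood" behavior the abstract describes; the objective, step size, and noise level are assumptions for illustration, not the paper's experimental setup:

```python
import numpy as np

def sgda(A, x0, y0, step=0.05, iters=2000, noise=0.1, rng=None):
    """Stochastic gradient descent-ascent on
        min_x max_y  0.5||x||^2 + x^T A y - 0.5||y||^2,
    whose unique saddle point is (0, 0). Additive Gaussian gradient
    noise stands in for stochastic gradients; with a constant step
    size the iterates settle in a noise ball around the solution."""
    if rng is None:
        rng = np.random.default_rng(0)
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        gx = x + A @ y + noise * rng.normal(size=x.shape)    # grad in x
        gy = A.T @ x - y + noise * rng.normal(size=y.shape)  # grad in y
        x -= step * gx   # descend in x
        y += step * gy   # ascend in y
    return x, y

A = np.array([[1.0, 0.5], [0.0, 1.0]])
x, y = sgda(A, np.ones(2), np.ones(2))
print(x, y)  # near the saddle point (0, 0), up to noise
```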


Explicit Regularisation in Gaussian Noise Injections

Neural Information Processing Systems

We study the regularisation induced in neural networks by Gaussian noise injections (GNIs). Though such injections have been extensively studied when applied to data, there have been few studies on understanding the regularising effect they induce when applied to network activations. Here we derive the explicit regulariser of GNIs, obtained by marginalising out the injected noise, and show that it penalises functions with high-frequency components in the Fourier domain, particularly in layers closer to a neural network's output. We show analytically and empirically that such regularisation produces calibrated classifiers with large classification margins.
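To show where the noise enters in this setting, here is a sketch of a two-layer forward pass with Gaussian noise injected into the hidden activations at train time; the network shapes and noise scale are illustrative assumptions, and the paper's contribution, the explicit regulariser obtained by marginalising this noise out, is not reproduced here:

```python
import numpy as np

def forward_with_gni(x, W1, W2, sigma=0.1, train=True, rng=None):
    """Two-layer ReLU net whose hidden activations receive a Gaussian
    noise injection (GNI) during training. At test time (train=False)
    the noise is switched off, as is standard for noise regularisers."""
    if rng is None:
        rng = np.random.default_rng(0)
    h = np.maximum(W1 @ x, 0.0)                    # ReLU hidden layer
    if train:
        h = h + sigma * rng.normal(size=h.shape)   # inject Gaussian noise
    return W2 @ h

rng = np.random.default_rng(3)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(4, 16))
x = rng.normal(size=8)
print(forward_with_gni(x, W1, W2))            # noisy training pass
print(forward_with_gni(x, W1, W2, train=False))  # deterministic test pass
```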