China's DeepSeek unveils latest models a year after upending global tech

Al Jazeera

China's DeepSeek has unveiled the latest versions of its signature artificial intelligence-powered chatbot, a year after its flagship model sent shockwaves through the global tech scene. The Chinese start-up launched preview versions of DeepSeek-V4-Pro and DeepSeek-V4-Flash on Friday as it touted its ability to go toe-to-toe with US rivals such as OpenAI and Google. The "flash" model offers reasoning abilities similar to the "pro" version's, with faster response times and more cost-effective pricing, the Hangzhou-based start-up said. Like DeepSeek's previous chatbots, V4-Pro and V4-Flash follow an open-source model, meaning developers are free to use and modify them at will. The release comes after DeepSeek-R1 stunned the tech sector upon its launch in January last year with capabilities broadly comparable to those of ChatGPT and Gemini.


Steve Rosenberg: Kremlin's tightening grip on internet fuels public discontent

BBC News

Near the Kremlin, several dozen people are queuing outside the presidential administration office. They've come to submit petitions calling on President Vladimir Putin to end a crackdown on the internet. Russian authorities have been tightening control of the country's cyberspace. Access to global messaging apps has been restricted, and there are widespread disruptions to, even shutdowns of, mobile internet. Petitioning the president is legal.


SoftBank prepares to manufacture batteries for AI data centers

The Japan Times

SoftBank Group's mobile unit plans to transform part of its factory in Osaka Prefecture into one of Japan's biggest production lines for large-scale batteries in an ambitious attempt at powering its own artificial intelligence data centers. SoftBank Corp. aims to bring that production online within the next five years, according to people familiar with the matter. They asked not to be named as deliberations remain private. After SoftBank executives mulled different purposes for the plant in the city of Sakai, including robotics manufacturing, they decided to pursue energy. The Tokyo-based group led by Masayoshi Son is one of the world's foremost supporters of AI, having committed hundreds of billions of dollars to investment in data centers, cloud services and bets on startups like OpenAI.


A new survey reveals the MLB's most foul-mouthed fanbase

FOX News

A Vegas Insider study combed through all 30 MLB teams' subreddits to find which fanbases swear the most online. When you start thinking about which MLB teams' fanbases have the filthiest mouths, there's a good chance a few cities instantly jump to mind. But the new survey from Vegas Insider has found the most foul-mouthed fanbases in the MLB, and the top team might surprise you a little at first... and then it will make total sense. The survey found that Athletics fans are the most foul-mouthed in Major League Baseball: a franchise that once played in Philadelphia and is now in Sacramento limbo ahead of a move to Vegas.


CLT-Optimal Parameter Error Bounds for Linear System Identification

Zhou, Yichen, Tu, Stephen

arXiv.org Machine Learning

There has been remarkable progress over the past decade in establishing finite-sample, non-asymptotic bounds on recovering unknown system parameters from observed system behavior. Surprisingly, however, we show that the current state-of-the-art bounds do not accurately capture the statistical complexity of system identification, even in the most fundamental setting of estimating a discrete-time linear dynamical system (LDS) via ordinary least-squares regression (OLS). Specifically, we utilize asymptotic normality to identify classes of problem instances for which current bounds overstate the squared parameter error, in both spectral and Frobenius norm, by a factor of the state-dimension of the system. Informed by this discrepancy, we then sharpen the OLS parameter error bounds via a novel second-order decomposition of the parameter error, where crucially the lower-order term is a matrix-valued martingale that we show correctly captures the CLT scaling. From our analysis we obtain finite-sample bounds for both (i) stable systems and (ii) the many-trajectories setting that match the instance-specific optimal rates up to constant factors in Frobenius norm, and polylogarithmic state-dimension factors in spectral norm.
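The OLS setup the abstract studies can be illustrated in the simplest scalar case, where estimating the system matrix reduces to a one-dimensional regression on consecutive states. A minimal sketch, assuming a stable system x_{t+1} = a*x_t + w_t with Gaussian noise; the helper names (`simulate_lds`, `ols_estimate`) are invented for this illustration and are not from the paper:

```python
import random

def simulate_lds(a, sigma, T, seed=0):
    """Roll out a scalar linear dynamical system x_{t+1} = a*x_t + w_t."""
    rng = random.Random(seed)
    xs = [0.0]
    for _ in range(T):
        xs.append(a * xs[-1] + rng.gauss(0.0, sigma))
    return xs

def ols_estimate(xs):
    """Ordinary least squares: a_hat = (sum x_{t+1}*x_t) / (sum x_t^2)."""
    num = sum(xs[t + 1] * xs[t] for t in range(len(xs) - 1))
    den = sum(xs[t] ** 2 for t in range(len(xs) - 1))
    return num / den

xs = simulate_lds(a=0.9, sigma=1.0, T=20000)
a_hat = ols_estimate(xs)
```

For a stable system the squared error of `a_hat` shrinks at the CLT rate of roughly 1/T; it is this asymptotic scaling, in the general matrix-valued setting, that the paper's sharpened finite-sample bounds aim to match.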


A single algorithm for both restless and rested rotting bandits

Seznec, Julien, Ménard, Pierre, Lazaric, Alessandro, Valko, Michal

arXiv.org Machine Learning

In many application domains (e.g., recommender systems, intelligent tutoring systems), the rewards associated with the actions tend to decrease over time. This decay is caused either by the actions executed in the past (e.g., a user may get bored when songs of the same genre are recommended over and over) or by an external factor (e.g., content becomes outdated). These two situations can be modeled as specific instances of the rested and restless bandit settings, where arms are rotting (i.e., their values decrease over time). These problems were thought to be significantly different, since Levine et al. (2017) showed that state-of-the-art algorithms for restless bandits perform poorly in the rested rotting setting. In this paper, we introduce a novel algorithm, Rotting Adaptive Window UCB (RAW-UCB), that achieves near-optimal regret in both rotting rested and restless bandits, without any prior knowledge of the setting (rested or restless) or the type of non-stationarity (e.g., piece-wise constant, bounded variation). This is in striking contrast with previous negative results showing that no algorithm can achieve similar guarantees as soon as rewards are allowed to increase. We confirm our theoretical findings on a number of synthetic and dataset-based experiments.
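The window idea behind such algorithms can be sketched on a toy rested-rotting instance. This is a simplified fixed-window UCB, not the paper's RAW-UCB (which adapts its window size with guarantees); the two-arm reward model and all function names are invented for illustration:

```python
import math
import random

def rotting_reward(arm, pulls, rng):
    """Toy rested-rotting rewards: arm 0 decays with its own pull count,
    arm 1 is stationary. (Illustrative model, not from the paper.)"""
    mean = max(0.0, 1.0 - 0.01 * pulls) if arm == 0 else 0.6
    return mean + rng.gauss(0.0, 0.1)

def window_ucb(n_rounds=2000, window=50, seed=0):
    rng = random.Random(seed)
    history = [[], []]                  # per-arm observed rewards
    for t in range(n_rounds):
        if t < 2:
            arm = t                     # pull each arm once to initialize
        else:
            def index(a):
                recent = history[a][-window:]   # average only recent pulls
                bonus = math.sqrt(2.0 * math.log(n_rounds) / len(recent))
                return sum(recent) / len(recent) + bonus
            arm = max((0, 1), key=index)
        history[arm].append(rotting_reward(arm, len(history[arm]), rng))
    return [len(h) for h in history]

pulls = window_ucb()
```

Averaging only the most recent pulls discards stale observations, so the index tracks each arm's current (decayed) value; the learner abandons the decaying arm once its recent mean falls below the stationary arm's. RAW-UCB refines this by adaptively choosing the window per arm.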


The Sample Complexity of Multicalibration

Collina, Natalie, Lu, Jiuyao, Noarov, Georgy, Roth, Aaron

arXiv.org Machine Learning

We study the minimax sample complexity of multicalibration in the batch setting. A learner observes $n$ i.i.d. samples from an unknown distribution and must output a (possibly randomized) predictor whose population multicalibration error, measured by Expected Calibration Error (ECE), is at most $\varepsilon$ with respect to a given family of groups $G$. For every fixed $\kappa > 0$, in the regime $|G| \le \varepsilon^{-\kappa}$, we prove that $\widetilde{\Theta}(\varepsilon^{-3})$ samples are necessary and sufficient, up to polylogarithmic factors. The lower bound holds even for randomized predictors, and the upper bound is realized by a randomized predictor obtained via an online-to-batch reduction. This separates the sample complexity of multicalibration from that of marginal calibration, which scales as $\widetilde{\Theta}(\varepsilon^{-2})$, and shows that mean-ECE multicalibration is as difficult in the batch setting as it is in the online setting, in contrast to marginal calibration, which is strictly more difficult in the online setting. By contrast, we observe that for $\kappa = 0$ the sample complexity of multicalibration remains $\widetilde{\Theta}(\varepsilon^{-2})$, exhibiting a sharp threshold phenomenon. More generally, we establish matching upper and lower bounds, up to polylogarithmic factors, for a weighted $L_p$ multicalibration metric for all $1 \le p \le 2$, with optimal exponent $3/p$. We also extend the lower-bound template to a regular class of elicitable properties, and combine it with the online upper bounds of Hu et al. (2025) to obtain matching bounds for calibrating properties including expectiles and bounded-density quantiles.
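The quantity being bounded can be made concrete with a small sketch: a predictor may be calibrated on the whole population yet badly miscalibrated on a subgroup. The `multicalibration_ece` helper below is hypothetical, and its weighting by overall population mass is one common normalization that may differ from the paper's exact metric:

```python
def multicalibration_ece(predictions, labels, groups):
    """Worst-group expected calibration error of a discrete predictor.

    predictions : predicted probabilities (finitely many distinct values)
    labels      : binary outcomes
    groups      : list of index sets, one per group in the family G
    """
    worst = 0.0
    n = len(predictions)
    for g in groups:
        buckets = {}
        for i in g:                 # bucket group members by predicted value
            buckets.setdefault(predictions[i], []).append(labels[i])
        err = sum(abs(sum(ys) / len(ys) - v) * len(ys) / n
                  for v, ys in buckets.items())
        worst = max(worst, err)
    return worst

# Marginally calibrated (overall outcome rate is 0.5), yet the constant
# prediction 0.5 is off by 0.5 on the subgroup {0, 1}, whose rate is 1.0.
preds = [0.5, 0.5, 0.5, 0.5]
labels = [1, 1, 0, 0]
err = multicalibration_ece(preds, labels, [set(range(4)), {0, 1}])
```

Here the marginal ECE is 0 while the multicalibration error is 0.25, which is exactly the gap that makes the multicalibration sample complexity strictly harder than the marginal one in the abstract's regime.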


Revealing Geography-Driven Signals in Zone-Level Claim Frequency Models: An Empirical Study using Environmental and Visual Predictors

Alfonso-Sánchez, Sherly, Bravo, Cristián, Stankova, Kristina G.

arXiv.org Machine Learning

Geographic context is often considered relevant to motor insurance risk, yet public actuarial datasets provide limited location identifiers, constraining how this information can be incorporated and evaluated in claim-frequency models. This study examines how geographic information from alternative data sources can be incorporated into actuarial models for Motor Third Party Liability (MTPL) claim prediction under such constraints. Using the BeMTPL97 dataset, we adopt a zone-level modeling framework and evaluate predictive performance on unseen postcodes. Geographic information is introduced through two channels: environmental indicators from OpenStreetMap and CORINE Land Cover, and orthoimagery released by the Belgian National Geographic Institute for academic use. We evaluate the predictive contribution of coordinates, environmental features, and image embeddings across three baseline models: generalized linear models (GLMs), regularized GLMs, and gradient-boosted trees; raw imagery is modeled using convolutional neural networks. Our results show that augmenting actuarial variables with constructed geographic information improves accuracy. Across experiments, both linear and tree-based models benefit most from combining coordinates with environmental features extracted at a 5 km scale, while smaller neighborhoods also improve baseline specifications. Generally, image embeddings do not improve performance when environmental features are available; however, when such features are absent, pretrained vision-transformer embeddings enhance accuracy and stability for regularized GLMs. Overall, the predictive value of geographic information in zone-level MTPL frequency models depends less on model complexity than on how geography is represented, and geographic context can be incorporated despite limited individual-level spatial information.
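The GLM baseline the abstract refers to is the standard actuarial claim-frequency model: a Poisson regression with a log link and a log-exposure offset. A minimal sketch on synthetic zone data, assuming one geographic covariate; the helper names and data-generating process are invented here and are not the paper's BeMTPL97 pipeline:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method for sampling Poisson(lam), lam > 0."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def fit_poisson_glm(x, y, exposure, iters=25):
    """Poisson GLM with log link, intercept + one covariate, and a
    log-exposure offset, fit by Newton-Raphson (2x2 closed form)."""
    b0 = math.log(max(sum(y), 1) / sum(exposure))  # start at the overall rate
    b1 = 0.0
    for _ in range(iters):
        mu = [e * math.exp(b0 + b1 * xi) for xi, e in zip(x, exposure)]
        g0 = sum(yi - mi for yi, mi in zip(y, mu))               # score
        g1 = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))
        h00 = sum(mu)                                            # Fisher info
        h01 = sum(xi * mi for xi, mi in zip(x, mu))
        h11 = sum(xi * xi * mi for xi, mi in zip(x, mu))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# synthetic zones: one standardized geographic covariate plus an exposure
rng = random.Random(1)
n = 2000
x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
exposure = [rng.uniform(0.5, 2.0) for _ in range(n)]
y = [poisson_sample(e * math.exp(-2.0 + 0.5 * xi), rng)
     for xi, e in zip(x, exposure)]
b0, b1 = fit_poisson_glm(x, y, exposure)
```

The environmental indicators and image embeddings the study evaluates would enter this model simply as additional covariates alongside `x`, which is why the representation of geography, rather than the model class, carries most of the predictive value.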


A Kernel Nonconformity Score for Multivariate Conformal Prediction

Meyer, Louis, Xu, Wenkai

arXiv.org Machine Learning

Multivariate conformal prediction requires nonconformity scores that compress residual vectors into scalars while preserving the implicit geometric structure of the residual distribution. We introduce a Multivariate Kernel Score (MKS) that produces prediction regions which explicitly adapt to this geometry. We show that the proposed score resembles the Gaussian process posterior variance, unifying Bayesian uncertainty quantification with frequentist-style coverage guarantees. Moreover, the MKS can be decomposed into an anisotropic Maximum Mean Discrepancy (MMD) that interpolates between kernel density estimation and covariance-weighted distance. We prove finite-sample coverage guarantees and establish convergence rates that depend on the effective rank of the kernel-based covariance operator rather than the ambient dimension, enabling dimension-free adaptation. On regression tasks, the MKS significantly reduces prediction-region volume compared to ellipsoidal baselines while maintaining nominal coverage, with larger gains in higher dimensions and at tighter coverage levels.
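The mechanics of a kernel-based nonconformity score can be sketched with split conformal prediction on synthetic 2-D residuals. This uses a plain negative kernel-density score in the spirit of, but not identical to, the paper's MKS; the residual distribution and all names are invented for illustration:

```python
import math
import random

def gauss_kernel(u, v, h=0.5):
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-d2 / (2.0 * h * h))

def kernel_score(r, ref, h=0.5):
    """Nonconformity = negative kernel density estimate over reference
    residuals: points in low-density regions are more nonconforming."""
    return -sum(gauss_kernel(r, c, h) for c in ref) / len(ref)

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected (1 - alpha) empirical quantile."""
    s = sorted(scores)
    k = math.ceil((1.0 - alpha) * (len(s) + 1)) - 1
    return s[min(k, len(s) - 1)]

# anisotropic 2-D residuals (correlated coordinates)
rng = random.Random(0)
def draw():
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    return (z1, 0.8 * z1 + 0.6 * z2)

cal = [draw() for _ in range(300)]      # calibration residuals
test = [draw() for _ in range(300)]
q = conformal_quantile([kernel_score(r, cal) for r in cal], alpha=0.1)
coverage = sum(kernel_score(r, cal) <= q for r in test) / len(test)
```

Because the score follows the density of the correlated residual cloud, the resulting region hugs its elongated shape instead of a fixed ellipsoid, which is the geometric adaptation the abstract describes. (This sketch scores calibration points against themselves for brevity; a stricter split would estimate the density on held-out data.)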


There Will Be a Scientific Theory of Deep Learning

Simon, Jamie, Kunin, Daniel, Atanasov, Alexander, Boix-Adserà, Enric, Bordelon, Blake, Cohen, Jeremy, Ghosh, Nikhil, Guth, Florentin, Jacot, Arthur, Kamb, Mason, Karkada, Dhruva, Michaud, Eric J., Ottlik, Berkan, Turnbull, Joseph

arXiv.org Machine Learning

In this paper, we make the case that a scientific theory of deep learning is emerging. By this we mean a theory which characterizes important properties and statistics of the training process, hidden representations, final weights, and performance of neural networks. We pull together major strands of ongoing research in deep learning theory and identify five growing bodies of work that point toward such a theory: (a) solvable idealized settings that provide intuition for learning dynamics in realistic systems; (b) tractable limits that reveal insights into fundamental learning phenomena; (c) simple mathematical laws that capture important macroscopic observables; (d) theories of hyperparameters that disentangle them from the rest of the training process, leaving simpler systems behind; and (e) universal behaviors shared across systems and settings which clarify which phenomena call for explanation. Taken together, these bodies of work share certain broad traits: they are concerned with the dynamics of the training process; they primarily seek to describe coarse aggregate statistics; and they emphasize falsifiable quantitative predictions. We argue that the emerging theory is best thought of as a mechanics of the learning process, and suggest the name learning mechanics. We discuss the relationship between this mechanics perspective and other approaches for building a theory of deep learning, including the statistical and information-theoretic perspectives. In particular, we anticipate a symbiotic relationship between learning mechanics and mechanistic interpretability. We also review and address common arguments that fundamental theory will not be possible or is not important. We conclude with a portrait of important open directions in learning mechanics and advice for beginners. We host further introductory materials, perspectives, and open questions at learningmechanics.pub.