Matthew Tkachuk continues to chase Team USA Hockey dominance as 2026 IIHF World Championship begins

FOX News

The Panthers star told Pat McAfee the U.S. is heading to Switzerland to win, not for a vacation

If anyone thought Team USA was satisfied with Olympic gold and ready to coast through the rest of the international hockey calendar, Matthew Tkachuk has a message. The Florida Panthers star joined The Pat McAfee Show on Thursday and discussed his plan to play for Team USA at the 2026 IIHF World Championship in Switzerland. USA Hockey's preliminary roster, announced May 7, includes Tkachuk for the first time, since the Panthers failed to reach the NHL playoffs this season. The tournament begins May 15 in Zurich and Fribourg, and the Americans are trying to win back-to-back gold medals at the event for the first time ever. Tkachuk made his mindset pretty clear.


PGA Tour player goes shirtless in New Orleans, fails at miracle shot from water

FOX News

Michael Brennan's ball found the greenside pond, but with teammate Johnny Keefer in Position A, he decided to go for it

It's highly unlikely that Michael Brennan will be the only 24-year-old man to take his shirt off in public in New Orleans on Thursday, but he will be the only one to do so who has a PGA Tour victory under his belt. During the opening round of this week's Zurich Classic, a team event on Tour played at TPC Louisiana, Brennan and teammate Johnny Keefer began on the back nine and got things rolling early, getting to 4-under through their opening six holes.
Michael Brennan of the United States catches a ball on the third green during the third round of the RBC Heritage 2026 at Harbour Town Golf Links on April 18, 2026, in Hilton Head Island, South Carolina.

After back-to-back pars on the 16th and 17th holes, the duo arrived at the par-5 closing hole, which is when things got messy.


Will fusion power get cheap? Don't count on it.

MIT Technology Review

Will fusion power get cheap? New research suggests that cost declines could be slow for the technology. Fusion power could provide a steady, zero-emissions source of electricity in the future, if companies can get plants built and running. But a new study suggests that even if that future arrives, it might not come cheap. Technologies tend to get less expensive over time. Lithium-ion batteries are now about 90% cheaper than they were in 2013.
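Cost-decline projections like these usually rest on Wright's law, which ties unit cost to cumulative production. A minimal sketch of that model; the 20% learning rate below is an illustrative assumption, not a figure from the article or the study:

```python
# Wright's law ("learning curve"): each doubling of cumulative
# production cuts unit cost by a fixed fraction, the learning rate.
import math

def wrights_law_cost(initial_cost, cumulative_units, learning_rate=0.20):
    """Unit cost after `cumulative_units` have been produced.

    A learning_rate of 0.20 means costs fall 20% per doubling of
    cumulative production.
    """
    b = -math.log2(1.0 - learning_rate)        # experience exponent
    return initial_cost * cumulative_units ** (-b)

# After 10 doublings (1024x cumulative production) at a 20% learning
# rate, unit cost falls to roughly 11% of the first unit's cost.
print(round(wrights_law_cost(100.0, 1024), 2))
```

The study's pessimism can be read in these terms: if fusion's learning rate turns out to be small, even enormous deployment yields only modest cost declines.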


Sony AI table tennis robot outplays elite human players

Robohub

In an article published today in Nature, Sony AI introduces Ace, the first robot to beat elite human players in a competitive physical sport. Although AI systems have shown advanced performance in digital domains and board games (such as complex video games, chess and Go), translating this to physical performance has remained a significant challenge. Such a feat requires perception, planning, and control to work together in a high-speed domain on the scale of milliseconds. Table tennis is a demanding and complex real-world test for robotics, requiring rapid decision-making, precise physical execution, and continuous adaptation to an unpredictable opponent. The ball's high speed, spin, and complex trajectories are central to competitive play.


AI-powered robot beats elite table tennis players

The Guardian

In a feat hailed as a milestone in robotics, Sony AI's Ace wins three out of five matches played under official rules

An AI-powered robot has beaten elite players at table tennis, a significant achievement for a machine facing human athletes in a real-world competitive sport. The robotic system, named Ace and developed by Sony AI, won three out of five matches against elite players, but lost the two it played against professionals, clawing back only one game across the seven contests. The feat has been hailed as a milestone for robotics, a field that has long seen table tennis, with the lightning-fast reactions, perception and skill it demands, as one of the toughest tests of how far the technology has advanced. In the matches, played under official competition rules, Ace displayed a mastery of spin, handled difficult shots such as balls catching on the net, and pulled off one rapid backspin shot that a professional had thought impossible. A research paper on the robot was published in Nature on Wednesday, but scientists working on the project said Ace had improved since the report was submitted.


We might finally know how to use quantum computers to boost AI

New Scientist

Quantum computers might eventually be able to handle some AI applications that currently require huge amounts of conventional computing power. Such a development would be a major boost to machine learning and similar artificial intelligence algorithms. Quantum computers hold the promise of eventually being able to complete certain calculations that are impossible for conventional computers. For years, researchers have been debating whether these advantages over conventional computers extend to tasks that involve lots of data, and the algorithms that learn from them - in other words, the machine learning that underlies many AI programs. Now, Hsin-Yuan Huang at the quantum computing firm Oratomic and his colleagues argue that the answer ought to be "yes". Their mathematical work aims to lay the foundations for a future where quantum computers offer a broad boost to AI. "Machine learning is really utilised everywhere in science and technology and also everyday life.


Conformal Margin Risk Minimization: An Envelope Framework for Robust Learning under Label Noise

Shi, Yuanjie, Li, Peihong, Zhang, Zijian, Doppa, Janardhan Rao, Yan, Yan

arXiv.org Machine Learning

Most methods for learning with noisy labels require privileged knowledge such as noise transition matrices, clean subsets or pretrained feature extractors, resources typically unavailable when robustness is most needed. We propose Conformal Margin Risk Minimization (CMRM), a plug-and-play envelope framework that improves any classification loss under label noise by adding a single quantile-calibrated regularization term, with no privileged knowledge or training pipeline modification. CMRM measures the confidence margin between the observed label and competing labels, and thresholds it with a conformal quantile estimated per batch to focus training on high-margin samples while suppressing likely mislabeled ones. We derive a learning bound for CMRM under arbitrary label noise requiring only mild regularity of the margin distribution. Across five base methods and six benchmarks with synthetic and real-world noise, CMRM consistently improves accuracy (up to +3.39%), reduces conformal prediction set size (up to -20.44%) and does not hurt under 0% noise, showing that CMRM captures a method-agnostic uncertainty signal that existing mechanisms did not exploit.
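The core mechanism the abstract describes, a per-batch conformal quantile threshold on confidence margins, can be sketched in a few lines. Everything here is illustrative: the function names are mine, the hard 0/1 weighting stands in for whatever regularization term CMRM actually adds, and this is not the authors' code:

```python
# Sketch of margin-quantile sample weighting under label noise:
# compute each sample's confidence margin (observed-label logit minus
# best competing logit), threshold it at a per-batch quantile, and
# suppress low-margin (likely mislabeled) samples.
import numpy as np

def margin(logits, labels):
    """Margin between the observed label and the best competing label."""
    idx = np.arange(len(labels))
    observed = logits[idx, labels]
    masked = logits.copy()
    masked[idx, labels] = -np.inf              # exclude the observed label
    return observed - masked.max(axis=1)

def cmrm_weights(logits, labels, alpha=0.3):
    """1 for samples at or above the batch alpha-quantile margin, else 0."""
    m = margin(logits, labels)
    tau = np.quantile(m, alpha)                # batch-calibrated threshold
    return (m >= tau).astype(float)

logits = np.array([[4.0, 1.0, 0.0],   # confident, consistent with label 0
                   [0.1, 0.0, 0.0],   # ambiguous
                   [0.0, 3.0, 0.0]])  # labeled 0 but model prefers class 1
labels = np.array([0, 0, 0])
print(cmrm_weights(logits, labels))   # prints [1. 1. 0.]: last sample suppressed
```

In training, such weights (or a smoothed version) would multiply the per-sample loss, which is what makes the scheme plug-and-play for any base classification loss.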


High-dimensional reliability-based design optimization using stochastic emulators

Moustapha, M., Sudret, B.

arXiv.org Machine Learning

Reliability-based design optimization (RBDO) is traditionally formulated as a nested optimization and reliability problem. Although surrogate models are generally employed to improve efficiency, the approach remains computationally prohibitive in high-dimensional settings. This paper proposes a novel RBDO framework based on a stochastic simulator viewpoint, in which the deterministic limit-state function and the uncertainty in the model inputs are combined into a unified stochastic representation. Under this formulation, the system response conditioned on a given design is modeled directly through its output distribution, rather than through an explicit limit-state function. Stochastic emulators are constructed in the design space to approximate the conditional response distribution, enabling the semi-analytical evaluation of failure probabilities or associated quantiles without resorting to Monte Carlo simulation. Two classes of stochastic emulators are investigated, namely generalized lambda models and stochastic polynomial chaos expansions. Both approaches provide a deterministic mapping between design variables and reliability constraints, which breaks the classical double-loop structure of RBDO and allows the use of standard deterministic optimization algorithms. The performance of the proposed approach is evaluated on a set of benchmark problems with dimensionality ranging from low to very high, including a case with stochastic excitation. The results are compared against a Kriging-based approach formulated in the full input space. The proposed method yields substantial computational gains, particularly in high-dimensional settings. While its efficiency is comparable to Kriging for low-dimensional problems, it significantly outperforms Kriging as the dimensionality increases.
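The key move, replacing the explicit limit-state function with an emulator of the conditional response distribution, can be illustrated on a toy problem. A Gaussian emulator stands in here for the paper's generalized lambda and stochastic PCE models; the system, the fitting procedure, and all names are hedged assumptions, not the authors' method:

```python
# Toy stochastic-emulator sketch: model Y | design d as a parametric
# distribution whose parameters are regressed on d, then evaluate
# failure probabilities semi-analytically (no inner Monte Carlo loop).
import math
import numpy as np

rng = np.random.default_rng(0)

def true_response(d, xi):
    """Toy system: response depends on design d and random input xi."""
    return 10.0 - 2.0 * d + xi                 # failure when response < 0

# --- offline: sample (design, response) pairs and fit the emulator ---
designs = rng.uniform(0.0, 6.0, size=2000)
responses = true_response(designs, rng.normal(0.0, 1.0, size=2000))

# Conditional mean regressed linearly on d; residual spread gives sigma.
A = np.column_stack([np.ones_like(designs), designs])
coef, *_ = np.linalg.lstsq(A, responses, rcond=None)
sigma = np.std(responses - A @ coef)

# --- online: semi-analytical failure probability per design ---
def failure_probability(d):
    mu = coef[0] + coef[1] * d
    return 0.5 * (1.0 + math.erf((0.0 - mu) / (sigma * math.sqrt(2.0))))

print(failure_probability(4.0))   # close to the exact Phi(-2) ~ 0.0228
```

Because `failure_probability` is a cheap deterministic function of the design, it can be handed directly to a standard optimizer, which is the double-loop-breaking property the abstract describes.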


Scalable Variational Bayesian Fine-Tuning of LLMs via Orthogonalized Low-Rank Adapters

Xiang, Haotian, Li, Bingcong, Lu, Qin

arXiv.org Machine Learning

When deploying large language models (LLMs) to safety-critical applications, uncertainty quantification (UQ) is of utmost importance to self-assess the reliability of the LLM-based decisions. However, such decisions typically suffer from overconfidence, particularly after parameter-efficient fine-tuning (PEFT) for downstream domain-specific tasks with limited data. Existing methods to alleviate this issue either rely on Laplace approximation based post-hoc framework, which may yield suboptimal calibration depending on the training trajectory, or variational Bayesian training that requires multiple complete forward passes through the entire LLM backbone at inference time for Monte Carlo estimation, posing scalability challenges for deployment. To address these limitations, we build on the Bayesian last layer (BLL) model, where the LLM-based deterministic feature extractor is followed by random last layer parameters for uncertainty reasoning. Since existing low-rank adapters (LoRA) for PEFT have limited expressiveness due to rank collapse, we address this with Polar-decomposed Low-rank Adapter Representation (PoLAR), an orthogonalized parameterization paired with Riemannian optimization to enable more stable and expressive adaptation. Building on this PoLAR-BLL model, we leverage the variational (V) inference framework to put forth a scalable Bayesian fine-tuning approach which jointly seeks the PoLAR parameters and approximate posterior of the last layer parameters via alternating optimization. The resulting PoLAR-VBLL is a flexible framework that nicely integrates architecture-enhanced optimization with scalable Bayesian inference to endow LLMs with well-calibrated UQ. Our empirical results verify the effectiveness of PoLAR-VBLL in terms of generalization and uncertainty estimation on both in-distribution and out-of-distribution data for various common-sense reasoning tasks.
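The Bayesian-last-layer setup the method builds on can be sketched compactly: a deterministic feature extractor feeds a Gaussian posterior over only the last layer's weights, so predictive uncertainty comes from sampling that single layer rather than re-running the whole backbone. The feature map, posterior parameters, and shapes below are toy stand-ins, not the paper's architecture:

```python
# Bayesian last layer (BLL) sketch: deterministic features, random
# last-layer weights w ~ N(m, S), Monte Carlo predictive uncertainty.
import numpy as np

rng = np.random.default_rng(1)

def features(x):
    """Deterministic 'backbone' stand-in mapping inputs to 3 features."""
    return np.tanh(np.column_stack([x, x ** 2, np.ones_like(x)]))

# Approximate Gaussian posterior over last-layer weights, q(w) = N(m, S).
m = np.array([1.0, -0.5, 0.2])
S = 0.05 * np.eye(3)

def predictive(x, n_samples=1000):
    """Predictive mean and std, sampling only the last layer."""
    phi = features(x)                                  # (n, 3)
    w = rng.multivariate_normal(m, S, size=n_samples)  # (k, 3)
    preds = phi @ w.T                                  # (n, k)
    return preds.mean(axis=1), preds.std(axis=1)

mean, std = predictive(np.array([0.0, 2.0]))
print(mean.round(2), std.round(2))  # uncertainty without extra backbone passes
```

The paper's contribution sits upstream of this: PoLAR's orthogonalized low-rank adapters shape the feature extractor, and variational inference fits `m` and `S` jointly with the adapter parameters.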


Unveiling Hidden Convexity in Deep Learning: a Sparse Signal Processing Perspective

Zeger, Emi, Pilanci, Mert

arXiv.org Machine Learning

Deep neural networks (DNNs), particularly those using Rectified Linear Unit (ReLU) activation functions, have achieved remarkable success across diverse machine learning tasks, including image recognition, audio processing, and language modeling. Despite this success, the non-convex nature of DNN loss functions complicates optimization and limits theoretical understanding. In this paper, we highlight how recently developed convex equivalences of ReLU NNs and their connections to sparse signal processing models can address the challenges of training and understanding NNs. Recent research has uncovered several hidden convexities in the loss landscapes of certain NN architectures, notably two-layer ReLU networks and other deeper or varied architectures. This paper seeks to provide an accessible and educational overview that bridges recent advances in the mathematics of deep learning with traditional signal processing, encouraging broader signal processing applications.
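The "hidden convexity" the abstract refers to can be made concrete. For a two-layer ReLU network with data matrix $X \in \mathbb{R}^{n \times d}$, labels $y$, and weight decay $\beta$, one widely cited reformulation (due to Pilanci and Ergen; stated here from memory, so treat the exact form as a sketch) recasts training as a convex group-lasso problem over the finitely many ReLU activation patterns $D_i = \mathrm{diag}\!\left(\mathbf{1}[X u_i \ge 0]\right)$ realizable on the data:

```latex
\min_{\{v_i, w_i\}_{i=1}^{P}} \;
  \frac{1}{2} \Big\| \sum_{i=1}^{P} D_i X (v_i - w_i) - y \Big\|_2^2
  \;+\; \beta \sum_{i=1}^{P} \big( \|v_i\|_2 + \|w_i\|_2 \big)
\quad \text{s.t.} \quad
  (2 D_i - I) X v_i \ge 0, \;\; (2 D_i - I) X w_i \ge 0 .
```

The group-sparsity penalty is precisely the bridge to sparse signal processing the abstract highlights: optimal solutions select only a few activation patterns, mirroring sparse recovery with a group lasso.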