Conformal prediction for full and sparse polynomial chaos expansions

Hatstatt, A., Zhu, X., Sudret, B.

arXiv.org Machine Learning

Polynomial Chaos Expansions (PCEs) are widely recognized for their efficient computational performance in surrogate modeling. Yet a robust framework to quantify local model errors is still lacking. While the local uncertainty of a PCE prediction can be captured using bootstrap resampling, methods offering more rigorous statistical guarantees are needed, especially in the context of small training datasets. Recently, conformal prediction has demonstrated strong potential in machine learning, providing statistically robust and model-agnostic prediction intervals. Its generality and versatility make it a compelling choice for PCE-based surrogate models, since it can be adapted to a variety of problems. In this contribution, we present the integration of two conformal prediction methods, namely the full conformal and the Jackknife+ approaches, into both full and sparse PCEs. For full PCEs, we introduce computational shortcuts inspired by the inherent structure of regression methods to optimize the implementation of both conformal methods. For sparse PCEs, we incorporate the two approaches with appropriate modifications to the inference strategy, thereby circumventing the non-symmetric nature of the regression algorithm and ensuring valid prediction intervals. Our developments yield better-calibrated prediction intervals for both full and sparse PCEs, achieving superior coverage over existing approaches such as the bootstrap, while maintaining a moderate computational cost.
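The Jackknife+ idea in the abstract can be sketched in a few lines: fit the model with each training point left out, collect the leave-one-out residuals, and form the interval from quantiles of the shifted leave-one-out predictions. The sketch below uses a plain least-squares model rather than a PCE, and simple empirical quantiles rather than the exact finite-sample ranks of the Jackknife+ paper; all variable names are illustrative.

```python
import numpy as np

def jackknife_plus_interval(X, y, x_test, alpha=0.1):
    """Approximate Jackknife+ prediction interval for a least-squares fit.

    Illustrative only: the paper applies the idea to (sparse) polynomial
    chaos expansions; here the surrogate is an ordinary linear model.
    """
    n = len(y)
    lo, hi = [], []
    for i in range(n):
        mask = np.arange(n) != i
        # Leave-one-out least-squares fit.
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        r_i = abs(y[i] - X[i] @ coef)   # leave-one-out residual
        pred = x_test @ coef            # leave-one-out prediction at x_test
        lo.append(pred - r_i)
        hi.append(pred + r_i)
    # Empirical quantiles of the shifted leave-one-out predictions.
    return np.quantile(lo, alpha), np.quantile(hi, 1 - alpha)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)
# True response at x_test = (1, 0.5) is 1 + 2 * 0.5 = 2.
lo, hi = jackknife_plus_interval(X, y, np.array([1.0, 0.5]), alpha=0.1)
print(f"interval = ({lo:.2f}, {hi:.2f})")
```

The interval width is driven by the leave-one-out residuals, which is what gives the method its finite-sample calibration without a held-out set.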



Autotune: fast, accurate, and automatic tuning parameter selection for Lasso

Sadhukhan, Tathagata, Wilms, Ines, Smeekes, Stephan, Basu, Sumanta

arXiv.org Machine Learning

Least absolute shrinkage and selection operator (Lasso), a popular method for high-dimensional regression, is now used widely for estimating high-dimensional time series models such as the vector autoregression (VAR). Selecting its tuning parameter efficiently and accurately remains a challenge, despite the abundance of available methods for doing so. We propose $\mathsf{autotune}$, a strategy for Lasso to automatically tune itself by optimizing a penalized Gaussian log-likelihood alternately over the regression coefficients and the noise standard deviation. Using extensive simulation experiments on regression and VAR models, we show that $\mathsf{autotune}$ is faster, and provides better generalization and model selection than established alternatives in low signal-to-noise regimes. In the process, $\mathsf{autotune}$ provides a new estimator of the noise standard deviation that can be used for high-dimensional inference, and a new visual diagnostic procedure for checking the sparsity assumption on regression coefficients. Finally, we demonstrate the utility of $\mathsf{autotune}$ on a real-world financial data set. An R package with a C++ backend is publicly available on GitHub.
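The alternating scheme described here resembles the scaled-Lasso idea: hold the noise scale fixed and solve a Lasso whose penalty is proportional to it, then re-estimate the scale from the residuals, and repeat. The sketch below implements that loop with plain cyclic coordinate descent; the exact autotune objective, penalty scaling, and defaults are assumptions for illustration, not the paper's code.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed.
            r_j = y - X @ b + X[:, j] * b[j]
            b[j] = soft_threshold(X[:, j] @ r_j / n, lam) / col_sq[j]
    return b

def autotune_like(X, y, lam0=None, n_outer=10):
    """Alternate between the coefficients and the noise standard deviation.

    Mimics the alternating optimization the abstract describes; the
    universal penalty level lam0 is a common choice, not autotune's.
    """
    n, p = X.shape
    if lam0 is None:
        lam0 = np.sqrt(2 * np.log(p) / n)
    sigma = np.std(y)                       # initial noise-scale guess
    for _ in range(n_outer):
        b = lasso_cd(X, y, sigma * lam0)    # penalty scales with sigma
        sigma = np.sqrt(((y - X @ b) ** 2).mean())
    return b, sigma

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[0], beta[1] = 3.0, -2.0                # sparse truth
y = X @ beta + 0.5 * rng.normal(size=n)     # true noise sd = 0.5
b_hat, sigma_hat = autotune_like(X, y)
```

Because the penalty is proportional to the current sigma estimate, the loop needs no cross-validation grid, which is where the speed advantage over CV-style tuning comes from.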


Distributionally Robust Markov Games with Average Reward

Roch, Zachary, Wang, Yue

arXiv.org Artificial Intelligence

We study distributionally robust Markov games (DR-MGs) with the average-reward criterion, a framework for multi-agent decision-making under uncertainty over extended horizons. In average-reward DR-MGs, agents aim to maximize their worst-case infinite-horizon average reward, to ensure satisfactory performance under environment uncertainties and opponent actions. We first establish a connection between the best-response policies and the optimal policies for the induced single-agent problems. Under a standard irreducibility assumption, we derive a correspondence between the optimal policies and the solutions of the robust Bellman equation, and establish the existence of a stationary Nash equilibrium (NE) based on these results. We further study DR-MGs under the weakly communicating setting, where we construct a set-valued map, show that its value is a subset of the best-response policies and is convex and upper hemi-continuous, and derive the existence of an NE. We then explore algorithmic solutions, first proposing a Robust Nash-Iteration algorithm with convergence guarantees under some additional assumptions and an NE-computing oracle. We further develop a temporal-difference-based algorithm for DR-MGs, and provide convergence guarantees without any additional oracle or assumptions. Finally, we connect average-reward robust NEs to discounted ones, showing that the average-reward robust NE can be approximated by the discounted ones under a large discount factor. Our studies provide a comprehensive theoretical and algorithmic foundation for decision-making in complex, uncertain, and long-running multi-player environments.
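The robust Bellman equation at the heart of this line of work can be illustrated in the single-agent, discounted case (which, per the abstract's last point, approximates the average-reward setting for a large discount factor): value iteration where each backup takes the worst case over an (s,a)-rectangular uncertainty set of transition kernels. The finite kernel set and toy sizes below are assumptions for illustration.

```python
import numpy as np

def robust_value_iteration(R, P_set, gamma=0.95, n_iter=500):
    """Discounted robust value iteration, single-agent analogue of the
    robust Bellman equation: V(s) = max_a min_P [R(s,a) + gamma * P V].

    R: (S, A) rewards; P_set: list of (S, A, S) transition kernels
    forming a finite uncertainty set (an illustrative simplification).
    """
    P = np.asarray(P_set)                   # (K, S, A, S)
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(n_iter):
        # Expected next value under every kernel, then worst case over
        # kernels, then best response over actions.
        Q = R[None] + gamma * np.einsum('ksan,n->ksa', P, V)
        V = Q.min(axis=0).max(axis=1)
    return V

rng = np.random.default_rng(0)
R = np.array([[1.0, 0.0], [0.0, 1.0]])      # 2 states, 2 actions
P_set = []
for _ in range(3):                          # 3 candidate kernels
    P = rng.uniform(size=(2, 2, 2))
    P /= P.sum(axis=-1, keepdims=True)      # normalize rows
    P_set.append(P)
V = robust_value_iteration(R, P_set)
```

The min-over-kernels inside each backup is what distinguishes the robust operator from the classical one; it remains a gamma-contraction, so the iteration converges to the unique robust fixed point.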


A Field Guide to Deploying AI Agents in Clinical Practice

Gallifant, Jack, Kellogg, Katherine C., Butler, Matt, Centi, Amanda, Chen, Shan, Doyle, Patrick F., Dutta, Sayon, Guo, Joyce, Hadfield, Matthew J., Kim, Esther H., Kozono, David E., Aerts, Hugo JWL, Landman, Adam B., Mak, Raymond H., Mishuris, Rebecca G., Nelson, Tanna L., Savova, Guergana K., Sharon, Elad, Silverman, Benjamin C., Topaloglu, Umit, Warner, Jeremy L., Bitterman, Danielle S.

arXiv.org Artificial Intelligence

Large language models (LLMs) integrated into agent-driven workflows hold immense promise for healthcare, yet a significant gap exists between their potential and practical implementation within clinical settings. To address this, we present a practitioner-oriented field manual for deploying generative agents that use electronic health record (EHR) data. This guide is informed by our experience deploying the "irAE-Agent", an automated system to detect immune-related adverse events from clinical notes at Mass General Brigham, and by structured interviews with 21 clinicians, engineers, and informatics leaders involved in the project. Our analysis reveals a critical misalignment in clinical AI development: less than 20% of our effort was dedicated to prompt engineering and model development, while over 80% was consumed by the sociotechnical work of implementation. We distill this effort into five "heavy lifts": data integration, model validation, ensuring economic value, managing system drift, and governance. By providing actionable solutions for each of these challenges, this field manual shifts the focus from algorithmic development to the essential infrastructure and implementation work required to bridge the "valley of death" and successfully translate generative AI from pilot projects into routine clinical care.


Eating two handfuls of a common snack daily improves memory in just four months

Daily Mail - Science & tech

Eating a common snack daily may boost memory and brain blood flow in older adults, a new study has found.


Nonstabilizerness Estimation using Graph Neural Networks

Lipardi, Vincenzo, Dibenedetto, Domenica, Stamoulis, Georgios, van Nieuwenburg, Evert, Winands, Mark H. M.

arXiv.org Artificial Intelligence

This article proposes a Graph Neural Network (GNN) approach to estimate nonstabilizerness in quantum circuits, measured by the stabilizer Rényi entropy (SRE). Nonstabilizerness is a fundamental resource for quantum advantage, and efficient SRE estimation is highly beneficial in practical applications. We address the nonstabilizerness estimation problem through three supervised learning formulations, starting from easier classification tasks and moving to the more challenging regression task. Experimental results show that the proposed GNN captures meaningful features from the graph-based circuit representation, resulting in robust generalization performance across diverse scenarios. In the classification tasks, the GNN is trained on product states and generalizes to circuits evolved under Clifford operations, entangled states, and circuits with a higher number of qubits. In the regression task, the GNN significantly improves SRE estimation on out-of-distribution circuits with higher qubit and gate counts compared to previous work, for both random quantum circuits and structured circuits derived from the transverse-field Ising model. Moreover, the graph representation of quantum circuits naturally integrates hardware-specific information. Simulations on noisy quantum hardware highlight the potential of the proposed GNN to predict the SRE measured on quantum devices.
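The core mechanism, message passing over a graph built from the circuit, can be sketched without any GNN library: nodes carry gate-type features, edges follow qubit wires, and each layer mixes a node's features with an aggregate of its neighbours' before a pooled readout feeds the SRE regression head. The graph construction, feature encoding, and single mean-aggregation layer below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def gnn_layer(H, A, W_self, W_nbr):
    """One message-passing layer: each node combines its own features
    with the mean of its neighbours', followed by a ReLU.
    H: (nodes, d) features; A: symmetric adjacency of the gate graph."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    msg = (A @ H) / deg                     # mean over neighbouring gates
    return np.maximum(H @ W_self + msg @ W_nbr, 0.0)

def readout(H):
    """Graph-level embedding by mean pooling; a regression head on top
    would predict the stabilizer Renyi entropy."""
    return H.mean(axis=0)

# Toy 3-gate circuit: one-hot gate types (say H, T, CNOT) on a chain of
# gates connected by shared qubit wires.
H0 = np.eye(3)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
W_self = rng.normal(size=(3, 4))            # untrained illustrative weights
W_nbr = rng.normal(size=(3, 4))
emb = readout(gnn_layer(H0, A, W_self, W_nbr))
```

Because the representation is a graph rather than a fixed-size tensor, the same trained weights apply to circuits with more qubits or gates, which is what enables the out-of-distribution generalization the abstract reports.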


Revenue Optimization with Approximate Bid Predictions

Munoz Medina, Andres, Vassilvitskii, Sergei (Google Research)

Neural Information Processing Systems

In the context of advertising auctions, finding good reserve prices is a notoriously challenging learning problem. This is due to the heterogeneity of ad opportunity types and the non-convexity of the objective function. In this work, we show how to reduce reserve price optimization to the standard setting of prediction under squared loss, a well-understood problem in the learning community. We further bound the gap between the expected bid and revenue in terms of the average loss of the predictor. This is the first result that formally relates the revenue gained to the quality of a standard machine-learned model.
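The reduction can be made concrete with a toy simulation: train a squared-loss predictor of the bid from the ad-opportunity features, then set the reserve from the prediction, so that revenue (the reserve when it clears, nothing otherwise) tracks predictor quality. The data-generating process and the 0.8 shading factor below are illustrative assumptions, not choices from the paper.

```python
import numpy as np

def revenue(reserves, bids):
    """Average revenue with a reserve price: the seller earns the
    reserve when the bid clears it, and nothing otherwise."""
    return np.where(bids >= reserves, reserves, 0.0).mean()

rng = np.random.default_rng(1)
x = rng.uniform(1, 2, size=2000)            # ad-opportunity feature
bids = x + 0.1 * rng.normal(size=2000)      # bids correlate with x

# Squared-loss reduction: fit a least-squares bid predictor ...
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, bids, rcond=None)
pred = X @ coef

# ... then set per-opportunity reserves by shading the prediction,
# versus a single flat reserve that ignores the features.
rev_pred = revenue(0.8 * pred, bids)
rev_flat = revenue(np.full_like(bids, 1.0), bids)
```

With heterogeneous opportunities, the predictor-based reserve adapts to each type and clears at a higher price where bids are high, which is exactly the gap the paper's bound controls via the predictor's average loss.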