Plotting

Tech CEO promised AI but hired workers in the Philippines instead, FBI claims

Mashable

The former CEO of fintech app Nate has been charged with fraud for making misleading claims about the app's artificial intelligence technology -- or lack thereof. In a bizarre twist from the usual AI narrative, the FBI alleges that this time human beings were doing the work of AI, and not the other way around. According to a press release from the U.S. Attorney's Office, Southern District of New York, Albert Saniger has been indicted for a scheme to defraud investors. "As alleged, Albert Saniger misled investors by exploiting the promise and allure of AI technology to build a false narrative about innovation that never existed," Acting U.S. Attorney Matthew Podolsky said in the release. Government attorneys say Nate claimed to use AI technology to complete the e-commerce checkout process for customers.


Trump feels in 'good shape,' after physical, says he got 'every question right' on cognitive test

FOX News

President Trump's press secretary Karoline Leavitt touted him as "the most transparent and accessible president in American history," particularly compared to former President Biden. President Trump said on Friday that the first physical examination of his second term went well, and overall he feels he's in "very good shape." The president told reporters on board Air Force One while en route to his home in West Palm Beach Friday evening that the yearly presidential physical at Walter Reed Medical Center showed he has a "good heart, a good soul," and "overall, I think I'm in very – I felt I was in very good shape." He also took a cognitive test. "I don't know what to tell you other than I got every answer right," the president told reporters.


High dimensional online calibration in polynomial time

arXiv.org Machine Learning

In online (sequential) calibration, a forecaster predicts probability distributions over a finite outcome space $[d]$ over a sequence of $T$ days, with the goal of being calibrated. While asymptotically calibrated strategies are known to exist, they suffer from the curse of dimensionality: the best known algorithms require $\exp(d)$ days to achieve non-trivial calibration. In this work, we present the first asymptotically calibrated strategy that guarantees non-trivial calibration after a polynomial number of rounds. Specifically, for any desired accuracy $\epsilon > 0$, our forecaster becomes $\epsilon$-calibrated after $T = d^{O(1/\epsilon^2)}$ days. We complement this result with a lower bound, proving that at least $T = d^{\Omega(\log(1/\epsilon))}$ rounds are necessary to achieve $\epsilon$-calibration. Our results resolve the open questions posed by [Abernethy-Mannor'11, Hazan-Kakade'12]. Our algorithm is inspired by recent breakthroughs in swap regret minimization [Peng-Rubinstein'24, Dagan et al.'24]. Despite its strong theoretical guarantees, the approach is remarkably simple and intuitive: it randomly selects among a set of sub-forecasters, each of which predicts the empirical outcome frequency over recent time windows.
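The final sentence describes the strategy concretely enough to sketch. A minimal Python illustration, where the window sizes, uniform random selection among sub-forecasters, and all function names are illustrative assumptions rather than the paper's exact construction:

```python
import random
from collections import Counter

def make_window_forecaster(window):
    """Sub-forecaster: predict the empirical outcome frequency over the
    last `window` days of history."""
    def forecast(history, d):
        recent = history[-window:]
        if not recent:
            return [1.0 / d] * d  # uniform prediction before any data
        counts = Counter(recent)
        return [counts[i] / len(recent) for i in range(d)]
    return forecast

def calibrated_forecaster(history, d, windows=(1, 4, 16, 64), rng=random):
    """Randomly select among window-based sub-forecasters, as the abstract
    describes; the window set and uniform selection are assumptions here."""
    f = rng.choice([make_window_forecaster(w) for w in windows])
    return f(history, d)
```

Each sub-forecaster alone is a simple frequency estimator; the paper's guarantee comes from the randomized combination, which this sketch only gestures at.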


Adaptive Insurance Reserving with CVaR-Constrained Reinforcement Learning under Macroeconomic Regimes

arXiv.org Machine Learning

This paper proposes a reinforcement learning (RL) framework for insurance reserving that integrates tail-risk sensitivity, macroeconomic regime modeling, and regulatory compliance. The reserving problem is formulated as a finite-horizon Markov Decision Process (MDP), in which reserve adjustments are optimized using Proximal Policy Optimization (PPO) subject to Conditional Value-at-Risk (CVaR) constraints. To enhance policy robustness across varying economic conditions, the agent is trained using a regime-aware curriculum that progressively increases volatility exposure. The reward structure penalizes reserve shortfall, capital inefficiency, and solvency floor violations, with design elements informed by Solvency II and Own Risk and Solvency Assessment (ORSA) frameworks. Empirical evaluations on two industry datasets--Workers' Compensation and Other Liability--demonstrate that the RL-CVaR agent achieves superior performance relative to classical reserving methods across multiple criteria, including tail-risk control (CVaR$_{0.95}$), capital efficiency, and regulatory violation rate. The framework also accommodates fixed-shock stress testing and regime-stratified analysis, providing a principled and extensible approach to reserving under uncertainty.
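The CVaR$_{0.95}$ criterion that constrains the agent can be illustrated with a small empirical estimator. A hedged sketch: the "average loss above the empirical VaR quantile" form used here is one standard estimator, not necessarily the paper's exact implementation.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical Conditional Value-at-Risk: the mean loss in the worst
    (1 - alpha) tail of the loss distribution."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)   # Value-at-Risk at level alpha
    tail = losses[losses >= var]       # worst-case tail beyond VaR
    return tail.mean()
```

Unlike VaR, CVaR accounts for how bad losses are once the threshold is exceeded, which is why it is the natural tail-risk control for reserving.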


Dose-finding design based on level set estimation in phase I cancer clinical trials

arXiv.org Machine Learning

Keiichiro Seno (Department of Biostatistics, Nagoya University), Kota Matsui (Department of Biostatistics, Kyoto University), Shogo Iwazaki (MI-6 Ltd.), Yu Inatsu (Department of Computer Science, Nagoya Institute of Technology), Shion Takeno (Department of Mechanical Systems Engineering, Nagoya University; Center for Advanced Intelligence Project, RIKEN), Shigeyuki Matsui (Department of Biostatistics, Kyoto University; Research Center for Medical and Health Data Science, The Institute of Statistical Mathematics). Abstract: The primary objective of phase I cancer clinical trials is to evaluate the safety of a new experimental treatment and to find the maximum tolerated dose (MTD). We show that the MTD estimation problem can be regarded as a level set estimation (LSE) problem whose objective is to determine the regions where an unknown function value is above or below a given threshold. Then, we propose a novel ...
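The framing of MTD-finding as level set estimation can be illustrated with a toy classification rule. This sketch assumes confidence-interval inputs for each dose's toxicity probability; the three-way rule and all names are illustrative, not the paper's proposed design:

```python
def classify_doses(tox_prob_est, ci_width, threshold=0.3):
    """LSE-style dose classification: a dose is 'safe' if its upper
    confidence bound falls below the toxicity threshold, 'unsafe' if its
    lower bound exceeds it, and 'unknown' otherwise (needs more data)."""
    labels = []
    for p, w in zip(tox_prob_est, ci_width):
        lo, hi = p - w, p + w
        if hi < threshold:
            labels.append("safe")
        elif lo > threshold:
            labels.append("unsafe")
        else:
            labels.append("unknown")
    return labels
```

In an LSE-based trial design, the next dose to test would typically be chosen among the "unknown" doses, shrinking the undecided region until the MTD boundary is resolved.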


No-Regret Generative Modeling via Parabolic Monge-Amp\`ere PDE

arXiv.org Machine Learning

We introduce a novel generative modeling framework based on a discretized parabolic Monge-Amp\`ere PDE, which emerges as a continuous limit of the Sinkhorn algorithm commonly used in optimal transport. Our method performs iterative refinement in the space of Brenier maps using a mirror gradient descent step. We establish theoretical guarantees for generative modeling through the lens of no-regret analysis, demonstrating that the iterates converge to the optimal Brenier map under a variety of step-size schedules. As a technical contribution, we derive a new Evolution Variational Inequality tailored to the parabolic Monge-Amp\`ere PDE, connecting geometry, transportation cost, and regret. Our framework accommodates non-log-concave target distributions, constructs an optimal sampling process via the Brenier map, and integrates favorable learning techniques from generative adversarial networks and score-based diffusion models. As direct applications, we illustrate how our theory paves new pathways for generative modeling and variational inference.
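For background, the Sinkhorn algorithm whose continuous limit the paper builds on can be written in a few lines. A minimal sketch for entropic optimal transport between two discrete distributions; the regularization strength and iteration count are illustrative choices:

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.1, iters=200):
    """Sinkhorn iterations for entropic OT: alternately rescale the rows
    and columns of the Gibbs kernel K = exp(-C/eps) so the coupling's
    marginals match the source (a) and target (b) distributions."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)  # match column marginals
        u = a / (K @ v)    # match row marginals
    return u[:, None] * K * v[None, :]  # transport coupling matrix
```

The paper's mirror-descent refinement operates in the continuous analogue of this setting, iterating in the space of Brenier maps rather than on a discrete coupling.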


Neural Posterior Estimation on Exponential Random Graph Models: Evaluating Bias and Implementation Challenges

arXiv.org Machine Learning

Exponential random graph models (ERGMs) are flexible probabilistic frameworks for modeling networks through a variety of network summary statistics. Conventional Bayesian estimation for ERGMs relies on the exchange algorithm, iteratively sampling an auxiliary variable to sidestep the intractable likelihood; however, this approach does not scale to large networks. Neural posterior estimation (NPE) is a recent advancement in simulation-based inference that uses a neural-network-based density estimator to infer the posterior for models with doubly intractable likelihoods, provided simulations can be generated. While NPE has been successfully adopted in fields such as cosmology, little research has investigated its use for ERGMs. Performing NPE on ERGMs not only offers a different route to estimation for intractable ERGM likelihoods but also enables more efficient and scalable inference through the amortisation properties of NPE; we therefore investigate how NPE can be effectively implemented for ERGMs. In this study, we present the first systematic implementation of NPE for ERGMs, rigorously evaluating potential biases, interpreting their magnitudes, and comparing NPE fits against conventional Bayesian ERGM fits. More importantly, our work highlights ERGM-specific areas that may pose particular challenges for the adoption of NPE.


Regretful Decisions under Label Noise

arXiv.org Machine Learning

Machine learning models are routinely used to support decisions that affect individuals - be it to screen a patient for a serious illness or to gauge their response to treatment. In these tasks, we are limited to learning models from datasets with noisy labels. In this paper, we study the instance-level impact of learning under label noise. We introduce a notion of regret for this regime which measures the number of unforeseen mistakes due to noisy labels. We show that standard approaches to learning under label noise can return models that perform well at a population level while subjecting individuals to a lottery of mistakes. We present a versatile approach to estimate the likelihood of mistakes at the individual level from a noisy dataset by training models over plausible realizations of datasets without label noise. This is supported by a comprehensive empirical study of label noise in clinical prediction tasks. Our results reveal how failure to anticipate mistakes can compromise model reliability and adoption, and demonstrate how we can address these challenges by anticipating and avoiding regretful decisions. Machine learning models are routinely used to support or automate decisions that affect individuals - be it to screen a patient for a mental illness [47], or assess their risk of an adverse treatment response [3]. In such tasks, we train models with labels that reflect noisy observations of the true outcome we wish to predict. In practice, such noise may arise from measurement error [e.g., 20, 35], human annotation [26], or inherent ambiguity [35]. In all these cases, label noise can have detrimental effects on model performance [10]. Over the past decade, these issues have led to extensive work on learning from noisy datasets [see e.g., 10, 28, 36, 39, 45].
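The "plausible realizations" idea admits a compact sketch. This assumes a simple uniform label-flip noise model and a user-supplied training routine; both are assumptions for illustration, and the paper's noise model and estimator may differ:

```python
import numpy as np

def mistake_likelihood(X, y_noisy, flip_rate, train_fn, n_draws=50, rng=None):
    """Estimate per-instance mistake likelihood by resampling clean-label
    datasets consistent with a known flip rate, retraining on each, and
    measuring how often the retrained models disagree with the noisy labels."""
    rng = rng or np.random.default_rng(0)
    n = len(y_noisy)
    disagree = np.zeros(n)
    for _ in range(n_draws):
        flips = rng.random(n) < flip_rate           # which labels might be wrong
        y_plausible = np.where(flips, 1 - y_noisy, y_noisy)
        model = train_fn(X, y_plausible)            # user-supplied training routine
        disagree += (model(X) != y_noisy)
    return disagree / n_draws                       # per-instance estimate in [0, 1]
```

Instances with high disagreement across plausible realizations are the ones exposed to the "lottery of mistakes" the abstract describes.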


Rethinking Remaining Useful Life Prediction with Scarce Time Series Data: Regression under Indirect Supervision

arXiv.org Machine Learning

Supervised time series prediction relies on directly measured target variables, but real-world use cases such as predicting remaining useful life (RUL) involve indirect supervision, where the target variable is labeled as a function of another dependent variable. Trending temporal regression techniques rely on sequential time series inputs to capture temporal patterns, requiring interpolation when dealing with sparsely and irregularly sampled covariates along the timeline. However, interpolation can introduce significant biases, particularly with highly scarce data. In this paper, we address the RUL prediction problem with data scarcity as time series regression under indirect supervision. We introduce a unified framework called parameterized static regression, which takes single data points as inputs for regression of target values, inherently handling data scarcity without requiring interpolation. The time dependency under indirect supervision is captured via a parametrical rectification (PR) process, which approximates a parametric function during inference using historical a posteriori estimates that follow the same underlying distribution used for labeling during training. Additionally, we propose a novel batch training technique for tasks under indirect supervision to prevent overfitting and enhance efficiency. We evaluate our model on public benchmarks for RUL prediction with simulated data scarcity. Our method demonstrates competitive performance in prediction accuracy when dealing with highly scarce time series data.


OpenAI is retiring GPT-4 from ChatGPT

Mashable

According to ChatGPT's release notes (via TechCrunch), "GPT-4 will be retired from ChatGPT" on April 30. The model, which was released over two years ago, will still be available in the API, but recent updates to GPT-4o have rendered GPT-4 somewhat obsolete. "Recent upgrades have further improved GPT‑4o's instruction following, problem-solving, and conversational flow, making it a natural successor to GPT‑4," the notes read. For those who have been following OpenAI and the AI industry, the retirement puts the breakneck speed of the industry into sharp relief, while also underscoring that GPT-5 has yet to emerge. GPT-4, released in March 2023, was a notable step up from its predecessor GPT-3.5, the model that powered ChatGPT's explosive introduction to the world.