High dimensional online calibration in polynomial time
In online (sequential) calibration, a forecaster predicts probability distributions over a finite outcome space $[d]$ over a sequence of $T$ days, with the goal of being calibrated. While asymptotically calibrated strategies are known to exist, they suffer from the curse of dimensionality: the best known algorithms require $\exp(d)$ days to achieve non-trivial calibration. In this work, we present the first asymptotically calibrated strategy that guarantees non-trivial calibration after a polynomial number of rounds. Specifically, for any desired accuracy $\epsilon > 0$, our forecaster becomes $\epsilon$-calibrated after $T = d^{O(1/\epsilon^2)}$ days. We complement this result with a lower bound, proving that at least $T = d^{\Omega(\log(1/\epsilon))}$ rounds are necessary to achieve $\epsilon$-calibration. Our results resolve the open questions posed by [Abernethy-Mannor'11, Hazan-Kakade'12]. Our algorithm is inspired by recent breakthroughs in swap regret minimization [Peng-Rubinstein'24, Dagan et al.'24]. Despite its strong theoretical guarantees, the approach is remarkably simple and intuitive: it randomly selects among a set of sub-forecasters, each of which predicts the empirical outcome frequency over recent time windows.
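The final algorithmic idea above lends itself to a short sketch. The following Python toy is ours, not the paper's (the function names and the choice of window lengths are illustrative assumptions): each sub-forecaster predicts the empirical outcome frequency over a recent window, and the overall forecaster randomizes over sub-forecasters.

```python
import random
from collections import Counter

def empirical_forecast(history, window, d):
    """Sub-forecaster: empirical frequency of each outcome in [d]
    over the last `window` observed outcomes (uniform before any data)."""
    recent = history[-window:]
    if not recent:
        return [1.0 / d] * d
    counts = Counter(recent)
    n = len(recent)
    return [counts.get(k, 0) / n for k in range(d)]

def forecast(history, windows, d, rng=random):
    """Randomly select one window-based sub-forecaster and use its prediction."""
    w = rng.choice(windows)
    return empirical_forecast(history, w, d)
```

Each prediction is a valid distribution over [d] by construction; the theoretical work in the paper lies in choosing the windows and the randomization so that the mixture is ε-calibrated after polynomially many rounds.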
Adaptive Insurance Reserving with CVaR-Constrained Reinforcement Learning under Macroeconomic Regimes
Dong, Stella C., Finlay, James R.
This paper proposes a reinforcement learning (RL) framework for insurance reserving that integrates tail-risk sensitivity, macroeconomic regime modeling, and regulatory compliance. The reserving problem is formulated as a finite-horizon Markov Decision Process (MDP), in which reserve adjustments are optimized using Proximal Policy Optimization (PPO) subject to Conditional Value-at-Risk (CVaR) constraints. To enhance policy robustness across varying economic conditions, the agent is trained using a regime-aware curriculum that progressively increases volatility exposure. The reward structure penalizes reserve shortfall, capital inefficiency, and solvency floor violations, with design elements informed by Solvency II and Own Risk and Solvency Assessment (ORSA) frameworks. Empirical evaluations on two industry datasets--Workers' Compensation and Other Liability--demonstrate that the RL-CVaR agent achieves superior performance relative to classical reserving methods across multiple criteria, including tail-risk control (CVaR$_{0.95}$), capital efficiency, and regulatory violation rate. The framework also accommodates fixed-shock stress testing and regime-stratified analysis, providing a principled and extensible approach to reserving under uncertainty.
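To make the CVaR penalty concrete, here is a minimal sketch of the empirical CVaR term and a shaped reward of the kind the abstract describes. The penalty weight `lam` and the reward shape are our illustrative assumptions; the paper's exact reward design is not reproduced here.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical CVaR_alpha: mean of the losses at or above the
    alpha-quantile, i.e. the average of the worst (1 - alpha) tail."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)  # Value-at-Risk threshold
    return losses[losses >= var].mean()

def shaped_reward(shortfall, losses, cvar_budget, lam=10.0):
    """Illustrative reward: penalize reserve shortfall plus any excess
    of tail risk over a CVaR budget (lam is a hypothetical weight)."""
    return -shortfall - lam * max(0.0, cvar(losses) - cvar_budget)
```

In a constrained-PPO setup, a term like `max(0, cvar - budget)` typically enters either the reward (as above) or a Lagrangian that is updated alongside the policy.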
Dose-finding design based on level set estimation in phase I cancer clinical trials
Seno, Keiichiro, Matsui, Kota, Iwazaki, Shogo, Inatsu, Yu, Takeno, Shion, Matsui, Shigeyuki
The primary objective of phase I cancer clinical trials is to evaluate the safety of a new experimental treatment and to find the maximum tolerated dose (MTD). We show that the MTD estimation problem can be regarded as a level set estimation (LSE) problem, whose objective is to determine the regions where an unknown function value is above or below a given threshold. Then, we propose a novel ...
No-Regret Generative Modeling via Parabolic Monge-Amp\`ere PDE
We introduce a novel generative modeling framework based on a discretized parabolic Monge-Amp\`ere PDE, which emerges as a continuous limit of the Sinkhorn algorithm commonly used in optimal transport. Our method performs iterative refinement in the space of Brenier maps using a mirror gradient descent step. We establish theoretical guarantees for generative modeling through the lens of no-regret analysis, demonstrating that the iterates converge to the optimal Brenier map under a variety of step-size schedules. As a technical contribution, we derive a new Evolution Variational Inequality tailored to the parabolic Monge-Amp\`ere PDE, connecting geometry, transportation cost, and regret. Our framework accommodates non-log-concave target distributions, constructs an optimal sampling process via the Brenier map, and integrates favorable learning techniques from generative adversarial networks and score-based diffusion models. As direct applications, we illustrate how our theory paves new pathways for generative modeling and variational inference.
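Since the PDE above is described as a continuous limit of the Sinkhorn algorithm, a minimal discrete Sinkhorn iteration may help fix ideas. This is a standard entropic optimal-transport sketch, not the paper's PDE scheme: rows and columns of the Gibbs kernel are alternately rescaled until both marginals match.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, iters=500):
    """Entropic OT between histograms a and b with cost matrix C:
    alternately rescale K = exp(-C/eps) to match the marginals."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)  # match column marginal b
        u = a / (K @ v)    # match row marginal a
    return u[:, None] * K * v[None, :]  # transport plan
```

As eps shrinks and the iteration count grows, the plan concentrates on the optimal (Brenier-type) coupling, which is the regime the continuous-time analysis studies.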
Neural Posterior Estimation on Exponential Random Graph Models: Evaluating Bias and Implementation Challenges
Exponential random graph models (ERGMs) are flexible probabilistic frameworks for modeling statistical networks through a variety of network summary statistics. Conventional Bayesian estimation for ERGMs relies on exchange algorithms that iteratively sample an auxiliary network to sidestep the intractable likelihood; however, this approach does not scale to large implementations. Neural posterior estimation (NPE) is a recent advancement in simulation-based inference that uses a neural-network-based density estimator to infer the posterior for models with doubly intractable likelihoods, provided simulations can be generated. While NPE has been successfully adopted in fields such as cosmology, little research has investigated its use for ERGMs. Performing NPE on ERGMs not only offers a different angle on estimation under intractable ERGM likelihoods but also allows more efficient and scalable inference via the amortisation properties of NPE; we therefore investigate how NPE can be effectively implemented for ERGMs. In this study, we present the first systematic implementation of NPE for ERGMs, rigorously evaluating potential biases, interpreting their magnitudes, and comparing NPE fits against conventional Bayesian ERGM fits. More importantly, our work highlights ERGM-specific areas that may pose particular challenges for the adoption of NPE.
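The ingredient NPE consumes is a training set of (parameter, summary-statistic) pairs simulated from the prior. As a toy illustration, consider the edges-only ERGM, where each dyad is independently Bernoulli with probability sigmoid(theta), so exact simulation is trivial; this stands in for the MCMC-based samplers needed for general ERGMs, and the function names are ours.

```python
import numpy as np

def simulate_edge_ergm(theta, n, rng):
    """Edges-only ERGM on n nodes: every dyad is independently present
    with probability sigmoid(theta); return the edge-count statistic."""
    p = 1.0 / (1.0 + np.exp(-theta))
    m = n * (n - 1) // 2  # number of dyads in an undirected graph
    return rng.binomial(m, p)

def make_npe_training_set(prior_draws, n, seed=0):
    """(theta, x) pairs, one simulated summary per prior draw.
    A neural density estimator q(theta | x) would be fit to these."""
    rng = np.random.default_rng(seed)
    return [(t, simulate_edge_ergm(t, n, rng)) for t in prior_draws]
```

Once the density estimator is trained, inference for any observed network is a single forward pass, which is the amortisation property the abstract refers to.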
Regretful Decisions under Label Noise
Nagaraj, Sujay, Liu, Yang, Calmon, Flavio P., Ustun, Berk
Machine learning models are routinely used to support decisions that affect individuals - be it to screen a patient for a serious illness or to gauge their response to treatment. In these tasks, we are limited to learning models from datasets with noisy labels. In this paper, we study the instance-level impact of learning under label noise. We introduce a notion of regret for this regime which measures the number of unforeseen mistakes due to noisy labels. We show that standard approaches to learning under label noise can return models that perform well at a population level while subjecting individuals to a lottery of mistakes. We present a versatile approach to estimate the likelihood of mistakes at the individual level from a noisy dataset by training models over plausible realizations of datasets without label noise. This is supported by a comprehensive empirical study of label noise in clinical prediction tasks. Our results reveal how failure to anticipate mistakes can compromise model reliability and adoption, and demonstrate how we can address these challenges by anticipating and avoiding regretful decisions.
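The estimation idea of training over plausible noise-free realizations can be sketched in a few lines. This is our illustrative toy: a nearest-centroid classifier and a known, uniform flip probability stand in for the paper's actual models and noise process.

```python
import numpy as np

def plausible_realizations(y_noisy, flip_prob, k, rng):
    """Sample k plausible clean-label datasets by flipping each noisy
    binary label independently with probability flip_prob (assumed known)."""
    flips = rng.random((k, len(y_noisy))) < flip_prob
    return np.where(flips, 1 - y_noisy, y_noisy)

def mistake_likelihood(X, y_noisy, flip_prob=0.1, k=50, seed=0):
    """Per-instance disagreement rate across models trained on
    plausible clean realizations (a proxy for mistake likelihood)."""
    rng = np.random.default_rng(seed)
    preds = []
    for y_clean in plausible_realizations(np.asarray(y_noisy), flip_prob, k, rng):
        # nearest-centroid "model" fit on one plausible realization
        c0 = X[y_clean == 0].mean(axis=0)
        c1 = X[y_clean == 1].mean(axis=0)
        d0 = ((X - c0) ** 2).sum(axis=1)
        d1 = ((X - c1) ** 2).sum(axis=1)
        preds.append((d1 < d0).astype(int))
    preds = np.array(preds)
    majority = (preds.mean(axis=0) >= 0.5).astype(int)
    return (preds != majority).mean(axis=0)
```

Instances with high disagreement are the ones exposed to the "lottery of mistakes": their predictions hinge on which realization of the noise happened to be observed.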
Rethinking Remaining Useful Life Prediction with Scarce Time Series Data: Regression under Indirect Supervision
Cheng, Jiaxiang, Pang, Yipeng, Hu, Guoqiang
Supervised time series prediction relies on directly measured target variables, but real-world use cases such as predicting remaining useful life (RUL) involve indirect supervision, where the target variable is labeled as a function of another dependent variable. Trending temporal regression techniques rely on sequential time series inputs to capture temporal patterns, requiring interpolation when dealing with sparsely and irregularly sampled covariates along the timeline. However, interpolation can introduce significant biases, particularly with highly scarce data. In this paper, we address the RUL prediction problem with data scarcity as time series regression under indirect supervision. We introduce a unified framework called parameterized static regression, which takes single data points as inputs for regression of target values, inherently handling data scarcity without requiring interpolation. The time dependency under indirect supervision is captured via a parametrical rectification (PR) process, approximating a parametric function during inference with historical posterior estimates, following the same underlying distribution used for labeling during training. Additionally, we propose a novel batch training technique for tasks under indirect supervision to prevent overfitting and enhance efficiency. We evaluate our model on public benchmarks for RUL prediction with simulated data scarcity. Our method demonstrates competitive performance in prediction accuracy when dealing with highly scarce time series data.
OpenAI is retiring GPT-4 from ChatGPT
According to ChatGPT's release notes (via TechCrunch), "GPT-4 will be retired from ChatGPT" on April 30. The model, released over two years ago, will still be available in the API, but recent updates to GPT-4o have rendered GPT-4 somewhat obsolete. "Recent upgrades have further improved GPT-4o's instruction following, problem-solving, and conversational flow, making it a natural successor to GPT-4," the note read. For those who have been following OpenAI and the AI industry, the move puts the breakneck speed of the industry into sharp relief, while also underscoring that GPT-5 has yet to emerge. GPT-4, released in March 2023, was a notable step up from its predecessor, GPT-3.5, the model that ushered in ChatGPT's explosive introduction to the world.
Michael Cera and Michael Angarano break down the nostalgic wrestling scene from Sacramento
"It needed to feel very organic." By Mark Stetson on April 11, 2025. Sacramento stars Michael Cera and Michael Angarano dissect their throwback wrestling scene from the film. Sacramento is now in theaters.
Palantir Is Helping DOGE With a Massive IRS Data Project
Palantir, the software company cofounded by Peter Thiel, is part of an effort by Elon Musk's so-called Department of Government Efficiency (DOGE) to build a new "mega API" for accessing Internal Revenue Service records, IRS sources tell WIRED. For the last three days, DOGE and a handful of Palantir representatives, along with dozens of career IRS engineers, have been collaborating to build a single API layer above all IRS databases at an event previously characterized to WIRED as a "hackathon," sources tell WIRED. Palantir representatives have been on-site at the event this week, a source with direct knowledge tells WIRED. APIs are application programming interfaces, which enable different applications to exchange data, and could be used to move IRS data to the cloud and access it there. DOGE has expressed an interest in the API project possibly touching all IRS data, which includes taxpayer names, addresses, social security numbers, tax returns, and employment data.