Stochastic Predictive Analytics for Stocks in the Newsvendor Problem

Pury, Pedro A.

arXiv.org Artificial Intelligence

The Newsvendor problem is a fundamental model in inventory management (Rossi, 2021) that accommodates both known (Dvoretzky et al., 1952a) and unknown (Dvoretzky et al., 1952b) demand distributions. Since its inception (Edgeworth, 1888), it has been widely applied in inventory control and policy-making (Arrow et al., 1951), as well as various real-world situations (Choi, 2012; Chen et al., 2016). Its simplicity stems from considering a single product for sale, for which the optimal initial stock level must be determined to satisfy forecasted demand over a given period without restocking. The interplay among purchasing cost, selling price, and stock ordered at the beginning of the period determines the inventory management policies (Whitin, 1952; Rosenblatt, 1954; Petruzzi and Dada, 1999). The model has been extensively studied for single stock-keeping units (SKUs). Electronic marketplaces introduce an extra complication to the problem, as they need to manage a large number of SKUs at distribution centers alongside highly variable demand received through electronic platforms.
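For a known demand distribution, the single-SKU newsvendor problem has a classical closed-form solution: stock the critical-fractile quantile of demand, where the fractile balances underage and overage costs. A minimal sketch (the cost and demand figures are illustrative, not taken from the paper):

```python
import numpy as np
from scipy import stats

def newsvendor_quantity(demand_dist, unit_cost, price, salvage=0.0):
    """Optimal order quantity via the critical-fractile formula.

    Underage cost c_u = price - unit_cost (margin lost per unit of unmet
    demand); overage cost c_o = unit_cost - salvage (loss per leftover unit).
    The optimal stock level is the c_u / (c_u + c_o) quantile of demand.
    """
    c_u = price - unit_cost
    c_o = unit_cost - salvage
    fractile = c_u / (c_u + c_o)
    return demand_dist.ppf(fractile)

# Example: normally distributed demand with mean 100 and std. dev. 20
demand = stats.norm(loc=100, scale=20)
q_star = newsvendor_quantity(demand, unit_cost=4.0, price=10.0, salvage=1.0)
```

Here the critical fractile is 6/9, so the optimal stock sits about 0.43 standard deviations above mean demand; the multi-SKU, highly variable demand setting the abstract raises is what makes estimating each `demand_dist` hard in practice.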


Reviewer-1

Neural Information Processing Systems

We thank the reviewers for the time they invested and for their constructive criticism. In the following, the comments are addressed separately for each reviewer. (line 58). We will try to expand this part of the paper. Hence, we obtain unrealistic interpolations similar to Figure 1 (bottom).



Enhancing Q-Value Updates in Deep Q-Learning via Successor-State Prediction

Zu, Lipeng, Zhou, Hansong, Zhang, Xiaonan

arXiv.org Artificial Intelligence

Deep Q-Networks (DQNs) estimate future returns by learning from transitions sampled from a replay buffer. However, the target updates in DQN often rely on next states generated by actions from a past, potentially suboptimal, policy. As a result, these states may not provide informative learning signals, introducing high variance into the update process. This issue is exacerbated when the sampled transitions are poorly aligned with the agent's current policy. To address this limitation, we propose the Successor-state Aggregation Deep Q-Network (SADQ), which explicitly models environment dynamics using a stochastic transition model. SADQ integrates successor-state distributions into the Q-value estimation process, enabling more stable and policy-aligned value updates. Additionally, it explores a more efficient action selection strategy with the modeled transition structure. We provide theoretical guarantees that SADQ maintains unbiased value estimates while reducing training variance. Our extensive empirical results across standard RL benchmarks and real-world vector-based control tasks demonstrate that SADQ consistently outperforms DQN variants in both stability and learning efficiency.
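The abstract does not give SADQ's exact update rule; the following is a hedged sketch of the core idea only — replacing the single sampled next state with an expectation over a modeled successor-state distribution when forming the bootstrap target — in a toy tabular setting with made-up numbers:

```python
import numpy as np

def successor_averaged_target(reward, succ_states, succ_probs, q_table, gamma=0.99):
    """Bootstrap target averaged over a predicted successor-state distribution.

    Instead of bootstrapping from the single next state stored in the replay
    buffer, take the expectation of max_a' Q(s', a') under the transition
    model's successor distribution, which reduces the variance of the target.
    """
    values = q_table[succ_states].max(axis=1)      # max_a' Q(s', a') per successor
    return reward + gamma * np.dot(succ_probs, values)

# Toy example: 3 states, 2 actions (hypothetical Q-values)
Q = np.array([[0.0, 1.0],
              [2.0, 0.5],
              [0.2, 0.3]])
target = successor_averaged_target(
    reward=1.0,
    succ_states=np.array([1, 2]),     # model predicts two possible successors
    succ_probs=np.array([0.7, 0.3]),  # with these probabilities
    q_table=Q,
)
```

A single-sample target would bootstrap from whichever of states 1 or 2 happened to land in the buffer; averaging under the model yields the same value in expectation with lower variance, which is the stability argument the abstract makes.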


Deep Learning-based Prediction of Clinical Trial Enrollment with Uncertainty Estimates

Do, Tien Huu, Masquelier, Antoine, Lee, Nae Eoun, Crowther, Jonathan

arXiv.org Artificial Intelligence

Clinical trials are a systematic endeavor to assess the safety and efficacy of new drugs or treatments. Conducting such trials typically demands significant financial investment and meticulous planning, highlighting the need for accurate predictions of trial outcomes. Accurately predicting patient enrollment, a key factor in trial success, is one of the primary challenges during the planning phase. In this work, we propose a novel deep learning-based method to address this critical challenge. Our method, implemented as a neural network model, leverages pre-trained language models (PLMs) to capture the complexities and nuances of clinical documents, transforming them into expressive representations. These representations are then combined with encoded tabular features via an attention mechanism. To account for uncertainties in enrollment prediction, we enhance the model with a probabilistic layer based on the Gamma distribution, which enables range estimation. We apply the proposed model to predict clinical trial duration, assuming site-level enrollment follows a Poisson-Gamma process. We carry out extensive experiments on real-world clinical trial data, and show that the proposed method can effectively predict the number of patients enrolled across the sites of a given clinical trial, outperforming established baseline models.
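Under the stated Poisson-Gamma assumption, range estimates for enrollment follow directly by simulation: each site's rate is Gamma-distributed, and counts given the rate are Poisson. A self-contained sketch (the parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_enrollment(alpha, beta, t, n_sites, n_draws=10_000):
    """Monte Carlo draws of total enrollment under a Poisson-Gamma process.

    Each site's enrollment rate lambda ~ Gamma(alpha, rate=beta); given the
    rate, the number enrolled over time t is Poisson(lambda * t).  Returns
    draws of the trial-level total, from which range estimates (e.g. a 90%
    predictive interval) can be read off.
    """
    rates = rng.gamma(shape=alpha, scale=1.0 / beta, size=(n_draws, n_sites))
    counts = rng.poisson(rates * t)   # per-site enrollment draws
    return counts.sum(axis=1)         # trial-level totals

totals = simulate_enrollment(alpha=2.0, beta=1.0, t=6.0, n_sites=20)
lo, hi = np.percentile(totals, [5, 95])  # 90% predictive interval
```

Marginally, each site's count is negative-binomial, so the predictive interval is wider than a pure-Poisson one; this over-dispersion is what the Gamma layer in the model is meant to capture.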


Trajectory learning for ensemble forecasts via the continuous ranked probability score: a Lorenz '96 case study

Ephrati, Sagy, Woodfield, James

arXiv.org Artificial Intelligence

This paper demonstrates the feasibility of trajectory learning for ensemble forecasts by employing the continuous ranked probability score (CRPS) as a loss function. Using the two-scale Lorenz '96 system as a case study, we develop and train both additive and multiplicative stochastic parametrizations to generate ensemble predictions. Results indicate that CRPS-based trajectory learning produces parametrizations that are both accurate and sharp. The resulting parametrizations are straightforward to calibrate and outperform derivative-fitting-based parametrizations in short-term forecasts. This approach is particularly promising for data assimilation applications due to its accuracy over short lead times.
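The CRPS used as a trajectory-learning loss has a standard sample-based estimator for a finite ensemble; a minimal sketch for a scalar forecast quantity:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS of an ensemble forecast against one observation.

    CRPS = E|X - y| - 0.5 * E|X - X'|, where X, X' are independent draws
    from the ensemble and y is the observation.  Lower is better; the first
    term rewards accuracy and the second rewards sharpness, which is why the
    score is a natural training loss for ensemble parametrizations.
    """
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

score = crps_ensemble([0.9, 1.1, 1.3], obs=1.0)
```

In the trajectory-learning setting, this score would be accumulated over forecast lead times and state components and differentiated through the ensemble members; that wiring is specific to the paper and is not reproduced here.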


Bayesian Optimization under Uncertainty for Training a Scale Parameter in Stochastic Models

Yadav, Akash, Zhang, Ruda

arXiv.org Machine Learning

Hyperparameter tuning is a challenging problem, especially when the system itself involves uncertainty. Due to noisy function evaluations, optimization under uncertainty can be computationally expensive. In this paper, we present a novel Bayesian optimization framework tailored for hyperparameter tuning under uncertainty, with a focus on optimizing a scale- or precision-type parameter in stochastic models. The proposed method employs a statistical surrogate for the underlying random variable, enabling analytical evaluation of the expectation operator. Moreover, we derive a closed-form expression for the optimizer of the random acquisition function, which significantly reduces computational cost per iteration. Compared with a conventional one-dimensional Monte Carlo-based optimization scheme, the proposed approach requires 40 times fewer data points, resulting in up to a 40-fold reduction in computational cost. We demonstrate the effectiveness of the proposed method through two numerical examples in computational engineering.
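The paper's statistical surrogate and closed-form acquisition optimizer are not reproduced here; as a generic illustration of the setting — Bayesian optimization of a noisy one-dimensional scale-type parameter — the sketch below uses an RBF Gaussian-process surrogate and expected improvement, with all function and parameter choices being assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def noisy_loss(log_scale):
    """Hypothetical noisy objective over a scale-type hyperparameter."""
    return (log_scale - 0.5) ** 2 + 0.05 * rng.standard_normal()

def gp_posterior(X, y, Xq, length=0.5, noise=0.05):
    """RBF-kernel GP posterior mean and std. dev. at query points Xq."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(X, X) + noise**2 * np.eye(len(X))
    Ks = k(Xq, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    """EI for minimization: expected drop below the best observed value."""
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

# Sequential BO loop over log-scale in [-2, 2]
X = rng.uniform(-2, 2, size=3)
y = np.array([noisy_loss(x) for x in X])
grid = np.linspace(-2, 2, 201)
for _ in range(15):
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, noisy_loss(x_next))

best_x = X[np.argmin(y)]
```

Each iteration here maximizes the acquisition numerically over a grid; the paper's contribution is precisely to avoid this step (and Monte Carlo averaging over the noise) via an analytical expectation and a closed-form acquisition optimizer.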