probabilistic estimate
Prompt Engineering Large Language Models' Forecasting Capabilities
Schoenegger, Philipp, Jones, Cameron R., Tetlock, Philip E., Mellers, Barbara
Forecasting future events has significant decision relevance: a well-calibrated probabilistic estimate of the risk of a future pandemic, a conflict, or an emerging technology is crucial for making decisions under uncertainty. Current best practice for forecasting relies on aggregating the judgmental forecasts of experienced forecasters (Tetlock & Gardner, 2016), a process that is both lengthy and expensive, though it promises to produce valuable input into decision-making processes (Mellers et al., 2019; Tetlock et al., 2014). Recent work has applied frontier large language models (LLMs) to forecasting, testing a variety of research questions, such as whether LLMs can match human forecasting performance, what determines their prediction capabilities, and how these capabilities may be improved. For example, previous work has examined retrieval-augmented systems (Halawi et al., 2024), aggregation of multiple models (Schoenegger et al., 2024), ranking-based context-retrieval systems (Yan et al., 2024), and applications of reinforcement learning (Turtel et al., 2025b). While many of these approaches have increased forecasting performance, frontier models still trail aggregates of experienced forecasters on ForecastBench (Karger et al., 2024). Many such approaches have focused on specific aspects of designing forecasting pipelines, such as effective news aggregation (Wang et al., 2025) or fine-tuning on model self-play output (Turtel et al., 2025).
- Asia > India (0.04)
- North America > United States > New York (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- (6 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study > Negative Result (0.48)
- Government (0.46)
- Banking & Finance (0.46)
- Media (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.70)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
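The "aggregating judgmental forecasts" practice the abstract refers to can be made concrete with a small sketch. One common scheme from the human-forecasting literature is to average individual probabilities in log-odds space and then extremize the pooled forecast; this is an illustrative method, not necessarily the exact aggregation used in the cited work, and the forecaster probabilities below are invented:

```python
import math

def aggregate_forecasts(probs, extremize=1.5):
    """Pool probability forecasts by averaging in log-odds space,
    then extremizing (pushing the result away from 0.5).

    `extremize` > 1 sharpens the pooled forecast; 1.0 disables it.
    """
    # Clip to avoid infinite log-odds at exactly 0 or 1.
    eps = 1e-6
    clipped = [min(max(p, eps), 1 - eps) for p in probs]
    # Mean log-odds of the individual forecasts.
    mean_logit = sum(math.log(p / (1 - p)) for p in clipped) / len(clipped)
    # Extremize and map back to a probability.
    return 1 / (1 + math.exp(-extremize * mean_logit))

# Three forecasters each think the event is fairly likely; the pooled,
# extremized estimate is more confident than their simple average of 0.75.
print(aggregate_forecasts([0.7, 0.8, 0.75]))  # ≈ 0.84
```

Extremization compensates for the fact that averaging independent forecasts pulls the pool toward 0.5 even when the group collectively has strong evidence.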
Gaussian Processes for Probabilistic Estimates of Earthquake Ground Shaking: A 1-D Proof-of-Concept
Scivier, Sam A., Nissen-Meyer, Tarje, Koelemeijer, Paula, Baydin, Atılım Güneş
Estimates of seismic wave speeds in the Earth (seismic velocity models) are key input parameters to earthquake simulations for ground motion prediction. Owing to the non-uniqueness of the seismic inverse problem, typically many velocity models exist for any given region. The arbitrary choice of which velocity model to use in earthquake simulations impacts ground motion predictions. However, current hazard analysis methods do not account for this source of uncertainty. We present a proof-of-concept ground motion prediction workflow for incorporating uncertainties arising from inconsistencies between existing seismic velocity models. Our analysis is based on the probabilistic fusion of overlapping seismic velocity models using scalable Gaussian process (GP) regression. Specifically, we fit a GP to two synthetic 1-D velocity profiles simultaneously, and show that the predictive uncertainty accounts for the differences between the models. We subsequently draw velocity model samples from the predictive distribution and estimate peak ground displacement using acoustic wave propagation through the velocity models. The resulting distribution of possible ground motion amplitudes is much wider than would be predicted by simulating shaking using only the two input velocity models. This proof-of-concept illustrates the importance of probabilistic methods for physics-based seismic hazard analysis.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.29)
- North America > United States > California (0.14)
- Europe > Spain (0.14)
- Europe > Germany (0.14)
In-Hand Manipulation of Unknown Objects with Tactile Sensing for Insertion
Pan, Chaoyi, Lepert, Marion, Yuan, Shenli, Antonova, Rika, Bohg, Jeannette
In this paper, we present a method to manipulate unknown objects in-hand using tactile sensing without relying on a known object model. In many cases, vision-only approaches may not be feasible; for example, due to occlusion in cluttered spaces. We address this limitation by introducing a method to reorient unknown objects using tactile sensing. It incrementally builds a probabilistic estimate of the object shape and pose during task-driven manipulation. Our approach uses Bayesian optimization to balance exploration of the global object shape with efficient task completion. To demonstrate the effectiveness of our method, we apply it to a simulated Tactile-Enabled Roller Grasper, a gripper that rolls objects in hand while collecting tactile data. We evaluate our method on an insertion task with randomly generated objects and find that it reliably reorients objects while significantly reducing the exploration time.
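The exploration-versus-task-completion trade-off the abstract describes can be illustrated with a toy upper-confidence-bound loop: choose among discrete candidate reorientation actions whose "insertion success" is only observed through noisy, tactile-like probes. This is a bandit-style stand-in for the paper's Bayesian optimization over object shape, and the action scores and noise model are invented:

```python
import math
import random

random.seed(0)
true_scores = [0.2, 0.5, 0.9, 0.4]   # unknown to the agent

counts = [0] * len(true_scores)
means = [0.0] * len(true_scores)

def probe(action):
    """Noisy observation of an action's score (stand-in for a tactile probe)."""
    return true_scores[action] + random.gauss(0.0, 0.1)

for t in range(1, 201):
    # Upper-confidence bound: estimated mean plus an exploration bonus
    # that shrinks as an action is probed more often.
    ucb = [
        means[a] + math.sqrt(2.0 * math.log(t) / counts[a])
        if counts[a] else float("inf")
        for a in range(len(true_scores))
    ]
    a = ucb.index(max(ucb))
    x = probe(a)
    counts[a] += 1
    means[a] += (x - means[a]) / counts[a]  # running mean update

best = means.index(max(means))
```

Early on, the bonus term dominates and every action gets probed (global exploration); as estimates tighten, probes concentrate on the most promising action (efficient task completion), mirroring the balance the paper's acquisition function strikes.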
Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic Regression
Sprangers, Olivier, Schelter, Sebastian, de Rijke, Maarten
Gradient Boosting Machines (GBM) are hugely popular for solving tabular data problems. However, practitioners are not only interested in point predictions, but also in probabilistic predictions in order to quantify the uncertainty of the predictions. Creating such probabilistic predictions is difficult with existing GBM-based solutions: they either require training multiple models or they become too computationally expensive to be useful for large-scale settings. We propose Probabilistic Gradient Boosting Machines (PGBM), a method to create probabilistic predictions with a single ensemble of decision trees in a computationally efficient manner. PGBM approximates the leaf weights in a decision tree as a random variable, and approximates the mean and variance of each sample in a dataset via stochastic tree ensemble update equations. These learned moments allow us to subsequently sample from a specified distribution after training. We empirically demonstrate the advantages of PGBM compared to existing state-of-the-art methods: (i) PGBM enables probabilistic estimates without compromising on point performance in a single model, (ii) PGBM learns probabilistic estimates via a single model only (and without requiring multi-parameter boosting), and thereby offers a speedup of up to several orders of magnitude over existing state-of-the-art methods on large datasets, and (iii) PGBM achieves accurate probabilistic estimates in tasks with complex differentiable loss functions, such as hierarchical time series problems, where we observed up to 10% improvement in point forecasting performance and up to 300% improvement in probabilistic forecasting performance.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Netherlands > North Holland > Amsterdam (0.05)
- Asia > Singapore (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
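The core PGBM idea — treating a leaf weight as a random variable with a learned mean and variance, then sampling predictions from a specified distribution after training — can be sketched with a single decision stump. This one-stump toy with invented synthetic data illustrates the principle only; it is not the paper's stochastic tree-ensemble update equations:

```python
import random
import statistics

random.seed(1)

# Synthetic 1-D regression data: y is larger AND noisier for x >= 5.
xs = [random.uniform(0, 10) for _ in range(500)]
ys = [2.0 + random.gauss(0, 0.5) if x < 5 else 8.0 + random.gauss(0, 2.0)
      for x in xs]

split = 5.0  # a fixed stump split, chosen by hand for the sketch
left = [y for x, y in zip(xs, ys) if x < split]
right = [y for x, y in zip(xs, ys) if x >= split]

# Each leaf stores a distribution over its weight (mean and standard
# deviation of the targets routed to it), not a single point value.
leaves = {
    "left": (statistics.mean(left), statistics.stdev(left)),
    "right": (statistics.mean(right), statistics.stdev(right)),
}

def predict_dist(x, n_samples=1000):
    """Sample predictions from the matching leaf's normal distribution."""
    mu, sigma = leaves["left"] if x < split else leaves["right"]
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

# The right leaf's draws are both higher and more spread out, reflecting
# the heteroscedastic noise in the data — a probabilistic prediction from
# the same single model that yields the point estimate (the leaf mean).
draws_left = predict_dist(2.0)
draws_right = predict_dist(7.0)
```

Because the distributional parameters are stored per leaf, switching the output distribution (normal, Laplace, etc.) after training needs no retraining, which is the flexibility PGBM exploits.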