
Uncertainty-Adjusted Sorting for Asset Pricing with Machine Learning

Liu, Yan, Luo, Ye, Wang, Zigan, Zhang, Xiaowei

arXiv.org Machine Learning

A large and rapidly expanding literature demonstrates that machine learning (ML) methods substantially improve out-of-sample asset return prediction relative to conventional linear benchmarks, and that these statistical gains often translate into economically meaningful portfolio performance. Seminal contributions such as Gu et al. (2020) document large Sharpe ratio improvements from nonlinear learners in U.S. equities, while subsequent work extends these findings to stochastic discount factor estimation (Chen et al. 2024), international equity markets (Leippold et al. 2022), and bond return forecasting (Kelly et al. 2019, Bianchi et al. 2020). Collectively, this literature establishes ML as a powerful tool for extracting conditional expected returns in environments characterized by noisy signals, nonlinear interactions, and pervasive multicollinearity.
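As a concrete illustration of the portfolio exercise underlying these Sharpe ratio gains, the minimal Python sketch below sorts a simulated cross-section into deciles on an uncertainty-adjusted signal and computes the long-short spread. Scaling predictions by their standard errors is an illustrative assumption for the sketch only; the paper's specific adjustment is not described in this excerpt, and all data are simulated.

# Minimal sketch: decile sorting on ML return predictions, down-weighting
# uncertain signals. Dividing by the predictive standard error is an
# illustrative assumption, not the paper's exact adjustment.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000  # hypothetical cross-section of stocks for one month
df = pd.DataFrame({
    "pred_return": rng.normal(0.01, 0.05, n),   # ML point predictions
    "pred_se": rng.uniform(0.01, 0.10, n),      # predictive standard errors
    "realized": rng.normal(0.01, 0.08, n),      # realized next-month returns
})

# Uncertainty-adjusted signal: shrink predictions with large standard errors.
df["signal"] = df["pred_return"] / df["pred_se"]

# Sort into deciles on the adjusted signal; go long the top decile,
# short the bottom decile, and report the realized spread.
df["decile"] = pd.qcut(df["signal"], 10, labels=False)
long_short = (df.loc[df["decile"] == 9, "realized"].mean()
              - df.loc[df["decile"] == 0, "realized"].mean())
print(f"long-short decile spread: {long_short:.4f}")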


Forests of Uncertaint(r)ees: Using tree-based ensembles to estimate probability distributions of future conflict

Mittermaier, Daniel, Bohne, Tobias, Hofer, Martin, Racek, Daniel

arXiv.org Artificial Intelligence

Predictions of fatalities from violent conflict on the PRIO-GRID-month (pgm) level are characterized by high levels of uncertainty, limiting their usefulness in practical applications. We discuss the two main sources of uncertainty for this prediction task, the nature of violent conflict and data limitations, and embed this discussion in the wider literature on uncertainty quantification in machine learning. We develop a strategy to quantify uncertainty in conflict forecasting, shifting from traditional point predictions to full predictive distributions. Our approach compares and combines multiple tree-based classifiers and distributional regressors in a custom auto-ML setup, estimating distributions for each pgm individually. We also test the integration of regional models in spatial ensembles as a potential avenue to reduce uncertainty. The models consistently outperform a suite of benchmarks derived from conflict history in predictions up to one year in advance, with performance driven by regions where conflict was observed. With our evaluation, we emphasize the need to understand how a metric behaves for a given prediction problem, which in our case is characterized by extremely high zero-inflatedness. While the integration of smaller regional models does not yield better predictions, it also does not decrease performance on this task, opening avenues to integrate data sources with less spatial coverage in the future.
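As a minimal illustration of distributional prediction with tree ensembles, the sketch below fits a set of quantile gradient-boosted trees to a simulated zero-inflated target. This stands in for the paper's custom auto-ML setup, which combines classifiers and distributional regressors; all features and targets here are simulated.

# Minimal sketch: approximating a predictive distribution with quantile
# gradient-boosted trees. An illustrative stand-in for the paper's
# custom auto-ML ensemble, fit on simulated data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))  # hypothetical pgm-level features
# Zero-inflated target, loosely mimicking fatality counts.
y = np.maximum(0, X[:, 0] + rng.normal(scale=1.0, size=500))

quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in quantiles
}

x_new = rng.normal(size=(1, 5))
# The predicted quantiles trace out a predictive distribution for one cell.
pred_dist = {q: float(m.predict(x_new)[0]) for q, m in models.items()}
print(pred_dist)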


Self-Calibrating Conformal Prediction

Neural Information Processing Systems

In machine learning, model calibration and predictive inference are essential for producing reliable predictions and quantifying uncertainty to support decision-making.
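For context, the sketch below implements plain split conformal prediction, the standard baseline that self-calibrating variants extend; the self-calibration of the point predictions themselves is not shown, and all data are simulated.

# Minimal sketch of split conformal prediction: a distribution-free
# prediction interval around any point predictor, calibrated on held-out data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = X[:, 0] ** 2 + rng.normal(scale=0.5, size=1000)

X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Conformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))
alpha = 0.1  # target 90% coverage
n = len(scores)
qhat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

x_new = rng.normal(size=(1, 4))
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - qhat:.3f}, {pred + qhat:.3f}]")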



Signal-Aware Workload Shifting Algorithms with Uncertainty-Quantified Predictors

Johnson, Ezra, Lechowicz, Adam, Hajiesmaili, Mohammad

arXiv.org Artificial Intelligence

A wide range of sustainability and grid-integration strategies depend on workload shifting, which aligns the timing of energy consumption with external signals such as grid curtailment events, carbon intensity, or time-of-use electricity prices. The main challenge lies in the online nature of the problem: operators must make real-time decisions (e.g., whether to consume energy now) without knowledge of the future. While forecasts of signal values are typically available, prior work on learning-augmented online algorithms has relied almost exclusively on simple point forecasts. In parallel, forecasting research has made significant progress in uncertainty quantification (UQ), which provides richer and more fine-grained predictive information. In this paper, we study how online workload shifting can leverage UQ predictors to improve decision-making. We introduce $\texttt{UQ-Advice}$, a learning-augmented algorithm that systematically integrates UQ forecasts through a $\textit{decision uncertainty score}$ that measures how forecast uncertainty affects optimal future decisions. By introducing $\textit{UQ-robustness}$, a new metric that characterizes how performance degrades with forecast uncertainty, we establish theoretical performance guarantees for $\texttt{UQ-Advice}$. Finally, using trace-driven experiments on carbon intensity and electricity price data, we demonstrate that $\texttt{UQ-Advice}$ consistently outperforms robust baselines and existing learning-augmented methods that ignore uncertainty.
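To make the setting concrete, here is a minimal sketch of an online consume-or-wait rule driven by sampled price forecasts. The probability-threshold rule and all names (decide, deadline) are illustrative assumptions; the paper's UQ-Advice algorithm, decision uncertainty score, and UQ-robustness guarantees are substantially more involved.

# Minimal sketch: an online "consume now or wait" rule driven by an
# uncertainty-quantified price forecast. An illustrative stand-in for
# UQ-Advice, on simulated prices.
import numpy as np

rng = np.random.default_rng(0)
deadline = 24  # hypothetical horizon in hours

def decide(price_now, forecast_samples, hours_left):
    """Consume now if the current price beats most plausible futures."""
    if hours_left == 0:
        return True  # at the deadline the workload must run
    # Probability that some future hour is cheaper than the current price.
    p_cheaper_later = np.mean(forecast_samples.min(axis=1) < price_now)
    return p_cheaper_later < 0.5  # wait only if a cheaper hour is likely

prices = 50 + 20 * np.sin(np.arange(deadline) / 4) + rng.normal(0, 5, deadline)
for t, price in enumerate(prices):
    hours_left = deadline - 1 - t
    # Sampled forecast of the remaining hours (e.g., from an ensemble).
    samples = price + rng.normal(0, 8, size=(200, max(hours_left, 1)))
    if decide(price, samples, hours_left):
        print(f"consume at hour {t} at price {price:.1f}")
        break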



GuirlVG: Incentivize GUI Visual Grounding via Empirical Exploration on Reinforcement Learning

Kang, Weitai, Lei, Bin, Liu, Gaowen, Ding, Caiwen, Yan, Yan

arXiv.org Artificial Intelligence

Graphical user interface visual grounding (GUI-VG), a core capability for GUI agents, has primarily relied on supervised fine-tuning (SFT) of multimodal large language models (MLLMs), which demands extensive data curation and significant training costs. However, as MLLMs continue to advance and even cover GUI domains during pretraining, the necessity of exhaustive SFT post-training becomes increasingly questionable. Meanwhile, recent successes of rule-based reinforcement fine-tuning (RFT) suggest a more efficient alternative. Despite this promise, the optimal manner of applying RFT for GUI-VG remains unexplored. To bridge this gap, we introduce GuirlVG, a reinforcement learning-based GUI-VG method built on a systematic empirical study and a novel stabilization technique. We find that naive application of RFT underperforms the SFT baseline, motivating a deeper exploration. First, we decompose RFT into its core components and analyze the optimal formulation of each. Second, we propose a novel Adversarial KL Factor that dynamically stabilizes training to mitigate reward over-optimization. Third, we further explore the training configurations of RFT to enhance effectiveness. Extensive experiments show that GuirlVG, with only 5.2K training samples, outperforms SFT methods trained on over 10M samples, achieving a 7.7% improvement on ScreenSpot, a 17.2% improvement on ScreenSpotPro, and 91.9% accuracy on ScreenSpotV2.
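As a rough illustration of KL-regularized reinforcement fine-tuning with a dynamically adjusted penalty, the PyTorch sketch below combines a policy-gradient term with an adaptive KL coefficient. The update rule shown is an illustrative assumption in the spirit of adaptive-KL schemes, not the paper's Adversarial KL Factor, whose exact form is defined in the paper.

# Minimal sketch: a KL-regularized RFT objective with a dynamically
# adjusted KL coefficient. The adaptation rule is an illustrative
# assumption, not GuirlVG's Adversarial KL Factor.
import torch

def rft_loss(logp_new, logp_ref, advantages, kl_coef):
    """Policy-gradient loss plus a KL penalty toward the reference model."""
    ratio = torch.exp(logp_new - logp_ref.detach())
    pg_loss = -(ratio * advantages).mean()
    # Crude per-token KL estimate between the policy and the reference.
    kl = (logp_new - logp_ref.detach()).mean()
    return pg_loss + kl_coef * kl, kl

def update_kl_coef(kl_coef, observed_kl, target_kl=0.05, rate=1.5):
    """Raise the penalty when the policy drifts too far; relax it otherwise."""
    if observed_kl > 2 * target_kl:
        return kl_coef * rate
    if observed_kl < 0.5 * target_kl:
        return kl_coef / rate
    return kl_coef

# Toy usage with random tensors standing in for MLLM token log-probs.
logp_new = torch.randn(32, requires_grad=True)
logp_ref = logp_new.detach() + 0.1 * torch.randn(32)
adv = torch.randn(32)
loss, kl = rft_loss(logp_new, logp_ref, adv, kl_coef=0.1)
loss.backward()
print(float(loss), float(kl), update_kl_coef(0.1, float(kl)))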