corrector
Causality-Inspired Safe Residual Correction for Multivariate Time Series
Xie, Jianxiang, Hua, Yuncheng, Cheng, Mingyue, Salim, Flora, Xue, Hao
While modern multivariate forecasters such as Transformers and GNNs achieve strong benchmark performance, they often suffer from systematic errors at specific variables or horizons and, critically, lack guarantees against performance degradation in deployment. Existing post-hoc residual correction methods attempt to fix these errors, but are inherently greedy: although they may improve average accuracy, they can also "help in the wrong way" by overcorrecting reliable predictions and causing local failures in unseen scenarios. To address this critical "safety gap," we propose CRC (Causality-inspired Safe Residual Correction), a plug-and-play framework explicitly designed to ensure non-degradation. CRC follows a divide-and-conquer philosophy: it employs a causality-inspired encoder to expose direction-aware structure by decoupling self- and cross-variable dynamics, and a hybrid corrector to model residual errors. Crucially, the correction process is governed by a strict four-fold safety mechanism that prevents harmful updates. Experiments across multiple datasets and forecasting backbones show that CRC consistently improves accuracy, while an in-depth ablation study confirms that its core safety mechanisms ensure exceptionally high non-degradation rates (NDR), making CRC a correction framework suited for safe and reliable deployment.
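The core idea of a non-degradation guarantee can be illustrated with a minimal sketch (not the paper's actual CRC implementation, whose four-fold safety mechanism is more elaborate): a correction is accepted per variable only if it reduces error on held-out data, otherwise the base forecast is kept. The function name and gating criterion here are illustrative assumptions.

```python
import numpy as np

def safe_residual_correction(y_base, y_corr, y_val_base, y_val_corr, y_val_true):
    """Hypothetical safety gate for post-hoc residual correction.

    Accept the corrected forecast for a variable only when the correction
    reduced that variable's MSE on held-out validation data; otherwise fall
    back to the base forecast, so no variable can be made worse by gating.
    """
    base_err = np.mean((y_val_base - y_val_true) ** 2, axis=0)  # per-variable MSE
    corr_err = np.mean((y_val_corr - y_val_true) ** 2, axis=0)
    accept = corr_err < base_err                                # boolean mask per variable
    return np.where(accept, y_corr, y_base)
```

A greedy corrector applies its update everywhere; the gate above is the simplest form of "help only where you demonstrably helped before."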
Neural CDEs as Correctors for Learned Time Series Models
Shahid, Muhammad Bilal, Koirla, Prajwal, Fleming, Cody
Learned time-series models, whether continuous- or discrete-time, are widely used to forecast the states of a dynamical system. Such models generate multi-step forecasts either directly, by predicting the full horizon at once, or iteratively, by feeding back their own predictions at each step. In both cases, the multi-step forecasts are prone to errors. To address this, we propose a Predictor-Corrector mechanism where the Predictor is any learned time-series model and the Corrector is a neural controlled differential equation. The Predictor forecasts, and the Corrector predicts the errors of the forecasts. Adding these errors to the forecasts improves forecast performance. The proposed Corrector works with irregularly sampled time series and with continuous- and discrete-time Predictors. Additionally, we introduce two regularization strategies to improve the extrapolation performance of the Corrector with accelerated training. We evaluate our Corrector with diverse Predictors, e.g., neural ordinary differential equations, Contiformer, and DLinear, on synthetic, physics simulation, and real-world forecasting datasets. The experiments demonstrate that the Predictor-Corrector mechanism consistently improves the performance compared to the Predictor alone. Learning time-series models from such datasets has applications including energy demand forecasting, traffic and mobility prediction, weather prediction, anomaly detection, and decision-making in robotics (Zeng et al., 2022; Li et al., 2017; Stankeviciute et al., 2021; Xu et al., 2021; Chua et al., 2018). Several works have focused on learning time-series models from data. There are at least two ways to train such models. Early studies focused on training the model to predict one step ahead (Basharat & Shah, 2009; Khansari-Zadeh & Billard, 2011).
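The forecast-then-correct step described above can be sketched in a few lines. This is a toy sketch under assumed interfaces, not the paper's neural-CDE Corrector: `predictor` and `corrector` are placeholder callables, and the demo models are hypothetical.

```python
import numpy as np

def predictor_corrector(predictor, corrector, history, horizon):
    """Hypothetical wrapper: the corrector is any model trained to predict
    the predictor's forecast errors; adding the predicted errors to the raw
    forecast yields the corrected forecast."""
    forecast = predictor(history, horizon)           # raw multi-step forecast
    predicted_error = corrector(history, forecast)   # estimated per-step error
    return forecast + predicted_error

# Toy demo: a constant-biased predictor and a corrector that has "learned" its bias.
biased_predictor = lambda hist, n: np.full(n, hist[-1] + 0.5)
bias_corrector = lambda hist, fc: np.full_like(fc, -0.5)
```

In the paper the Corrector is a neural controlled differential equation, which lets it consume irregularly sampled histories; the additive structure above is the same.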
Prompt Engineering Techniques for Context-dependent Text-to-SQL in Arabic
Almohaimeed, Saleh, Alsofyani, May, Almohaimeed, Saad, Ghanim, Mansour Al, Wang, Liqiang
In recent years, the task of cross-domain, context-dependent text-to-SQL has received significant attention; it enables users with no prior knowledge of SQL to converse with databases in natural language. However, most of the available datasets and research are in English, along with some work in Chinese. To date, no effort has been made to address this task in the Arabic language. In this paper, we introduce Ar-SParC, the first Arabic cross-domain, context-dependent text-to-SQL dataset. The dataset consists of 3,450 sequences of interrelated questions, each sequence containing an average of approximately three questions, resulting in a total of 10,225 questions along with their corresponding SQL queries. We conducted 40 experiments on the Ar-SParC dataset using two large language models, GPT-3.5-turbo and GPT-4.5-turbo, applying 10 different prompt engineering techniques, including four question representation methods and six in-context learning techniques. Furthermore, we developed a novel approach named GAT corrector, which enhanced the performance across all 40 experiments, yielding an average improvement of 1.9% in execution accuracy (EX) and 1.9% in interaction accuracy (IX) under zero-shot settings, and an average increase of 1.72% EX and 0.92% IX under in-context learning settings. Finally, we conducted an ablation study with two more experiments to explain why the GAT corrector outperformed the previous GAT verifier technique, particularly for the Arabic language.
MarsRL: Advancing Multi-Agent Reasoning System via Reinforcement Learning with Agentic Pipeline Parallelism
Liu, Shulin, Du, Dong, Yang, Tao, Li, Yang, Qiu, Boyu
Recent progress in large language models (LLMs) has been propelled by reinforcement learning with verifiable rewards (RLVR) and test-time scaling. However, the limited output length of LLMs constrains the depth of reasoning attainable in a single inference process. Multi-agent reasoning systems offer a promising alternative by employing multiple agents, including a Solver, a Verifier, and a Corrector, to iteratively refine solutions. While effective in closed-source models like Gemini 2.5 Pro, they struggle to generalize to open-source models due to insufficient critique and correction capabilities. To address this, we propose MarsRL, a novel reinforcement learning framework with agentic pipeline parallelism, designed to jointly optimize all agents in the system. MarsRL introduces agent-specific reward mechanisms to mitigate reward noise and employs pipeline-inspired training to enhance efficiency in handling long trajectories. Applied to Qwen3-30B-A3B-Thinking-2507, MarsRL improves AIME2025 accuracy from 86.5% to 93.3% and BeyondAIME from 64.9% to 73.8%, even surpassing Qwen3-235B-A22B-Thinking-2507. These findings highlight the potential of MarsRL to advance multi-agent reasoning systems and broaden their applicability across diverse reasoning tasks.
43e4e6a6f341e00671e123714de019a8-AuthorFeedback.pdf
We appreciate the reviewer's valuable comments, and we were glad to read the positive feedback; we also appreciate the thorough suggestions for further improvement. What is trained in the PRE-approach? Is there benefit in using the differentiable PDE solver? Do steps of a differentiable simulator correspond to time steps? Yes, in our text "step" typically means a time step.