LRT-Diffusion: Calibrated Risk-Aware Guidance for Diffusion Policies

Sun, Ximan, Cheng, Xiang

arXiv.org Artificial Intelligence

Diffusion policies are competitive for offline reinforcement learning (RL) but are typically guided at sampling time by heuristics that lack a statistical notion of risk. We introduce LRT-Diffusion, a risk-aware sampling rule that treats each denoising step as a sequential hypothesis test between the unconditional prior and the state-conditional policy head. Concretely, we accumulate a log-likelihood ratio and gate the conditional mean with a logistic controller whose threshold tau is calibrated once under H0 to meet a user-specified Type-I level alpha. This turns guidance from a fixed push into an evidence-driven adjustment with a user-interpretable risk budget. Importantly, we deliberately leave training vanilla (two heads with standard epsilon-prediction) under the structure of DDPM. LRT guidance composes naturally with Q-gradients: critic-gradient updates can be taken at the unconditional mean, at the LRT-gated mean, or a blend, exposing a continuum from exploitation to conservatism. We standardize states and actions consistently at train and test time and report a state-conditional out-of-distribution (OOD) metric alongside return. On D4RL MuJoCo tasks, LRT-Diffusion improves the return-OOD trade-off over strong Q-guided baselines in our implementation while honoring the desired alpha. Theoretically, we establish level-alpha calibration, concise stability bounds, and a return comparison showing when LRT surpasses Q-guidance, especially when off-support errors dominate. Overall, LRT-Diffusion is a drop-in, inference-time method that adds principled, calibrated risk control to diffusion policies for offline RL.
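The gating rule the abstract describes can be sketched in a few lines, assuming Gaussian denoising heads. All names here (`lrt_gate_step`, `calibrate_tau`, the sharpness parameter `kappa`) are illustrative assumptions, not the paper's API:

```python
import numpy as np

def lrt_gate_step(x_t, mu_cond, mu_uncond, sigma, llr, tau, kappa=5.0):
    """One denoising step with LRT-gated guidance (illustrative sketch).

    Accumulates the log-likelihood ratio between the state-conditional head
    and the unconditional prior, then blends the two means with a logistic
    controller whose threshold is tau.
    """
    # Log-likelihood of the current sample under each (isotropic Gaussian) head.
    ll_cond = -0.5 * np.sum((x_t - mu_cond) ** 2) / sigma ** 2
    ll_uncond = -0.5 * np.sum((x_t - mu_uncond) ** 2) / sigma ** 2
    llr = llr + (ll_cond - ll_uncond)                   # sequential evidence
    gate = 1.0 / (1.0 + np.exp(-kappa * (llr - tau)))   # logistic controller
    mu = (1.0 - gate) * mu_uncond + gate * mu_cond      # gated conditional mean
    return mu, llr

def calibrate_tau(llr_samples_h0, alpha=0.05):
    """Set tau to the (1 - alpha) quantile of LLRs simulated under H0, so the
    gate opens spuriously with probability approximately alpha."""
    return np.quantile(llr_samples_h0, 1.0 - alpha)
```

Calibration happens once, offline, by simulating the LLR under H0; at sampling time the gate is a cheap scalar update per denoising step.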


Causal Discovery for Linear DAGs with Dependent Latent Variables via Higher-order Cumulants

Cai, Ming, Gao, Penggang, Hara, Hisayuki

arXiv.org Machine Learning

This paper addresses the problem of estimating causal directed acyclic graphs in linear non-Gaussian acyclic models with latent confounders (LvLiNGAM). Existing methods assume mutually independent latent confounders or cannot properly handle models with causal relationships among observed variables. We propose a novel algorithm that identifies causal DAGs in LvLiNGAM, allowing causal structures among latent variables, among observed variables, and between the two. The proposed method leverages higher-order cumulants of observed data to identify the causal structure. Extensive simulations and experiments with real-world data demonstrate the validity and practical utility of the proposed algorithm.

Introduction. Estimating causal directed acyclic graphs (DAGs) in the presence of latent confounders has been a major challenge in causal analysis. Conventional causal discovery methods, such as the Peter-Clark (PC) algorithm [1], Greedy Equivalence Search (GES) [2], and the Linear Non-Gaussian Acyclic Model (LiNGAM) [3, 4], consider only causal models without latent confounders. Fast Causal Inference (FCI) [1] extends the PC algorithm to handle latent variables, recovering a partial ancestral graph (PAG) under the faithfulness assumption. Greedy Fast Causal Inference (GFCI) [6] hybridizes GES and FCI but inherits the limitations of FCI. The assumption of linearity and non-Gaussian disturbances in the causal model enables the identification of causal structures beyond the PAG. The linear non-Gaussian acyclic model with latent confounders (LvLiNGAM) is an extension of LiNGAM that incorporates latent confounders. Hoyer et al. [7] demonstrated that LvLiNGAM can be transformed into a canonical model in which all latent variables are mutually independent and causally precede the observed variables.
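As a small illustration of the kind of identity such cumulant-based methods exploit (a sketch, not the proposed algorithm): for zero-mean data generated as y = a·x + e, with x and e independent and non-Gaussian, the third-order cross-cumulant cum(x, x, y) equals a·cum(x, x, x), so the causal coefficient is recoverable from higher-order statistics even though second moments alone would not identify it:

```python
import numpy as np

def third_cross_cumulant(x, y):
    """Sample estimate of cum(x, x, y); for zero-mean variables this is
    simply E[x^2 y]. If y = a*x + e with e independent of x and zero-mean,
    then cum(x, x, y) = a * cum(x, x, x)."""
    x = x - x.mean()
    y = y - y.mean()
    return np.mean(x * x * y)

# Recover a = 2 from simulated non-Gaussian (centered exponential) data.
rng = np.random.default_rng(0)
x = rng.exponential(1.0, 200_000) - 1.0   # skewed, so third cumulants are nonzero
e = rng.exponential(1.0, 200_000) - 1.0
y = 2.0 * x + e
a_hat = third_cross_cumulant(x, y) / third_cross_cumulant(x, x)
```

With Gaussian data both cumulants would vanish and the ratio would be undefined, which is exactly why the non-Gaussianity assumption matters.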


Guiding Giants: Lightweight Controllers for Weighted Activation Steering in LLMs

Hegazy, Amr, Elhoushi, Mostafa, Alanwar, Amr

arXiv.org Artificial Intelligence

Controlling undesirable Large Language Model (LLM) behaviors, such as the generation of unsafe content or failure to adhere to safety guidelines, often relies on costly fine-tuning. Activation steering provides an alternative for inference-time control, but existing methods typically lack fine-grained, adaptive mechanisms. We introduce a novel approach using a lightweight, trainable controller network integrated during inference. This controller observes specific intermediate LLM activations and predicts both a global scaling factor and layer-specific weights. These predictions then dynamically modulate the intensity of a steering patch, derived from a pre-computed "refusal direction" vector, applied across the LLM's layers during generation. Trained on activations from both harmful and benign prompts, our controller learns to discriminatively apply nuanced, layer-aware interventions, activating steering primarily for harmful inputs. Experiments on safety benchmarks such as ToxicChat and In-The-Wild Jailbreak Prompts demonstrate that our weighted steering controller significantly increases refusal rates compared to the base LLM, achieving targeted behavioral modification without altering the original model parameters. Our experiments with Llama-3.1-8B, Llama-3.2-1B, and Mistral-7B show that our approach outperforms existing methods, presenting an efficient and adaptive means of fine-grained control over LLM behavior at inference time.
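The controller idea can be sketched as a tiny two-head network over a probe activation: one head emits the global scale, the other the per-layer weights, and the steering patch is the refusal direction scaled by both. The dimensions, the single hidden layer, and the `apply_steering` helper are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def controller_forward(activation, W1, b1, w_scale, W_layers):
    """Tiny MLP controller (illustrative): one hidden ReLU layer, then a
    sigmoid global-scale head and a softmax layer-weight head."""
    h = np.maximum(0.0, activation @ W1 + b1)        # hidden features
    scale = 1.0 / (1.0 + np.exp(-(h @ w_scale)))     # global factor in (0, 1)
    logits = h @ W_layers
    weights = np.exp(logits - logits.max())
    weights = weights / weights.sum()                # per-layer weights, sum to 1
    return scale, weights

def apply_steering(hidden_states, refusal_dir, scale, weights):
    """Add the refusal-direction patch to each layer's hidden state,
    modulated by the global scale and that layer's weight."""
    return [h + scale * w * refusal_dir
            for h, w in zip(hidden_states, weights)]
```

In a real deployment the controller would be wired into the forward pass via hooks; here the per-layer application is just a list comprehension over hidden states.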


Can Physics Informed Neural Operators Self Improve?

Majumdar, Ritam, Varhade, Amey, Karande, Shirish, Vig, Lovekesh

arXiv.org Artificial Intelligence

Self-training techniques have shown remarkable value across many deep learning models and tasks. However, such techniques remain largely unexplored in the context of learning fast solvers for systems of partial differential equations (e.g., Neural Operators). In this work, we explore the use of self-training for Fourier Neural Operators (FNO). Neural Operators emerged as a data-driven technique; however, data from experiments or traditional solvers is not always readily available. Physics Informed Neural Operators (PINO) overcome this constraint by utilizing a physics loss for training; however, the accuracy of PINO trained without data does not match the performance obtained by training with data. In this work we show that self-training can be used to close this gap in performance. We examine canonical examples, namely the 1D-Burgers and 2D-Darcy PDEs, to showcase the efficacy of self-training. Specifically, FNOs trained exclusively with physics loss through self-training come within 1.07x (Burgers) and 1.02x (Darcy) of the performance of FNOs trained with both data and physics loss. Furthermore, we discover that pseudo-labels can be used for self-training without necessarily training to convergence in each iteration. A consequence of this is that we are able to discover self-training schedules that improve upon the baseline performance of PINO in terms of accuracy as well as time.
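To make the pseudo-labeling loop concrete, here is a toy self-training sketch on a one-parameter "physics-informed" model. This is purely illustrative: the paper trains FNOs on PDE residuals, not this scalar problem. Each round freezes the current model's predictions as pseudo-labels, then minimizes a physics loss plus a pseudo-label fitting loss:

```python
import numpy as np

def self_train_toy(xs, rounds=5, steps=200, lr=0.05, lam=0.5):
    """Toy self-training loop (sketch, not PINO's actual training code).
    The model is u(x) = a * x and the 'PDE' is u'(x) = 2, giving the
    physics loss (a - 2)^2; each round's pseudo-labels anchor the fit to
    the previous model's predictions, as in self-training."""
    a = 0.0
    for _ in range(rounds):
        pseudo = a * xs                       # freeze pseudo-labels from current model
        for _ in range(steps):
            grad_phys = 2.0 * (a - 2.0)                        # d/da (a - 2)^2
            grad_pseudo = 2.0 * np.mean((a * xs - pseudo) * xs)  # pseudo-label fit
            a -= lr * (grad_phys + lam * grad_pseudo)
    return a
```

Because the pseudo-labels are refreshed between rounds, the parameter ratchets toward the physics-consistent solution a = 2 even though each inner optimization is also pulled back toward the previous model.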


Online Fairness-Aware Learning with Imbalanced Data Streams

Iosifidis, Vasileios, Zhang, Wenbin, Ntoutsi, Eirini

arXiv.org Artificial Intelligence

Data-driven learning algorithms are employed in many online applications, in which data become available over time, like network monitoring, stock price prediction, job applications, etc. The underlying data distribution might evolve over time, calling for model adaptation as new instances arrive and old instances become obsolete. In such dynamic environments, the so-called data streams, fairness-aware learning cannot be considered a one-off requirement but rather a continual requirement over the stream. Recent fairness-aware stream classifiers ignore the problem of class imbalance, which manifests in many real-life applications, and mitigate discrimination mainly because they "reject" minority instances at large due to their inability to effectively learn all classes. In this work, we propose \ours, an online fairness-aware approach that maintains a valid and fair classifier over the stream. \ours is an online boosting approach that changes the training distribution in an online fashion by monitoring the stream's class imbalance and tweaks its decision boundary to mitigate discriminatory outcomes over the stream. Experiments on 8 real-world and 1 synthetic datasets from different domains with varying class imbalance demonstrate the superiority of our method over state-of-the-art fairness-aware stream approaches, with relative increases of 11.2%-14.2% in balanced accuracy, 22.6%-31.8% in gmean, 42.5%-49.6% in recall, 14.3%-25.7% in kappa, and 89.4%-96.6% in statistical parity (fairness).
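The monitoring-and-reweighting mechanism can be illustrated with a minimal sketch: track time-decayed class frequencies over the stream and upweight minority-class instances during online updates, so the learner does not "reject" the minority class wholesale. This illustrates the general idea only; the class name and decay scheme are assumptions, not the paper's \ours method:

```python
class ImbalanceAwareStream:
    """Track time-decayed class frequencies over a binary stream and emit
    per-instance training weights that are inversely proportional to the
    observed class's current frequency (sketch of the monitoring idea)."""

    def __init__(self, decay=0.99):
        self.decay = decay
        self.freq = {0: 1e-6, 1: 1e-6}  # smoothed running class frequencies

    def instance_weight(self, y):
        # Decay all counts (forget old distribution), bump the observed
        # class, then weight inversely to its share of the stream.
        for c in self.freq:
            self.freq[c] *= self.decay
        self.freq[y] += 1.0
        total = sum(self.freq.values())
        return total / (2.0 * self.freq[y])  # ~1 when balanced, >1 for minority
```

The decay factor controls how quickly the monitor adapts when the stream's imbalance shifts, which is the same trade-off any drift-aware stream learner faces.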


Probabilistic Multi-Step-Ahead Short-Term Water Demand Forecasting with Lasso

Kley-Holsteg, Jens, Ziel, Florian

arXiv.org Machine Learning

Water demand is a highly important variable for operational control and decision making. Hence, the development of accurate forecasts is a valuable field of research to further improve the efficiency of water utilities. Focusing on probabilistic multi-step-ahead forecasting, a time series model is introduced to capture typical autoregressive, calendar, and seasonal effects, to account for time-varying variance, and to quantify the uncertainty and path-dependency of the water demand process. To deal with the high complexity of the water demand process, a high-dimensional feature space is applied, which is efficiently tuned by an automatic shrinkage and selection operator (lasso). This yields an accurate, simple, interpretable, and fast-to-compute forecasting model, which is well suited for real-time applications. The complete probabilistic forecasting framework allows for simulating not only the mean and the marginal properties but also the correlation structure between hours within the forecasting horizon. For practitioners, complete probabilistic multi-step-ahead forecasts are of considerable relevance as they provide additional information about the expected aggregated or cumulative water demand, so that a statement can be made about the probability with which a water storage capacity can guarantee the supply over a certain period of time. This information makes it possible to better control storage capacities and to better ensure the smooth operation of pumps. To appropriately evaluate the forecasting performance of the considered models, the energy score (ES), a strictly proper multidimensional evaluation criterion, is introduced. The methodology is applied to the hourly water demand data of a German water supplier.
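The energy score mentioned above can be estimated by Monte Carlo from an ensemble of forecast trajectories: ES = E‖X − y‖ − 0.5·E‖X − X′‖, where X and X′ are independent draws from the forecast distribution and y is the realized path. A minimal sketch (not the authors' code):

```python
import numpy as np

def energy_score(samples, obs):
    """Monte Carlo estimate of the energy score for a multivariate
    probabilistic forecast. `samples` has shape (m, d): m forecast paths
    of dimension d (e.g., d hours ahead); `obs` is the realized path.
    Lower scores are better; the score is strictly proper."""
    term1 = np.mean(np.linalg.norm(samples - obs, axis=1))        # E||X - y||
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.mean(np.linalg.norm(diffs, axis=2))                # E||X - X'||
    return term1 - 0.5 * term2
```

Because the norm is taken over the whole horizon at once, the score rewards forecasts that get the cross-hour correlation structure right, not just the hourly marginals.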


Contextual-Bandit Based Personalized Recommendation with Time-Varying User Interests

Xu, Xiao, Dong, Fang, Li, Yanghua, He, Shaojian, Li, Xin

arXiv.org Artificial Intelligence

A contextual bandit problem is studied in a highly non-stationary environment, which is ubiquitous in various recommender systems due to the time-varying interests of users. Two models with disjoint and hybrid payoffs are considered to characterize the phenomenon that users' preferences towards different items vary differently over time. In the disjoint payoff model, the reward of playing an arm is determined by an arm-specific preference vector, which is piecewise-stationary with asynchronous and distinct changes across different arms. An efficient learning algorithm that is adaptive to abrupt reward changes is proposed and theoretical regret analysis is provided to show that a sublinear scaling of regret in the time length $T$ is achieved. The algorithm is further extended to a more general setting with hybrid payoffs where the reward of playing an arm is determined by both an arm-specific preference vector and a joint coefficient vector shared by all arms. Empirical experiments are conducted on real-world datasets to verify the advantages of the proposed learning algorithms against baseline ones in both settings.
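One simple way to make a linear contextual bandit adaptive to abrupt, arm-specific changes is to estimate each arm's preference vector from only a recent window of observations. This sketch conveys the piecewise-stationary idea in the disjoint-payoff setting; the paper's actual change-adaptive algorithm differs in its detection mechanism:

```python
import numpy as np

class SlidingWindowLinUCB:
    """Per-arm linear UCB with a sliding window (illustrative sketch).
    Forgetting old observations lets each arm's estimate re-adapt after
    an abrupt change in that arm's preference vector."""

    def __init__(self, dim, window=200, alpha=1.0, reg=1.0):
        self.dim, self.window, self.alpha, self.reg = dim, window, alpha, reg
        self.history = {}  # arm -> list of (context, reward) pairs

    def ucb(self, arm, x):
        data = self.history.get(arm, [])[-self.window:]  # keep recent data only
        A = self.reg * np.eye(self.dim)
        b = np.zeros(self.dim)
        for ctx, r in data:
            A += np.outer(ctx, ctx)
            b += r * ctx
        A_inv = np.linalg.inv(A)
        theta = A_inv @ b                                # windowed ridge estimate
        return x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)

    def select(self, arms_contexts):
        """Pick the arm with the highest upper confidence bound."""
        return max(arms_contexts, key=lambda a: self.ucb(a, arms_contexts[a]))

    def update(self, arm, x, reward):
        self.history.setdefault(arm, []).append((x, reward))
```

The window length trades off reactivity to change points against estimation variance within each stationary segment.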


Learning by Playing - Solving Sparse Reward Tasks from Scratch

Riedmiller, Martin, Hafner, Roland, Lampe, Thomas, Neunert, Michael, Degrave, Jonas, Van de Wiele, Tom, Mnih, Volodymyr, Heess, Nicolas, Springenberg, Jost Tobias

arXiv.org Machine Learning

We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors - from scratch - in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks, that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment - enabling it to excel at sparse reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach. A video of the rich set of learned behaviours can be found at https://youtu.be/mPKyvocNe M.
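The scheduling idea can be caricatured as sampling the next auxiliary intention in proportion to its estimated benefit for the main task. This is a hedged sketch: SAC-X learns its schedule with its own estimator, and the function name, softmax form, and `q_main` inputs are assumptions:

```python
import numpy as np

def schedule_intention(q_main, temperature=1.0, rng=None):
    """Sample the next intention to execute. `q_main[i]` is an estimate of
    the main sparse-reward task's return when executing auxiliary intention
    i; a temperature-scaled softmax turns these estimates into scheduling
    probabilities, so helpful intentions are executed more often."""
    rng = np.random.default_rng() if rng is None else rng
    prefs = np.asarray(q_main, dtype=float) / temperature
    probs = np.exp(prefs - prefs.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```

A high temperature recovers uniform scheduling (pure auxiliary exploration), while a low temperature concentrates execution on the intentions that currently help the sparse main task most.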