 Power Industry


Scalable Constrained Policy Optimization for Safe Multi-agent Reinforcement Learning

Neural Information Processing Systems

A challenging problem in bringing multi-agent reinforcement learning (MARL) techniques to real-world applications, such as autonomous driving and drone swarms, is how to control multiple agents safely and cooperatively to accomplish tasks.


Task-oriented Time Series Imputation Evaluation via Generalized Representers

Neural Information Processing Systems

Time series analysis is widely used in many fields such as power and energy, economics, and transportation, and covers tasks such as forecasting, anomaly detection, and classification. Missing values are widely observed in these tasks and often lead to unpredictable negative effects on existing methods, hindering their further application. In response, existing time series imputation methods mainly focus on restoring sequences based on their data characteristics, while ignoring how the restored sequences perform in downstream tasks. Considering the different requirements of downstream tasks (e.g., forecasting), this paper proposes an efficient downstream task-oriented time series imputation evaluation approach. By combining time series imputation with the neural network models used for downstream tasks, the gain of different imputation strategies on downstream tasks is estimated without retraining, and the imputed values most favorable to the downstream task are obtained by combining different imputation strategies according to the estimated gains. The corresponding code can be found in the repository https://github.com/hkuedl/Task-Oriented-Imputation.
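The combination step lends itself to a compact illustration. The sketch below scores candidate imputations by the loss of a frozen downstream model and blends them with softmax weights over the estimated gains; it only conveys the task-oriented idea and does not reproduce the paper's generalized-representer estimator, and the function names and temperature parameter are illustrative assumptions.

# Minimal sketch, assuming a frozen downstream model and a list of candidate
# imputations; the paper's generalized-representer gain estimator is NOT
# reproduced here, only the "weight imputations by downstream gain" idea.
import torch

def downstream_loss(model, x_imputed, y_true):
    # Evaluate the frozen downstream model on one imputed input (no retraining).
    with torch.no_grad():
        return torch.nn.functional.mse_loss(model(x_imputed), y_true).item()

def blend_imputations(model, candidates, y_true, temperature=1.0):
    # candidates: list of tensors, each a fully imputed version of the series.
    losses = torch.tensor([downstream_loss(model, c, y_true) for c in candidates])
    gains = -losses                                    # lower loss -> higher gain
    weights = torch.softmax(gains / temperature, dim=0)
    blended = sum(w * c for w, c in zip(weights, candidates))
    return blended, weights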


MetaCURL: Non-stationary Concave Utility Reinforcement Learning
Bianca Marin Moreno, Margaux Brégère, Pierre Gaillard, Nadia Oudjane (Inria)

Neural Information Processing Systems

We explore online learning in episodic Markov decision processes in non-stationary environments (changing losses and transition probabilities). Our focus is on the Concave Utility Reinforcement Learning problem (CURL), an extension of classical RL for handling convex performance criteria defined on the state-action distributions induced by agent policies. While various machine learning problems can be written as CURL, its non-linearity invalidates the traditional Bellman equations.
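For reference, the contrast that breaks the Bellman equations can be written down directly; the notation below follows the broader CURL literature (occupancy measure $\mu^\pi$, loss vector $\ell$, convex criterion $F$) and is assumed rather than taken from this abstract.

\[
\text{classical RL:}\quad \min_{\pi}\ \big\langle \mu^{\pi}, \ell \big\rangle
\qquad\text{vs.}\qquad
\text{CURL:}\quad \min_{\pi}\ F\big(\mu^{\pi}\big),
\]
where $\mu^{\pi}$ is the state-action distribution induced by the policy $\pi$. Because $F$ is non-linear in $\mu^{\pi}$, the objective no longer decomposes additively over time steps, so the usual Bellman recursion cannot be applied directly.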


GLinSAT: The General Linear Satisfiability Neural Network Layer by Accelerated Gradient Descent

Neural Information Processing Systems

Ensuring that the outputs of neural networks satisfy specific constraints is crucial for applying neural networks to real-life decision-making problems. In this paper, we consider making a batch of neural network outputs satisfy bounded and general linear constraints.
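As a rough picture of what such a layer has to do, the sketch below pushes a batch of raw outputs toward the feasible set {x : Ax <= b, lo <= x <= hi} using Nesterov-accelerated projected gradient on a quadratic penalty. It is a simplified stand-in, not the GLinSAT layer itself (which solves an entropy-regularized problem without matrix factorization); the penalty weight, step size, and iteration count are illustrative assumptions.

# Hedged sketch: push a batch of network outputs toward bounded, general
# linear constraints with accelerated (Nesterov) projected gradient descent
# on a quadratic penalty. Not the GLinSAT layer; illustration only.
import torch

def project_to_constraints(y, A, b, lo, hi, rho=10.0, steps=200, lr=0.05):
    # y: (batch, n) raw outputs; A: (m, n); b: (m,); lo/hi: (n,) box bounds.
    x = y.clone()
    x_prev = x.clone()
    for t in range(1, steps + 1):
        momentum = (t - 1) / (t + 2)            # Nesterov momentum schedule
        z = (x + momentum * (x - x_prev)).detach().requires_grad_(True)
        violation = torch.relu(z @ A.T - b)     # penalize rows with A z > b
        loss = 0.5 * ((z - y) ** 2).sum() + 0.5 * rho * (violation ** 2).sum()
        grad, = torch.autograd.grad(loss, z)
        x_prev = x
        x = torch.clamp(z - lr * grad, lo, hi)  # exact projection onto the box
    return x.detach()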


Conformalized Time Series with Semantic Features

Neural Information Processing Systems

Conformal prediction is a powerful tool for uncertainty quantification, but its application to time-series data is constrained by the violation of the exchangeability assumption. Current solutions for time-series prediction typically operate in the output space and rely on manually selected weights to address distribution drift, leading to overly conservative predictions. To enable dynamic weight learning in the semantically rich latent space, we introduce a novel approach called Conformalized Time Series with Semantic Features (CT-SSF). CT-SSF utilizes the inductive bias in deep representation learning to dynamically adjust weights, prioritizing semantic features relevant to the current prediction. Theoretically, we show that CT-SSF surpasses previous methods defined in the output space. Experiments on synthetic and benchmark datasets demonstrate that CT-SSF significantly outperforms existing state-of-the-art (SOTA) conformal prediction techniques in terms of prediction efficiency while maintaining a valid coverage guarantee.
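A rough sketch of the weighted split-conformal mechanics is given below: calibration residuals are weighted by the latent-space similarity of each calibration point to the test point, and the interval half-width is a weighted quantile of those residuals. CT-SSF learns the weights; the fixed cosine-similarity softmax, the temperature, and the function signature here are illustrative assumptions.

# Hedged sketch: weighted split conformal prediction with calibration points
# weighted by similarity of their latent (semantic) features to the test point.
import torch

def weighted_conformal_interval(pred_test, z_test, preds_cal, y_cal, z_cal,
                                alpha=0.1, temperature=0.1):
    # Nonconformity scores on the calibration set (absolute residuals).
    scores = (preds_cal - y_cal).abs()
    # Similarity-based weights computed in the latent (semantic) space.
    sims = torch.nn.functional.cosine_similarity(z_cal, z_test.unsqueeze(0), dim=1)
    weights = torch.softmax(sims / temperature, dim=0)
    # Weighted (1 - alpha) quantile of the calibration scores.
    order = torch.argsort(scores)
    sorted_scores, sorted_w = scores[order], weights[order]
    cum_w = torch.cumsum(sorted_w, dim=0) / sorted_w.sum()
    idx = torch.searchsorted(cum_w, torch.tensor(1.0 - alpha))
    q = sorted_scores[idx.clamp(max=len(sorted_scores) - 1)]
    return pred_test - q, pred_test + q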


ANT: Adaptive Noise Schedule for Time Series Diffusion Models

Neural Information Processing Systems

Advances in diffusion models for generative artificial intelligence have recently propagated to the time series (TS) domain, demonstrating state-of-the-art performance on various tasks. However, prior works on TS diffusion models often borrow frameworks proposed in other domains without considering the characteristics of TS data, leading to suboptimal performance. In this work, we propose Adaptive Noise schedule for Time series diffusion models (ANT), which automatically predetermines a proper noise schedule for a given TS dataset based on statistics that capture its non-stationarity. Our intuition is that an optimal noise schedule should satisfy the following desiderata: 1) it reduces the non-stationarity of the TS data linearly, so that all diffusion steps are equally meaningful; 2) the data is fully corrupted to random noise at the final step; and 3) the number of steps is sufficiently large. The proposed method is practical in that it removes the need to search for an optimal noise schedule, requiring only a small additional cost to compute the dataset statistics, which can be done offline before training.
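The selection criterion can be illustrated with a small scoring routine: for each candidate schedule, corrupt the data at every step, track a non-stationarity statistic, and prefer schedules whose statistic decays close to linearly and reaches the noise level at the last step. The lag-1 autocorrelation used below is only a stand-in statistic; ANT's actual statistic and scoring rule are not reproduced.

# Hedged sketch of the schedule-selection idea; lower score is better.
import numpy as np

def nonstationarity(x):
    # Stand-in statistic: average |lag-1 autocorrelation| over the batch.
    x = x - x.mean(axis=-1, keepdims=True)
    num = (x[..., :-1] * x[..., 1:]).sum(axis=-1)
    den = (x ** 2).sum(axis=-1) + 1e-8
    return np.abs(num / den).mean()

def score_schedule(x0, alpha_bars, rng):
    # Corrupt the data at every diffusion step and track the statistic.
    stats = []
    for ab in alpha_bars:
        noise = rng.standard_normal(x0.shape)
        xt = np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * noise
        stats.append(nonstationarity(xt))
    stats = np.array(stats)
    target = np.linspace(stats[0], stats[-1], len(stats))  # ideal linear decay
    linearity_err = np.abs(stats - target).mean()
    final_gap = stats[-1]             # should be near 0 for pure white noise
    return linearity_err + final_gap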


ElasTST: Towards Robust Varied-Horizon Forecasting with Elastic Time-Series Transformer
Shun Zheng

Neural Information Processing Systems

Despite the recent strides in crafting specific architectures for time-series forecasting and in developing pre-trained universal models, a comprehensive examination of their capability to accommodate varied-horizon forecasting during inference is still lacking.


C Access to PowerGraph Dataset C.1 Dataset documentation and intended uses

Neural Information Processing Systems

We use the InMemoryDataset [27] class of PyTorch Geometric, which processes the raw data obtained from the Cascades [61] simulation. For each dataset (UK, IEEE24, IEEE39, and IEEE118), we provide a folder containing the raw data, organized into files such as edge_attr.mat, for the node-level tasks, i.e., power flow and optimal power flow analyses. The dataset can be viewed and downloaded by the reviewers from https://figshare.com/articles/dataset/PowerGraph/22820534 (node-level 1.08 GB and graph-level 2.7 GB when uncompressed); the node-level data can be fetched with a shell script (#!/bin/bash wget -O data.). The code to obtain the PowerGraph dataset in the InMemoryDataset [27] format and to benchmark GNN and explainability methods is available as a public GitHub organization at https://github.com/. The authors state here that they bear all responsibility in case of violation of rights, etc., and confirm that this work is licensed under the CC BY 4.0 license.
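For orientation, a minimal InMemoryDataset subclass in the spirit of the description above might look as follows; apart from edge_attr.mat, the file and variable names are hypothetical placeholders, and the actual PowerGraph loader is the one provided in the authors' repository.

# Hedged sketch of a PyTorch Geometric InMemoryDataset, assuming the raw .mat
# files are already placed under <root>/raw. File and variable names other
# than edge_attr.mat are placeholders, not the real PowerGraph layout.
import torch
import scipy.io
from torch_geometric.data import Data, InMemoryDataset

class PowerGridDataset(InMemoryDataset):
    def __init__(self, root, transform=None, pre_transform=None):
        super().__init__(root, transform, pre_transform)
        self.data, self.slices = torch.load(self.processed_paths[0])

    @property
    def raw_file_names(self):
        # edge_attr.mat is mentioned in the text; the others are placeholders.
        return ["edge_attr.mat", "edge_index.mat", "node_features.mat", "labels.mat"]

    @property
    def processed_file_names(self):
        return ["data.pt"]

    def process(self):
        raw = {name: scipy.io.loadmat(path)
               for name, path in zip(self.raw_file_names, self.raw_paths)}
        # Assumed variable names inside the .mat files; adjust to the real data.
        data = Data(
            x=torch.tensor(raw["node_features.mat"]["x"], dtype=torch.float),
            edge_index=torch.tensor(raw["edge_index.mat"]["edge_index"], dtype=torch.long),
            edge_attr=torch.tensor(raw["edge_attr.mat"]["edge_attr"], dtype=torch.float),
            y=torch.tensor(raw["labels.mat"]["y"], dtype=torch.float),
        )
        data, slices = self.collate([data])
        torch.save((data, slices), self.processed_paths[0])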


PowerGraph: A power grid benchmark dataset for graph neural networks

Neural Information Processing Systems

Power grids are critical infrastructures of paramount importance to modern society and are therefore engineered to operate under diverse conditions and failures. The ongoing energy transition poses new challenges for decision-makers and system operators, so developing grid analysis algorithms is important for supporting reliable operations. These key tools include power flow analysis and system security analysis, both needed for effective operational and strategic planning. The literature shows a growing trend of machine learning (ML) models that perform these analyses effectively. In particular, Graph Neural Networks (GNNs) stand out in such applications because of the graph-based structure of power grids.
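To make the setting concrete, the sketch below is a generic PyTorch Geometric baseline for node-level prediction on a grid graph (e.g., power-flow-style quantities per bus); it is an illustration only, not one of the specific GNN architectures benchmarked in PowerGraph.

# Hedged sketch: a minimal GNN for node-level regression on a power grid graph.
import torch
from torch_geometric.nn import GCNConv

class GridGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.head = torch.nn.Linear(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.head(h)  # one prediction per bus/node

# Usage with a torch_geometric.data.Data object `data`:
# model = GridGNN(data.num_node_features, 64, 1)
# pred = model(data.x, data.edge_index)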


Search-Guided, Lightly-Supervised Training of Structured Prediction Energy Networks

Neural Information Processing Systems

In structured output prediction tasks, labeling ground-truth training outputs is often expensive. However, for many tasks, even when the true output is unknown, we can evaluate predictions using a scalar reward function, which may be easily assembled from human knowledge or non-differentiable pipelines. But searching the entire output space for the best output with respect to this reward function is typically intractable. In this paper, we instead use efficient truncated randomized search in this reward function to train structured prediction energy networks (SPENs), which provide efficient test-time inference using gradient-based search on a smooth, learned representation of the score landscape and have previously yielded state-of-the-art results in structured prediction. In particular, this truncated randomized search in the reward function yields previously unknown local improvements, providing effective supervision for SPENs and avoiding their traditional need for labeled training data.
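The supervision signal can be sketched compactly: a short randomized search around the current prediction looks for an output with higher task reward, and each (better, worse) pair found this way can drive a margin-based ranking loss on the energy network. The clamped [0, 1] relaxation, step counts, and margin below are illustrative assumptions, not the paper's exact procedure.

# Hedged sketch: truncated randomized search in the reward function producing
# ranked pairs, plus a margin-based ranking loss for the energy network.
import torch

def truncated_random_search(y0, reward_fn, steps=20, noise_scale=0.1):
    # y0: current relaxed structured output, assumed to live in [0, 1]^dim.
    r0 = reward_fn(y0)
    best, best_r = y0, r0
    y = y0
    for _ in range(steps):
        y = (y + noise_scale * torch.randn_like(y)).clamp(0.0, 1.0)
        r = reward_fn(y)
        if r > best_r:
            best, best_r = y, r
    return (best, y0) if best_r > r0 else None  # ranked pair, or no improvement found

def ranking_loss(energy_net, better, worse, margin=1.0):
    # The better output should receive lower energy than the worse one.
    return torch.relu(margin + energy_net(better) - energy_net(worse))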