

DiffOPF: Diffusion Solver for Optimal Power Flow

Hoseinpour, Milad, Dvorkin, Vladimir

arXiv.org Machine Learning

The optimal power flow (OPF) is a multi-valued, non-convex mapping from loads to dispatch setpoints. The variability of system parameters (e.g., admittances, topology) further contributes to the multiplicity of dispatch setpoints for a given load. Existing deep learning OPF solvers are single-valued and thus fail to capture this variability unless the parameters are fully represented in the feature space, which is prohibitive. To solve this problem, we introduce a diffusion-based OPF solver, termed DiffOPF, that treats OPF as a conditional sampling problem. The solver learns the joint distribution of loads and dispatch setpoints from operational history, and returns the marginal dispatch distributions conditioned on loads. Unlike single-valued solvers, DiffOPF enables sampling statistically credible warm starts with favorable cost and constraint satisfaction trade-offs. We study the sample complexity needed to ensure that DiffOPF's solution lies within a prescribed distance from the optimization-based solution, and verify this experimentally on power system benchmarks.
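The multi-valued mapping the abstract describes can be illustrated with a toy numerical sketch (all numbers are invented; this is not the paper's model). A solver trained with a mean-squared-error loss regresses to the average of several historically observed setpoints, while conditional sampling preserves the distinct modes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy operational history: for the same load, the system historically
# dispatched either of two setpoints (a multi-valued OPF mapping).
setpoints = np.array([0.2, 0.8])           # two credible dispatch modes
history = rng.choice(setpoints, size=1000)  # observed dispatch decisions

# A single-valued (MSE-trained) solver learns the conditional mean,
# which matches neither historical setpoint.
mse_prediction = history.mean()

# A sampler of the conditional distribution returns credible modes.
sampled = rng.choice(history, size=5)

print(mse_prediction)   # ~0.5, not a credible setpoint
print(sampled)          # each draw is 0.2 or 0.8
```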


Robust Deep Reinforcement Learning for Inverter-based Volt-Var Control in Partially Observable Distribution Networks

Liu, Qiong, Guo, Ye, Xu, Tong

arXiv.org Artificial Intelligence

Inverter-based volt-var control is studied in this paper. A key issue in DRL-based approaches is the limited measurement deployment in active distribution networks, which leads to a partially observable state and an unknown reward. To address these problems, this paper proposes a robust DRL approach with a conservative critic and a surrogate reward. The conservative critic uses quantile regression to estimate a conservative state-action value function from the partially observable state, which helps to train a robust policy; the surrogate rewards for power loss and voltage violation are designed so that they can be computed from the limited measurements. The proposed approach optimizes the power loss of the whole network and the voltage profile of buses with measurable voltages, while indirectly improving the voltage profile of the other buses. Extensive simulations verify the effectiveness of the robust DRL approach under different limited-measurement conditions, even when only the active power injection of the root bus and fewer than 10% of bus voltages are measurable.
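The conservative estimate behind a quantile-regression critic comes from the pinball loss: minimizing it at a low quantile level tau yields a pessimistic value estimate. A minimal sketch with synthetic returns (the paper's network architecture and data are not reproduced here):

```python
import numpy as np

def pinball_loss(pred, target, tau):
    """Quantile (pinball) loss: its minimizer estimates the
    tau-quantile of the target distribution."""
    diff = target - pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# A low tau (e.g. 0.1) penalizes over-estimation more than
# under-estimation, yielding a conservative value estimate.
rng = np.random.default_rng(0)
returns = rng.normal(loc=1.0, scale=0.5, size=10000)  # synthetic returns

# Find the scalar estimate minimizing the tau=0.1 loss by grid search.
grid = np.linspace(-1.0, 3.0, 2001)
losses = [pinball_loss(v, returns, tau=0.1) for v in grid]
conservative_v = grid[int(np.argmin(losses))]

print(conservative_v)  # close to the empirical 10th percentile, well below the mean
```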


Hierarchical Decoupling Capacitor Optimization for Power Distribution Network of 2.5D ICs with Co-Analysis of Frequency and Time Domains Based on Deep Reinforcement Learning

Duan, Yuanyuan, Feng, Haiyang, Yu, Zhiping, Wu, Hanming, Shao, Leilai, Zhu, Xiaolei

arXiv.org Artificial Intelligence

With the growing need for higher memory bandwidth and computation density, 2.5D design, which integrates multiple chiplets onto an interposer, emerges as a promising solution. However, this integration introduces significant challenges due to increasing data rates and a large number of I/Os, necessitating advanced optimization of the power distribution networks (PDNs) both on-chip and on-interposer to mitigate small-signal noise and simultaneous switching noise (SSN). Traditional PDN optimization strategies in 2.5D systems primarily focus on reducing impedance by integrating decoupling capacitors (decaps) to suppress small-signal noise. Unfortunately, relying solely on frequency-domain analysis has proven inadequate for addressing coupled SSN, as indicated by our experimental results. In this work, we introduce a novel two-phase optimization flow using deep reinforcement learning to tackle both the on-chip small-signal noise and SSN. Initially, we optimize the impedance in the frequency domain to keep the small-signal noise within acceptable limits while avoiding over-design. Subsequently, in the time domain, we refine the PDN to minimize the voltage violation integral (VVI), a more accurate measure of SSN severity. To the best of our knowledge, this is the first dual-domain optimization strategy that simultaneously addresses both small-signal noise and SSN propagation through strategic decap placement in on-chip and on-interposer PDNs, offering a significant step forward in the design of robust PDNs for 2.5D integrated systems.
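A voltage violation integral of the kind the abstract names can be sketched as the area by which a supply rail dips below its threshold over a time window (the waveform, threshold, and time scale below are invented for illustration):

```python
import numpy as np

def voltage_violation_integral(v, t, v_min):
    """Integrate the area where the rail voltage dips below v_min
    (trapezoidal rule). A larger VVI indicates more severe SSN."""
    shortfall = np.maximum(v_min - v, 0.0)
    dt = np.diff(t)
    return np.sum(0.5 * (shortfall[1:] + shortfall[:-1]) * dt)

# Synthetic droop: a 1.0 V rail dipping to 0.9 V for 0.2 us
# within a 1 us window, against a 0.95 V threshold.
t = np.linspace(0.0, 1e-6, 1001)
v = np.full_like(t, 1.0)
v[400:600] = 0.9
vvi = voltage_violation_integral(v, t, v_min=0.95)
print(vvi)  # ~0.05 V * 0.2e-6 s = 1e-8 V*s
```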


Safety-Aware Reinforcement Learning for Electric Vehicle Charging Station Management in Distribution Network

Fan, Jiarong, Liebman, Ariel, Wang, Hao

arXiv.org Artificial Intelligence

The increasing integration of electric vehicles (EVs) into the grid can pose a significant risk to distribution system operation in the absence of coordination. In response to the need for effective coordination of EVs within the distribution network, this paper presents a safety-aware reinforcement learning (RL) algorithm designed to manage EV charging stations while ensuring that system constraints are satisfied. Unlike existing methods, the proposed algorithm does not rely on explicit penalties for constraint violations, eliminating the need for penalty-coefficient tuning. Managing EV charging stations is further complicated by multiple uncertainties, notably the variability of solar generation and energy prices. To address this challenge, we develop an off-policy RL algorithm that uses data efficiently to learn patterns in such uncertain environments. The algorithm also incorporates a maximum entropy framework to enhance exploration and prevent convergence to locally optimal solutions. Simulation results demonstrate that our algorithm outperforms traditional RL algorithms in managing EV charging in the distribution network.
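The maximum entropy idea the abstract invokes has a simple closed form for a discrete action set: the policy maximizing expected value plus alpha times entropy is a softmax over action values with temperature alpha. A minimal sketch with made-up action values (not the paper's algorithm):

```python
import numpy as np

def max_entropy_policy(q, alpha):
    """Policy maximizing E[Q] + alpha * entropy over discrete actions:
    the closed form is softmax(Q / alpha)."""
    z = q / alpha
    z -= z.max()            # stabilize the exponentials
    p = np.exp(z)
    return p / p.sum()

q = np.array([1.0, 0.9, 0.1])  # hypothetical values for three charging rates

print(max_entropy_policy(q, alpha=0.5))   # keeps probability on near-optimal actions
print(max_entropy_policy(q, alpha=0.01))  # near-greedy as alpha -> 0
```

Larger alpha spreads probability mass over actions with similar value, which is what prevents the policy from collapsing prematurely onto one locally optimal action.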


Controlling Smart Inverters using Proxies: A Chance-Constrained DNN-based Approach

Gupta, Sarthak, Kekatos, Vassilis, Jin, Ming

arXiv.org Artificial Intelligence

Coordinating inverters at scale under uncertainty is the desideratum for integrating renewables into distribution grids. Unless load demands and solar generation are telemetered frequently, controlling inverters given approximate grid conditions, or proxies thereof, becomes a key specification. Although deep neural networks (DNNs) can learn optimal inverter schedules, guaranteeing feasibility remains largely elusive. Rather than training DNNs to imitate already-computed optimal power flow (OPF) solutions, this work integrates DNN-based inverter policies into the OPF itself. The proposed DNNs are trained through two OPF alternatives that confine voltage deviations either on average or through a convex restriction of chance constraints. The trained DNNs can be driven by partial, noisy, or proxy descriptors of the current grid conditions, which is important when the OPF has to be solved for an unobservable feeder. DNN weights are trained via back-propagation by differentiating the AC power flow equations when the network model is known; otherwise, a gradient-free variant is put forth. The latter is relevant when inverters are controlled by an aggregator with access only to a power flow solver or a digital twin of the feeder. Numerical tests compare the DNN-based inverter control schemes with the optimal inverter setpoints in terms of optimality and feasibility.
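One generic way to realize a gradient-free variant of the kind described, when only a black-box power flow solver is available, is central-difference gradient estimation on the solver's cost output. The solver and cost below are toy stand-ins, not the paper's models:

```python
import numpy as np

def cost_via_solver(w):
    # Placeholder for: run a power flow with inverter setpoints w
    # and return the resulting operating cost (toy quadratic here).
    return np.sum((w - 0.3) ** 2)

def fd_gradient(f, w, eps=1e-5):
    """Estimate grad f(w) by central differences, one query pair per coordinate."""
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

w = np.zeros(3)
for _ in range(200):                 # plain gradient descent on the estimate
    w -= 0.1 * fd_gradient(cost_via_solver, w)
print(np.round(w, 3))  # converges toward the toy optimum [0.3, 0.3, 0.3]
```

Each update costs two solver calls per decision variable, which is the usual price of going gradient-free.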


PowerGym: A Reinforcement Learning Environment for Volt-Var Control in Power Distribution Systems

Fan, Ting-Han, Lee, Xian Yeow, Wang, Yubo

arXiv.org Artificial Intelligence

Volt-Var control refers to the control of voltage (Volt) and reactive power (Var) in power distribution systems to achieve healthy operation. By optimally dispatching voltage regulators, switchable capacitors, and controllable batteries, Volt-Var control helps flatten voltage profiles and reduce power losses across the distribution system. It is hence rated as the most desired function for power distribution systems [Borozan et al., 2001]. At the center of Volt-Var control is an optimization of voltage profiles and power losses governed by network constraints. The power distribution system is represented as a tree graph (N, ξ), where N is the set of nodes (buses) and ξ is the set of edges (lines and transformers).
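The tree-graph model (N, ξ) can be sketched directly: buses as nodes, lines and transformers as edges, and a radiality check (a feeder is a tree iff it is connected and has |N| - 1 edges). The bus numbering below is made up:

```python
from collections import defaultdict

# Toy feeder: bus 0 is the substation root; edges are lines/transformers.
N = {0, 1, 2, 3, 4}
xi = [(0, 1), (1, 2), (1, 3), (3, 4)]

def is_radial(nodes, edges):
    """A graph is a tree iff it has |nodes| - 1 edges and is connected."""
    if len(edges) != len(nodes) - 1:
        return False
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]           # DFS from the root bus
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == nodes

print(is_radial(N, xi))  # True
```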


Rethink AI-based Power Grid Control: Diving Into Algorithm Design

Zhou, Xiren, Wang, Siqi, Diao, Ruisheng, Bian, Desong, Duan, Jiahui, Shi, Di

arXiv.org Artificial Intelligence

Recently, deep reinforcement learning (DRL)-based approaches have shown promise in solving complex decision and control problems in the power engineering domain. In this paper, we present an in-depth analysis of DRL-based voltage control from the aspects of algorithm selection, state-space representation, and reward engineering. To resolve the observed issues, we propose a novel imitation learning-based approach that directly maps power grid operating points to effective actions, without any interim reinforcement learning process. The performance results demonstrate that the proposed approach has strong generalization ability with much less training time. The agent trained by imitation learning is effective and robust at solving the voltage control problem and outperforms the earlier RL agents.
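The core of such an imitation-learning approach is supervised regression from operating points to expert actions, with no reward or environment loop. A minimal behavior-cloning sketch with a linear policy and synthetic data (the paper's network and dataset are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstrations: operating points X (e.g. P, Q, V, angle
# features) paired with expert control actions Y.
X = rng.normal(size=(500, 4))
W_true = rng.normal(size=(4, 2))   # hidden expert mapping (for the demo)
Y = X @ W_true

# Behavior cloning with a linear model and MSE loss = least squares.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The learned policy maps a grid operating point directly to an action.
action = X[0] @ W_hat
print(np.allclose(W_hat, W_true))  # True on this noiseless demo
```

In practice the linear model would be replaced by a deep network, but the training signal is the same: match recorded expert actions, skipping any RL interaction.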