Reinforcement Learning for Resilient Power Grids
Zhao, Zhenting, Chen, Po-Yen, Jin, Yucheng
arXiv.org Artificial Intelligence
Traditional power grid systems have proven inadequate under increasingly frequent and extreme natural disasters. Reinforcement learning (RL) is a promising approach to grid resilience, given its record of success in power grid control. However, most power grid simulators and RL interfaces do not support simulating a grid under large-scale blackouts or when the network splits into sub-networks. In this study, we propose an updated power grid simulator built on Grid2Op, an existing simulator and RL interface, and experiment with limiting Grid2Op's action and observation spaces. Testing with the DDQN and SliceRDQN algorithms, we find that reduced action spaces significantly improve training performance and efficiency. In addition, we investigate a low-rank neural network regularization method for deep Q-learning, one of the most widely used RL algorithms, in this power grid control scenario. Our experiments demonstrate that, in this power grid simulation environment, the method significantly improves the performance of RL agents.
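To give a sense of why a low-rank parameterization can help, the sketch below compares parameter counts for a dense Q-network layer against a rank-r factorization W ≈ UV (U: n×r, V: r×m). This is an illustrative calculation only, not the authors' implementation; the layer sizes and rank are assumed for the example.

```python
def dense_params(n: int, m: int) -> int:
    """Parameter count of a full n-by-m weight matrix."""
    return n * m

def low_rank_params(n: int, m: int, r: int) -> int:
    """Parameter count of a rank-r factorization W = U @ V,
    where U is n-by-r and V is r-by-m."""
    return r * (n + m)

# Hypothetical Q-network hidden layer: 512 inputs, 1024 outputs, rank 16.
full = dense_params(512, 1024)           # 524288 parameters
factored = low_rank_params(512, 1024, 16)  # 24576 parameters
print(full, factored, full / factored)
```

For these assumed sizes, the factorized layer uses roughly 21x fewer parameters, which is the kind of capacity restriction a low-rank regularizer exploits to reduce overfitting in deep Q-learning.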
Dec-7-2022