
A Novel Bifurcation Method for Observation Perturbation Attacks on Reinforcement Learning Agents: Load Altering Attacks on a Cyber Physical Power System

Broda-Milian, Kiernan, Al-Mallah, Ranwa, Dagdougui, Hanane

arXiv.org Artificial Intelligence

Components of cyber physical systems, which affect real-world processes, are often exposed to the internet. Replacing conventional control methods with Deep Reinforcement Learning (DRL) in energy systems is an active area of research, as these systems become increasingly complex with the advent of renewable energy sources and the desire to improve their efficiency. Artificial Neural Networks (ANNs) are vulnerable to specific perturbations of their inputs or features, called adversarial examples. These perturbations are difficult to detect when properly regularized, but have significant effects on the ANN's output. Because DRL uses ANNs to map observations to optimal actions, it is similarly vulnerable to adversarial examples. This work proposes a novel attack technique for continuous control using a Group Difference Logits loss with a bifurcation layer. By combining aspects of targeted and untargeted attacks, the attack significantly increases the impact compared to an untargeted attack, with drastically smaller distortions than an optimally targeted attack. We demonstrate the impacts of powerful gradient-based attacks in a realistic smart energy environment, show how the impacts change with different DRL agents and training procedures, and use statistical and time-series analysis to evaluate attacks' stealth. The results show that adversarial attacks can have significant impacts on DRL controllers, and constraining an attack's perturbations makes it difficult to detect. However, certain DRL architectures are far more robust, and robust training methods can further reduce the impact.
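
The gradient-based observation attacks the abstract describes can be illustrated with a minimal FGSM-style sketch. Everything here is a toy stand-in: the linear "policy" `W`, the epsilon bound, and the loss are hypothetical, and the paper's actual method uses a Group Difference Logits loss with a bifurcation layer on a deep network.

```python
import numpy as np

# Toy FGSM-style observation perturbation against a hypothetical
# linear policy (NOT the paper's actual attack or network).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # toy action-value weights
obs = rng.normal(size=8)             # clean observation

def action_values(o):
    return W @ o

clean_action = int(np.argmax(action_values(obs)))

# Untargeted step: push the observation against the chosen action's
# value; for a linear model that value's gradient is simply W's row.
eps = 0.5                            # L-inf perturbation budget
grad = W[clean_action]               # d(chosen action value)/d(obs)
adv_obs = obs - eps * np.sign(grad)  # bounded, hard-to-detect shift

print(clean_action, int(np.argmax(action_values(adv_obs))))
```

The key property the paper exploits is visible even here: the perturbation is bounded element-wise by `eps`, yet it moves the observation in the direction that most degrades the agent's action choice.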


CityLearn v2: Energy-flexible, resilient, occupant-centric, and carbon-aware management of grid-interactive communities

Nweye, Kingsley, Kaspar, Kathryn, Buscemi, Giacomo, Fonseca, Tiago, Pinto, Giuseppe, Ghose, Dipanjan, Duddukuru, Satvik, Pratapa, Pavani, Li, Han, Mohammadi, Javad, Ferreira, Luis Lino, Hong, Tianzhen, Ouf, Mohamed, Capozzoli, Alfonso, Nagy, Zoltan

arXiv.org Artificial Intelligence

As more distributed energy resources become part of the demand-side infrastructure, it is important to quantify the energy flexibility they provide on a community scale, particularly to understand the impact of geographic, climatic, and occupant behavioral differences on their effectiveness, as well as identify the best control strategies to accelerate their real-world adoption. CityLearn provides an environment for benchmarking simple and advanced distributed energy resource control algorithms including rule-based, model-predictive, and reinforcement learning control. CityLearn v2 presented here extends CityLearn v1 by providing a simulation environment that leverages the End-Use Load Profiles for the U.S. Building Stock dataset to create virtual grid-interactive communities for resilient, multi-agent distributed energy resources and objective control with dynamic occupant feedback. This work details the v2 environment design and provides application examples that utilize reinforcement learning to manage battery energy storage system charging/discharging cycles, vehicle-to-grid control, and thermal comfort during heat pump power modulation.
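
The occupant-centric, carbon-aware objectives described above can be pictured as a weighted multi-term reward. The sketch below is purely illustrative: the signal names, weights, and functional form are assumptions for exposition, not CityLearn v2's actual reward implementation.

```python
# Toy sketch of a carbon-aware, comfort-weighted reward of the kind
# a CityLearn v2 control objective combines (weights and signals are
# hypothetical, not the framework's actual reward function).
def reward(grid_kwh, carbon_intensity, indoor_temp, setpoint,
           w_carbon=1.0, w_comfort=2.0):
    emissions = max(grid_kwh, 0.0) * carbon_intensity  # kgCO2 for grid imports
    discomfort = abs(indoor_temp - setpoint)           # deg C deviation
    return -(w_carbon * emissions + w_comfort * discomfort)

# Importing 2 kWh at 0.4 kgCO2/kWh while 1 deg C off setpoint:
print(reward(2.0, 0.4, 21.0, 22.0))
```

An agent maximizing such a reward trades off heat pump power modulation (comfort) against charging batteries when grid carbon intensity is low, which is the kind of behavior the application examples in the paper train for.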


EVLearn: Extending the CityLearn Framework with Electric Vehicle Simulation

Fonseca, Tiago, Ferreira, Luis, Cabral, Bernardo, Severino, Ricardo, Nweye, Kingsley, Ghose, Dipanjan, Nagy, Zoltan

arXiv.org Artificial Intelligence

Intelligent energy management strategies, such as Vehicle-to-Grid (V2G) and Grid-to-Vehicle (G2V), emerge as a potential solution to the integration of Electric Vehicles (EVs) into the energy grid. These strategies promise enhanced grid resilience and economic benefits for both vehicle owners and grid operators. Despite these prospects, the adoption of these strategies is still hindered by an array of operational problems. Key among these is the lack of a simulation platform that allows researchers to validate and refine V2G and G2V strategies, including their development, training, and testing in the context of Energy Communities (ECs) incorporating multiple flexible energy assets. Addressing this gap, we first introduce EVLearn, a simulation module for researching V2G and G2V energy management strategies that models EVs, their charging infrastructure, and the associated energy flexibility dynamics; second, this paper integrates EVLearn with the existing CityLearn framework, bringing V2G and G2V simulation capabilities into the study of broader energy management strategies. Results validated EVLearn and its integration into CityLearn, where the impact of these strategies is highlighted through a comparative simulation scenario.


CityLearn: Standardizing Research in Multi-Agent Reinforcement Learning for Demand Response and Urban Energy Management

Vazquez-Canteli, Jose R, Dey, Sourav, Henze, Gregor, Nagy, Zoltan

arXiv.org Artificial Intelligence

Rapid urbanization, increasing integration of distributed renewable energy resources, energy storage, and electric vehicles introduce new challenges for the power grid. In the US, buildings represent about 70% of the total electricity demand, and demand response has the potential to reduce electricity peaks by about 20%. Unlocking this potential requires control systems that operate on distributed systems, ideally data-driven and model-free. For this, reinforcement learning (RL) algorithms have gained increased interest in recent years. However, research in RL for demand response has been lacking the level of standardization that propelled the enormous progress in RL research in the computer science community. To remedy this, we created CityLearn, an OpenAI Gym environment which allows researchers to implement, share, replicate, and compare their implementations of RL for demand response. Here, we discuss this environment and The CityLearn Challenge, an RL competition we organized to propel further progress in this field.
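
The standardization CityLearn provides is the familiar Gym interaction loop. The sketch below uses a self-contained toy stand-in (`ToyDistrictEnv` and its one-battery dynamics are invented for illustration; the real class is CityLearn's Gym environment with building-level observations and actions), but the `reset`/`step` control loop is the interface the paper standardizes.

```python
# Minimal sketch of the Gym-style loop CityLearn standardizes.
# ToyDistrictEnv is a hypothetical stand-in, not the real environment.
class ToyDistrictEnv:
    """One building with a battery; action in [-1, 1] charges/discharges."""
    def __init__(self):
        self.soc = 0.5            # battery state of charge
        self.t = 0                # hour of the episode

    def reset(self):
        self.soc, self.t = 0.5, 0
        return [self.soc, self.t]  # observation

    def step(self, action):
        self.soc = min(1.0, max(0.0, self.soc + 0.1 * action))
        self.t += 1
        reward = -abs(self.soc - 0.5)   # toy objective: hold mid SOC
        done = self.t >= 24             # one simulated day
        return [self.soc, self.t], reward, done, {}

env = ToyDistrictEnv()
obs = env.reset()
total, done = 0.0, False
while not done:
    # Rule-based baseline controller; an RL agent would replace this.
    action = 1.0 if obs[0] < 0.5 else -1.0
    obs, reward, done, _ = env.step(action)
    total += reward
print(round(total, 3))
```

Because every controller, whether rule-based, model-predictive, or RL, interacts through the same `reset`/`step` contract, implementations become directly comparable, which is the point of the benchmark.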


A Centralised Soft Actor Critic Deep Reinforcement Learning Approach to District Demand Side Management through CityLearn

Kathirgamanathan, Anjukan, Twardowski, Kacper, Mangina, Eleni, Finn, Donal

arXiv.org Machine Learning

Reinforcement learning is a promising model-free and adaptive controller for demand side management, as part of the future smart grid, at the district level. This paper presents the results of the algorithm that was submitted for the CityLearn Challenge, which was hosted in early 2020 with the aim of designing and tuning a reinforcement learning agent to flatten and smooth the aggregated curve of electrical demand of a district of diverse buildings. The proposed solution secured second place in the challenge using a centralised 'Soft Actor Critic' deep reinforcement learning agent that was able to handle continuous action spaces. The controller achieved an averaged score of 0.967 on the challenge dataset comprising different buildings and climates. This highlights the potential application of deep reinforcement learning as a plug-and-play style controller that is capable of handling different climates and a heterogeneous building stock, for district demand side management of buildings.