HVAC Control


Reinforcement Learning (RL) Meets Urban Climate Modeling: Investigating the Efficacy and Impacts of RL-Based HVAC Control

Yu, Junjie, Schreck, John S., Gagne, David John, Oleson, Keith W., Li, Jie, Liang, Yongtu, Liao, Qi, Sun, Mingfei, Topping, David O., Zheng, Zhonghua

arXiv.org Artificial Intelligence

Reinforcement learning (RL)-based heating, ventilation, and air conditioning (HVAC) control has emerged as a promising technology for reducing building energy consumption while maintaining indoor thermal comfort. However, the efficacy of such strategies is influenced by the background climate, and their implementation may alter both the indoor climate and the local urban climate. This study proposes an integrated framework combining RL with an urban climate model that incorporates a building energy model, aiming to evaluate the efficacy of RL-based HVAC control across different background climates, the impacts of RL strategies on indoor and local urban climate, and the transferability of RL strategies across cities. Our findings reveal that the reward (defined as a weighted combination of energy consumption and thermal comfort) and the impacts of RL strategies on indoor and local urban climate vary markedly across cities with different background climates. The sensitivity to reward weights and the transferability of RL strategies are also strongly influenced by the background climate. Cities in hot climates tend to achieve higher rewards across most reward-weight configurations that balance energy consumption and thermal comfort, and cities with more variable atmospheric temperatures demonstrate greater RL strategy transferability. These findings underscore the importance of thoroughly evaluating RL-based HVAC control strategies in diverse climatic contexts. This study also offers the new insight that city-to-city learning can aid the deployment of RL-based HVAC control.
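The reward described above, a weighted combination of energy consumption and thermal comfort, can be sketched as a simple scalar function. The weight `w` and the comfort band used here are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of a weighted energy/comfort reward for RL-based HVAC
# control. Higher is better: the agent is penalised for energy use and
# for indoor temperatures outside an assumed comfort band.

def hvac_reward(energy_kwh: float, indoor_temp_c: float,
                w: float = 0.5, comfort_low: float = 20.0,
                comfort_high: float = 24.0) -> float:
    """Weighted combination of energy cost and thermal discomfort."""
    # Distance (in deg C) outside the comfort band; zero when inside it.
    discomfort = (max(comfort_low - indoor_temp_c, 0.0)
                  + max(indoor_temp_c - comfort_high, 0.0))
    return -(w * energy_kwh + (1.0 - w) * discomfort)
```

Sweeping `w` between 0 and 1 corresponds to the reward-weight configurations whose sensitivity the study examines across climates.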


HVAC-DPT: A Decision Pretrained Transformer for HVAC Control

Berkes, Anaïs

arXiv.org Artificial Intelligence

Building operations consume approximately 40% of global energy, with Heating, Ventilation, and Air Conditioning (HVAC) systems responsible for up to 50% of this consumption. As HVAC energy demands are expected to rise, optimising system efficiency is crucial for reducing future energy use and mitigating climate change. Existing control strategies lack generalisation and require extensive training and data, limiting their rapid deployment across diverse buildings. This paper introduces HVAC-DPT, a Decision-Pretrained Transformer using in-context Reinforcement Learning (RL) for multi-zone HVAC control. HVAC-DPT frames HVAC control as a sequential prediction task, training a causal transformer on interaction histories generated by diverse RL agents. This approach enables HVAC-DPT to refine its policy in-context, without modifying network parameters, allowing for deployment across different buildings without the need for additional training or data collection. HVAC-DPT reduces energy consumption in unseen buildings by 45% compared to the baseline controller, offering a scalable and effective approach to mitigating the increasing environmental impact of HVAC systems.
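HVAC-DPT's key idea is in-context refinement: the policy improves purely by conditioning on its interaction history, without parameter updates. As a toy stand-in for the pretrained causal transformer (which this sketch does not attempt to reproduce), the rule below picks the action with the best observed mean reward from the history; all names and the averaging rule are illustrative assumptions:

```python
import random

# Toy illustration of in-context policy refinement: behaviour adapts to
# the (action, reward) history alone, with no weight updates. HVAC-DPT
# uses a causal transformer for this; simple per-action averaging is an
# assumption made here for brevity.

def in_context_policy(history, actions, explore_p=0.1, rng=random):
    """Pick an action given past (action, reward) pairs."""
    if rng.random() < explore_p or not history:
        return rng.choice(actions)
    totals, counts = {}, {}
    for a, r in history:
        totals[a] = totals.get(a, 0.0) + r
        counts[a] = counts.get(a, 0) + 1
    # Unseen actions default to mean 0, which is optimistic when rewards
    # (e.g. negative energy cost) are non-positive.
    return max(actions, key=lambda a: totals.get(a, 0.0) / counts.get(a, 1))
```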


An experimental evaluation of Deep Reinforcement Learning algorithms for HVAC control

Manjavacas, Antonio, Campoy-Nieves, Alejandro, Jiménez-Raboso, Javier, Molina-Solana, Miguel, Gómez-Romero, Juan

arXiv.org Artificial Intelligence

Heating, Ventilation, and Air Conditioning (HVAC) systems are a major driver of energy consumption in commercial and residential buildings. Recent studies have shown that Deep Reinforcement Learning (DRL) algorithms can outperform traditional reactive controllers. However, DRL-based solutions are generally designed for ad hoc setups and lack standardization for comparison. To fill this gap, this paper provides a critical and reproducible evaluation, in terms of comfort and energy consumption, of several state-of-the-art DRL algorithms for HVAC control. The study examines the controllers' robustness, adaptability, and trade-off between optimization goals by using the Sinergym framework. The results obtained confirm the potential of DRL algorithms, such as SAC and TD3, in complex scenarios and reveal several challenges related to generalization and incremental learning.


Enhancing personalised thermal comfort models with Active Learning for improved HVAC controls

Tekler, Zeynep Duygu, Lei, Yue, Dai, Xilei, Chong, Adrian

arXiv.org Artificial Intelligence

Developing personalised thermal comfort models to inform occupant-centric controls (OCC) in buildings requires collecting large amounts of real-time occupant preference data. This process can be highly intrusive and labour-intensive for large-scale implementations, limiting the practicality of real-world OCC implementations. To address this issue, this study proposes a thermal preference-based HVAC control framework enhanced with Active Learning (AL) to address the data challenges related to real-world implementations of such OCC systems. The proposed AL approach proactively identifies the most informative thermal conditions for human annotation and iteratively updates a supervised thermal comfort model. The resulting model is subsequently used to predict the occupants' thermal preferences under different thermal conditions, which are integrated into the building's HVAC controls. The feasibility of our proposed AL-enabled OCC was demonstrated in an EnergyPlus simulation of a real-world testbed supplemented with the thermal preference data of 58 study occupants. The preliminary results indicated a significant reduction in overall labelling effort (i.e., 31.0%) between our AL-enabled OCC and conventional OCC while still achieving a slight increase in energy savings (i.e., 1.3%) and thermal satisfaction levels above 98%. This result demonstrates the potential for deploying such systems in future real-world implementations, enabling personalised comfort and energy-efficient building operations.
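The active-learning loop above, fit a comfort model, then query the most informative thermal condition for annotation, can be sketched with uncertainty sampling. The 1-D logistic model (temperature deviation from neutral vs. "prefer cooler") and all constants are illustrative assumptions, not the study's actual comfort model:

```python
import math

# Toy active-learning sketch: fit p(prefer cooler | temp deviation) with a
# logistic model, then query the condition whose prediction is most
# uncertain (probability closest to 0.5) for human annotation.

def fit_logistic(temps, labels, lr=0.1, steps=2000):
    """Fit sigmoid(w * t + b) to binary preference labels by gradient ascent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for t, y in zip(temps, labels):
            p = 1.0 / (1.0 + math.exp(-(w * t + b)))
            w += lr * (y - p) * t
            b += lr * (y - p)
    return w, b

def most_uncertain(candidates, w, b):
    """Uncertainty sampling: the condition with predicted p closest to 0.5."""
    return min(candidates,
               key=lambda t: abs(1.0 / (1.0 + math.exp(-(w * t + b))) - 0.5))
```

Each queried label would be added to `temps`/`labels` and the model refit, iterating until the labelling budget is spent.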


A Comparison of Classical and Deep Reinforcement Learning Methods for HVAC Control

Wang, Marshall, Willes, John, Jiralerspong, Thomas, Moezzi, Matin

arXiv.org Artificial Intelligence

Reinforcement learning (RL) is a promising approach for optimizing HVAC control. RL offers a framework for improving system performance, reducing energy consumption, and enhancing cost efficiency. We benchmark two popular classical and deep RL methods (Q-Learning and Deep-Q-Networks) across multiple HVAC environments and explore the practical consideration of model hyper-parameter selection and reward tuning. The findings provide insight for configuring RL agents in HVAC systems, promoting energy-efficient and cost-effective operation.


One for Many: Transfer Learning for Building HVAC Control

Xu, Shichao, Wang, Yixuan, Wang, Yanzhi, O'Neill, Zheng, Zhu, Qi

arXiv.org Artificial Intelligence

The design of a building's heating, ventilation, and air conditioning (HVAC) system is critically important, as it accounts for around half of building energy consumption and directly affects occupant comfort, productivity, and health. Traditional HVAC control methods are typically based on creating explicit physical models of building thermal dynamics, which often require significant effort to develop and struggle to achieve sufficient accuracy and efficiency for runtime building control, as well as the scalability needed for field implementations. Recently, deep reinforcement learning (DRL) has emerged as a promising data-driven method that provides good control performance without analyzing physical models at runtime. However, a major challenge for DRL (and many other data-driven learning methods) is the long training time it takes to reach the desired performance. In this work, we present a novel transfer-learning-based approach to overcome this challenge. Our approach can effectively transfer a DRL-based HVAC controller trained for a source building to a controller for a target building with minimal effort and improved performance, by decomposing the neural network controller into a transferable front-end network that captures building-agnostic behavior and a back-end network that can be efficiently trained for each specific building. We conducted experiments on a variety of transfer scenarios between buildings with different sizes, numbers of thermal zones, materials and layouts, air conditioner types, and ambient weather conditions. The experimental results demonstrate the effectiveness of our approach in significantly reducing training time, energy cost, and temperature violations.
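The front-end/back-end decomposition can be illustrated structurally: copy the building-agnostic front-end from the source controller and fit only the back-end on target-building data. The tiny linear layers and the toy fitting loop below are illustrative assumptions standing in for the paper's neural networks and DRL training:

```python
import random

# Structural sketch of transfer via controller decomposition: a shared,
# transferable front-end plus a building-specific back-end.

class Controller:
    def __init__(self, front_w, back_w=None, rng=random):
        self.front_w = front_w                           # transferable part
        self.back_w = back_w if back_w is not None else rng.uniform(-0.1, 0.1)

    def features(self, obs):
        """Front-end: building-agnostic features from raw observations."""
        return sum(w, for_ := 0) if False else sum(w * x for w, x in zip(self.front_w, obs))

    def act(self, obs):
        """Back-end: building-specific mapping from features to a setpoint."""
        return self.back_w * self.features(obs)

def transfer(source, target_data, lr=0.01, epochs=200):
    """Reuse the source front-end; fit only the back-end on target data."""
    tgt = Controller(front_w=list(source.front_w))  # copy; kept frozen below
    for _ in range(epochs):
        for obs, desired in target_data:
            err = desired - tgt.act(obs)
            tgt.back_w += lr * err * tgt.features(obs)
        # tgt.front_w is deliberately never updated (building-agnostic).
    return tgt
```

Freezing the front-end is what makes the transfer cheap: only the small back-end needs target-building interaction data.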


The Future of HVAC Lies in AI and IoT

#artificialintelligence

Various horizontal and vertical approaches exist for entering the IoT market. The debate about IoT market strategies will continue because of the bold projections for the IoT market. Unfortunately, hype leads to myth, and myth leads to confusion. Moving forward means taking a step back to look for clues about how the IoT market could evolve. Let's dive into a practical example from the Smart Home market to see how the future of HVAC systems is intertwined with AI and IoT.


Did You Know that the Future of HVAC is AI and IoT? - Senseware

#artificialintelligence

AI will play a large role in the era of Big Data. We have no doubt, because the future of HVAC lies in AI and IoT. The debate about IoT market strategies will continue because of the expansive, even wild, projections for the IoT market. Unfortunately, hype leads to myth, and myth leads to confusion. Moving forward means taking a step back to look for clues about how the IoT market could evolve.


HVAC-Aware Occupancy Scheduling (Extended Abstract)

Lim, Boon-Ping (NICTA and Australian National University)

AAAI Conferences

My research focuses on developing innovative ways to control Heating, Ventilation, and Air Conditioning (HVAC) systems and schedule occupancy flows in smart buildings to reduce our ecological footprint (and energy bills). We look at the potential for integrating building operations with room booking and meeting scheduling. Specifically, we improve on the effectiveness of energy-aware room-booking and occupancy scheduling approaches by allowing the scheduling decisions to rely on an explicit model of the building's occupancy-based HVAC control. From a computational standpoint, this is a challenging topic, as HVAC models are inherently non-linear and non-convex, and occupancy scheduling models additionally introduce discrete variables capturing the time slot and location at which each activity is scheduled. The mechanism needs to trade off minimizing energy cost against occupant thermal comfort and control feasibility in a highly dynamic and uncertain system.


HVAC-Aware Occupancy Scheduling

Lim, BoonPing (NICTA and Australian National University) | Briel, Menkes van den (NICTA and Australian National University) | Thiebaux, Sylvie (NICTA and Australian National University) | Backhaus, Scott (Los Alamos National Laboratory) | Bent, Russell (Los Alamos National Laboratory)

AAAI Conferences

Energy consumption in commercial and educational buildings is impacted by group activities such as meetings, workshops, classes and exams, and can be reduced by scheduling these activities to take place at times and locations that are favorable from an energy standpoint. This paper improves on the effectiveness of energy-aware room-booking and occupancy scheduling approaches, by allowing the scheduling decisions to rely on an explicit model of the building's occupancy-based HVAC control. The core component of our approach is a mixed-integer linear programming (MILP) model which optimally solves the joint occupancy scheduling and occupancy-based HVAC control problem. To scale up to realistic problem sizes, we embed this MILP model into a large neighbourhood search (LNS). We obtain substantial energy reduction in comparison with occupancy-based HVAC control using arbitrary schedules or using schedules obtained by existing heuristic energy-aware scheduling approaches.
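The LNS wrapper can be sketched as a destroy-and-repair loop over meeting assignments. In the paper the repair step is a MILP solve; the greedy repair, the per-slot energy costs, and the capacity model below are illustrative assumptions on a toy instance:

```python
import random

# Toy large neighbourhood search over a meeting schedule: repeatedly
# unassign a few meetings ("destroy"), reassign them ("repair"), and keep
# the candidate when the energy cost does not worsen.
random.seed(1)
MEETINGS = list(range(6))
SLOTS = list(range(4))                          # candidate (time, room) slots
BASE_COST = {0: 3.0, 1: 1.0, 2: 1.5, 3: 4.0}    # assumed energy per assignment
CAPACITY = 2                                    # rooms available per slot

def cost(schedule):
    """Total energy cost; infeasible (over-capacity) schedules cost inf."""
    load = {s: 0 for s in SLOTS}
    for s in schedule.values():
        load[s] += 1
    if any(v > CAPACITY for v in load.values()):
        return float("inf")
    return sum(BASE_COST[s] for s in schedule.values())

def repair(schedule, unassigned):
    """Greedy repair: place each meeting in the cheapest feasible slot."""
    for m in unassigned:
        for s in sorted(SLOTS, key=BASE_COST.get):
            schedule[m] = s
            if cost(schedule) < float("inf"):
                break
    return schedule

def lns(iters=200, destroy_k=2):
    best = repair({}, MEETINGS)
    for _ in range(iters):
        cand = dict(best)
        for m in random.sample(MEETINGS, destroy_k):
            del cand[m]                          # destroy step
        cand = repair(cand, [m for m in MEETINGS if m not in cand])
        if cost(cand) <= cost(best):             # accept non-worsening moves
            best = cand
    return best
```

Swapping the greedy `repair` for an exact MILP solve over the destroyed neighbourhood recovers the structure of the paper's approach.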