This paper proposes a novel reinforcement learning (RL) architecture for the efficient scheduling and control of the heating, ventilation and air conditioning (HVAC) system in a commercial building while harnessing its demand response (DR) potential. With advances in automated building management systems, this can be achieved seamlessly by a smart autonomous RL agent that takes the best action (for example, a change in the HVAC temperature set point) needed to alter the building's electricity usage pattern in response to DR signals, with minimal thermal comfort impact on customers. Previous research in this area has tackled only individual aspects of the problem using RL. Specifically, because implementing demand response with whole-building models is challenging, simpler analytical models that poorly capture reality have been used instead. Where whole-building models are applied, RL is used for HVAC control mainly to achieve energy-efficiency goals, and demand response is neglected. In this research, we therefore implement a holistic framework by designing an efficient RL controller for a whole-building model that learns to optimise and control the HVAC system for improved energy efficiency and thermal comfort, in addition to achieving demand response goals. Our simulation results show that applying reinforcement learning to normal HVAC operation achieves a weekly energy reduction of up to 22% compared with a handcrafted baseline controller. Furthermore, employing a DR-aware RL controller during demand response periods yields average weekly power reductions or increases of up to 50% compared with the default RL controller, while keeping occupant thermal comfort within acceptable bounds.
- Demand response
- Reinforcement learning
- Whole-building HVAC control
- Distributed energy resources
- Optimal HVAC energy scheduling