Abstract
Renewable energy resources (RERs) are increasingly integrated into distribution networks (DNs) for decarbonization. However, the variability of RERs introduces uncertainty into DNs, often causing voltage fluctuations that threaten system security and hamper further RER adoption. To incentivize greater RER penetration, we propose a deep reinforcement learning (DRL)-based strategy that dynamically balances the trade-off between voltage fluctuation control and renewable accommodation. To extract multi-time-scale spatial-temporal (ST) graphical information from a DN, our strategy draws on a multi-grained attention-based spatial-temporal graph convolutional network (MG-ASTGCN), which combines an ST attention mechanism with ST convolution to capture node correlations in both the spatial and temporal views. The continuous decision-making process of balancing this trade-off is modeled as a Markov decision process and optimized by the deep deterministic policy gradient (DDPG) algorithm with the aid of the derived ST information. We validate our strategy on modified IEEE 33-, 69-, and 118-bus radial distribution systems, where it significantly outperforms optimization-based benchmarks. Simulations also reveal that the developed MG-ASTGCN substantially accelerates DDPG's convergence and improves its ability to stabilize node voltages in an RER-rich DN. Moreover, our method improves the DN's robustness in the presence of generator failures.
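The abstract does not specify the paper's reward design, but the stated trade-off between voltage fluctuation control and renewable accommodation can be sketched as a weighted reward signal for the DRL agent. The function below is purely illustrative: the weights `w_v`, `w_r`, the ±0.05 p.u. voltage deadband, and the curtailment-ratio penalty are all assumptions, not the authors' formulation.

```python
import numpy as np

def tradeoff_reward(voltages, curtailed, available,
                    w_v=1.0, w_r=1.0, v_ref=1.0, v_band=0.05):
    """Hypothetical reward balancing voltage regulation against RER accommodation.

    voltages  : per-node voltage magnitudes in p.u.
    curtailed : curtailed renewable power (MW)
    available : available renewable power (MW)
    """
    # Penalize squared voltage deviations outside a deadband around v_ref;
    # nodes inside [v_ref - v_band, v_ref + v_band] contribute nothing.
    dev = np.maximum(np.abs(np.asarray(voltages, dtype=float) - v_ref) - v_band, 0.0)
    voltage_penalty = float(np.sum(dev ** 2))
    # Penalize curtailment: accommodation is maximal when nothing is curtailed.
    curtail_penalty = float(curtailed / available) if available > 0 else 0.0
    # Larger (less negative) reward means better balance of the two objectives.
    return -(w_v * voltage_penalty + w_r * curtail_penalty)
```

Sweeping `w_v` against `w_r` shifts the agent between strict voltage regulation and maximal renewable accommodation, which is the balance the proposed strategy learns dynamically.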
| Original language | English |
| --- | --- |
| Pages (from-to) | 249-262 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Sustainable Energy |
| Volume | 15 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Jan 2024 |
Keywords
- attention mechanism
- Costs
- deep reinforcement learning (DRL)
- Fluctuations
- Generators
- graph convolution
- Optimization
- renewable accommodation
- Renewable energy resources (RERs)
- Renewable energy sources
- voltage control
- Voltage fluctuations