Abstract
The increasing penetration of distributed energy resources and the large volume of unprecedented data from smart metering infrastructure can help consumers transition to an active role in the smart grid. In this article, we propose a human-machine reinforcement learning (RL) framework in the smart grid context to formulate an energy management strategy for aggregators of electric vehicles and thermostatically controlled loads. The proposed model-free method accelerates decision making by replacing the conventional optimization process, and it copes better with diverse system environments through online learning. Human intervention is coordinated with machine learning to: 1) prevent large losses during the learning process; 2) realize emergency control; and 3) find a preferable control policy. The performance of the proposed human-machine RL framework is verified in case studies. The results show that the proposed method outperforms conventional deep Q-learning and the deep deterministic policy gradient in terms of convergence capability and exploration of preferable results. In addition, the proposed method handles emergency events, such as a sudden drop in photovoltaic (PV) output, more effectively. Compared with the conventional model-based method, our method deviates only slightly from the optimal solution while significantly reducing the decision-making time.
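The abstract describes coordinating human intervention with model-free RL so that unsafe exploratory actions are corrected during learning and emergencies (e.g., a sudden PV drop) are handled directly. The sketch below is a minimal, illustrative Python example of such a human-in-the-loop tabular Q-learning loop; the toy environment, reward, state discretization, and intervention rules are assumptions made for illustration, not the paper's actual formulation.

```python
import random

# Toy discretized setting: an aggregate energy-level state with an occasional
# "PV drop" emergency flag. All dynamics below are illustrative assumptions.
N_STATES = 10          # discretized aggregate state (e.g., stored-energy buckets)
ACTIONS = [-1, 0, 1]   # discharge / idle / charge
EPISODES = 200
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(state, action, emergency):
    """Hypothetical transition and reward: penalize drifting from a comfort band
    and penalize charging during a PV-drop emergency."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = -abs(nxt - N_STATES // 2)
    if emergency and action == 1:
        reward -= 10
    return nxt, reward

def human_intervention(state, action, emergency):
    """Rule-based stand-in for the human operator: veto clearly unsafe actions."""
    if emergency and action == 1:
        return 0                      # emergency control: block charging during a PV drop
    if state == 0 and action == -1:
        return 0                      # prevent a large loss: do not discharge an empty resource
    return action

Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

for _ in range(EPISODES):
    state = random.randrange(N_STATES)
    for _t in range(24):              # one day of hourly decisions
        emergency = random.random() < 0.05
        if random.random() < EPS:
            a_idx = random.randrange(len(ACTIONS))
        else:
            a_idx = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        action = human_intervention(state, ACTIONS[a_idx], emergency)
        a_idx = ACTIONS.index(action) # learn from the executed (possibly corrected) action
        nxt, reward = step(state, action, emergency)
        Q[state][a_idx] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a_idx])
        state = nxt

greedy = [ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[s][i])] for s in range(N_STATES)]
print("Greedy action per state:", greedy)
```

In this sketch the agent updates its Q-table with the action that was actually executed after the human veto, so the intervention both bounds losses online and shapes the learned policy; the paper's framework applies the same idea with deep RL and aggregator-level models.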
Original language | English |
---|---|
Pages (from-to) | 2974-2985 |
Number of pages | 12 |
Journal | IEEE Transactions on Industrial Informatics |
Volume | 18 |
Issue number | 5 |
DOIs | |
Publication status | Published - May 2022 |
Externally published | Yes |
Keywords
- Electric vehicles (EVs)
- energy management
- human-machine
- reinforcement learning (RL)
- thermostatically controlled loads (TCLs)
Projects
- 1 Active
- ARC Training Centre for The Global Hydrogen Economy - UNSW
Amal, R. (Primary Chief Investigator (PCI)), Aguey-Zinsou, K.-F. (Chief Investigator (CI)), Moghtaderi, B. (Chief Investigator (CI)), MacGill, I. (Chief Investigator (CI)), Ashworth, P. (Chief Investigator (CI)), Zhu, J. (Chief Investigator (CI)), Buckley, C. E. (Chief Investigator (CI)), Zhao, C. (Chief Investigator (CI)), Scott, J. (Chief Investigator (CI)), Daiyan, R. (Chief Investigator (CI)), Simonov, A. (Chief Investigator (CI)), Cazorla, C. (Chief Investigator (CI)), Lovell, E. (Chief Investigator (CI)), Paskevicius, M. (Chief Investigator (CI)), Kara, S. (Chief Investigator (CI)), Qiu, J. (Chief Investigator (CI)), Lu, X. (Chief Investigator (CI)), Shen, Y. (Chief Investigator (CI)), Doroodchi, E. (Chief Investigator (CI)), Witt, K. (Chief Investigator (CI)), Haque, N. (Partner Investigator (PI)), Kudo, A. (Partner Investigator (PI)), Yun, J. (Partner Investigator (PI)), Matsumoto, H. (Partner Investigator (PI)), Wang, M. (Partner Investigator (PI)), Yu, A. (Partner Investigator (PI)), Gillespie, R. (Partner Investigator (PI)), Dannock, J. (Partner Investigator (PI)), Zheng, Y. (Partner Investigator (PI)), Ariyaka, S. (Partner Investigator (PI)), Cuevas, F. (Partner Investigator (PI)), Chen, K. (Partner Investigator (PI)), Bonnette, L. (Partner Investigator (PI)), Preston, B. (Partner Investigator (PI)), Owens, L. (Partner Investigator (PI)), Addo, E. (Partner Investigator (PI)) & Yoshino, Y. (Partner Investigator (PI))
2/06/21 → 14/06/26
Project: Research