| Authors | L. Lin, X. Guan, Y. Peng, N. Wang, S. Maharjan and T. Ohtsuki |
|---|---|
| Title | Deep Reinforcement Learning for Economic Dispatch of Virtual Power Plant in Internet of Energy |
| Project(s) | Simula Metropolitan Center for Digital Engineering, The Center for Resilient Networks and Applications |
| Publication Type | Journal Article |
| Year of Publication | 2020 |
| Journal | IEEE Internet of Things Journal (Early Access) |
With the high penetration of large-scale distributed renewable energy generation, power systems face significant challenges from the inherent uncertainty of renewable power generation. In this regard, virtual power plants (VPPs) can play a crucial role in integrating a large number of distributed generation units (DGs) more effectively and improving the stability of power systems. Due to the uncertainty and nonlinear characteristics of DGs, reliable economic dispatch in VPPs requires timely and reliable communication among DGs, and between the generation side and the load side; online economic dispatch then minimizes the operating cost of the VPP. In this paper, we propose a deep reinforcement learning (DRL) algorithm to obtain the optimal online economic dispatch strategy in VPPs. By utilizing DRL, our proposed algorithm reduces computational complexity while handling the large, continuous state space that arises from the stochastic characteristics of distributed power generation. We further design an edge computing framework to accommodate the stochastic, large-state-space characteristics of VPPs, and the DRL-based real-time economic dispatch algorithm is executed online. We use real meteorological and load data to analyze and validate the performance of the proposed algorithm. The experimental results show that the proposed DRL-based algorithm successfully learns the characteristics of DGs and industrial user demands, and learns to choose dispatch actions that minimize the cost of the VPP. Compared with deterministic policy gradient (DPG) and deep deterministic policy gradient (DDPG) methods, our proposed method has lower time complexity.
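To make the idea of reinforcement-learning-based economic dispatch concrete, the sketch below trains a tabular Q-learning agent on a toy VPP with one dispatchable generator and stochastic renewable output. This is only a minimal illustration of the general technique, not the paper's algorithm (the paper uses a DRL method with a continuous state space and an edge computing framework); all numbers here (demand level, cost coefficients, set-points) are hypothetical.

```python
import random

random.seed(0)

# Hypothetical toy VPP parameters -- not from the paper.
DEMAND = 10.0                              # fixed load to serve
GEN_LEVELS = [0.0, 2.5, 5.0, 7.5, 10.0]    # dispatchable generator set-points (actions)
GEN_COST = 1.0                             # cost per unit of dispatchable power
SHORTFALL_PENALTY = 5.0                    # cost per unit of unserved demand

def renewable_output():
    """Stochastic renewable generation, discretized to five levels (the state)."""
    return random.choice([0.0, 2.5, 5.0, 7.5, 10.0])

def step_cost(renewable, gen):
    """Dispatch cost: generation cost plus penalty for any unmet demand."""
    shortfall = max(0.0, DEMAND - renewable - gen)
    return GEN_COST * gen + SHORTFALL_PENALTY * shortfall

# Tabular Q-values over (renewable level, set-point) pairs.
Q = {}
alpha, eps = 0.1, 0.2   # learning rate and exploration rate
                        # (myopic setting: each dispatch step is independent)

def best_action(state):
    """Greedy dispatch policy: the set-point with the highest learned Q-value."""
    return max(GEN_LEVELS, key=lambda a: Q.get((state, a), 0.0))

for episode in range(5000):
    s = renewable_output()
    # Epsilon-greedy action selection.
    a = random.choice(GEN_LEVELS) if random.random() < eps else best_action(s)
    r = -step_cost(s, a)  # reward is negative cost
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r - old)

# After training, the learned policy dispatches roughly DEMAND - renewable.
for s in (0.0, 5.0, 10.0):
    print(f"renewable={s:4.1f} -> dispatch {best_action(s):4.1f}")
```

The learned policy fills the gap between demand and renewable output, which is the cost-minimizing behavior here because serving load with the generator (unit cost 1.0) is cheaper than paying the shortfall penalty (5.0). The paper's DRL approach generalizes this idea to continuous states and far larger action spaces, where a table is infeasible and a neural network approximates the value function or policy.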