|Authors||H. Chung, S. Maharjan, Y. Zhang and F. Eliassen|
|Title||Distributed Deep Reinforcement Learning for Intelligent Load Scheduling in Residential Smart Grid|
|Project(s)||The Center for Resilient Networks and Applications, Simula Metropolitan Center for Digital Engineering|
|Publication Type||Journal Article|
|Year of Publication||2020|
|Journal||IEEE Transactions on Industrial Informatics|
Household power consumption has grown steadily over the years. To cope with this growth, intelligent management of household consumption profiles is necessary, so that households can reduce their electricity bills and the stress on the power grid during peak hours can be alleviated. Implementing such a method is challenging, however, because both the electricity price and the consumption of appliances are random. To address this challenge, we employ a model-free method for the households that works with only limited information about these uncertain factors. More specifically, the interactions between households and the power grid are modeled as a non-cooperative stochastic game, in which the electricity price is treated as a stochastic variable. To find the Nash equilibrium (NE) of the game, we adopt a method based on distributed deep reinforcement learning. The proposed method also preserves the privacy of the households. We then use real-world data from Pecan Street Inc., covering the power consumption profiles of more than 1,000 households, to evaluate the performance of the proposed method. On average, the results show a reduction of around 12% in the peak-to-average ratio (PAR) and 11% in load variance. With this approach, both the operating cost of the power grid and the electricity cost of the households can be reduced.
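The paper's distributed deep RL algorithm is not detailed in this abstract. Purely as an illustrative sketch of the model-free idea (not the authors' method), the following uses tabular Q-learning for a single household scheduling one deferrable load across 24 hours under a stochastic hourly price; the peak window (17:00–21:00), penalty value, and price model are all assumptions for the example:

```python
import random

HOURS = 24

def price(hour, rng):
    # Assumed stochastic hourly price: higher during a 17:00-21:00 peak.
    base = 1.0 + (0.8 if 17 <= hour <= 21 else 0.0)
    return base + rng.uniform(-0.1, 0.1)

def train_agent(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning for one shiftable load: each hour the agent
    either runs the load (1) or defers it (0); the job must run once a day."""
    rng = random.Random(seed)
    # State: (hour, done-flag); action: 0 = wait, 1 = run.
    Q = {(h, d): [0.0, 0.0] for h in range(HOURS) for d in (0, 1)}
    for _ in range(episodes):
        done = 0
        for h in range(HOURS):
            state = (h, done)
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[state][x])
            if done:  # job already ran; only waiting is meaningful
                a = 0
            # Reward: negative price when running; assumed penalty if the
            # day ends without the load ever having run.
            r = -price(h, rng) if a == 1 else 0.0
            if h == HOURS - 1 and done == 0 and a == 0:
                r = -10.0
            next_done = 1 if (done or a == 1) else 0
            best_next = max(Q[(h + 1, next_done)]) if h < HOURS - 1 else 0.0
            Q[state][a] += alpha * (r + gamma * best_next - Q[state][a])
            done = next_done
    return Q

def scheduled_hour(Q):
    """Greedy rollout: return the hour at which the learned policy runs."""
    done = 0
    for h in range(HOURS):
        a = max((0, 1), key=lambda x: Q[(h, done)][x]) if not done else 0
        if a == 1:
            return h
    return None
```

A trained policy shifts the load out of the assumed peak window; in the paper's setting, many such households interact through the price signal, which is what makes the problem a stochastic game rather than independent single-agent control.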