| Authors | X. Huang, S. Leng, S. Maharjan and Y. Zhang |
| Title | Multi-Agent Deep Reinforcement Learning for Computation Offloading and Interference Coordination in Small Cell Networks |
| Project(s) | The Center for Resilient Networks and Applications, Simula Metropolitan Center for Digital Engineering |
| Publication Type | Journal Article |
| Year of Publication | 2021 |
| Journal | IEEE Transactions on Vehicular Technology |
Integrating mobile edge computing (MEC) with small cell networks has been conceived as a promising solution to provide pervasive computing services. However, the interactions among small cells due to inter-cell interference, the diverse application-specific requirements, and the highly dynamic wireless environment make it challenging to design an optimal computation offloading scheme. In this paper, we focus on the joint design of computation offloading and interference coordination for edge intelligence empowered small cell networks. To this end, we propose a distributed multi-agent deep reinforcement learning (DRL) scheme with the objective of minimizing the overall energy consumption while ensuring the latency requirements. Specifically, we exploit the collaboration among small cell base station (SBS) agents to adaptively adjust their strategies, considering computation offloading, channel allocation, power control, and computation resource allocation. Further, to reduce the computational complexity and signaling overhead of the training process, we design a federated DRL scheme which only requires SBS agents to share their model parameters instead of their local training data. Numerical results demonstrate that our proposed schemes can significantly reduce energy consumption and effectively guarantee latency requirements compared with the benchmark schemes.
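The federated DRL idea described above (agents share model parameters rather than local training data) can be sketched as a FedAvg-style aggregation step. This is only an illustrative sketch, not the paper's actual algorithm: the `federated_average` function, the layer names, and the toy parameter values are all assumptions introduced here.

```python
def federated_average(agent_params):
    """Average model parameters across SBS agents, layer by layer.

    agent_params: list of dicts mapping layer name -> list of floats
    (illustrative stand-in for each agent's DRL network weights).
    Returns a dict with the element-wise mean of each layer.
    """
    n = len(agent_params)
    return {
        layer: [
            sum(params[layer][i] for params in agent_params) / n
            for i in range(len(agent_params[0][layer]))
        ]
        for layer in agent_params[0]
    }


# Each agent trains locally, then only its parameters are aggregated;
# the raw training data never leaves the small cell (hypothetical values).
local_models = [
    {"policy": [0.25, 1.0]},  # agent 1 after local training
    {"policy": [0.75, 3.0]},  # agent 2 after local training
]
global_model = federated_average(local_models)
# global_model == {"policy": [0.5, 2.0]}
```

In a full system, the averaged parameters would be broadcast back to the agents for the next round of local training, which is what keeps the signaling overhead limited to model-sized messages.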