Authors: X. Huang, S. Leng, S. Maharjan and Y. Zhang
Title: Multi-Agent Deep Reinforcement Learning for Computation Offloading and Interference Coordination in Small Cell Networks
Affiliation: Communication Systems
Project(s): The Center for Resilient Networks and Applications, Simula Metropolitan Center for Digital Engineering
Status: Published
Publication Type: Journal Article
Year of Publication: 2021
Journal: IEEE Transactions on Vehicular Technology
Volume: 70
Issue: 9
Date Published: 09/2021
Publisher: IEEE
Abstract

Integrating mobile edge computing (MEC) with small cell networks has been conceived as a promising solution to provide pervasive computing services. However, the interactions among small cells due to inter-cell interference, the diverse application-specific requirements, as well as the highly dynamic wireless environment make it challenging to design an optimal computation offloading scheme. In this paper, we focus on the joint design of computation offloading and interference coordination for edge intelligence empowered small cell networks. To this end, we propose a distributed multi-agent deep reinforcement learning (DRL) scheme with the objective of minimizing the overall energy consumption while satisfying the latency requirements. Specifically, we exploit the collaboration among small cell base station (SBS) agents to adaptively adjust their strategies, considering computation offloading, channel allocation, power control, and computation resource allocation. Further, to decrease the computational complexity and signaling overhead of the training process, we design a federated DRL scheme which only requires SBS agents to share their model parameters instead of their local training data. Numerical results demonstrate that our proposed schemes can significantly reduce energy consumption and effectively meet the latency requirements compared with the benchmark schemes.
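The federated DRL idea described in the abstract, agents sharing only model parameters rather than local training data, can be sketched with a FedAvg-style parameter average. This is a minimal illustrative sketch, not the authors' implementation; the names `Agent`, `local_update`, and `federated_average` are assumptions introduced for illustration.

```python
# Hypothetical sketch of federated parameter sharing among SBS agents.
# Each agent trains locally; only parameter vectors are averaged (FedAvg-style),
# so raw training data never leaves the agent.
import random

class Agent:
    """A small cell base station (SBS) agent with a toy parameter vector."""
    def __init__(self, dim=4):
        self.params = [random.uniform(-1, 1) for _ in range(dim)]

    def local_update(self, lr=0.1):
        # Stand-in for one round of local DRL training on private data.
        self.params = [p - lr * random.uniform(-0.5, 0.5) for p in self.params]

def federated_average(agents):
    """Average model parameters across agents; no training data is exchanged."""
    n = len(agents)
    dim = len(agents[0].params)
    avg = [sum(a.params[i] for a in agents) / n for i in range(dim)]
    for a in agents:
        a.params = list(avg)  # broadcast the aggregated model back to each SBS
    return avg

if __name__ == "__main__":
    agents = [Agent() for _ in range(3)]
    for _ in range(5):            # federated training rounds
        for a in agents:
            a.local_update()      # local training only
        global_params = federated_average(agents)
```

The signaling saving comes from the exchanged message size: each round transmits one parameter vector per agent instead of the agent's experience buffer.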

Citation Key: 28079