Authors: K. Zhang, J. Cao, H. Liu, S. Maharjan and Y. Zhang
Title: Deep Reinforcement Learning for Social-Aware Edge Computing and Caching in Urban Informatics
Affiliation: Communication Systems
Project(s): Simula Metropolitan Center for Digital Engineering, The Center for Resilient Networks and Applications
Status: Published
Publication Type: Journal Article
Year of Publication: 2020
Journal: IEEE Transactions on Industrial Informatics
Volume: 16
Issue: 8
Pagination: 5467-5477
Publisher: IEEE
Abstract

Empowered with urban informatics, the transportation industry has witnessed a paradigm shift. These developments lead to the need for content processing and sharing between vehicles under strict delay constraints. Mobile edge services can help meet these demands through computation offloading and edge-caching-empowered transmission, while cache-enabled smart vehicles may also work as carriers for content dispatch. However, the diverse capacities of edge servers and smart vehicles, as well as unpredictable vehicle routes, make efficient content distribution a challenge. To cope with this challenge, in this article we develop a social-aware mobile edge computing and caching mechanism by exploiting the relation between vehicles and roadside units. By leveraging a deep reinforcement learning approach, we propose optimal content processing and caching schemes that maximize the dispatch utility in an urban environment with diverse vehicular social characteristics.
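As an illustrative aside only, the sketch below shows a minimal deep Q-learning loop (Python/PyTorch) for a toy content-caching decision at a single roadside unit. The catalogue size, popularity-based state, and hit-count reward are assumptions made for demonstration; they stand in for, and do not reproduce, the paper's social-aware formulation or its dispatch-utility objective.

# Illustrative sketch only: a minimal deep Q-learning loop for a toy
# edge-caching decision. This is NOT the authors' algorithm; the
# environment, state, reward, and network sizes are assumptions.
import random
import torch
import torch.nn as nn

NUM_CONTENTS = 8           # hypothetical content catalogue size
STATE_DIM = NUM_CONTENTS   # state: recent request frequency per content

class QNet(nn.Module):
    """Maps a popularity state to a Q-value per candidate content to cache."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_CONTENTS),
        )

    def forward(self, x):
        return self.net(x)

def simulate_requests():
    """Toy Zipf-like request pattern standing in for vehicular demand."""
    probs = torch.tensor([1.0 / (i + 1) for i in range(NUM_CONTENTS)])
    probs = probs / probs.sum()
    return torch.multinomial(probs, 20, replacement=True)

qnet = QNet()
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.9, 0.2
state = torch.zeros(STATE_DIM)

for step in range(500):
    # epsilon-greedy choice of which content item to cache next
    if random.random() < eps:
        action = random.randrange(NUM_CONTENTS)
    else:
        with torch.no_grad():
            action = int(qnet(state).argmax())

    requests = simulate_requests()
    # reward: how many of the upcoming requests the cached item serves
    reward = float((requests == action).sum())

    # next state: empirical request frequencies observed this step
    next_state = torch.bincount(requests, minlength=NUM_CONTENTS).float()
    next_state = next_state / next_state.sum()

    # one-step temporal-difference (Bellman) target
    with torch.no_grad():
        target = reward + gamma * qnet(next_state).max()
    q_value = qnet(state)[action]
    loss = (q_value - target) ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    state = next_state

In the paper's setting, the state would additionally encode vehicular social characteristics and roadside-unit relations, and the reward would reflect the dispatch utility rather than a simple cache-hit count.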

Citation Key: 27381