| Authors | K. Zhang, J. Cao, H. Liu, S. Maharjan and Y. Zhang |
| Title | Deep Reinforcement Learning for Social-Aware Edge Computing and Caching in Urban Informatics |
| Project(s) | Simula Metropolitan Center for Digital Engineering, The Center for Resilient Networks and Applications |
| Publication Type | Journal Article |
| Year of Publication | 2020 |
| Journal | IEEE Transactions on Industrial Informatics |
| Pagination | 5467–5477 |
Empowered by urban informatics, the transportation industry has witnessed a paradigm shift. These developments create the need for content processing and sharing between vehicles under strict delay constraints. Mobile edge services can help meet these demands through computation offloading and edge-caching-empowered transmission, while cache-enabled smart vehicles may also act as carriers for content dispatch. However, the diverse capacities of edge servers and smart vehicles, together with unpredictable vehicle routes, make efficient content distribution a challenge. To cope with this challenge, in this article we develop a social-aware mobile edge computing and caching mechanism by exploiting the relationship between vehicles and roadside units. By leveraging a deep reinforcement learning approach, we propose optimal content processing and caching schemes that maximize the dispatch utility in an urban environment with diverse vehicular social characteristics.
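The article's actual DRL formulation (states, actions, and dispatch-utility reward) is not reproduced in this record. Purely as an illustration of the kind of learned caching decision the abstract describes, the sketch below uses a hypothetical toy setup: a roadside unit chooses one of a few content items to cache, vehicle requests follow a skewed popularity distribution, and a cache hit yields unit utility. It substitutes tabular Q-learning for the paper's deep reinforcement learning; all names and parameters are assumptions.

```python
import random

# Hypothetical stand-in for the paper's DRL caching scheme (the real MDP
# in the article is far richer and uses a deep network, not a table).
# State:  index of the content item a vehicle just requested.
# Action: which single item the roadside unit keeps cached.
# Reward: +1 dispatch utility on a cache hit, 0 on a miss.

N_CONTENT = 4
EPISODES = 2000
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Assumed Zipf-like toy popularity: item 0 is requested most often.
POPULARITY = [0.55, 0.25, 0.15, 0.05]

def sample_request(rng):
    """Draw a content request according to the toy popularity weights."""
    r, acc = rng.random(), 0.0
    for item, p in enumerate(POPULARITY):
        acc += p
        if r < acc:
            return item
    return N_CONTENT - 1

def train(seed=0):
    rng = random.Random(seed)
    # Q[s][a]: expected utility of caching item a after observing request s.
    Q = [[0.0] * N_CONTENT for _ in range(N_CONTENT)]
    s = sample_request(rng)
    for _ in range(EPISODES):
        if rng.random() < EPSILON:                       # explore
            a = rng.randrange(N_CONTENT)
        else:                                            # exploit
            a = max(range(N_CONTENT), key=lambda x: Q[s][x])
        s_next = sample_request(rng)
        reward = 1.0 if a == s_next else 0.0             # hit earns utility
        Q[s][a] += ALPHA * (reward + GAMMA * max(Q[s_next]) - Q[s][a])
        s = s_next
    return Q

Q = train()
# Under this popularity skew the learned policy should favor caching item 0.
best = max(range(N_CONTENT), key=lambda a: Q[0][a])
print(best)
```

In the article this decision is made by a deep network over a much larger social-aware state space; the tabular update above only illustrates the reward signal (dispatch utility) that such a learner maximizes.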