Authors: G. Qiao, S. Leng, S. Maharjan, Y. Zhang and N. Ansari
Title: Deep Reinforcement Learning for Cooperative Content Caching in Vehicular Edge Computing and Networks
Affiliation: Communication Systems
Project(s): Simula Metropolitan Center for Digital Engineering, The Center for Resilient Networks and Applications
Status: Published
Publication Type: Journal Article
Year of Publication: 2019
Journal: IEEE Internet of Things Journal (Early Access)
Date Published: 10/2019
Publisher: IEEE
Abstract

In this paper, we propose a cooperative edge caching scheme, a new paradigm that jointly optimizes content placement and content delivery in vehicular edge computing and networks, with the aid of flexible trilateral cooperation among a macro-cell station, roadside units, and smart vehicles. We formulate the joint optimization problem as a double time-scale Markov decision process (DTS-MDP), based on the fact that content timeliness changes on a slower time-scale than vehicle mobility and network states do during the content delivery process. At the beginning of the large time-scale, the content placement/updating decision is obtained according to content popularity, vehicle driving paths, and resource availability. On the small time-scale, a joint vehicle scheduling and bandwidth allocation scheme is designed to minimize the content access cost while satisfying the constraint on content delivery latency. To solve the long-term mixed integer linear programming (LT-MILP) problem, we propose a nature-inspired method based on the deep deterministic policy gradient (DDPG) framework that obtains a suboptimal solution with low computational complexity. Simulation results demonstrate that the proposed cooperative caching system reduces the system cost as well as the content delivery latency, and improves the content hit ratio, compared with non-cooperative and random edge caching schemes.
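To illustrate the DDPG framework the abstract refers to, the sketch below shows a deliberately minimal actor-critic loop in NumPy. It is not the paper's implementation: the state and action dimensions, the linear critic, the sigmoid-bounded actor, and the toy reward (a stand-in for negative content-access cost) are all assumptions chosen for brevity; a real system would use neural networks, target networks, and the paper's caching/scheduling state encoding.

```python
import numpy as np

# Illustrative DDPG-style actor-critic sketch (toy dimensions, not the
# paper's state/action spaces for cache placement and bandwidth allocation).
rng = np.random.default_rng(0)

STATE_DIM = 4   # hypothetical: popularity + vehicle/network features
ACTION_DIM = 2  # hypothetical: continuous caching/bandwidth decisions

class Actor:
    """Deterministic policy: state -> continuous action in (0, 1)."""
    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(ACTION_DIM, STATE_DIM))
    def act(self, s):
        # Sigmoid keeps actions bounded, e.g. bandwidth fractions.
        return 1.0 / (1.0 + np.exp(-self.W @ s))

class Critic:
    """Q(s, a) approximated linearly in the concatenated [s; a]."""
    def __init__(self):
        self.w = rng.normal(scale=0.1, size=STATE_DIM + ACTION_DIM)
    def q(self, s, a):
        return float(self.w @ np.concatenate([s, a]))

def ddpg_step(actor, critic, replay, batch_size=8, gamma=0.99, lr=1e-2):
    """One update on a sampled minibatch (target networks omitted for brevity)."""
    idx = rng.choice(len(replay), size=min(batch_size, len(replay)), replace=False)
    for i in idx:
        s, a, r, s2 = replay[i]
        # Critic: TD target bootstrapped with the actor's next action.
        target = r + gamma * critic.q(s2, actor.act(s2))
        td_err = target - critic.q(s, a)
        critic.w += lr * td_err * np.concatenate([s, a])
        # Actor: gradient ascent on Q via the chain rule through the sigmoid.
        a_pred = actor.act(s)
        dq_da = critic.w[STATE_DIM:]
        actor.W += lr * np.outer(dq_da * a_pred * (1.0 - a_pred), s)

# Toy rollout: reward penalises large actions (analogue of minimising cost).
actor, critic = Actor(), Critic()
replay = []
s = rng.random(STATE_DIM)
for _ in range(200):
    a = np.clip(actor.act(s) + rng.normal(scale=0.1, size=ACTION_DIM), 0.0, 1.0)
    r = -float(a.sum())          # stand-in for negative content-access cost
    s2 = rng.random(STATE_DIM)   # stand-in for the next network state
    replay.append((s, a, r, s2))
    ddpg_step(actor, critic, replay)
    s = s2

print(actor.act(np.ones(STATE_DIM)))  # learned bounded action, shape (2,)
```

The exploration noise added before each environment step, and the replay buffer sampled for updates, are the two DDPG ingredients the abstract's "nature-inspired" solver would also rely on; everything else here is a simplification.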

Citation Key: 26980