Deep Reinforcement Learning for Social-Aware Edge Computing and Caching in Urban Informatics

Ke Zhang, Jiayu Cao, Hong Liu, Sabita Maharjan, Yan Zhang

Research output: Contribution to journal › Article › peer-review

55 Scopus citations

Abstract

Empowered by urban informatics, the transportation industry has witnessed a paradigm shift. These developments create a need for content processing and sharing between vehicles under strict delay constraints. Mobile edge services can help meet these demands through computation offloading and edge-caching-empowered transmission, while cache-enabled smart vehicles may also work as carriers for content dispatch. However, the diverse capacities of edge servers and smart vehicles, as well as unpredictable vehicle routes, make efficient content distribution a challenge. To cope with this challenge, in this article we develop a social-aware mobile edge computing and caching mechanism by exploiting the relations between vehicles and roadside units. By leveraging a deep reinforcement learning approach, we propose optimal content processing and caching schemes that maximize the dispatch utility in an urban environment with diverse vehicular social characteristics. Numerical results based on real urban traffic datasets demonstrate the efficiency of our proposed schemes.
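
To illustrate the general idea of using deep reinforcement learning for edge caching decisions, the sketch below trains a small DQN-style agent that picks which content item a toy edge node caches each step. This is not the paper's scheme: the environment, state (recent request frequencies), action space, and reward (cache hits) are all simplified assumptions for illustration, and the paper's social-aware formulation and dispatch-utility objective are not reproduced.

```python
# Minimal DQN-style sketch for a toy edge-caching decision problem.
# All names, the environment dynamics, and the reward model are assumptions
# for illustration; they do not reproduce the paper's actual formulation.
import random
import numpy as np
import torch
import torch.nn as nn

NUM_CONTENTS = 8           # hypothetical content catalogue size
STATE_DIM = NUM_CONTENTS   # state: decayed request count per content item


class ToyEdgeCacheEnv:
    """Toy stand-in for a vehicular edge-caching environment (assumed)."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.popularity = self.rng.dirichlet(np.ones(NUM_CONTENTS))

    def reset(self):
        self.freq = np.zeros(NUM_CONTENTS, dtype=np.float32)
        return self.freq.copy()

    def step(self, action):
        # One request arrives per step according to a hidden popularity
        # distribution; reward is 1 if the cached item matches it.
        request = self.rng.choice(NUM_CONTENTS, p=self.popularity)
        reward = 1.0 if action == request else 0.0
        self.freq = 0.9 * self.freq
        self.freq[request] += 1.0
        return self.freq.copy(), reward


class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_CONTENTS))

    def forward(self, x):
        return self.net(x)


def train(episodes=200, steps=50, gamma=0.9, eps=0.1, lr=1e-3):
    env, qnet = ToyEdgeCacheEnv(), QNet()
    opt = torch.optim.Adam(qnet.parameters(), lr=lr)
    for ep in range(episodes):
        state, total = env.reset(), 0.0
        for _ in range(steps):
            s = torch.tensor(state).unsqueeze(0)
            # Epsilon-greedy action selection over caching choices.
            if random.random() < eps:
                action = random.randrange(NUM_CONTENTS)
            else:
                action = int(qnet(s).argmax())
            next_state, reward = env.step(action)
            total += reward
            # One-step TD target (no replay buffer or target net, for brevity).
            with torch.no_grad():
                target = reward + gamma * qnet(
                    torch.tensor(next_state).unsqueeze(0)).max()
            pred = qnet(s)[0, action]
            loss = (pred - target) ** 2
            opt.zero_grad()
            loss.backward()
            opt.step()
            state = next_state
        if (ep + 1) % 50 == 0:
            print(f"episode {ep + 1}: cache-hit reward {total:.0f}/{steps}")


if __name__ == "__main__":
    train()
```

Over training, the agent's cache-hit reward should approach the popularity of the most-requested item, which is the best a single-slot cache can do in this toy setting.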

Original language: English
Article number: 8896914
Pages (from-to): 5467-5477
Number of pages: 11
Journal: IEEE Transactions on Industrial Informatics
Volume: 16
Issue number: 8
DOIs
State: Published - Aug 2020
Externally published: Yes

Keywords

  • Deep reinforcement learning
  • social aware
  • vehicular edge computing
