Asynchronous hybrid reinforcement learning for latency and reliability optimization in the metaverse over wireless communications

Bibliographic Details
Main Authors: Yu, Wenhan, Chua, Terence Jie, Zhao, Jun
Other Authors: School of Computer Science and Engineering
Format: Article
Language:English
Published: 2023
Subjects:
Online Access:https://hdl.handle.net/10356/172579
Institution: Nanyang Technological University
Description
Summary:Technology advancements in wireless communications and high-performance Extended Reality (XR) have empowered the development of the Metaverse. The demand for Metaverse applications, and hence for real-time digital twinning of real-world scenes, is increasing. Nevertheless, replicating 2D physical-world images as 3D virtual objects is computationally intensive and requires computation offloading. The disparity in transmitted object dimension (2D as opposed to 3D) leads to asymmetric data sizes in the uplink (UL) and downlink (DL). To ensure the reliability and low latency of the system, we consider an asynchronous joint UL-DL scenario: in the UL stage, the smaller-sized physical-world images captured by multiple extended reality users (XUs) are uploaded to the Metaverse Console (MC) to be constructed and rendered; in the DL stage, the larger-sized 3D virtual objects are transmitted back to the XUs. We design a novel multi-agent reinforcement learning algorithm structure, namely Asynchronous Actors Hybrid Critic (AAHC), to optimize the computation-offloading and channel-assignment decisions in the UL stage and the DL transmission power in the DL stage. Extensive experiments demonstrate that, compared to the proposed baselines, AAHC obtains better solutions with satisfactory training time.
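
The UL/DL asymmetry described above can be made concrete with a small sketch: transmission latency scales with payload size divided by the Shannon rate, so the larger 3D object in the DL dominates even when the DL is allocated more power. The payload sizes, bandwidth, channel gain, and noise power below are illustrative assumptions, not figures from the article.

```python
import math

def shannon_rate(bandwidth_hz, power_w, channel_gain, noise_w):
    """Achievable rate in bits/s from the Shannon capacity formula."""
    return bandwidth_hz * math.log2(1.0 + power_w * channel_gain / noise_w)

def transmission_latency(data_bits, bandwidth_hz, power_w, channel_gain, noise_w):
    """Time in seconds to transmit a payload at the Shannon rate."""
    return data_bits / shannon_rate(bandwidth_hz, power_w, channel_gain, noise_w)

# Illustrative (made-up) numbers: a 2D image upload vs. a 3D object download.
ul_size_bits = 8 * 500e3   # ~500 KB 2D physical-world image (UL)
dl_size_bits = 8 * 20e6    # ~20 MB rendered 3D virtual object (DL)
bandwidth_hz = 10e6        # 10 MHz channel
gain, noise_w = 1e-7, 1e-10

ul_latency = transmission_latency(ul_size_bits, bandwidth_hz, 0.1, gain, noise_w)
dl_latency = transmission_latency(dl_size_bits, bandwidth_hz, 0.5, gain, noise_w)
```

Even with five times the transmit power in the DL, the much larger 3D payload makes the DL latency far exceed the UL latency, which is why the DL power allocation is treated as a separate optimization stage.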