Reinforcement learning-based intelligent resource allocation for integrated VLCP systems

Bibliographic Details
Main Authors: Yang, Helin, Du, Pengfei, Zhong, Wen-De, Chen, Chen, Alphones, Arokiaswami, Zhang, Sheng
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2020
Online Access: https://hdl.handle.net/10356/142886
Institution: Nanyang Technological University
Description
Summary: In this letter, an intelligent resource allocation framework based on model-free reinforcement learning (RL) is presented for the first time for multi-user integrated visible light communication and positioning (VLCP) systems, with the goal of maximizing the sum rate of the users while guaranteeing each user's minimum data rate and positioning accuracy constraints. The framework learns the optimal policy under unknown environment dynamics and over a continuous-valued state-action space, and a reward function is proposed that takes the strict communication and positioning constraints into account. Moreover, a modified experience replay actor-critic (MERAC) RL approach is proposed to improve learning efficiency and convergence speed; it efficiently collects reliable experiences and exploits the most useful knowledge from the replay memory. Numerical results show that the MERAC approach can effectively learn to satisfy the strict constraints and achieves fast convergence.
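The letter's exact reward formulation is not reproduced in this record. As a minimal illustrative sketch only, the Python snippet below shows one common way such a constraint-aware reward could be shaped: the sum rate is rewarded while violations of assumed per-user minimum rates (r_min) and an assumed positioning-error bound (e_max) are penalized. The function name, argument names, and penalty weighting are all assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def constrained_reward(rates, pos_errors, r_min, e_max, penalty=10.0):
    """Illustrative constraint-aware reward (assumption, not the letter's exact form):
    sum rate minus penalties for violating per-user minimum-rate and
    positioning-accuracy constraints."""
    rates = np.asarray(rates, dtype=float)            # achievable data rate per user
    pos_errors = np.asarray(pos_errors, dtype=float)  # positioning error per user

    sum_rate = rates.sum()
    # Shortfall below each user's minimum required rate
    rate_violation = np.maximum(r_min - rates, 0.0).sum()
    # Excess of each user's positioning error over the accuracy bound
    pos_violation = np.maximum(pos_errors - e_max, 0.0).sum()

    return sum_rate - penalty * (rate_violation + pos_violation)

# Example usage with hypothetical numbers:
# constrained_reward(rates=[2.1, 1.8, 2.4], pos_errors=[0.08, 0.12, 0.05],
#                    r_min=2.0, e_max=0.10)
```

Shaping the reward in this way lets a model-free actor-critic agent trade the sum rate off against constraint violations without an explicit system model; the relative weight of the penalty term is a tunable design choice, not something specified in this record.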