Reinforcement learning-based intelligent resource allocation for integrated VLCP systems

Full description

Bibliographic Details
Main Authors: Yang, Helin, Du, Pengfei, Zhong, Wen-De, Chen, Chen, Alphones, Arokiaswami, Zhang, Sheng
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2020
Subjects:
Online Access: https://hdl.handle.net/10356/142886
Item Description
Summary: In this letter, an intelligent resource allocation framework based on model-free reinforcement learning (RL) is presented for the first time for multi-user integrated visible light communication and positioning (VLCP) systems, with the goal of maximizing the sum rate of users while guaranteeing each user's minimum data rate and positioning accuracy constraints. The learning framework can learn the optimal policy under unknown environment dynamics and a continuous-valued state and action space, and a reward function is proposed that takes the strict communication and positioning constraints into account. Moreover, a modified experience replay actor-critic (MERAC) RL approach is proposed to improve learning efficiency and convergence speed; it efficiently collects reliable experiences and utilizes the most useful knowledge from the replay memory. Numerical results show that the MERAC approach can effectively learn to satisfy the strict constraints and achieves fast convergence.
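The abstract does not give implementation details, but its two central ideas, a constraint-aware reward and a replay memory biased toward reliable, useful experiences, can be illustrated with a short sketch. Everything below is an assumption made for illustration only: the parameters (R_MIN, EPS_MAX, PENALTY), the penalty-based reward shaping, and the reward-weighted replay buffer are hypothetical and are not the authors' MERAC algorithm.

```python
import numpy as np

# Hypothetical system parameters (not taken from the paper): K users, each with
# an assumed minimum data rate and an assumed positioning-error bound.
K = 4
R_MIN = 1.0     # assumed minimum per-user data rate (bit/s/Hz)
EPS_MAX = 10.0  # assumed maximum tolerable positioning error (cm)
PENALTY = 5.0   # assumed weight on constraint violations


def reward(rates, pos_errors):
    """Constraint-aware reward: sum rate minus penalties whenever a user's data
    rate falls below R_MIN or its positioning error exceeds EPS_MAX. The exact
    form used in the paper is not given; this is one plausible shaping that is
    consistent with the abstract."""
    rates = np.asarray(rates, dtype=float)
    pos_errors = np.asarray(pos_errors, dtype=float)
    sum_rate = rates.sum()
    rate_violation = np.maximum(R_MIN - rates, 0.0).sum()
    pos_violation = np.maximum(pos_errors - EPS_MAX, 0.0).sum()
    return sum_rate - PENALTY * (rate_violation + pos_violation)


class ReliableReplayBuffer:
    """Toy 'modified experience replay': low-reward (constraint-violating)
    transitions are dropped first, and sampling is biased toward higher-reward
    experiences. This only illustrates the idea described in the abstract."""

    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.buffer = []

    def add(self, state, action, r, next_state):
        self.buffer.append((state, action, r, next_state))
        if len(self.buffer) > self.capacity:
            # Evict the lowest-reward (least reliable) transition.
            self.buffer.sort(key=lambda t: t[2])
            self.buffer.pop(0)

    def sample(self, batch_size, rng):
        rewards = np.array([t[2] for t in self.buffer])
        # Softmax weighting so more useful experiences are replayed more often.
        probs = np.exp(rewards - rewards.max())
        probs /= probs.sum()
        idx = rng.choice(len(self.buffer),
                         size=min(batch_size, len(self.buffer)),
                         replace=False, p=probs)
        return [self.buffer[i] for i in idx]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    buf = ReliableReplayBuffer()
    for _ in range(100):
        rates = rng.uniform(0.5, 3.0, K)   # per-user rates (assumed units)
        errs = rng.uniform(2.0, 15.0, K)   # per-user positioning errors
        buf.add(rates, None, reward(rates, errs), None)
    batch = buf.sample(8, rng)
    print("sampled batch rewards:", [round(t[2], 2) for t in batch])
```

In an actor-critic setting, batches drawn from such a buffer would feed the critic update, and the penalty terms steer the learned policy toward allocations that satisfy the rate and positioning constraints; the actual MERAC update rules are described in the paper itself.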