Reinforcement learning-based intelligent resource allocation for integrated VLCP systems
In this letter, an intelligent resource allocation framework based on model-free reinforcement learning (RL) is first presented for multi-user integrated visible light communication and positioning (VLCP) systems, in order to maximize the sum rate of users while satisfying the users' minimum data rate and positioning accuracy constraints. The learning framework can learn the optimal policy under unknown environment dynamics and a continuous-valued space, and a reward function is proposed to take into account the strict communication and positioning constraints. Moreover, a modified experience replay actor-critic (MERAC) RL approach is proposed to improve the learning efficiency and convergence speed; it efficiently collects reliable experience and utilizes the most useful knowledge from the memory. Numerical results show that the MERAC approach can effectively learn to satisfy the strict constraints and achieves fast convergence.
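As an illustration of the ideas summarized above, the following is a minimal sketch (not code from the paper; the function `constrained_reward`, the class `UsefulExperienceBuffer`, and all thresholds and prioritization rules are assumed purely for illustration) of how a reward can encode a sum-rate objective under minimum-rate and positioning-accuracy constraints, and how a replay memory can favour the most useful transitions in the spirit of the MERAC idea.

```python
import numpy as np

# Illustrative only: reward that returns the sum rate when every user's
# minimum-rate and positioning-accuracy constraints are met, and a penalty
# proportional to the total constraint violation otherwise.
def constrained_reward(rates, pos_errors, r_min, e_max, penalty=1.0):
    rate_violation = np.maximum(r_min - rates, 0.0).sum()      # unmet data rate
    pos_violation = np.maximum(pos_errors - e_max, 0.0).sum()  # excess positioning error
    violation = rate_violation + pos_violation
    if violation == 0.0:
        return rates.sum()        # feasible action: reward equals the users' sum rate
    return -penalty * violation   # infeasible action: negative reward

# Illustrative replay buffer that discards the lowest-reward transition when
# full and samples higher-reward transitions more often.
class UsefulExperienceBuffer:
    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.buffer = []

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))
        if len(self.buffer) > self.capacity:
            worst = min(range(len(self.buffer)), key=lambda i: self.buffer[i][2])
            self.buffer.pop(worst)

    def sample(self, batch_size, rng=None):
        rng = rng or np.random.default_rng()
        rewards = np.array([t[2] for t in self.buffer], dtype=float)
        probs = np.exp(rewards - rewards.max())   # softmax weighting over stored rewards
        probs /= probs.sum()
        idx = rng.choice(len(self.buffer), size=min(batch_size, len(self.buffer)),
                         replace=False, p=probs)
        return [self.buffer[i] for i in idx]
```

For instance, with two users whose rates and positioning errors satisfy their limits, `constrained_reward(np.array([3.0, 2.5]), np.array([0.05, 0.04]), r_min=np.array([1.0, 1.0]), e_max=0.1)` returns the sum rate 5.5; any shortfall makes the reward negative, steering the learned policy toward feasible allocations.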
Main Authors: Yang, Helin; Du, Pengfei; Zhong, Wen-De; Chen, Chen; Alphones, Arokiaswami; Zhang, Sheng
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2020
Subjects: Engineering::Electrical and electronic engineering; Visible Light Communication and Positioning; Intelligent Resource Allocation
Online Access: https://hdl.handle.net/10356/142886
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-142886
record_format: dspace
spelling:
  Record: sg-ntu-dr.10356-142886 (record timestamp 2020-07-07T04:34:40Z)
  Title: Reinforcement learning-based intelligent resource allocation for integrated VLCP systems
  Authors: Yang, Helin; Du, Pengfei; Zhong, Wen-De; Chen, Chen; Alphones, Arokiaswami; Zhang, Sheng
  Affiliation: School of Electrical and Electronic Engineering
  Subjects: Engineering::Electrical and electronic engineering; Visible Light Communication and Positioning; Intelligent Resource Allocation
  Abstract: In this letter, an intelligent resource allocation framework based on model-free reinforcement learning (RL) is first presented for multi-user integrated visible light communication and positioning (VLCP) systems, in order to maximize the sum rate of users while satisfying the users' minimum data rate and positioning accuracy constraints. The learning framework can learn the optimal policy under unknown environment dynamics and a continuous-valued space, and a reward function is proposed to take into account the strict communication and positioning constraints. Moreover, a modified experience replay actor-critic (MERAC) RL approach is proposed to improve the learning efficiency and convergence speed; it efficiently collects reliable experience and utilizes the most useful knowledge from the memory. Numerical results show that the MERAC approach can effectively learn to satisfy the strict constraints and achieves fast convergence.
  Funding: NRF (Natl Research Foundation, S'pore); grant SMA-RP6
  Version: Accepted version
  Deposited: 2020-07-07T03:20:17Z
  Issued: 2019
  Type: Journal Article
  Citation: Yang, H., Du, P., Zhong, W.-D., Chen, C., Alphones, A., & Zhang, S. (2019). Reinforcement learning-based intelligent resource allocation for integrated VLCP systems. IEEE Wireless Communications Letters, 8(4), 1204-1207. doi:10.1109/lwc.2019.2911682
  ISSN: 2162-2337
  Handle: https://hdl.handle.net/10356/142886
  DOI: 10.1109/lwc.2019.2911682
  Journal: IEEE Wireless Communications Letters, vol. 8, no. 4, pp. 1204-1207
  Language: en
  Rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/lwc.2019.2911682
  File format: application/pdf
institution: Nanyang Technological University
building: NTU Library
country: Singapore
collection: DR-NTU
language: English
topic: Engineering::Electrical and electronic engineering; Visible Light Communication and Positioning; Intelligent Resource Allocation
description: In this letter, an intelligent resource allocation framework based on model-free reinforcement learning (RL) is first presented for multi-user integrated visible light communication and positioning (VLCP) systems, in order to maximize the sum rate of users while satisfying the users' minimum data rate and positioning accuracy constraints. The learning framework can learn the optimal policy under unknown environment dynamics and a continuous-valued space, and a reward function is proposed to take into account the strict communication and positioning constraints. Moreover, a modified experience replay actor-critic (MERAC) RL approach is proposed to improve the learning efficiency and convergence speed; it efficiently collects reliable experience and utilizes the most useful knowledge from the memory. Numerical results show that the MERAC approach can effectively learn to satisfy the strict constraints and achieves fast convergence.
author2: School of Electrical and Electronic Engineering
format: Article
author: Yang, Helin; Du, Pengfei; Zhong, Wen-De; Chen, Chen; Alphones, Arokiaswami; Zhang, Sheng
author_sort: Yang, Helin
title: Reinforcement learning-based intelligent resource allocation for integrated VLCP systems
publishDate: 2020
url: https://hdl.handle.net/10356/142886