Deep reinforcement learning for secrecy energy efficiency maximization in RIS-assisted networks
This paper investigates deep reinforcement learning (DRL) for maximization of the secrecy energy efficiency (SEE) in reconfigurable intelligent surface (RIS)-assisted networks. An SEE maximization problem is formulated under constraints on the rate requirement of each (legitimate) user, the power budget of the transmitter, and the discrete phase-shift coefficient of each reflecting element at the RIS, by jointly optimizing the beamforming vectors for users, the artificial-noise vectors for eavesdroppers, and the phase-shift matrix. The considered problem is first reformulated as a Markov decision process with a designed state space, action space and reward function, and then solved under a proximal policy optimization (PPO) framework. Numerical results are provided to evaluate the optimality, generalization performance and running time of the proposed PPO-based algorithm.
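For orientation, the optimization problem summarized in the abstract can be sketched as follows. This is a hedged reconstruction under assumed notation, not the paper's own formulation: w_k denotes the beamformer for user k, v_m an artificial-noise vector, Θ the RIS phase-shift matrix, R_k and R_k^sec the achievable and secrecy rates of user k, r_k the rate target, P_max the transmit power budget, P_c the static circuit power, and F the discrete phase-shift set.

```latex
% Hedged sketch of the SEE maximization problem described in the abstract.
% All symbols below are assumed notation, not taken from the paper itself.
\begin{align}
  \max_{\{\mathbf{w}_k\},\,\{\mathbf{v}_m\},\,\boldsymbol{\Theta}} \quad
    & \mathrm{SEE}
      = \frac{\sum_{k=1}^{K} R_k^{\mathrm{sec}}}
             {\sum_{k=1}^{K}\lVert\mathbf{w}_k\rVert^2
              + \sum_{m=1}^{M}\lVert\mathbf{v}_m\rVert^2 + P_c} \\
  \text{s.t.} \quad
    & R_k \ge r_k, \quad \forall k, \\
    & \sum_{k=1}^{K}\lVert\mathbf{w}_k\rVert^2
      + \sum_{m=1}^{M}\lVert\mathbf{v}_m\rVert^2 \le P_{\max}, \\
    & [\boldsymbol{\Theta}]_{n,n} = e^{\jmath\theta_n}, \quad
      \theta_n \in \mathcal{F}, \quad \forall n.
\end{align}
```

The secrecy rate R_k^sec is typically the rate of user k minus the strongest eavesdropping rate on that stream, floored at zero, which is why the beamformers, artificial noise and discrete RIS phases are coupled and must be optimized jointly.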
Saved in:

Main Authors: | Zhang, Yichi; Lu, Yang; Zhang, Ruichen; Ai, Bo; Niyato, Dusit |
---|---|
Other Authors: | School of Computer Science and Engineering |
Format: | Article |
Language: | English |
Published: | 2023 |
Subjects: | Engineering::Computer science and engineering; Array Signal Processing; Energy Efficiency |
Online Access: | https://hdl.handle.net/10356/170813 |
Institution: | Nanyang Technological University |
id
sg-ntu-dr.10356-170813
record_format
dspace
spelling
sg-ntu-dr.10356-170813; last updated 2023-10-03T05:11:44Z
Title: Deep reinforcement learning for secrecy energy efficiency maximization in RIS-assisted networks
Authors: Zhang, Yichi; Lu, Yang; Zhang, Ruichen; Ai, Bo; Niyato, Dusit
Affiliation: School of Computer Science and Engineering
Subjects: Engineering::Computer science and engineering; Array Signal Processing; Energy Efficiency
Abstract: This paper investigates deep reinforcement learning (DRL) for maximization of the secrecy energy efficiency (SEE) in reconfigurable intelligent surface (RIS)-assisted networks. An SEE maximization problem is formulated under constraints on the rate requirement of each (legitimate) user, the power budget of the transmitter, and the discrete phase-shift coefficient of each reflecting element at the RIS, by jointly optimizing the beamforming vectors for users, the artificial-noise vectors for eavesdroppers, and the phase-shift matrix. The considered problem is first reformulated as a Markov decision process with a designed state space, action space and reward function, and then solved under a proximal policy optimization (PPO) framework. Numerical results are provided to evaluate the optimality, generalization performance and running time of the proposed PPO-based algorithm.
Funding agencies: Info-communications Media Development Authority (IMDA); National Research Foundation (NRF)
Funding text: This work was supported in part by the Fundamental Research Funds for the Central Universities under Grant 2021RC204, in part by the National Natural Science Foundation of China (NSFC) under Grants 62101025 and 62221001, in part by the China Postdoctoral Science Foundation under Grants BX2021031 and 2021M690342, and in part by Beijing Nova Program under Grant Z211100002121139. The work of Dusit Niyato was supported in part by the National Research Foundation, Singapore and Infocomm Media Development Authority through the Future Communications Research Development Programme, and in part by the DSO National Laboratories through AI Singapore Programme under AISG Award AISG2-RP-2020-019, under Energy Research Test-Bed and Industry Partnership Funding Initiative, part of the Energy Grid (EG) 2.0 Programme, and under DesCartes and the Campus for Research Excellence and Technological Enterprise (CREATE) programme.
Dates: accessioned and available 2023-10-03T05:11:44Z; issued 2023
Type: Journal Article
Citation: Zhang, Y., Lu, Y., Zhang, R., Ai, B. & Niyato, D. (2023). Deep reinforcement learning for secrecy energy efficiency maximization in RIS-assisted networks. IEEE Transactions on Vehicular Technology, 72(9), 12413-12418. https://dx.doi.org/10.1109/TVT.2023.3269805
ISSN: 0018-9545
URI: https://hdl.handle.net/10356/170813
DOI: 10.1109/TVT.2023.3269805
Scopus ID: 2-s2.0-85159669688
Volume: 72; Issue: 9; Pages: 12413-12418
Language: en
Grant: AISG2-RP-2020-019
Journal: IEEE Transactions on Vehicular Technology
© 2023 IEEE. All rights reserved.
institution
Nanyang Technological University
building
NTU Library
continent
Asia
country
Singapore
content_provider
NTU Library
collection
DR-NTU
language
English
topic
Engineering::Computer science and engineering
Array Signal Processing
Energy Efficiency
description
This paper investigates deep reinforcement learning (DRL) for maximization of the secrecy energy efficiency (SEE) in reconfigurable intelligent surface (RIS)-assisted networks. An SEE maximization problem is formulated under constraints on the rate requirement of each (legitimate) user, the power budget of the transmitter, and the discrete phase-shift coefficient of each reflecting element at the RIS, by jointly optimizing the beamforming vectors for users, the artificial-noise vectors for eavesdroppers, and the phase-shift matrix. The considered problem is first reformulated as a Markov decision process with a designed state space, action space and reward function, and then solved under a proximal policy optimization (PPO) framework. Numerical results are provided to evaluate the optimality, generalization performance and running time of the proposed PPO-based algorithm.
author2
School of Computer Science and Engineering
format
Article
author
Zhang, Yichi; Lu, Yang; Zhang, Ruichen; Ai, Bo; Niyato, Dusit
author_sort
Zhang, Yichi
title
Deep reinforcement learning for secrecy energy efficiency maximization in RIS-assisted networks
publishDate
2023
url
https://hdl.handle.net/10356/170813
_version_
1779156370298241024
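As a companion to the abstract above, the following is a minimal, self-contained sketch (not the authors' code) of how the reward of the Markov decision process described there could be evaluated: an action supplies the user beamformers, artificial-noise vectors and discrete RIS phase shifts, and the reward is the resulting SEE minus a penalty for violated per-user rate constraints. The channel model, array dimensions, unit noise power, penalty weight and every identifier here are illustrative assumptions.

```python
import numpy as np

def see_reward(H_d, H_r, G, He_d, He_r, W, V, theta, P_c=1.0, r_min=1.0):
    """Toy SEE-style reward for one MDP step (all symbols are assumptions).

    H_d:  (K, M) direct BS-to-user channels      H_r:  (K, N) RIS-to-user channels
    G:    (N, M) BS-to-RIS channel               He_d: (E, M) direct BS-to-eavesdropper channels
    He_r: (E, N) RIS-to-eavesdropper channels
    W:    (M, K) user beamformers                V:    (M, E) artificial-noise beamformers
    theta: (N,) discrete RIS phase shifts in radians
    """
    Phi = np.diag(np.exp(1j * theta))            # RIS phase-shift matrix
    H_eff = H_d + H_r @ Phi @ G                  # effective user channels, (K, M)
    He_eff = He_d + He_r @ Phi @ G               # effective eavesdropper channels, (E, M)

    def rate(h, w_k):
        """Achievable rate of stream w_k over channel h, with unit noise power."""
        sig = np.abs(h @ w_k) ** 2
        interf = sum(np.abs(h @ W[:, j]) ** 2 for j in range(W.shape[1])) - sig
        an = sum(np.abs(h @ V[:, j]) ** 2 for j in range(V.shape[1]))
        return np.log2(1.0 + sig / (interf + an + 1.0))

    K = W.shape[1]
    user_rates = np.array([rate(H_eff[k], W[:, k]) for k in range(K)])
    # Worst-case (strongest) eavesdropping rate on each user's stream
    eve_rates = np.array([max(rate(He_eff[e], W[:, k]) for e in range(He_eff.shape[0]))
                          for k in range(K)])
    secrecy_rates = np.maximum(user_rates - eve_rates, 0.0)

    total_power = np.linalg.norm(W) ** 2 + np.linalg.norm(V) ** 2 + P_c
    see = secrecy_rates.sum() / total_power
    # Soft penalty for violated per-user rate constraints (weight chosen arbitrarily)
    penalty = np.maximum(r_min - user_rates, 0.0).sum()
    return see - penalty

# Toy usage with random channels; dimensions and the 3-bit phase set are illustrative.
rng = np.random.default_rng(0)
M, N, K, E = 4, 16, 2, 1
cn = lambda *shape: (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
theta = rng.choice(np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False), size=N)
reward = see_reward(cn(K, M), cn(K, N), cn(N, M), cn(E, M), cn(E, N), cn(M, K), cn(M, E), theta)
print(f"toy reward: {reward:.3f}")
```

A PPO agent (actor-critic networks trained with a clipped surrogate objective, as named in the abstract) would then be trained to map observed channel states to such actions; that training loop is omitted from this sketch.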