DRL-based contract incentive for wireless-powered and UAV-assisted backscattering MEC system

Mobile edge computing (MEC) is viewed as a promising technology to address the challenges of intensive computing demands in hotspots (HSs). In this article, we consider an unmanned aerial vehicle (UAV)-assisted backscattering MEC system. The UAVs can fly from parking aprons to HSs, providing energy to HSs via RF beamforming and collecting data from wireless users in HSs through backscattering. We aim to maximize the long-term utility of all HSs, subject to the stability of the HSs' energy queues. This problem is a joint optimization of the data offloading decision and contract design that should be adaptive to the users' random task demands and the time-varying wireless channel conditions. A deep reinforcement learning based contract incentive (DRLCI) strategy is proposed to solve this problem in two steps. First, we use the deep Q-network (DQN) algorithm to update the HSs' offloading decisions according to the changing network environment. Second, to motivate the UAVs to participate in resource sharing, a contract specific to each type of UAV is designed, using the Lagrangian multiplier method to approach the optimal contract. Simulation results show the feasibility and efficiency of the proposed strategy, demonstrating better performance than the natural DQN and Double-DQN algorithms.
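As a rough illustration of the first step described above (DQN-based updates of the hotspots' offloading decisions as the network changes), the sketch below runs a toy epsilon-greedy value-update loop. It is not the authors' implementation: the state encoding, reward function, and tabular Q-table standing in for a deep Q-network are assumptions made purely for illustration.

# Minimal sketch (assumed setup, not the paper's code): a hotspot learns
# whether to compute locally or offload via the UAV, from a synthetic reward.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 8      # assumed: quantized (energy queue, channel quality) pairs
N_ACTIONS = 2     # assumed: 0 = compute locally, 1 = offload via the UAV
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))

def step(state, action):
    """Toy environment: random next state, reward favours offloading
    only when the (synthetic) channel is good."""
    channel_good = state % 2 == 1
    reward = 1.0 if (action == 1) == channel_good else 0.0
    return int(rng.integers(N_STATES)), reward

state = int(rng.integers(N_STATES))
for t in range(5000):
    # epsilon-greedy offloading decision for the current network state
    action = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # standard temporal-difference (Q-learning) update
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state

print("greedy offloading decision per state:", Q.argmax(axis=1))

In the paper a neural network replaces the table and the reward reflects the hotspots' long-term utility; the loop structure above only conveys the decision-update idea.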


Bibliographic Details
Main Authors: Chen, Che, Gong, Shimin, Zhang, Wenjie, Zheng, Yifeng, Kiat, Yeo Chai
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2024
Subjects: Computer and Information Science; Mobile edge computing; Contract incentive
Online Access: https://hdl.handle.net/10356/178293
Institution: Nanyang Technological University
id sg-ntu-dr.10356-178293
record_format dspace
spelling sg-ntu-dr.10356-178293 (record updated 2024-06-11T01:38:06Z)
Title: DRL-based contract incentive for wireless-powered and UAV-assisted backscattering MEC system
Authors: Chen, Che; Gong, Shimin; Zhang, Wenjie; Zheng, Yifeng; Kiat, Yeo Chai
School: School of Computer Science and Engineering
Subjects: Computer and Information Science; Mobile edge computing; Contract incentive
Abstract: Mobile edge computing (MEC) is viewed as a promising technology to address the challenges of intensive computing demands in hotspots (HSs). In this article, we consider an unmanned aerial vehicle (UAV)-assisted backscattering MEC system. The UAVs can fly from parking aprons to HSs, providing energy to HSs via RF beamforming and collecting data from wireless users in HSs through backscattering. We aim to maximize the long-term utility of all HSs, subject to the stability of the HSs' energy queues. This problem is a joint optimization of the data offloading decision and contract design that should be adaptive to the users' random task demands and the time-varying wireless channel conditions. A deep reinforcement learning based contract incentive (DRLCI) strategy is proposed to solve this problem in two steps. First, we use the deep Q-network (DQN) algorithm to update the HSs' offloading decisions according to the changing network environment. Second, to motivate the UAVs to participate in resource sharing, a contract specific to each type of UAV is designed, using the Lagrangian multiplier method to approach the optimal contract. Simulation results show the feasibility and efficiency of the proposed strategy, demonstrating better performance than the natural DQN and Double-DQN algorithms.
Funding: This work was supported in part by the National Natural Science Foundation of China under Grant 62372488, in part by the Shenzhen Fundamental Research Program under Grant JCYJ20220818103201004, in part by the Fujian Province Undergraduate Education and Teaching Research Project under Grant FBJY20230170, and in part by the High-level Cultivation Projects of Minnan Normal University under Grant MSGJB2021007.
Record dates: accessioned 2024-06-11T01:38:06Z; available 2024-06-11T01:38:06Z; issued 2024
Type: Journal Article
Citation: Chen, C., Gong, S., Zhang, W., Zheng, Y. & Kiat, Y. C. (2024). DRL-based contract incentive for wireless-powered and UAV-assisted backscattering MEC system. IEEE Transactions on Cloud Computing, 12(1), 264-276. https://dx.doi.org/10.1109/TCC.2024.3360443
ISSN: 2168-7161
Handle: https://hdl.handle.net/10356/178293
DOI: 10.1109/TCC.2024.3360443
Scopus ID: 2-s2.0-85184332511
Volume/Issue/Pages: 12(1), 264-276
Language: en
Journal: IEEE Transactions on Cloud Computing
Rights: © 2024 IEEE. All rights reserved.
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Computer and Information Science
Mobile edge computing
Contract incentive
description Mobile edge computing (MEC) is viewed as a promising technology to address the challenges of intensive computing demands in hotspots (HSs). In this article, we consider an unmanned aerial vehicle (UAV)-assisted backscattering MEC system. The UAVs can fly from parking aprons to HSs, providing energy to HSs via RF beamforming and collecting data from wireless users in HSs through backscattering. We aim to maximize the long-term utility of all HSs, subject to the stability of the HSs' energy queues. This problem is a joint optimization of the data offloading decision and contract design that should be adaptive to the users' random task demands and the time-varying wireless channel conditions. A deep reinforcement learning based contract incentive (DRLCI) strategy is proposed to solve this problem in two steps. First, we use the deep Q-network (DQN) algorithm to update the HSs' offloading decisions according to the changing network environment. Second, to motivate the UAVs to participate in resource sharing, a contract specific to each type of UAV is designed, using the Lagrangian multiplier method to approach the optimal contract. Simulation results show the feasibility and efficiency of the proposed strategy, demonstrating better performance than the natural DQN and Double-DQN algorithms.
author2 School of Computer Science and Engineering
format Article
author Chen, Che
Gong, Shimin
Zhang, Wenjie
Zheng, Yifeng
Kiat, Yeo Chai
author_sort Chen, Che
title DRL-based contract incentive for wireless-powered and UAV-assisted backscattering MEC system
publishDate 2024
url https://hdl.handle.net/10356/178293