Decentralized edge intelligence : a dynamic resource allocation framework for hierarchical federated learning
To enable the large-scale and efficient deployment of Artificial Intelligence (AI), the confluence of AI and Edge Computing has given rise to Edge Intelligence, which leverages the computation and communication capabilities of end devices and edge servers to process data closer to where it is produced. One of the enabling technologies of Edge Intelligence is the privacy-preserving machine learning paradigm known as Federated Learning (FL), which enables data owners to conduct model training without having to transmit their raw data to third-party servers. However, the FL network is envisioned to involve thousands of heterogeneous distributed devices, and communication inefficiency remains a key bottleneck. To reduce node failures and device dropouts, the Hierarchical Federated Learning (HFL) framework has been proposed, whereby cluster heads are designated to support the data owners through intermediate model aggregation. This decentralized learning approach reduces the reliance on a central controller, e.g., the model owner. However, the issues of resource allocation and incentive design are not well studied in the HFL framework. In this article, we consider a two-level resource allocation and incentive mechanism design problem. In the lower level, the cluster heads offer rewards in exchange for the data owners' participation, and the data owners are free to choose which cluster to join. Specifically, we apply evolutionary game theory to model the dynamics of the cluster selection process. In the upper level, each cluster head can choose to serve a model owner, whereas the model owners have to compete with each other for the services of the cluster heads. As such, we propose a deep learning-based auction mechanism to derive the valuation of each cluster head's services. The performance evaluation shows the uniqueness and stability of our proposed evolutionary game, as well as the revenue-maximizing properties of the deep learning-based auction.
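For illustration only (the record does not reproduce the paper's equations or code): a minimal sketch of the two-level aggregation the abstract describes, assuming FedAvg-style weighted averaging at both levels. The function names (`cluster_aggregate`, `global_aggregate`) and the weighting by local dataset size are assumptions, not the authors' stated method.

```python
import numpy as np

def weighted_average(updates, weights):
    """FedAvg-style weighted average of parameter vectors (assumed rule)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

def cluster_aggregate(owner_updates, owner_data_sizes):
    """Intermediate aggregation at a cluster head over its data owners' updates."""
    return weighted_average(owner_updates, owner_data_sizes)

def global_aggregate(cluster_models, cluster_data_sizes):
    """Final aggregation at the model owner over the cluster heads' models."""
    return weighted_average(cluster_models, cluster_data_sizes)

# Toy example: two clusters of data owners contributing 2-parameter "models".
cluster_a = cluster_aggregate([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [100, 300])
cluster_b = cluster_aggregate([np.array([0.0, 1.0])], [200])
global_model = global_aggregate([cluster_a, cluster_b], [400, 200])
print(global_model)  # weighted combination of all data owners' updates
```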
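Likewise, the lower-level cluster selection process is modelled with evolutionary game theory; population dynamics of this kind are commonly captured by replicator dynamics, sketched below with an assumed reward-sharing utility (`cluster_payoffs`) purely for illustration, since the paper's actual utility functions are not reproduced in this record.

```python
import numpy as np

def replicator_step(x, payoffs, dt=0.01):
    """One Euler step of replicator dynamics: each cluster's population share
    grows in proportion to its payoff advantage over the population average."""
    avg = np.dot(x, payoffs)
    return np.clip(x + dt * x * (payoffs - avg), 0.0, None)

def cluster_payoffs(x, rewards, congestion=1.0):
    """Assumed utility: a cluster head's reward pool is shared among the data
    owners that join it, so per-owner utility falls as the cluster's share grows."""
    return rewards / (congestion + x)

# Toy run: three cluster heads offering different reward pools.
rewards = np.array([5.0, 3.0, 2.0])
x = np.array([1 / 3, 1 / 3, 1 / 3])            # initial population shares
for _ in range(5000):
    x = replicator_step(x, cluster_payoffs(x, rewards))
    x = x / x.sum()                             # keep shares on the simplex
print(np.round(x, 3))  # shares settle toward an evolutionary equilibrium
```

Under this toy utility the shares settle at a point where all clusters still attracting data owners yield equal utility, which is the kind of equilibrium stability and uniqueness the abstract reports for the proposed game.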
Main Authors: Lim, Bryan Wei Yang; Ng, Jer Shyuan; Xiong, Zehui; Jin, Jiangming; Zhang, Yang; Niyato, Dusit; Leung, Cyril; Miao, Chunyan
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Subjects: Engineering::Computer science and engineering; Federated Learning; Edge Intelligence
Online Access: https://hdl.handle.net/10356/156035
Institution: Nanyang Technological University
Affiliations: School of Computer Science and Engineering; Alibaba-NTU Joint Research Institute; Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY)
Funding agencies: AI Singapore; Ministry of Education (MOE); Nanyang Technological University; National Research Foundation (NRF)
Version: Submitted/Accepted version
Funding: This work was supported in part by Alibaba Group through the Alibaba Innovative Research (AIR) Program and the Alibaba-NTU Singapore Joint Research Institute (JRI), in part by the National Research Foundation, Singapore, under its AI Singapore Programme (AISG awards AISG2-RP-2020-019 and AISGGC-2019-003), in part by WASP/NTU under Grant M4082187 (4080), in part by the Singapore Ministry of Education (MOE) Tier 1 grant RG16/20, in part by the National Natural Science Foundation of China under Grant 62071343, and in part by SUTD under Grant SRG-ISTD-2021-165.
Type: Journal Article
Citation: Lim, B. W. Y., Ng, J. S., Xiong, Z., Jin, J., Zhang, Y., Niyato, D., Leung, C. & Miao, C. (2021). Decentralized edge intelligence : a dynamic resource allocation framework for hierarchical federated learning. IEEE Transactions on Parallel and Distributed Systems, 33(3), 536-550.
Journal: IEEE Transactions on Parallel and Distributed Systems
DOI: https://dx.doi.org/10.1109/TPDS.2021.3096076
ISSN: 1045-9219
Scopus ID: 2-s2.0-85113709454
Rights: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TPDS.2021.3096076.