Optimization strategies for federated learning

Full description

Federated Learning (FL) has emerged as a prominent approach for training collaborative machine learning models within wireless communication networks. FL offers significant privacy advantages: sensitive data remains on the devices, reducing the risk of data breaches. Additionally, FL can speed up model training because it allows parallel training on multiple local devices without transferring large volumes of data to a central server. However, the practical deployment of FL faces challenges due to the limited bandwidth resources of remote servers and the constrained computational capabilities of wireless devices. Optimization strategies are therefore necessary to enhance the efficiency of FL.

Device scheduling has become a critical aspect of these optimization strategies. It focuses on selecting a subset of devices to alleviate network congestion, taking into account factors such as device heterogeneity, channel conditions, and learning efficiency. Alongside device scheduling, resource allocation can improve FL efficiency by distributing communication and computation resources among local devices to minimize the time delay or the energy consumption of FL training. However, because of the intractable interaction among multiple variables, stringent constraints, and the need to optimize multiple objectives concurrently, developing effective device scheduling and resource allocation algorithms for FL is challenging. This thesis proposes three frameworks to address the optimization of FL.

The major contributions of this thesis are as follows. First, to address the challenge of device scheduling within the framework of spectrum allocation, we propose a weight-divergence-based device selection method coupled with an energy-efficient spectrum allocation optimization technique. Experiments demonstrate that these approaches significantly accelerate FL training and improve convergence compared to benchmark methods. The second contribution lies in the domain of device scheduling for bandwidth allocation: a deep reinforcement learning-based scheduling strategy and an optimized bandwidth allocation method enable FL to reach target accuracy with reduced system costs. Lastly, to further explore device scheduling in hierarchical Federated Learning (HFL), we propose an HFL framework that integrates effective device scheduling and assignment techniques, which expedite convergence and minimize costs, making FL more efficient and practical for real-world deployment. Together, these contributions form a cohesive strategy to advance FL by addressing its key challenges in efficiency, scalability, and resource management.
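This record only summarizes the thesis's methods. As a rough illustration of the weight-divergence idea behind the first contribution, a scheduling round might rank devices by how far their local model weights have drifted from the global model and select the top few; the function names, the Euclidean distance metric, and the exact selection rule below are assumptions for the sketch, not the thesis's algorithm, which also factors in spectrum and energy constraints.

```python
import math

def weight_divergence(local_w, global_w):
    # Euclidean distance between flattened local and global weight vectors
    return math.sqrt(sum((l - g) ** 2 for l, g in zip(local_w, global_w)))

def select_devices(local_weights, global_w, k):
    # Rank devices by divergence from the global model and keep the top k for this round.
    scores = {dev: weight_divergence(w, global_w) for dev, w in local_weights.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy round: 4 devices, global model taken as the zero vector for simplicity
global_w = [0.0, 0.0, 0.0]
local = {
    "dev0": [0.1, 0.0, 0.0],
    "dev1": [1.0, 1.0, 1.0],
    "dev2": [0.5, 0.0, 0.5],
    "dev3": [2.0, 0.0, 0.0],
}
chosen = select_devices(local, global_w, k=2)  # the two devices farthest from the global model
```

In a real deployment the divergence score would be combined with per-device channel quality and energy budgets before scheduling, which is where the spectrum-allocation optimization described in the abstract comes in.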

Bibliographic Details
Main Author: Zhang, Tinghao
Other Authors: Lam Kwok Yan
Format: Thesis-Doctor of Philosophy
Language:English
Published: Nanyang Technological University 2025
Subjects: Computer and Information Science
Online Access:https://hdl.handle.net/10356/182243
Institution: Nanyang Technological University
School: College of Computing and Data Science
Citation: Zhang, T. (2025). Optimization strategies for federated learning. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/182243
License: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).