Optimization strategies for federated learning
| Field | Value |
|---|---|
| Main Author | |
| Other Authors | |
| Format | Thesis - Doctor of Philosophy |
| Language | English |
| Published | Nanyang Technological University, 2025 |
| Subjects | |
| Online Access | https://hdl.handle.net/10356/182243 |
| Institution | Nanyang Technological University |
Summary: Federated Learning (FL) has emerged as a prominent approach for collaboratively training machine learning models within wireless communication networks. FL offers significant privacy advantages, since sensitive data remains on local devices, reducing the risk of data breaches. Additionally, FL can speed up model training because it allows parallel training on multiple local devices without transferring large volumes of data to a central server. However, the practical deployment of FL faces challenges due to the limited bandwidth resources of remote servers and the constrained computational capabilities of wireless devices. Optimization strategies are therefore necessary to enhance the efficiency of FL. Device scheduling has become a critical aspect of these strategies: it selects a subset of devices to alleviate network congestion, considering factors such as device heterogeneity, channel conditions, and learning efficiency. Alongside device scheduling, resource allocation can improve FL efficiency by distributing communication and computation resources among local devices to minimize the time delay or the energy consumption of FL training. However, owing to the intractable interaction among multiple variables, stringent constraints, and the need to optimize multiple objectives concurrently, developing effective device scheduling and resource allocation algorithms for FL is challenging. This thesis proposes three frameworks to effectively handle the optimization aspect of FL.
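The interplay between device scheduling and resource allocation described in the summary can be illustrated with a minimal sketch (not the thesis's actual algorithms): devices are greedily scheduled by channel rate, and the available bandwidth is then split proportionally among the selected subset. The device data, scoring rule, and function names below are all hypothetical.

```python
# Illustrative sketch of one FL round's scheduling + allocation step.
# Assumptions: per-device channel rates are known, and a simple greedy
# rule stands in for the optimization methods developed in the thesis.
import random

def schedule_devices(devices, k):
    """Pick the k devices with the best channel rate (greedy heuristic)."""
    return sorted(devices, key=lambda d: d["rate_mbps"], reverse=True)[:k]

def allocate_bandwidth(selected, total_mhz):
    """Split total bandwidth proportionally to each device's channel rate."""
    total_rate = sum(d["rate_mbps"] for d in selected)
    return {d["id"]: total_mhz * d["rate_mbps"] / total_rate for d in selected}

devices = [{"id": i, "rate_mbps": random.uniform(1.0, 20.0)} for i in range(10)]
selected = schedule_devices(devices, k=4)
bw = allocate_bandwidth(selected, total_mhz=20.0)
assert len(selected) == 4
assert abs(sum(bw.values()) - 20.0) < 1e-9  # all bandwidth is assigned
```

In practice the scheduling score would also weigh device heterogeneity and learning efficiency, which is exactly the coupling that makes the joint problem hard to optimize.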
The major contributions of this thesis are as follows. Firstly, to address the challenge of device scheduling within the framework of spectrum allocation, we propose a weight-divergence-based device selection method coupled with an energy-efficient spectrum allocation optimization technique. Experiments demonstrate that these approaches significantly accelerate FL training and improve convergence compared with benchmark methods. The second contribution lies in device scheduling for bandwidth allocation: a deep reinforcement learning-based scheduling strategy combined with an optimized bandwidth allocation method enables FL to reach target accuracy at reduced system cost. Lastly, to further explore device scheduling in hierarchical Federated Learning (HFL), we propose an HFL framework that integrates effective device scheduling and assignment techniques, expediting convergence and minimizing costs, and making FL more efficient and practical for real-world deployment. Together, these contributions form a cohesive strategy to advance FL by addressing its key challenges in efficiency, scalability, and resource management.
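As a rough illustration of the idea behind weight-divergence-based selection, the sketch below ranks devices by the L2 distance between each local model and the global model and keeps the top-k. The divergence metric, data, and function names are assumptions for illustration only; the thesis's actual selection rule and its energy-efficient spectrum allocation are not reproduced here.

```python
# Hedged sketch: select the k devices whose local models diverge most
# from the global model (L2 distance over flattened weights).
import math

def weight_divergence(local, global_w):
    """L2 distance between a local weight vector and the global one."""
    return math.sqrt(sum((l - g) ** 2 for l, g in zip(local, global_w)))

def select_by_divergence(local_models, global_w, k):
    """Return ids of the k devices with the largest weight divergence."""
    scores = {i: weight_divergence(w, global_w) for i, w in local_models.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

global_w = [0.0, 0.0, 0.0]
local_models = {0: [0.1, 0.0, 0.0], 1: [1.0, 1.0, 1.0], 2: [0.5, 0.5, 0.0]}
print(select_by_divergence(local_models, global_w, k=2))  # → [1, 2]
```

The intuition is that devices whose updates differ most from the global model carry the most new information per communication round, so scheduling them first can accelerate convergence under a fixed spectrum budget.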