Distribution-balanced federated learning for fault identification of power lines
Main Authors: | |
---|---|
Other Authors: | |
Format: | Article |
Language: | English |
Published: | 2023 |
Subjects: | |
Online Access: | https://hdl.handle.net/10356/172727 |
Institution: | Nanyang Technological University |
Summary: | State-of-the-art centralized machine learning for fault identification trains on data collected from edge devices at the cloud server, because computing resources at the edge are limited. However, the risk of data leakage rises considerably when data are shared with other devices through the cloud server, while training performance may degrade without data sharing. This study proposes a federated fault identification scheme, named DBFed-LSTM, which combines distribution-balanced federated learning with an attention-based bidirectional long short-term memory network and efficiently shifts the training process from the cloud server to the edge devices. Under data privacy protection, local devices handle storage and computation for learning the vital time-frequency characteristics, while the cloud server updates the global model. Because the data collected by different devices for monitoring a small-probability event are generally non-independent and identically distributed (non-IID), a global-model pre-training method and an improved focal loss are also proposed. A case study verifies that DBFed-LSTM rivals centralized training with data sharing while preserving privacy and relieving the computational pressure on the cloud server, even for non-IID data. Furthermore, it delivers markedly better performance and a more robust model than centralized training without data sharing. |
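The summary describes the training setup only at a high level; the PyTorch sketch below illustrates one plausible reading of it, in which each edge device fits an attention-based bidirectional LSTM locally under a focal loss and the cloud server only averages the returned weights. The class names, layer sizes, the fixed `n_features=8`, and the FedAvg-style size weighting are illustrative assumptions and not details taken from the paper; in particular, the paper's distribution-balanced weighting, global-model pre-training, and improved focal loss are not specified in this record.

```python
# Hypothetical sketch of the federated setup described in the summary:
# edge devices train an attention-based bidirectional LSTM locally and the
# cloud server only aggregates model weights. Names and hyperparameters are
# illustrative assumptions, not the authors' released implementation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnBiLSTM(nn.Module):
    """Bidirectional LSTM with additive attention over time steps."""
    def __init__(self, n_features, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)       # scores each time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                           # x: (batch, time, n_features)
        h, _ = self.lstm(x)                         # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)      # attention weights over time
        ctx = (w * h).sum(dim=1)                    # weighted context vector
        return self.head(ctx)                       # class logits

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Focal loss: down-weights easy samples so rare fault classes dominate."""
    ce = F.cross_entropy(logits, targets, weight=alpha, reduction="none")
    pt = torch.exp(-ce)                             # confidence on the true class
    return ((1.0 - pt) ** gamma * ce).mean()

def local_update(global_state, loader, device="cpu", epochs=1, lr=1e-3):
    """One client's training round, starting from the current global weights."""
    model = AttnBiLSTM(n_features=8).to(device)     # n_features=8 is an assumption
    model.load_state_dict(global_state)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            focal_loss(model(x), y).backward()
            opt.step()
    return model.state_dict()

def aggregate(client_states, client_sizes):
    """Server-side weighted average of client weights (FedAvg-style)."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key].float() * (n / total)
                       for s, n in zip(client_states, client_sizes))
    return avg
```

In a setup like this, raw waveform data never leave the edge devices; only model weights travel to the server each round, which is what allows the scheme to preserve privacy while offloading computation from the cloud.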