Statistical graph signal processing
Main Author: | |
---|---|
Other Authors: | |
Format: | Thesis-Master by Coursework |
Language: | English |
Published: | Nanyang Technological University, 2023 |
Subjects: | |
Online Access: | https://hdl.handle.net/10356/169047 |
Institution: | Nanyang Technological University |
Summary:

This study provides insight into the application of different filters in Graph Signal Processing (GSP) to different datasets. First, a comprehensive overview of GSP-related concepts is given, including the construction of graph signals, the computation of graph Laplacian matrices, and their application in various practical scenarios. Next, we discuss in detail the latest GSP filter design methods, covering both linear and non-linear approaches. We review graph filtering, graph signal sampling, graph signal compression and reconstruction, graph neural networks, and related topics, and compare and analyse the advantages and limitations of each type of filter from a theoretical perspective.
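To make these concepts concrete, here is a minimal sketch (illustrative only, not code from the thesis): it builds the combinatorial graph Laplacian L = D − A of a small hypothetical graph, obtains the graph Fourier basis from its eigendecomposition, and applies an ideal low-pass graph filter to an example signal. The adjacency matrix, signal values, and cut-off are all assumptions made for the example.

```python
import numpy as np

# Hypothetical 4-node undirected graph (adjacency matrix assumed for the example).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))        # degree matrix
L = D - A                         # combinatorial graph Laplacian

# Graph Fourier basis: eigendecomposition of the symmetric Laplacian.
eigvals, U = np.linalg.eigh(L)    # ascending eigenvalues = graph frequencies

x = np.array([1.0, 0.2, 0.3, 1.1])  # example graph signal, one value per node

# Ideal low-pass graph filter: keep only the two lowest graph frequencies.
h = np.zeros_like(eigvals)
h[:2] = 1.0
x_hat = U.T @ x                   # graph Fourier transform of the signal
x_filtered = U @ (h * x_hat)      # filter spectrally, then transform back
print(x_filtered)
```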
Four models, linear regression (LR), linear regression graph (LRG), kernel regression (KR), and kernel regression graph (KRG), were selected to process different datasets, and the effects of varying the number of training samples and of noise interference on prediction error, measured as normalised mean square error (NMSE), were investigated in depth. Through simulation experiments, we found that the performance of all models improved as the number of training samples increased. In some cases, the KR and KRG models outperformed the LR and LRG models, which capture only linear relationships, owing to their ability to model non-linear relationships in the data. In noisy environments, the LRG and KRG models were more robust to additive white Gaussian noise, owing to their Gaussian noise model design. In addition, we observed that the NMSE of all models stabilised once the training sample size reached a certain level, which may indicate that the models had reached their performance limits and that further increasing the training data size may not significantly improve performance.
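As a hedged illustration of the LR/KR contrast and the NMSE metric described above (not the thesis's experimental code), the sketch below fits ordinary least-squares LR and a Gaussian-kernel ridge regressor, one common closed-form realisation of kernel regression, to synthetic data with additive white Gaussian noise and reports each model's NMSE. The target function sin(3x), noise level, kernel width, and regulariser are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmse(y_hat, y):
    """Normalised mean square error, the score used to compare the models."""
    return np.sum((y_hat - y) ** 2) / np.sum(y ** 2)

# Synthetic 1-D data: non-linear target plus additive white Gaussian noise.
n_train, n_test, sigma = 50, 200, 0.1
x_tr = rng.uniform(-1, 1, n_train)
x_te = rng.uniform(-1, 1, n_test)
f = lambda x: np.sin(3 * x)                  # hypothetical ground truth
y_tr = f(x_tr) + sigma * rng.standard_normal(n_train)
y_te = f(x_te)

# Linear regression (LR): ordinary least squares on features [x, 1].
X_tr = np.column_stack([x_tr, np.ones(n_train)])
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
y_lr = np.column_stack([x_te, np.ones(n_test)]) @ w

# Kernel regression (KR): Gaussian-kernel ridge regression, closed form.
def gauss_kernel(a, b, gamma=10.0):          # kernel width assumed
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

lam = 1e-2                                   # ridge regulariser assumed
K = gauss_kernel(x_tr, x_tr)
alpha = np.linalg.solve(K + lam * np.eye(n_train), y_tr)
y_kr = gauss_kernel(x_te, x_tr) @ alpha

print(f"NMSE LR: {nmse(y_lr, y_te):.4f}  NMSE KR: {nmse(y_kr, y_te):.4f}")
```

On this non-linear target the kernel model would be expected to attain the lower NMSE, mirroring the trend reported above, and increasing n_train should shrink both errors until they plateau.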