Compact and interpretable convolutional neural network architecture for electroencephalogram-based motor imagery decoding


Bibliographic Details
Main Author: Ahmad Izzuddin, Tarmizi
Format: Thesis
Language: English
Published: 2022
Online Access:http://eprints.utm.my/id/eprint/101969/1/TarmiziAhmadIzzuddinPSKE2022.pdf.pdf
http://eprints.utm.my/id/eprint/101969/
http://dms.library.utm.my:8080/vital/access/manager/Repository/vital:149285
Institution: Universiti Teknologi Malaysia
Description
Summary: With the rise of deep learning, deep neural network (DNN) algorithms such as convolutional neural networks (CNNs) have been explored for decoding the electroencephalogram (EEG) in Brain-Computer Interface (BCI) applications. These allow end-to-end decoding of EEG signals, eliminating the tedious process of manually tuning each stage of the decoding pipeline. However, current DNN architectures, consisting of multiple hidden layers and numerous parameters, were not developed for EEG decoding and classification tasks, and tend to underperform when decoding EEG signals. In addition, a DNN is typically treated as a black box, and interpreting what the network learns in solving the classification task is difficult, hindering neurophysiological validation of the network. This thesis proposes an improved and compact CNN architecture for motor imagery decoding based on an adaptation of SincNet, which was originally developed for speaker recognition from raw audio input. The adaptation yields a very compact end-to-end neural network with state-of-the-art (SOTA) performance and enables network interpretability for neurophysiological validation in terms of cortical rhythms and spatial analysis. To validate the proposed algorithms, two datasets were used: the publicly available BCI Competition IV dataset 2a, a common benchmark for validating motor imagery (MI) classification algorithms, and a primary dataset originally collected to study the difference between motor imagery and mental-rotation-task-associated motor imagery (MI+MR) BCI. The latter was also used in this study to test the plausibility of the proposed algorithm in highlighting differences in cortical rhythms.
On both datasets, the proposed Sinc-adapted CNN algorithms show competitive decoding performance in comparison with SOTA CNN models, achieving up to 87% decoding accuracy on BCI Competition IV dataset 2a and up to 91% on the primary MI+MR data. This performance was achieved with the lowest number of trainable parameters (a 26.5%–34.1% reduction compared to the non-Sinc counterparts). In addition, the proposed architecture was shown to perform cleaner band-pass filtering, highlighting the frequency bands that correspond to the cortical rhythms important during task execution, which in turn enabled the development of the proposed Spatial Filter Visualization algorithm. This characteristic was crucial for the neurophysiological interpretation of the learned spatial features and had not previously been established with the benchmarked SOTA methods.
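The core of the SincNet idea referenced in the summary is to replace free-form temporal convolution kernels with band-pass filters parameterized only by two learnable cutoff frequencies, which is what makes the learned filters directly interpretable as cortical rhythms. A minimal NumPy sketch of such a kernel (function name, kernel length, and the 250 Hz sampling rate are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, kernel_len=65, fs=250.0):
    """Band-pass FIR kernel built as the difference of two windowed
    sinc low-pass filters -- the parameterization behind SincNet-style
    convolution layers. In a trainable layer, f_low and f_high (Hz)
    would be the only learnable parameters of the kernel."""
    # Symmetric time axis centered on the kernel midpoint
    t = np.arange(kernel_len) - (kernel_len - 1) / 2

    def lowpass(fc):
        # Ideal low-pass impulse response with cutoff fc, sampled at fs
        return 2 * fc / fs * np.sinc(2 * fc / fs * t)

    # Band-pass = high-cutoff low-pass minus low-cutoff low-pass
    h = lowpass(f_high) - lowpass(f_low)
    # Hamming window to reduce spectral leakage from truncation
    return h * np.hamming(kernel_len)

# Example: an 8-30 Hz kernel spanning the mu and beta rhythms
# commonly analyzed in motor imagery decoding
kernel = sinc_bandpass_kernel(8.0, 30.0)
```

Because the kernel is fully determined by its two cutoffs, inspecting a trained layer reduces to reading off frequency bands, rather than interpreting arbitrary learned filter coefficients.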