Parallel spatial-temporal self-attention CNN-based motor imagery classification for BCI

Motor imagery (MI) electroencephalography (EEG) classification is an important part of the brain-computer interface (BCI), allowing people with mobility problems to communicate with the outside world via assistive devices. However, EEG decoding is a challenging task because of its complexity, dynamic nature, and low signal-to-noise ratio.

Summary

Bibliographic Details
Main Authors: Liu, Xiuling, Shen, Yonglong, Liu, Jing, Yang, Jianli, Xiong, Peng, Lin, Feng
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2021
Subjects:
EEG
Online Access: https://hdl.handle.net/10356/146014
id sg-ntu-dr.10356-146014
record_format dspace
spelling sg-ntu-dr.10356-1460142021-01-21T03:12:34Z Parallel spatial-temporal self-attention CNN-based motor imagery classification for BCI Liu, Xiuling Shen, Yonglong Liu, Jing Yang, Jianli Xiong, Peng Lin, Feng School of Computer Science and Engineering Engineering::Computer science and engineering Motor Imagery EEG Motor imagery (MI) electroencephalography (EEG) classification is an important part of the brain-computer interface (BCI), allowing people with mobility problems to communicate with the outside world via assistive devices. However, EEG decoding is a challenging task because of its complexity, dynamic nature, and low signal-to-noise ratio. Designing an end-to-end framework that fully extracts the high-level features of EEG signals remains a challenge. In this study, we present a parallel spatial-temporal self-attention-based convolutional neural network for four-class MI EEG signal classification. This study is the first to define a new spatial-temporal representation of raw EEG signals that uses the self-attention mechanism to extract distinguishable spatial-temporal features. Specifically, we use the spatial self-attention module to capture the spatial dependencies between the channels of MI EEG signals. This module updates each channel by aggregating features over all channels with a weighted summation, thus improving the classification accuracy and eliminating the artifacts caused by manual channel selection. Furthermore, the temporal self-attention module encodes the global temporal information into features for each sampling time step, so that the high-level temporal features of the MI EEG signals can be extracted in the time domain. Quantitative analysis shows that our method outperforms state-of-the-art methods for intra-subject and inter-subject classification, demonstrating its robustness and effectiveness. 
In terms of qualitative analysis, we perform a visual inspection of the new spatial-temporal representation estimated from the learned architecture. Finally, the proposed method is employed to realize control of drones based on EEG signal, verifying its feasibility in real-time applications. Published version 2021-01-21T03:12:34Z 2021-01-21T03:12:34Z 2020 Journal Article Liu, X., Shen, Y., Liu, J., Yang, J., Xiong, P., & Lin, F. (2020). Parallel Spatial–Temporal Self-Attention CNN-Based Motor Imagery Classification for BCI. Frontiers in Neuroscience, 14, 587520-. doi:10.3389/fnins.2020.587520 1662-4548 https://hdl.handle.net/10356/146014 10.3389/fnins.2020.587520 33362458 2-s2.0-85098221192 14 en Frontiers in neuroscience © 2020 Liu, Shen, Liu, Yang, Xiong and Lin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. application/pdf
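The abstract describes a spatial self-attention module that updates each EEG channel by aggregating features over all channels with a weighted summation. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the projection matrices `wq`, `wk`, `wv`, the feature size (16), and the channel/sample counts (22 channels, 250 samples) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_attention(x, wq, wk, wv):
    """Update each EEG channel by a weighted sum over all channels.

    x          : (channels, time) raw MI EEG segment
    wq, wk, wv : (time, d) hypothetical linear projections
    """
    q, k, v = x @ wq, x @ wk, x @ wv           # (channels, d) each
    scores = q @ k.T / np.sqrt(k.shape[-1])    # channel-to-channel affinities
    attn = softmax(scores, axis=-1)            # each row sums to 1
    return attn @ v                            # aggregated channel features

rng = np.random.default_rng(0)
eeg = rng.standard_normal((22, 250))           # e.g. 22 channels, 250 samples
w = [rng.standard_normal((250, 16)) for _ in range(3)]
out = spatial_self_attention(eeg, *w)
print(out.shape)                               # (22, 16)
```

The temporal self-attention module described in the abstract would apply the same mechanism along the time axis (attention over sampling time steps instead of channels), i.e. operating on the transposed segment.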
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering
Motor Imagery
EEG
spellingShingle Engineering::Computer science and engineering
Motor Imagery
EEG
Liu, Xiuling
Shen, Yonglong
Liu, Jing
Yang, Jianli
Xiong, Peng
Lin, Feng
Parallel spatial-temporal self-attention CNN-based motor imagery classification for BCI
description Motor imagery (MI) electroencephalography (EEG) classification is an important part of the brain-computer interface (BCI), allowing people with mobility problems to communicate with the outside world via assistive devices. However, EEG decoding is a challenging task because of its complexity, dynamic nature, and low signal-to-noise ratio. Designing an end-to-end framework that fully extracts the high-level features of EEG signals remains a challenge. In this study, we present a parallel spatial-temporal self-attention-based convolutional neural network for four-class MI EEG signal classification. This study is the first to define a new spatial-temporal representation of raw EEG signals that uses the self-attention mechanism to extract distinguishable spatial-temporal features. Specifically, we use the spatial self-attention module to capture the spatial dependencies between the channels of MI EEG signals. This module updates each channel by aggregating features over all channels with a weighted summation, thus improving the classification accuracy and eliminating the artifacts caused by manual channel selection. Furthermore, the temporal self-attention module encodes the global temporal information into features for each sampling time step, so that the high-level temporal features of the MI EEG signals can be extracted in the time domain. Quantitative analysis shows that our method outperforms state-of-the-art methods for intra-subject and inter-subject classification, demonstrating its robustness and effectiveness. In terms of qualitative analysis, we perform a visual inspection of the new spatial-temporal representation estimated from the learned architecture. Finally, the proposed method is employed to realize control of drones based on EEG signal, verifying its feasibility in real-time applications.
author2 School of Computer Science and Engineering
author_facet School of Computer Science and Engineering
Liu, Xiuling
Shen, Yonglong
Liu, Jing
Yang, Jianli
Xiong, Peng
Lin, Feng
format Article
author Liu, Xiuling
Shen, Yonglong
Liu, Jing
Yang, Jianli
Xiong, Peng
Lin, Feng
author_sort Liu, Xiuling
title Parallel spatial-temporal self-attention CNN-based motor imagery classification for BCI
title_short Parallel spatial-temporal self-attention CNN-based motor imagery classification for BCI
title_full Parallel spatial-temporal self-attention CNN-based motor imagery classification for BCI
title_fullStr Parallel spatial-temporal self-attention CNN-based motor imagery classification for BCI
title_full_unstemmed Parallel spatial-temporal self-attention CNN-based motor imagery classification for BCI
title_sort parallel spatial-temporal self-attention cnn-based motor imagery classification for bci
publishDate 2021
url https://hdl.handle.net/10356/146014
_version_ 1690658409222766592