Leveraging temporal dependency for cross-subject-MI BCIs by contrastive learning and self-attention

Brain-computer interfaces (BCIs) built on the motor imagery (MI) paradigm have found extensive use in motor rehabilitation and the control of assistive applications. However, traditional MI-BCI systems often exhibit suboptimal classification performance and require significant time from each new user to collect subject-specific training data. This limitation diminishes the user-friendliness of BCIs and presents significant challenges in developing effective subject-independent models. In response to these challenges, we propose a novel subject-independent framework that learns temporal dependency for motor imagery BCIs by Contrastive Learning and Self-attention (CLS). In the CLS model, we incorporate a self-attention mechanism and supervised contrastive learning into a deep neural network to extract discriminative features from electroencephalography (EEG) signals. We evaluate the CLS model on two large public datasets encompassing numerous subjects under a subject-independent experimental condition. The results demonstrate that CLS outperforms six baseline algorithms, achieving mean classification accuracy improvements of 1.3% and 4.71% over the best baseline on the Giga and OpenBMI datasets, respectively. Our findings demonstrate that CLS can effectively learn invariant discriminative features from training data obtained from non-target subjects, showcasing its potential for building models for new users without the need for calibration.
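The abstract names a self-attention mechanism applied to EEG temporal features. As a rough illustration only (this is not the paper's CLS architecture; the projection matrices below are random placeholders for learned parameters, and the layer sizes are arbitrary), single-head scaled dot-product self-attention over a sequence of feature vectors can be sketched as:

```python
import numpy as np

def self_attention(x, seed=0):
    """Single-head scaled dot-product self-attention over a sequence of
    feature vectors. Generic sketch: the CLS paper's actual layer sizes
    and learned weights are not reproduced here; the query/key/value
    projections are random stand-ins for learned parameters."""
    rng = np.random.default_rng(seed)
    t, d = x.shape
    # Hypothetical projection matrices (would be learned in practice).
    w_q, w_k, w_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(d)                  # (t, t) attention logits
    scores -= scores.max(axis=1, keepdims=True)    # stabilise the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # each row sums to 1
    return weights @ v, weights                    # attended features, weights

# Toy "EEG feature" sequence: 5 time steps, 8-dimensional features.
x = np.random.default_rng(1).standard_normal((5, 8))
out, attn = self_attention(x)
```

Each output time step is a weighted mixture of all time steps, which is how self-attention captures temporal dependencies across the whole trial rather than only local windows.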

Saved in:
Bibliographic Details
Main Authors: Sun, Hao, Ding, Yi, Bao, Jianzhu, Qin, Ke, Tong, Chengxuan, Jin, Jing, Guan, Cuntai
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2024
Subjects: Computer and Information Science; Motor imagery; Self-attention
Online Access:https://hdl.handle.net/10356/180824
Institution: Nanyang Technological University
id sg-ntu-dr.10356-180824
record_format dspace
record_created 2024-10-29T01:30:18Z
title Leveraging temporal dependency for cross-subject-MI BCIs by contrastive learning and self-attention
author Sun, Hao; Ding, Yi; Bao, Jianzhu; Qin, Ke; Tong, Chengxuan; Jin, Jing; Guan, Cuntai
author2 School of Computer Science and Engineering
topic Computer and Information Science; Motor imagery; Self-attention
funding Agency for Science, Technology and Research (A*STAR). This work was supported by the China Scholarship Council (CSC) under Grant 202206740012 and by the RIE2020 AME Programmatic Fund, Singapore (No. A20G8b0102); in part by STI 2030 Major Project 2022ZD0208900; in part by the National Natural Science Foundation of China under Grant 62176090; in part by the Shanghai Municipal Science and Technology Major Project under Grant 2021SHZDZX; and in part by the Program of Introducing Talents of Discipline to Universities through the 111 Project under Grant B17017. This research was also supported by the National Government Guided Special Funds for Local Science and Technology Development (Shenzhen, China) (No. 2021Szvup043) and by the Project of Jiangsu Province Science and Technology Plan Special Fund in 2022 under Grant BE2022064-1.
citation Sun, H., Ding, Y., Bao, J., Qin, K., Tong, C., Jin, J. & Guan, C. (2024). Leveraging temporal dependency for cross-subject-MI BCIs by contrastive learning and self-attention. Neural Networks, 178, 106470. https://dx.doi.org/10.1016/j.neunet.2024.106470
journal Neural Networks
type Journal Article
date 2024
language en
issn 0893-6080
url https://hdl.handle.net/10356/180824
doi 10.1016/j.neunet.2024.106470
pmid 38943861
scopus 2-s2.0-85196953895
grant_id A20G8b0102
rights © 2024 Published by Elsevier Ltd. All rights reserved.
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
topic Computer and Information Science
Motor imagery
Self-attention
description Brain-computer interfaces (BCIs) built on the motor imagery (MI) paradigm have found extensive use in motor rehabilitation and the control of assistive applications. However, traditional MI-BCI systems often exhibit suboptimal classification performance and require significant time from each new user to collect subject-specific training data. This limitation diminishes the user-friendliness of BCIs and presents significant challenges in developing effective subject-independent models. In response to these challenges, we propose a novel subject-independent framework that learns temporal dependency for motor imagery BCIs by Contrastive Learning and Self-attention (CLS). In the CLS model, we incorporate a self-attention mechanism and supervised contrastive learning into a deep neural network to extract discriminative features from electroencephalography (EEG) signals. We evaluate the CLS model on two large public datasets encompassing numerous subjects under a subject-independent experimental condition. The results demonstrate that CLS outperforms six baseline algorithms, achieving mean classification accuracy improvements of 1.3% and 4.71% over the best baseline on the Giga and OpenBMI datasets, respectively. Our findings demonstrate that CLS can effectively learn invariant discriminative features from training data obtained from non-target subjects, showcasing its potential for building models for new users without the need for calibration.
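The description names supervised contrastive learning as the second ingredient of CLS: features from trials of the same class are pulled together, those of different classes pushed apart, which encourages subject-invariant representations. Below is a minimal NumPy sketch of a generic supervised contrastive loss (after Khosla et al., 2020); it is not the authors' exact objective, and the temperature of 0.07 is an illustrative default:

```python
import numpy as np

def sup_con_loss(features, labels, temperature=0.07):
    """Generic supervised contrastive loss over L2-normalised features.
    A sketch only -- not the CLS paper's exact objective; temperature=0.07
    is an illustrative choice, not a value taken from the paper."""
    n = features.shape[0]
    sim = features @ features.T / temperature           # scaled cosine sims
    not_self = ~np.eye(n, dtype=bool)                   # exclude self-pairs
    # Positives: other samples in the batch sharing the anchor's label.
    pos = (labels[:, None] == labels[None, :]) & not_self
    # Numerically stable log-softmax over all non-self pairs per anchor.
    row_max = np.where(not_self, sim, -np.inf).max(axis=1, keepdims=True)
    exp_sim = np.exp(sim - row_max) * not_self
    log_prob = sim - row_max - np.log(exp_sim.sum(axis=1, keepdims=True))
    # Average log-probability of the positives, for anchors that have any.
    counts = pos.sum(axis=1)
    valid = counts > 0
    mean_pos = (pos * log_prob).sum(axis=1)[valid] / counts[valid]
    return -mean_pos.mean()

# Toy batch: four unit-norm 2-D features, two well-separated classes.
feats = np.array([[1.0, 0.0], [0.9, 0.436], [0.0, 1.0], [-0.1, 0.995]])
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
loss = sup_con_loss(feats, np.array([0, 0, 1, 1]))
```

The loss is smallest when same-class features are more similar to each other than to other-class features; relabelling the toy batch so that positives are the dissimilar pairs increases it.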