A novel transformer for attention decoding using EEG

Electroencephalography (EEG) attention classification plays a crucial role in brain-computer interface (BCI) applications. This paper introduces EEG-PatchFormer, a novel deep learning model leveraging transformers to achieve superior EEG attention decoding. We posit that transformers’ strength in capturing long-range temporal dependencies, coupled with their recent success on spatial data, makes them ideally suited for processing EEG signals. We begin by outlining a pilot study investigating the impact of various patching strategies on the classification accuracy of a transformer-based network. This study revealed significant performance variations across patching methods, emphasising the importance of optimal patching for model efficacy. We then showcase the proposed EEG-PatchFormer architecture. Key modules include a temporal convolutional neural network (CNN), a pointwise convolutional layer, and separate patching modules to handle global and local spatial features, as well as temporal features. The model then features a transformer module, and culminates in a fully-connected classifier. Finally, EEG-PatchFormer’s performance across various evaluation experiments is discussed. Extensive evaluation on a publicly available cognitive attention dataset demonstrated that EEG-PatchFormer surpasses existing state-of-the-art benchmarks in terms of mean classification accuracy, area under the ROC curve (AUC), and macro-F1 score. Hyperparameter tuning and ablation studies were carried out to further optimise, and understand the contribution of, individual components. Overall, this project establishes EEG-PatchFormer as a state-of-the-art model for EEG attention decoding, with promising applications for BCI.
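The abstract's pilot study centres on how an EEG window is cut into patches (tokens) before being fed to the transformer. As a minimal illustration only, here is a NumPy sketch of non-overlapping temporal patching; the function name, patch length, and token layout are assumptions for illustration and are not taken from the thesis:

```python
import numpy as np

def temporal_patches(eeg, patch_len):
    """Split a (channels, samples) EEG window into non-overlapping
    temporal patches and flatten each patch into one token.
    Returns an array of shape (n_patches, channels * patch_len)."""
    n_ch, n_samp = eeg.shape
    n_patches = n_samp // patch_len
    # drop any trailing samples that do not fill a whole patch
    trimmed = eeg[:, : n_patches * patch_len]
    # (channels, n_patches, patch_len) -> (n_patches, channels * patch_len)
    patches = trimmed.reshape(n_ch, n_patches, patch_len)
    return patches.transpose(1, 0, 2).reshape(n_patches, n_ch * patch_len)

# toy example: 4 channels, 32 samples, temporal patches of 8 samples
eeg = np.arange(4 * 32, dtype=float).reshape(4, 32)
tokens = temporal_patches(eeg, 8)
print(tokens.shape)  # (4, 32): 4 tokens, each holding 4 channels x 8 samples
```

The thesis's spatial patching modules would instead group channels (globally or by local region) before flattening; the same reshape-and-transpose idea applies along the channel axis.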

Bibliographic Details
Main Author: Lee, Joon Hei
Other Authors: Guan Cuntai
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Computer and Information Science
EEG
Deep learning
Attention
Brain-computer interface
Online Access: https://hdl.handle.net/10356/175057
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-175057
record_format dspace
spelling sg-ntu-dr.10356-1750572024-04-19T15:45:41Z A novel transformer for attention decoding using EEG Lee, Joon Hei Guan Cuntai School of Computer Science and Engineering CTGuan@ntu.edu.sg Computer and Information Science EEG Deep learning Attention Brain-computer interface Electroencephalography (EEG) attention classification plays a crucial role in brain-computer interface (BCI) applications. This paper introduces EEG-PatchFormer, a novel deep learning model leveraging transformers to achieve superior EEG attention decoding. We posit that transformers’ strength in capturing long-range temporal dependencies, coupled with their recent success on spatial data, makes them ideally suited for processing EEG signals. We begin by outlining a pilot study investigating the impact of various patching strategies on the classification accuracy of a transformer-based network. This study revealed significant performance variations across patching methods, emphasising the importance of optimal patching for model efficacy. We then showcase the proposed EEG-PatchFormer architecture. Key modules include a temporal convolutional neural network (CNN), a pointwise convolutional layer, and separate patching modules to handle global and local spatial features, as well as temporal features. The model then features a transformer module, and culminates in a fully-connected classifier. Finally, EEG-PatchFormer’s performance across various evaluation experiments is discussed. Extensive evaluation on a publicly available cognitive attention dataset demonstrated that EEG-PatchFormer surpasses existing state-of-the-art benchmarks in terms of mean classification accuracy, area under the ROC curve (AUC), and macro-F1 score. Hyperparameter tuning and ablation studies were carried out to further optimise, and understand the contribution of, individual components. 
Overall, this project establishes EEG-PatchFormer as a state-of-the-art model for EEG attention decoding, with promising applications for BCI. Bachelor's degree 2024-04-19T01:36:34Z 2024-04-19T01:36:34Z 2024 Final Year Project (FYP) Lee, J. H. (2024). A novel transformer for attention decoding using EEG. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175057 https://hdl.handle.net/10356/175057 en SCSE23-0162 application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Computer and Information Science
EEG
Deep learning
Attention
Brain-computer interface
spellingShingle Computer and Information Science
EEG
Deep learning
Attention
Brain-computer interface
Lee, Joon Hei
A novel transformer for attention decoding using EEG
description Electroencephalography (EEG) attention classification plays a crucial role in brain-computer interface (BCI) applications. This paper introduces EEG-PatchFormer, a novel deep learning model leveraging transformers to achieve superior EEG attention decoding. We posit that transformers’ strength in capturing long-range temporal dependencies, coupled with their recent success on spatial data, makes them ideally suited for processing EEG signals. We begin by outlining a pilot study investigating the impact of various patching strategies on the classification accuracy of a transformer-based network. This study revealed significant performance variations across patching methods, emphasising the importance of optimal patching for model efficacy. We then showcase the proposed EEG-PatchFormer architecture. Key modules include a temporal convolutional neural network (CNN), a pointwise convolutional layer, and separate patching modules to handle global and local spatial features, as well as temporal features. The model then features a transformer module, and culminates in a fully-connected classifier. Finally, EEG-PatchFormer’s performance across various evaluation experiments is discussed. Extensive evaluation on a publicly available cognitive attention dataset demonstrated that EEG-PatchFormer surpasses existing state-of-the-art benchmarks in terms of mean classification accuracy, area under the ROC curve (AUC), and macro-F1 score. Hyperparameter tuning and ablation studies were carried out to further optimise, and understand the contribution of, individual components. Overall, this project establishes EEG-PatchFormer as a state-of-the-art model for EEG attention decoding, with promising applications for BCI.
author2 Guan Cuntai
author_facet Guan Cuntai
Lee, Joon Hei
format Final Year Project
author Lee, Joon Hei
author_sort Lee, Joon Hei
title A novel transformer for attention decoding using EEG
title_short A novel transformer for attention decoding using EEG
title_full A novel transformer for attention decoding using EEG
title_fullStr A novel transformer for attention decoding using EEG
title_full_unstemmed A novel transformer for attention decoding using EEG
title_sort novel transformer for attention decoding using eeg
publisher Nanyang Technological University
publishDate 2024
url https://hdl.handle.net/10356/175057
_version_ 1800916378035683328