Chinese Journal on Internet of Things ›› 2023, Vol. 7 ›› Issue (1): 118-128. DOI: 10.11959/j.issn.2096-3750.2023.00310

• Theory and Technology •

Research on EEG signal classification of motor imagery based on AE and Transformer

Rui JIANG1, Liuting SUN1, Xiaoming WANG1, Dapeng LI1, Youyun XU1,2   

  1. School of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
    2. National Engineering Research Center for Communication and Network Technology, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
  • Revised: 2022-11-06  Online: 2023-03-30  Published: 2023-03-01
  • Supported by:
    The National Natural Science Foundation of China (61971241); The National Natural Science Foundation of China (62071245)

Abstract:

The motor imagery brain-computer interface (MI-BCI) has long been a focus of research. However, traditional systems cannot accurately extract salient signal features and suffer from low classification accuracy. To overcome this difficulty, a new Transformer model based on the auto-encoder (AE) was proposed. The filter bank common spatial pattern (FBCSP) was used to extract features from multiple frequency bands, and the AE was exploited to obtain a dimensionality-reduced feature matrix. Finally, the positional encoding of the Transformer model was used to account for global signal features, and the multi-head self-attention mechanism was used to capture the internal correlations of the feature matrix. Compared with the traditional K-nearest neighbors (KNN) system based on linear discriminant analysis (LDA), the experimental results validate that the classification performance of the AE+Transformer model is better than that of the LDA+KNN system, showing that the improved algorithm is suitable for the binary classification of motor imagery.
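The sketch below illustrates one way the pipeline described in the abstract could be wired together, assuming PyTorch. The layer sizes, number of filter-bank bands, CSP feature dimensionality, the learnable positional encoding, and the mean-pooling classification head are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of an FBCSP -> AE -> Transformer binary classifier.
# Assumptions: PyTorch; layer widths, band count (9) and CSP dimension (4)
# are placeholders, not the configuration reported in the paper.
import torch
import torch.nn as nn


class AEEncoder(nn.Module):
    """Auto-encoder whose encoder compresses per-band FBCSP features."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(  # only needed for AE pre-training
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)


class AETransformerClassifier(nn.Module):
    """FBCSP features -> AE encoder -> Transformer encoder -> binary logits."""
    def __init__(self, n_bands=9, csp_dim=4, latent_dim=16, n_heads=4, n_layers=2):
        super().__init__()
        self.ae = AEEncoder(csp_dim, latent_dim)
        # Learnable positional encoding over the frequency-band "sequence"
        # (an assumption; the paper may use a fixed sinusoidal encoding).
        self.pos = nn.Parameter(torch.zeros(1, n_bands, latent_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(latent_dim, 2)  # e.g. left- vs right-hand imagery

    def forward(self, fbcsp_feats):
        # fbcsp_feats: (batch, n_bands, csp_dim), one CSP feature vector per band.
        z, _ = self.ae(fbcsp_feats)      # (batch, n_bands, latent_dim)
        z = z + self.pos                 # inject position/global-order information
        z = self.transformer(z)          # multi-head self-attention across bands
        return self.head(z.mean(dim=1))  # pool over bands, then classify


# Usage with random stand-in features (real input would come from FBCSP).
model = AETransformerClassifier()
logits = model(torch.randn(8, 9, 4))
print(logits.shape)  # torch.Size([8, 2])
```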

Key words: motor imagery, deep learning, auto-encoder, attention module, Transformer model

