Chinese Journal on Internet of Things ›› 2023, Vol. 7 ›› Issue (1): 118-128. doi: 10.11959/j.issn.2096-3750.2023.00310

• Theory and Technology •

Research on EEG signal classification of motor imagery based on AE and Transformer

Rui JIANG1, Liuting SUN1, Xiaoming WANG1, Dapeng LI1, Youyun XU1,2

  1. School of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
    2. National Engineering Research Center for Communication and Network Technology, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
  • Revised: 2022-11-06  Online: 2023-03-30  Published: 2023-03-01
  • About the authors: Rui JIANG (1985- ), male, Ph.D., associate professor at Nanjing University of Posts and Telecommunications. His research interests include radar signal processing, mobile communication systems, and EEG signal processing.
    Liuting SUN (1997- ), female, M.S. candidate at the School of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications. Her research interests include EEG signal processing.
    Xiaoming WANG (1986- ), male, Ph.D., associate professor at Nanjing University of Posts and Telecommunications. His research interests include coordinated allocation of communication resources and deep learning.
    Dapeng LI (1982- ), male, Ph.D., professor at Nanjing University of Posts and Telecommunications. His research interests include mobile communication systems.
    Youyun XU (1966- ), male, Ph.D., professor at Nanjing University of Posts and Telecommunications. His research interests include mobile communication systems and 5G communications.
  • Supported by:
    The National Natural Science Foundation of China (61971241); The National Natural Science Foundation of China (62071245)


Abstract:

Motor imagery based brain-computer interface systems have long attracted researchers worldwide. To address the problems that traditional motor imagery EEG recognition systems cannot accurately extract salient features and suffer from low classification accuracy, a new Transformer classification model based on auto-encoder (AE) dimensionality reduction was proposed. The method uses the filter bank common spatial pattern (FBCSP) to extract features over multiple frequency bands and employs an AE to obtain a dimensionality-reduced feature matrix. The positional encoding of the Transformer model then accounts for global signal features, and the multi-head self-attention mechanism captures the internal correlations of the feature matrix, improving classification performance. Compared with a traditional K-nearest neighbors (KNN) classification system based on linear discriminant analysis (LDA), experiments show that the AE+Transformer model outperforms the LDA+KNN system, indicating that the improved algorithm is suitable for binary classification of motor imagery.

Key words: motor imagery, deep learning, auto-encoder, attention module, Transformer model

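The paper itself gives no code; as a minimal NumPy sketch of the two Transformer ingredients the abstract highlights, sinusoidal positional encoding and multi-head self-attention over the AE-reduced feature matrix, the following is illustrative only. The sequence length (16), model width (32), head count (4), and the random projection matrices standing in for learned weights are all assumptions, not values from the paper.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: even columns sin, odd columns cos."""
    pos = np.arange(seq_len)[:, None].astype(float)   # (seq_len, 1)
    i = np.arange(d_model)[None, :]                   # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])
    pe[:, 1::2] = np.cos(angle[:, 1::2])
    return pe

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads, rng):
    """Scaled dot-product self-attention over a (seq_len, d_model) feature
    matrix; random projections stand in for the learned Q/K/V weights."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    heads = []
    for _ in range(num_heads):
        Wq = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        Wk = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        Wv = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = softmax(q @ k.T / np.sqrt(d_head))   # (seq_len, seq_len)
        heads.append(scores @ v)                      # (seq_len, d_head)
    return np.concatenate(heads, axis=-1)             # (seq_len, d_model)

# Hypothetical shapes: 16 feature vectors of dimension 32 after AE reduction.
rng = np.random.default_rng(0)
features = rng.standard_normal((16, 32))
x = features + positional_encoding(16, 32)            # inject position info
out = multi_head_self_attention(x, num_heads=4, rng=rng)
print(out.shape)
```

In a trained model the Q/K/V projections would be learned jointly with the AE and a classification head; this fragment only shows how positional encoding injects global position information and how each attention head mixes every row of the feature matrix with every other, i.e. the "internal correlation" the abstract refers to.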

