Learning attentional temporal cues of brainwaves with spatial embedding for motion intent detection

File version
Author(s)
Zhang, D
Chen, K
Jian, D
Yao, L
Wang, S
Li, P
Griffith University Author(s)
Primary Supervisor
Other Supervisors
Editor(s)
Date
2019
Size
File type(s)
Location

Beijing, China

License
Abstract

As brain dynamics fluctuate considerably across subjects, it is challenging to design effective handcrafted features based on prior knowledge. To address this gap, this paper proposes a Graph-based Convolutional Recurrent Attention Model (G-CRAM) to explore EEG features across different subjects for movement intention recognition. A graph structure is first developed to embed the positioning information of EEG nodes, and then a convolutional recurrent attention model learns EEG features from both spatial and temporal dimensions and adaptively emphasizes the most distinguishable temporal periods. The proposed approach is validated on two public movement intention EEG datasets. The results show that G-CRAM achieves superior performance to state-of-the-art methods in terms of recognition accuracy and ROC-AUC. Furthermore, model interpretation studies reveal the learning process of different neural network components and demonstrate that the proposed model can extract detailed features efficiently.
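The temporal attention idea described in the abstract (scoring each time step of the recurrent features and softly weighting the most distinguishable periods) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the scoring vector `w`, the dimensions, and the single-vector scoring scheme are all simplifying assumptions.

```python
import numpy as np

def temporal_attention(hidden_states, w):
    """Soft attention over time steps (illustrative sketch).

    hidden_states: (T, d) array of recurrent features, one row per time step.
    w: (d,) scoring vector (assumed here; a real model would learn it).
    Returns the attention-weighted summary vector and the weights.
    """
    scores = hidden_states @ w                       # (T,) one score per time step
    scores = scores - scores.max()                   # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()    # softmax: weights sum to 1
    context = alpha @ hidden_states                  # (d,) weighted sum over time
    return context, alpha

# Toy example: 5 time steps of 4-dimensional recurrent features
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 4))
w = rng.standard_normal(4)
context, alpha = temporal_attention(H, w)
```

The weights `alpha` indicate which temporal periods the model emphasizes, which is the kind of quantity the paper's interpretation studies inspect.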

Journal Title
Conference Title

Proceedings - IEEE International Conference on Data Mining, ICDM

Book Title
Edition
Volume

2019-November

Issue
Thesis Type
Degree Program
School
Publisher link
Patent number
Funder(s)
Grant identifier(s)
Rights Statement
Item Access Status
Note
Access the data
Related item(s)
Subject

Artificial intelligence

Persistent link to this record
Citation

Zhang, D; Chen, K; Jian, D; Yao, L; Wang, S; Li, P, Learning attentional temporal cues of brainwaves with spatial embedding for motion intent detection, Proceedings - IEEE International Conference on Data Mining, ICDM, 2019, 2019-November, pp. 1450-1455