A CUSTOM-DESIGNED MENTAL TASK-BASED BRAIN–COMPUTER INTERFACE

Farhad Faradji (1), Rabab K. Ward (1), and Gary E. Birch (1, 2)

(1) Department of Electrical & Computer Engineering, University of British Columbia, Canada
(2) Neil Squire Society, Canada
{farhadf, rababw}@ece.ubc.ca, garyb@neilsquire.ca

ABSTRACT

At present, brain–computer interfaces cannot be used in real-life applications, mainly because of their high false activation rates. To achieve a zero false positive rate, we propose a mental task-based brain–computer interface that is custom designed for each subject and each task. The most discriminatory mental task is determined for each subject. We used the EEG signals of four subjects, recorded while they performed five different mental tasks. Autoregressive modeling and the stationary wavelet transform are used in the feature extraction process. Classification is based on quadratic discriminant analysis. For the most discriminatory mental task of each subject, we achieved a false positive rate of zero while the true positive rate obtained was above 60%.

Index Terms: Brain–computer interface, mental task, autoregressive model, stationary wavelet transform

1. INTRODUCTION

Brain–computer interfaces (BCIs) allow people with motor disabilities to interact with their environment. Two states are usually considered for BCIs: the intentional control (IC) state and the no control (NC) state. IC is the state in which the BCI is being controlled by the user, while NC is the state in which the BCI output is inactive. Based on these two states, two evaluation measures are defined: the true positive rate (TPR) and the false positive rate (FPR). TPR is the percentage of IC states that are correctly classified. FPR is the proportion of NC states that are erroneously classified as IC. Various neurological phenomena (specific features in brain signals that are time-locked to brain activities) have been used in different types of BCIs.
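These two rates can be computed directly from paired state labels. The following pure-Python sketch (function and label names are ours, purely for illustration) makes the definitions concrete:

```python
def tpr_fpr(actual, predicted):
    """Compute TPR and FPR (in %) from paired state labels.

    'IC' = intentional control, 'NC' = no control.  TPR is the share of
    IC states classified as IC; FPR is the share of NC states wrongly
    classified as IC."""
    ic = [p for a, p in zip(actual, predicted) if a == "IC"]
    nc = [p for a, p in zip(actual, predicted) if a == "NC"]
    tpr = 100.0 * sum(p == "IC" for p in ic) / len(ic)
    fpr = 100.0 * sum(p == "IC" for p in nc) / len(nc)
    return tpr, fpr

# 3 of 4 IC states detected, 1 of 4 NC states falsely activated:
actual    = ["IC", "IC", "IC", "IC", "NC", "NC", "NC", "NC"]
predicted = ["IC", "IC", "IC", "NC", "NC", "NC", "NC", "IC"]
# tpr_fpr(actual, predicted) -> (75.0, 25.0)
```

A zero-FPR system, the goal of this paper, is one for which the second number is 0 on unseen NC data.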
The mental task-based BCI is one of them. For a review of the field, see [1]-[2].

In this study, we propose a mental task-based BCI that is custom designed for each subject and each task. The EEG signals of five mental tasks collected by Keirn and Aunon [3] are used in our work, as they have been used in many other studies; some recent ones are [4]-[13]. Most of these studies reported the classification rate. Only a few papers reported the confusion matrix or the false positive rate [5]-[8], even though these are of great importance in real-life BCI applications. We consider false positive rates as well as true positive rates in our investigation. Our aim is to obtain a zero FPR, and we show that this goal is achieved by custom designing the BCI.

This paper is organized as follows. The design of our mental task-based BCI is described in Section 2. Section 3 presents and discusses the results. The conclusions and some suggestions for future research are given in Section 4.

2. METHODS

2.1. Data

The data used were collected by Keirn and Aunon [3]. The EEG signals of seven subjects were recorded during five different mental tasks: the baseline, computing a nontrivial multiplication, mentally composing a letter, mentally rotating a 3D object, and visualizing a sequence of numbers being written on a blackboard. The subjects were instructed not to gesture or vocalize in any way during the recordings. Each recording session, containing five trials of each mental task, was performed on a different day. The length of each trial is 10 seconds. Subject 5 completed three sessions, but we used only the EEG signals of his first two sessions. Subjects 2 and 7 completed only one session. Since the EEG signals of subject 4 contain some missing data, they were not used. We used the data of those subjects who completed 10 or more trials (subjects 1, 3, 5, and 6).
New numbers are assigned to the subjects used in this study. The numbers of completed trials for all subjects, together with their numbers in the original and the present studies, are shown in Table 1.

The EEG signals were recorded while the subjects were seated in a sound-controlled room with dim lighting, from six electrodes placed at C3, C4, P3, P4, O1, and O2 according to the International 10-20 System. Two electrically linked mastoids, A1 and A2, were the references. Fig. 1 shows the positions of the electrodes. During recording, the impedance of the electrodes was kept below 5 kΩ. A bank of amplifiers (Grass 7P511) with band-pass filters set at 0.1-100 Hz was connected to the electrodes. A Lab Master 12-bit A/D converter was used. To calibrate the system, a known voltage was applied before each session. The sampling frequency was 250 Hz. To detect ocular artifacts, two electrodes were placed at the corner of and below the left eye.

978-1-4244-2354-5/09/$25.00 ©2009 IEEE    ICASSP 2009

2.2. Feature Extraction

The EEG signals are first decomposed using the stationary wavelet transform. Then the autoregressive models of the decomposed signals are used as the features.

SWT 5-Level Decomposition

Wavelet analysis has proven effective for the time-frequency characterization of signals. The stationary wavelet transform (SWT), the shift-invariant type of wavelet transform, is especially suited to this goal [14]. The EEG signals of the different mental tasks are decomposed into 5 levels using the SWT. Each level of decomposition yields two components: the approximations component, which is the low-frequency, high-scale part of the signal, and the details component, which is the high-frequency, low-scale part. The approximations component of each level is decomposed iteratively. In this way, the original signal is broken down into many lower-resolution components.
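To make the shift-invariance concrete, here is a minimal pure-Python sketch of an undecimated ("à trous") Haar decomposition with periodic boundary handling. The function name is ours, and a real implementation would use the longer wavelet filters considered in this paper rather than Haar:

```python
import math

def swt_haar(x, levels):
    """Undecimated ("a trous") Haar decomposition with periodic
    extension.  At level j the filter taps are spaced 2**j samples
    apart and no downsampling is performed, so every component keeps
    the input length -- the property that makes the SWT shift-invariant."""
    n = len(x)
    s = 1.0 / math.sqrt(2.0)
    approx, details = list(x), []
    for j in range(levels):
        step = 2 ** j
        pairs = [(approx[i], approx[(i + step) % n]) for i in range(n)]
        details.append([(a - b) * s for a, b in pairs])   # high-pass band
        approx = [(a + b) * s for a, b in pairs]          # low-pass band
    return approx, details
```

Because no decimation takes place, circularly shifting the input simply shifts every output component by the same amount, unlike the ordinary (decimated) discrete wavelet transform.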
As the sampling frequency of the dataset is 250 Hz, the EEG signal is assumed to have a maximum frequency of 125 Hz. At the first level of decomposition, the details component covers the 62.5-125 Hz frequency band and the approximations component covers 0-62.5 Hz. For the second level, the frequency ranges of the details and approximations components are 31.25-62.5 Hz and 0-31.25 Hz, respectively, and so on. Thus, at the fifth level, the details component covers 3.91-7.81 Hz and the approximations component covers 0-3.91 Hz, as shown in Table 2.

Autoregressive Model

The autoregressive (AR) model of order Q for a one-dimensional signal y[n] is written as:

    y[n] = \sum_{m=1}^{Q} a_m y[n-m] + u[n]                        (1)

where the a_m are the AR coefficients and u[n] is the error, assumed to be a random process that is independent of previous values of the signal and has zero mean and finite variance. The goal is to estimate the values of the a_m from the finite samples of the signal y.

The EEG signals are not fully stationary. To mitigate this nonstationarity, we choose short, largely overlapping segments: each segment is about 1 second long and overlaps the adjacent segment by approximately 80%.

There is no straightforward way to determine the correct model order. If the selected order is too low, the whole signal cannot be captured and the remainder is treated as noise. If the selected order is too high, the whole signal is captured but some portion of the noise may also be included in the model. In this study, for each subject and each task, we varied the AR model order from 2 to 6. The best model order is selected based on the TPR and FPR values of the system.

Feature Vector Generation

As mentioned before, each trial is 10 seconds long and the sampling frequency is 250 Hz; hence each trial contains 2500 samples. Trials are broken into 45 segments of 256 samples each.
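The segmentation arithmetic can be checked directly (a sketch; `segment` is a hypothetical helper): a 256-sample window stepped by 256 − 206 = 50 samples over a 2500-sample trial yields 45 segments, each 256/250 ≈ 1.02 s long with 206/256 ≈ 80% overlap.

```python
def segment(trial, seg_len=256, overlap=206):
    """Break one trial into overlapping fixed-length segments."""
    step = seg_len - overlap                       # 50 samples
    return [trial[i:i + seg_len]
            for i in range(0, len(trial) - seg_len + 1, step)]

trial = list(range(2500))                          # one 10 s trial at 250 Hz
segs = segment(trial)
# -> 45 segments of 256 samples each
```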
Each segment overlaps the next by 206 samples. There are 10 trials for each mental task and each subject; therefore, there are 450 segments for each mental task of each subject. Each segment is decomposed into 5 levels by the SWT. Different wavelet families are used: Haar ('db1'); Daubechies ('db2', 'db3', 'db4', 'db5', 'db6', 'db7', 'db8', 'db9', 'db10'); Biorthogonal ('bior1.3', 'bior1.5', 'bior2.2', 'bior2.4', 'bior2.6', 'bior2.8', 'bior3.1', 'bior3.3', 'bior3.5', 'bior3.7', 'bior3.9', 'bior4.4', 'bior5.5', 'bior6.8'); Coiflets ('coif1', 'coif2', 'coif3', 'coif4', 'coif5'); and Symlets ('sym2', 'sym3', 'sym4', 'sym5', 'sym6', 'sym7', 'sym8'). We worked with the frequency ranges 0-3.91, 3.91-7.81, 7.81-15.63, 15.63-31.25, and 31.25-62.5 Hz. Because of the low-pass filtering applied to prevent aliasing, there is no significant signal component around 125 Hz; hence we did not use the 62.5-125 Hz range.

TABLE 1. The number of completed trials for each subject.

    Subject (original study)   Subject (this study)   Completed trials
    1                          1                      10
    2                          ---                    5
    3                          2                      10
    4                          ---                    10
    5                          3                      15
    6                          4                      10
    7                          ---                    5

Fig. 1. Electrode positions (C3, C4, P3, P4, O1, O2; references A1, A2) based on the International 10-20 System.

TABLE 2. Frequency ranges of the decomposition levels (Hz). The original signal is assumed to occupy the 0-125 Hz band.

    Level   Details        Approximations
    1       62.5-125       0-62.5
    2       31.25-62.5     0-31.25
    3       15.63-31.25    0-15.63
    4       7.81-15.63     0-7.81
    5       3.91-7.81      0-3.91

For each channel, the segments are decomposed using the SWT. For each of the 5 frequency ranges, the AR coefficients of the stationary wavelet coefficients are estimated using the Burg algorithm [15]. The AR coefficients are placed into a vector that forms the feature vector of that channel. The feature vectors of the 6 channels are concatenated to generate the final feature vector.
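The Burg estimator referenced above can be sketched in a few lines of pure Python (the function name is ours; a production system would use a library routine). It minimizes the combined forward and backward prediction error power and updates the coefficients with a Levinson-type recursion, using the sign convention of equation (1):

```python
def burg_ar(x, order):
    """Burg's method for the AR coefficients a_1..a_Q of
    y[n] = sum_m a_m y[n-m] + u[n]  (sign convention of eq. (1))."""
    n = len(x)
    f = list(x)                      # forward prediction errors
    b = list(x)                      # backward prediction errors
    a = []
    for m in range(1, order + 1):
        # reflection coefficient minimizing forward+backward error power
        num = 2.0 * sum(f[i] * b[i - 1] for i in range(m, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m, n))
        k = num / den if den else 0.0
        # update the error sequences (old values on the right-hand side)
        f_new = [f[i] - k * b[i - 1] for i in range(m, n)]
        b_new = [b[i - 1] - k * f[i] for i in range(m, n)]
        f[m:], b[m:] = f_new, b_new
        # Levinson-type update of the coefficient vector
        a = [a[j] - k * a[m - 2 - j] for j in range(m - 1)] + [k]
    return a

# burg_ar([1.0, 0.5, 0.25, 0.125], 1) -> approximately [0.8]
```

Note that on a short noiseless decay the Burg compromise between forward and backward prediction gives 2r/(1+r²) rather than the decay rate r itself; for stochastic AR data the estimate approaches the true coefficients.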
This feature vector is used for classifying the segments of the five mental tasks; i.e., for every mental task of every subject, the EEG signal is represented by a feature vector. Fig. 2 is a schematic presentation of the feature vector generation. For each subject and each mental task, the best wavelet type and the optimal AR order are selected via cross-validation.

2.3. Classification

Quadratic discriminant analysis (QDA) is used for classification. QDA assumes the classes have normal distributions. For a 2-class problem, the quadratic discriminant function is defined as:

    qdf(x) = -(1/2) x^T (\hat{\Sigma}_1^{-1} - \hat{\Sigma}_2^{-1}) x
             + (\hat{\mu}_1^T \hat{\Sigma}_1^{-1} - \hat{\mu}_2^T \hat{\Sigma}_2^{-1}) x
             - (1/2) (\hat{\mu}_1^T \hat{\Sigma}_1^{-1} \hat{\mu}_1 - \hat{\mu}_2^T \hat{\Sigma}_2^{-1} \hat{\mu}_2)
             - (1/2) ln(|\hat{\Sigma}_1| / |\hat{\Sigma}_2|)
             + ln((\pi_1 C_12) / (\pi_2 C_21))                     (2)

where x is the vector to be classified, \hat{\mu}_1 and \hat{\mu}_2 are the estimated mean vectors of classes 1 and 2, \hat{\Sigma}_1 and \hat{\Sigma}_2 are the estimated covariance matrices of the two classes, \pi_1 and \pi_2 are the prior probabilities of the classes, C_12 is the cost of misclassifying a member of class 1 as class 2, and C_21 is the cost of misclassifying a member of class 2 as class 1. The decision rule is:

    x \in \omega_1  if qdf(x) > 0
    x \in \omega_2  if qdf(x) < 0                                  (3)

where \omega_1 and \omega_2 represent the two classes. In this work, the costs C_12 and C_21 in equation (2) were set equal. The a priori probabilities of all classes were also assumed to be equal.

3. RESULTS AND DISCUSSION

The BCI is custom designed for each subject. BCIs customized for each subject have been shown to yield better results than general BCIs intended for all subjects [16]. In this work, for each subject and each mental task, the BCI may therefore use a different wavelet and a different AR order. For each subject, the 450 segments of each mental task are divided into training, validation, and test sets. The training set is used to train the classifiers. The validation set is used to select the best wavelet and AR model order.
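For intuition, the QDA discriminant of Section 2.3 reduces, for a scalar feature with equal priors and equal costs (as assumed in this paper), to the sketch below. The function names and the feature values are ours, purely for illustration:

```python
import math

def fit_gaussian(samples):
    """Estimate the class mean and variance (scalar-feature sketch)."""
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / len(samples)
    return mu, var

def qdf(x, mu1, var1, mu2, var2):
    """Scalar quadratic discriminant function.  With equal priors and
    equal misclassification costs the prior/cost term of eq. (2)
    vanishes.  Decide class 1 when qdf(x) > 0, class 2 otherwise."""
    return (-0.5 * ((x - mu1) ** 2 / var1 - (x - mu2) ** 2 / var2)
            - 0.5 * math.log(var1 / var2))

mu1, var1 = fit_gaussian([-0.5, 0.0, 0.5])   # class 1 centered near 0
mu2, var2 = fit_gaussian([4.5, 5.0, 5.5])    # class 2 centered near 5
# qdf(1.0, mu1, var1, mu2, var2) > 0  -> class 1
# qdf(4.0, mu1, var1, mu2, var2) < 0  -> class 2
```

In the paper the feature is a vector, so the variances become covariance matrices and the squared distances become Mahalanobis quadratic forms, as in equation (2).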
The test set, together with the selected wavelet and AR order, is used for testing the classifier. We use 5×5 cross-validation both to find the optimal wavelet and AR model order and to evaluate the performance of our BCI system.

Table 3 shows the cross-validation results. For each subject and each mental task, the best configuration of wavelet type and AR order is selected: the configuration that yields the highest TPR while the FPR is zero. The results of evaluating the system on the test set with the best configuration selected in validation are also presented in Table 3.

The testing results are robust and in line with the cross-validation results, with the exception of two cases: the baseline task of subject 3 and the multiplication task of subject 4. In these two cases, the FPR does not reach zero in testing. The Symlets family of wavelets is never selected as the best. The AR model order 4 is always selected, except in three cases: the baseline and letter-composing tasks of subject 1, and the baseline task of subject 2. The worst case in terms of TPR is the baseline task of subject 2, with a TPR mean and standard deviation of 35.11% and 7.10%, respectively, in the testing results. The best case is the baseline task of subject 1, with a TPR mean and standard deviation of 71.33% and 3.28%, respectively.

The most discriminatory task of subject 1 is the baseline, followed by letter composing. Since a BCI activated by the baseline task is not practical, the letter-composing task is selected as the most discriminatory task for this subject. The most discriminatory task of subject 2 is letter composing, followed by counting. For subject 3, it is the rotation task, followed by letter composing. For subject 4, it is the rotation task, followed by counting.
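The selection rule described above (zero validation FPR first, then maximal TPR) can be expressed compactly. The helper name and the scores below are illustrative, not the paper's actual numbers:

```python
def best_configuration(results):
    """Among (wavelet, AR order) candidates, keep those whose
    validation FPR is zero and return the one with the highest TPR.
    `results` maps (wavelet, order) -> (tpr, fpr); returns None when
    no candidate achieves a zero FPR."""
    zero_fpr = {cfg: tpr for cfg, (tpr, fpr) in results.items() if fpr == 0.0}
    return max(zero_fpr, key=zero_fpr.get) if zero_fpr else None

# illustrative validation scores, not values from the paper:
scores = {("bior3.3", 3): (69.4, 0.0),
          ("sym4", 4):    (74.0, 0.5),   # higher TPR, but nonzero FPR
          ("db2", 4):     (62.7, 0.0)}
# best_configuration(scores) -> ("bior3.3", 3)
```

Note how a configuration with a higher TPR is rejected because it fires in NC states; this is the design choice that drives the zero-FPR result.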
Different wavelets are selected for different subjects, and different tasks are found to be the most discriminatory. This is in accordance with the idea of customizing the BCI for each subject.

4. CONCLUSION

A custom-designed mental task-based BCI was proposed. Autoregressive modeling and the stationary wavelet transform were used in the feature extraction process. The AR model order and the wavelet type were determined for each subject and task. For the BCIs based on the most discriminatory tasks, the TPR values obtained were above 60% while the FPRs were zero. The performance of the system shows great promise. For future work, we plan to collect data during different mental tasks but in a self-paced paradigm. We also intend to implement our proposed BCI system in an online (real-time) experiment.

Fig. 2. Feature vector generation: (a) generating the feature vector for each channel (SWT 5-level decomposition of the channel's segments into the 0-3.91, 3.91-7.81, 7.81-15.63, 15.63-31.25, and 31.25-62.5 Hz bands, followed by AR coefficient estimation in each band); (b) generating the final feature vector by concatenating the feature vectors of channels 1 to 6.

5. REFERENCES

[1] J.R. Wolpaw, "Brain-computer interfaces (BCIs) for communication and control: a mini-review," Supplements to Clin. Neurophysiol., vol. 57, pp. 607-613, 2004.
[2] A. Bashashati, M. Fatourechi, R.K. Ward, and G.E. Birch, "A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals," J. Neural Eng., vol. 4, no. 2, pp. R35-R57, Jun. 2007.
[3] Z.A. Keirn and J.I. Aunon, "A new mode of communication between man and his surroundings," IEEE Trans. Biomed. Eng., vol. 37, no. 12, pp. 1209-1214, Dec. 1990.
[4] R. Palaniappan, "Utilizing gamma band to improve mental task based brain-computer interface design," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 14, no. 3, pp. 299-303, Sep. 2006.
[5] K. Nakayama and K. Inagaki, "A brain computer interface based on neural network with efficient pre-processing," Proc. Int. Symp. Intelligent Signal Processing and Communication Systems (ISPACS), pp. 673-676, Dec. 2006.
[6] C.W. Anderson, J.N. Knight, T. O'Connor, M.J. Kirby, and A. Sokolov, "Geometric subspace methods and time-delay embedding for EEG artifact removal and classification," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 14, no. 2, pp. 142-146, 2006.
[7] D.-M. Dobrea and M.-C. Dobrea, "An EEG (bio) technological system for assisting the disabled people," Proc. 5th IEEE Int. Conf. on Comput. Cyber., pp. 191-196, Oct. 2007.
[8] D.-M. Dobrea, M.-C. Dobrea, and M. Costin, "An EEG coherence based method used for mental tasks classification," Proc. 5th IEEE Int. Conf. on Comput. Cyber., pp. 185-190, Oct. 2007.
[9] K. Nakayama, Y. Kaneda, and A. Hirano, "A brain computer interface based on FFT and multilayer neural network - feature extraction and generalization," Proc. Int. Symp. Intelligent Signal Processing and Communication Systems (ISPACS), pp. 826-829, Nov. 2007.
[10] L. Zhiwei and S. Minfen, "Classification of mental task EEG signals using wavelet packet entropy and SVM," Proc. 8th Int. Conf. Electronic Measurement and Instruments (ICEMI), pp. 3-906-3-909, Aug. 2007.
[11] B.T. Skinner, H.T. Nguyen, and D.K. Liu, "Classification of EEG signals using a genetic-based machine learning classifier," Proc. 29th Int. Conf. of the IEEE Engineering in Medicine and Biology Society, pp. 3120-3123, Aug. 2007.
[12] F. Abdollahi, S.K. Setarehdan, and A.M. Nasrabadi, "Locating information maximization time in EEG signals recorded during mental tasks," Proc. 5th Int. Symp. on Image and Signal Processing and Analysis (ISPA), pp. 238-241, Sep. 2007.
[13] C.R. Hema, M.P. Paulraj, R. Nagarajan, S. Yaacob, and A.H. Adom, "Fuzzy based classification of EEG mental tasks for a brain machine interface," Proc. 3rd Int. Conf. on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), vol. 1, pp. 53-56, Nov. 2007.
[14] G.P. Nason and B.W. Silverman, "The stationary wavelet transform and some statistical applications," Wavelets and Statistics, vol. 103, pp. 281-299, 1995.
[15] S.L. Marple, Jr., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, 1987.
[16] A. Bashashati, M. Fatourechi, R.K. Ward, and G.E. Birch, "User customization of the feature generator of an asynchronous brain interface," Ann. Biomed. Eng., vol. 34, no. 6, pp. 1051-1060, Jun. 2006.

TABLE 3. Cross-validation (CV) and testing results for the different subjects and tasks: the selected wavelet and AR order, with the TPR and FPR given as mean/SD in %. The task selected as the most discriminatory for each subject (see Section 3) is marked with *.

Subject 1:
    Task               Wavelet   AR  CV TPR       CV FPR      Test TPR     Test FPR
    Baseline           bior3.3   3   69.39/0.98   0.00/0.00   71.33/3.28   0.00/0.00
    Multiplication     bior1.3   4   61.67/2.72   0.00/0.00   63.56/6.16   0.00/0.00
    Letter composing*  db2       3   62.67/1.91   0.00/0.00   64.22/5.74   0.00/0.00
    Rotation           db2       4   54.39/1.80   0.00/0.00   54.67/4.93   0.00/0.00
    Counting           bior2.2   4   52.44/0.69   0.00/0.00   54.22/3.72   0.00/0.00

Subject 2:
    Task               Wavelet   AR  CV TPR       CV FPR      Test TPR     Test FPR
    Baseline           db1       6   32.39/1.17   0.00/0.00   35.11/7.10   0.00/0.00
    Multiplication     bior3.1   4   51.89/1.45   0.00/0.00   50.89/2.53   0.00/0.00
    Letter composing*  bior3.1   4   60.67/1.87   0.00/0.00   63.11/4.61   0.00/0.00
    Rotation           bior3.1   4   49.94/0.96   0.00/0.00   49.78/5.06   0.00/0.00
    Counting           bior3.1   4   53.83/2.43   0.00/0.00   55.56/4.30   0.00/0.00

Subject 3:
    Task               Wavelet   AR  CV TPR       CV FPR      Test TPR     Test FPR
    Baseline           db3       4   50.11/2.98   0.00/0.00   49.56/6.12   0.06/0.12
    Multiplication     bior2.2   4   53.67/2.52   0.00/0.00   53.78/6.17   0.00/0.00
    Letter composing   bior2.2   4   57.78/2.30   0.00/0.00   61.11/3.77   0.00/0.00
    Rotation*          coif1     4   63.22/2.55   0.00/0.00   63.78/2.68   0.00/0.00
    Counting           db2       4   53.67/1.84   0.00/0.00   54.44/4.91   0.00/0.00

Subject 4:
    Task               Wavelet   AR  CV TPR       CV FPR      Test TPR     Test FPR
    Baseline           bior3.5   4   53.06/3.55   0.00/0.00   55.78/4.80   0.00/0.00
    Multiplication     coif1     4   67.67/3.40   0.00/0.00   68.00/3.96   0.06/0.12
    Letter composing   db1       4   54.00/1.73   0.00/0.00   56.22/3.57   0.00/0.00
    Rotation*          bior3.1   4   66.22/1.70   0.00/0.00   64.89/4.88   0.00/0.00
    Counting           bior3.1   4   55.56/1.55   0.00/0.00   58.22/3.30   0.00/0.00