Reconstructing ERP Signals Using Generative Adversarial Networks for Mobile Brain-Machine Interface

Young-Eun Lee, Dept. of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
Minji Lee, Dept. of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
Seong-Whan Lee, Dept. of Artificial Intelligence, Korea University, Seoul, Republic of Korea

Abstract—Practical brain-machine interfaces have been widely studied to accurately detect human intentions from brain signals in the real world. However, electroencephalography (EEG) signals are distorted by artifacts such as walking and head movement, and these artifacts can be larger in amplitude than the desired EEG signals. Because of such artifacts, accurately detecting human intention in a mobile environment is challenging. In this paper, we propose a reconstruction framework based on generative adversarial networks for event-related potentials (ERP) recorded during walking. We used a pre-trained convolutional encoder to represent latent variables and reconstructed the ERP through a generative model whose shape mirrors the encoder in reverse. Finally, the ERP was classified using a discriminative model to demonstrate the validity of the proposed framework. As a result, the reconstructed signals contained important components such as the N200 and P300, similar to the ERP during standing. The classification accuracy of the reconstructed EEG was similar to that of the raw noisy EEG signals recorded during walking. The signal-to-noise ratio of the reconstructed EEG was significantly increased, to 1.3. The loss of the generative model was 0.6301, which is comparatively low and indicates that the generative model was trained well. Consequently, the reconstructed ERP showed an improvement in classification performance during walking owing to the noise-reduction effect. The proposed framework could help recognize human intention with a brain-machine interface even in a mobile environment.

Index Terms—brain-machine interface, ambulatory BMI, mobile BMI, generative adversarial networks, event-related potentials

This work was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (No. 2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing User's Intentions using Deep Learning; No. 2015-0-00185, Development of Intelligent Pattern Recognition Softwares for Ambulatory Brain Computer Interface).

I. INTRODUCTION

Brain-machine interfaces (BMIs) are technical systems that enable impaired people to communicate with and control machines or robots by decoding human intention from brain signals [1]–[4]. Many state-of-the-art BMI systems increase the performance of identifying user intention under laboratory conditions [5], [6]. In particular, BMIs under ambulatory conditions are an important issue for practical BMIs that recognize human intention in the real world [7]–[9]. However, movement artifacts make it difficult to detect user intention because they affect electroencephalography (EEG) signals with large magnitudes. These artifacts can arise from head movement, electromyography and other muscle activity, the skin, and cable movement [10]. Several studies on BMIs in ambulatory environments have applied artifact removal methods in the pre-processing phase [11], [12] or advanced methodology in the feature extraction or classification phase to better capture user intention [13], [14]. Such processes for reducing the effects of artifacts are essential for practical BMIs.

Generative models learn a data distribution through a decoding process and are commonly applied to audio, images, and video. Recently, a family of deep-neural-network generative models for representation and reconstruction was introduced: generative adversarial networks (GANs) [15] and their many advanced variants. GANs are machine learning frameworks consisting of two neural networks that contest each other in a zero-sum game; as the two trainable models compete, they are trained to the point where the discriminator cannot tell whether the data are generated or real. Deep convolutional GANs (DCGANs) [16] are an advanced GAN model whose convolutional layers make training more stable than in standard GANs. Auxiliary classifier GANs (ACGANs) [17] are another improved version of GANs that learn the class information of the data at the same time to improve the generated data. Various advanced versions of GANs are used for different purposes, such as data augmentation and image style transfer.
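To make the zero-sum training dynamic concrete, the following is a minimal sketch of one adversarial update step in PyTorch. It is a generic illustration rather than the authors' implementation; the `generator`, `discriminator`, optimizers, and latent dimension are assumed placeholders.

```python
# Minimal sketch of one adversarial (zero-sum) training step, assuming
# generic placeholder networks rather than the architecture used in the paper.
import torch
import torch.nn as nn

def gan_training_step(generator, discriminator, real_batch, opt_g, opt_d,
                      latent_dim=100):
    device = real_batch.device
    n = real_batch.size(0)
    bce = nn.BCEWithLogitsLoss()
    real_labels = torch.ones(n, 1, device=device)
    fake_labels = torch.zeros(n, 1, device=device)

    # Discriminator step: learn to separate real samples from generated ones.
    z = torch.randn(n, latent_dim, device=device)
    fake_batch = generator(z).detach()          # do not backprop into G here
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce samples the discriminator labels as real.
    z = torch.randn(n, latent_dim, device=device)
    g_loss = bce(discriminator(generator(z)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    return d_loss.item(), g_loss.item()
```

In an ACGAN-style setup, the generator would additionally receive a class label and the discriminator would add an auxiliary classification loss, which corresponds to the use of class information to improve the generated data mentioned above.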
Recently, many studies have reported improved classification performance by using deep neural networks on EEG data [18], [19]. However, researchers have struggled to apply traditional deep neural networks because EEG signals have different characteristics from the typical inputs of such networks: EEG is dynamic time-series data, and the amplitude of the artifacts is higher than that of the sources containing the human intention. Thus, there have been several attempts to fit EEG signals into deep neural networks. Schirrmeister et al. [20] introduced deep ConvNets, an EEG-tailored convolutional neural network (CNN), and compared it with traditional classifiers for motor imagery, achieving much higher performance. EEGNet [21] was developed for EEG signals in BMI paradigms including the P300 visual-evoked potential (VEP) paradigm. Moreover, a few recent papers have used GANs on EEG data to generate new EEG data. Hartmann et al. [22] generated EEG signals of hand movement using GANs with different architectures, showing that the generated signals matched the real ones well in both the time series and the frequency spectra. In addition, GANs have been trained to classify and generate EEG data for driving fatigue [23]. To date, most studies applying GANs to EEG data do so for data augmentation to improve classification performance.

GANs have also been used for noise reduction in a few studies. Wolterink et al. [24] reduced noise in computed tomography (CT) data using GANs with convolutional neural networks that minimize a voxelwise loss, producing more accurate CT images. Another study [25] reduced noise in CT images using models inspired by cycle-GANs [26] and PatchGAN [27]. These studies demonstrate that noise reduction with GANs applies not only to ordinary image and audio data but also to brain-related data such as brain imaging.

In this paper, we propose a framework for reconstructing the event-related potential (ERP) from noisy EEG signals recorded during walking. To reconstruct the EEG signals, we utilized a generative model framework inspired by EEGNet [21], DCGANs [16], and ACGANs [17], and then classified the ERP signals using a convolutional discriminative model. To produce the latent variables for the generative model, we used a pre-trained model consisting of convolutional neural networks that encodes the noisy EEG signals in the ambulatory environment. We hypothesized that the reconstructed EEG would contain ERP components but not artifacts. We performed subject-dependent and subject-independent training. We evaluated the reconstructed ERP through visual inspection, ERP classification performance, and the loss of the generative model. This work could serve both as a noise reduction method and as a method for extracting user intention.
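To illustrate the pipeline described above (a pre-trained convolutional encoder producing latent variables, a generative model shaped roughly as the encoder in reverse, and a convolutional discriminative model for ERP classification), the sketch below shows one plausible arrangement in PyTorch. The layer sizes, kernel shapes, and the epoch dimensions are assumptions for illustration; they are not the architecture reported in the paper.

```python
# Rough sketch of the encoder/generator pairing, with assumed dimensions:
# 32 EEG channels, 200 time samples per epoch, 64 latent variables.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CH, N_T, LATENT = 32, 200, 64

class ConvEncoder(nn.Module):
    """Pre-trained encoder: noisy walking-EEG epoch -> latent variables."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),  # temporal filter
            nn.Conv2d(16, 32, kernel_size=(N_CH, 1)),                # spatial filter
            nn.BatchNorm2d(32), nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 8)),
        )
        self.fc = nn.Linear(32 * 8, LATENT)

    def forward(self, x):                      # x: (batch, 1, N_CH, N_T)
        return self.fc(self.features(x).flatten(1))

class Generator(nn.Module):
    """Generative model mirroring the encoder: latent -> reconstructed ERP."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT, 32 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=(N_CH, 1)),       # restore channel axis
            nn.ELU(),
            nn.ConvTranspose2d(16, 1, kernel_size=(1, 25), padding=(0, 12)),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 32, 1, 8)
        h = F.interpolate(h, size=(1, N_T))    # restore the time axis
        return self.deconv(h)                  # (batch, 1, N_CH, N_T)

# Usage: reconstruct a clean-like ERP epoch from a noisy walking epoch.
# noisy = torch.randn(8, 1, N_CH, N_T)
# clean_like = Generator()(ConvEncoder()(noisy))
```

A convolutional discriminative model, for example an EEGNet-style classifier, would then consume the reconstructed epochs, both as the adversary during training and for target versus non-target classification.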
II. MATERIALS AND METHODS

A. Experimental Setup

1) Subjects: Eighteen healthy young subjects (four females, age 24.5 ± 3.1 years) participated in this experiment. None of the subjects had a history of neurological, psychiatric, or any other pertinent disease that might otherwise have affected the experimental results. All subjects gave their written informed consent before the experiments. All experiments were carried out in accordance with the Declaration of Helsinki. This study was reviewed and approved by the Korea University Institutional Review Board (KUIRB-2019-0194-01).

The subjects were positioned on a treadmill 80 (±5) cm in front of a 24-inch LCD monitor (refresh rate: 60 Hz, resolution: 1920 × 1080) and either stood or walked at 1.6 m/s during the BMI paradigms (Fig. 1(a)).

Fig. 1. Experimental design. (a) Experimental setup: all subjects were asked to stand or walk on the treadmill in front of a display. (b) Experimental paradigm: an ERP paradigm including 'target' and 'non-target' stimuli, with each stimulus presented for 0.5 s followed by a 0.5–1.5 s rest within each trial.

2) Data acquisition: We used a wireless interface (MOVE system, Brain Products GmbH) with Ag/AgCl electrodes to acquire EEG signals from the scalp and the Smarting system (mBrainTrain LLC) to record the EEG signals. The cap electrodes were placed according to the international 10-20 system at 32 channel locations: Fp1, Fp2, AFz, F7, F3, Fz, F4, F8, FC5, FC1, FC2, FC6, C3, Cz, C4, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, PO7, PO3, POz, PO4, PO8, O1, Oz, and O2. The impedance was maintained below 10 kΩ. We set the
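Given the 32-channel recording and the stimulus timing in Fig. 1(b), extracting target and non-target epochs for ERP analysis might look like the MNE-Python sketch below. The file name, trigger codes, filter band, and epoch window are assumptions for illustration and are not taken from the paper.

```python
# Hedged sketch of epoch extraction; file name, trigger codes, filter band,
# and epoch window are assumed, not taken from the paper.
import mne

raw = mne.io.read_raw_brainvision("walking_session.vhdr", preload=True)  # hypothetical file
raw.set_montage("standard_1020")             # 32 channels of the 10-20 system
raw.filter(l_freq=0.5, h_freq=40.0)          # assumed band-pass range

events, _ = mne.events_from_annotations(raw)
event_id = {"target": 1, "nontarget": 2}     # hypothetical trigger codes

# Epoch around each 0.5 s stimulus onset; window chosen to cover N200/P300.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)
target_erp = epochs["target"].average()      # averaged ERP for visual inspection
```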
