Reconstructing ERP Signals Using Generative Adversarial Networks for Mobile Brain-Machine Interface

Young-Eun Lee, Dept. of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea ([email protected])
Minji Lee, Dept. of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea ([email protected])
Seong-Whan Lee, Dept. of Artificial Intelligence, Korea University, Seoul, Republic of Korea ([email protected])

Abstract—Practical brain-machine interfaces (BMIs) have been widely studied to accurately detect human intentions using brain signals in the real world. However, electroencephalography (EEG) signals are distorted by artifacts such as walking and head movement, so the artifacts may be larger in amplitude than the desired EEG signals. Due to these artifacts, accurately detecting human intention in the mobile environment is challenging. In this paper, we propose a reconstruction framework based on generative adversarial networks (GANs) using event-related potentials (ERP) recorded during walking. We used a pre-trained convolutional encoder to represent latent variables and reconstructed the ERP through a generative model whose structure is similar to the inverse of the encoder. Finally, the ERP was classified using the discriminative model to demonstrate the validity of our proposed framework. As a result, the reconstructed signals contained important components such as the N200 and were similar to the ERP during standing. The accuracy of the reconstructed EEG was similar to that of the raw noisy EEG signals during walking. The signal-to-noise ratio of the reconstructed EEG was significantly increased, by about 1.3. The loss of the generative model was 0.6301, which is comparatively low, indicating that the generative model was trained with high performance. The reconstructed ERP consequently showed an improvement in classification performance during walking through the effect of noise reduction. The proposed framework could help recognize human intention based on brain-machine interfaces even in the mobile environment.

Index Terms—brain-machine interface, ambulatory BMI, mobile BMI, generative adversarial networks, event-related potentials

arXiv:2005.08430v1 [eess.SP] 18 May 2020

© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This work was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (No. 2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing Users Intentions using Deep Learning; No. 2015-0-00185, Development of Intelligent Pattern Recognition Softwares for Ambulatory Brain Computer Interface).

I. INTRODUCTION

Brain-machine interfaces (BMIs) are technical systems that enable impaired people to communicate with and control machines or robots by decoding human intention from brain signals [1]–[4]. Many state-of-the-art BMI systems increase the performance of identifying user intention under laboratory conditions [5], [6]. In particular, BMIs under ambulatory conditions are an important issue for practical BMIs that recognize human intention in the real world [7]–[9]. However, movement artifacts make it difficult to detect user intention because they affect electroencephalography (EEG) signals with large magnitudes. These artifacts can arise from head movement, electromyographic (muscle) activity, skin, and cable movement [10]. Several studies on BMIs in the ambulatory environment have applied artifact removal methods in the pre-processing phase [11], [12] or advanced methodologies in the feature extraction or classification phase to better decode user intention [13], [14]. These processes to reduce the effects of artifacts are essential for practical BMIs.

Generative models produce a data distribution through a decoding process and are commonly applied to audio, images, or videos. Two trainable models contest each other and learn until it can no longer be determined whether data are generated or real. Recently, novel generative models using deep neural networks for representation and reconstruction have been introduced, such as generative adversarial networks (GANs) [15] and their many advanced versions. GANs are machine learning frameworks consisting of two neural networks that contest each other in a zero-sum game. Deep convolutional GANs (DCGANs) [16] are an advanced GAN model that trains convolutional layers more stably than normal GANs. Auxiliary classifier GANs (ACGANs) [17] are another improved version of GANs, which additionally train on the class information of the data to improve the generated data. Various advanced versions of GANs are used depending on the purpose, such as data augmentation or image style transfer.

Recently, many studies have reported improved classification performance by using deep neural networks on EEG data [18], [19]. However, researchers have struggled to use traditional deep neural networks, since EEG signals have different characteristics from the typical inputs of deep neural networks.


Fig. 1. Experimental design. (a) Experimental setup. All subjects are asked to stand or walk on the treadmill in front of a display. (b) Experimental paradigm. An ERP paradigm including 'target' and 'non-target' stimuli is used in this experiment.

EEG consists of dynamic time-series data, and the amplitude of artifacts is higher than that of the sources containing human intention. Thus, there have been several attempts to fit EEG signals into deep neural networks. Schirrmeister et al. [20] introduced deep ConvNets, EEG-fitted convolutional neural networks (CNNs), and compared them with traditional classifiers for motor imagery, achieving much higher performance. EEGNet [21] was developed for EEG signals during BMI paradigms, including the P300 visual-evoked potential (VEP) paradigm. Moreover, a few papers have recently used GANs on EEG data to generate further EEG data. Hartmann et al. [22] generated EEG signals of hand movement using GANs with different architectures, showing that the signals were generated well in both the time series and the frequency spectra. In addition, GANs have been trained to classify and generate EEG data for driving fatigue [23]. To date, most studies applying GANs to EEG data have used them for data augmentation to improve classification performance.

GANs have been used for noise reduction in a few studies. Wolterink et al. [24] reduced noise in computed tomography (CT) data using GANs with convolutional neural networks to minimize voxelwise loss, producing more accurate CT images. Another study [25] reduced coherent noise in tomography images using models inspired by cycle-GANs [26] and PatchGAN [27]. These studies demonstrated that noise reduction using GANs applies not only to ordinary image and audio data but also to brain-related data such as brain imaging.

In this paper, we propose a reconstruction framework for event-related potentials (ERP) from noisy EEG signals during walking. To reconstruct the EEG signals, we utilized a generative model framework inspired by EEGNet [21], DCGANs [16], and ACGANs [17], and then classified the ERP signals using convolutional discriminative models. To make the latent variables for the generative model, we used a pre-trained model consisting of convolutional neural networks, encoding noisy EEG signals in the ambulatory environment. We hypothesized that the reconstructed EEG would contain ERP components but not artifacts. We performed subject-dependent and subject-independent training sessions. We evaluated the reconstructed ERP with visual inspection, ERP classification performance, and the loss of the generative model. This work could serve as a noise reduction method and as a method for extracting user intention.

II. MATERIALS AND METHODS

A. Experimental Setup

1) Subjects: Eighteen healthy young subjects (four females, age 24.5 ± 3.1 years) were included in this experiment. None of the subjects had a history of neurological, psychiatric, or any other pertinent disease that might otherwise have affected the experimental results. All subjects gave their written informed consent before the experiments. All experiments were carried out in accordance with the Declaration of Helsinki. This study was reviewed and approved by the Korea University Institutional Review Board (KUIRB-2019-0194-01).

The subjects were on the treadmill at 80 (±5) cm in front of a 24-inch LCD monitor (refresh rate: 60 Hz, resolution: 1920 × 1080) and stood or walked at 1.6 m/s during the BMI paradigms (Fig. 1-(a)).

2) Data acquisition: We used a wireless interface (MOVE system, Brain Products GmbH) with Ag/AgCl electrodes to acquire EEG signals from the scalp, and the Smarting system (mBrainTrain LLC) to record the EEG signals. The cap electrodes were placed according to the international 10-20 system at 32 channel locations: Fp1, Fp2, AFz, F7, F3, Fz, F4, F8, FC5, FC1, FC2, FC6, C3, Cz, C4, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, PO7, PO3, POz, PO4, PO8, O1, Oz, and O2. The impedance was maintained below 10 kΩ. We set the sampling rate at 500 Hz.


Fig. 2. Proposed GAN architectures for ERP reconstruction with the input of EEG signals. C indicates the number of channels and T indicates the time samples in a segment. The input of EEG signals during walking goes through the pre-trained model and latent variables are produced. Latent variables are the input of the generative model G to reconstruct ERP signals. Discriminative model D distinguishes whether the input signals are fake (reconstructed signals) or real (EEG signals during standing) and the classes.
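The data flow of Fig. 2 can be mimicked with stand-in networks to check how shapes move through the framework. The random linear maps below are placeholders for the trained encoder, generator G, and discriminator D (all names and the latent dimension are our assumptions), so only the input and output shapes are meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)
C, T, LATENT, N_CLASSES = 32, 500, 64, 2  # channels, time samples, latent dim, target/non-target

# Stand-ins for the three trained networks (random linear maps, illustration only)
W_enc = rng.normal(size=(C * T, LATENT))         # pre-trained encoder
W_gen = rng.normal(size=(LATENT, C * T))         # generator G
W_dis = rng.normal(size=(C * T, 1 + N_CLASSES))  # discriminator D: validity + class logits

def encoder(x):    # noisy walking EEG segment -> latent variables
    return x.reshape(-1) @ W_enc

def generator(z):  # latent variables -> reconstructed ("fake") ERP segment
    return (z @ W_gen).reshape(C, T)

def discriminator(x):  # segment -> (validity score, class logits)
    out = x.reshape(-1) @ W_dis
    return out[0], out[1:]

walking = rng.normal(size=(C, T))    # one noisy segment recorded while walking
fake = generator(encoder(walking))   # reconstruction has the same C x T shape
validity, class_logits = discriminator(fake)
print(fake.shape, class_logits.shape)  # (32, 500) (2,)
```

The point of the sketch is the round trip: the generator's output has exactly the C × T shape of the input segment, so the discriminator can receive reconstructed and standing segments interchangeably.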

3) Paradigm: We acquired ERP data based on the OpenBMI (http://openbmi.org) [28], BBCI (http://github.com/bbci/bbci_public) [29], and Psychophysics (http://psychtoolbox.org) [30] toolboxes in Matlab (The Mathworks, Natick, MA).

ERP is an electrical potential induced in the central and parietal cortex in response to particular cognitive tasks [31]. Attention to a target induces ERP components such as the N200 and P300, which are task-relevant negative peaks around 200 ms and positive peaks around 300 ms after a target stimulus. In this experiment, the paradigm was executed with target ('OOO') and non-target ('XXX') characters. The ratio of targets was 0.2, and the total number of trials was 300. In a trial, a stimulus was presented for 0.5 s, followed by a cross for rest for a random 0.5–1.5 s (Fig. 1-(b)).

TABLE I
GENERATIVE ARCHITECTURE LAYER AND OUTPUT SHAPE

Generator                            Discriminator
Layers                 Output        Layers                 Output
Dense                  T/2           Conv2D                 C×T×8
Reshape                2×T/16×4      ReLU                   C×T×8
Batch normalization    2×T/16×4      Dropout                C×T×8
Up sampling            4×T/4×4       Permute                8×T×C
Zero padding           5×T/4×4       Conv2D                 8×T×8
Conv2D                 5×T/4×8       ReLU                   8×T×8
ReLU                   5×T/4×8       Max pooling            4×T/4×8
Batch normalization    5×T/4×8       Dropout                4×T/4×8
Up sampling            10×T×8        Batch normalization    4×T/4×8
Conv2D                 10×T×C        Conv2D                 4×T/4×4
ReLU                   10×T×C        ReLU                   4×T/4×4
Batch normalization    10×T×C        Max pooling            2×T/16×4
Up sampling            20×T×C        Dropout                2×T/16×4
Permute                C×T×20        Batch normalization    2×T/16×4
Conv2D                 C×T×1         Flatten                T/2
ReLU                   C×T×1

B. Generative Adversarial Networks

GANs are a generative model consisting of two adversarial models: a generative model and a discriminative model. The generative model produces data that are the input of the discriminative model, which distinguishes whether the data are reconstructed or standing data, as well as their class. The generative model is trained to map latent variables to the target data distribution. While the latent variables are normally random noise, we instead used feature vectors obtained by passing every single noisy EEG signal during walking through the pre-trained model with CNN architectures. The discriminative model learned to distinguish whether its input was a standing EEG signal or a reconstructed fake signal (validity) and to classify the class. The contest pits well-made fake signals against a precise discriminator: the purpose of the generative model is to increase the validity loss of the discriminative model, which in turn tries to reduce its loss.

Fig. 2 shows the framework of the GANs with EEG signals as input to detect ERP signals. We used the pre-trained CNN model as the first encoder from the input. The generator G reconstructs ERP signals from the input of noisy EEG signals. The discriminator D distinguishes the reconstructed data from the data during standing (real or fake) as well as the classes.

1) Discriminative model: The discriminative model was inspired by EEGNet [21], fitted to EEG data and consisting of convolutional layers, leaky rectified linear units (ReLU), dropout, max pooling, and batch normalization. The input of the discriminative model is clean EEG signals and reconstructed EEG signals; the output is whether the input is real or fake, and its class.

2) Generative model: The generative model is roughly the inverse of the structure of the discriminative model, consisting of dense, convolutional, batch normalization, up-sampling, and zero-padding layers (Table I). The latent variable was produced by the convolutional pre-trained model from each noisy EEG sample. The input of the generative model is the latent variables; the output is the reconstructed signals, which have the same shape as the noisy EEG samples.
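The output-shape column of Table I can be checked with simple bookkeeping. In the sketch below, the pooling factors (2 along the first axis and 4 along time) are inferred from the printed shapes, so this is our reading of the table rather than the authors' implementation; it confirms that the discriminator's flattened output has T/2 elements, matching the generator's T/2-unit dense input.

```python
# Shape walk-through of the discriminator column of Table I (pure bookkeeping).
def discriminator_shapes(C, T):
    shapes = []
    s = (C, T, 8)         # Conv2D + ReLU + dropout
    shapes.append(("Conv2D/ReLU/dropout", s))
    s = (8, T, C)         # permute
    shapes.append(("permute", s))
    s = (8, T, 8)         # Conv2D + ReLU
    shapes.append(("Conv2D/ReLU", s))
    s = (4, T // 4, 8)    # max pooling + dropout + batch norm
    shapes.append(("max pool/dropout/BN", s))
    s = (4, T // 4, 4)    # Conv2D + ReLU
    shapes.append(("Conv2D/ReLU", s))
    s = (2, T // 16, 4)   # max pooling + dropout + batch norm
    shapes.append(("max pool/dropout/BN", s))
    flat = s[0] * s[1] * s[2]
    shapes.append(("flatten", (flat,)))
    return shapes, flat

shapes, flat = discriminator_shapes(C=32, T=160)  # any T divisible by 16
for name, s in shapes:
    print(f"{name:24s} {s}")
print(flat == 160 // 2)  # flatten size equals T/2, as Table I states
```

The final product 2 × (T/16) × 4 = T/2 is what makes the generator and discriminator mirror images of each other: the discriminator compresses a C × T segment to T/2 values, and the generator's dense layer expands T/2 values back toward C × T.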


Fig. 3. Grand average of EEG signals during standing and walking and reconstructed EEG signals. The reconstructed signals are derived from the raw EEG signals during walking through the proposed framework.

3) Training: The generative model G generates the data distribution from latent values, and the discriminative model D distinguishes whether the data are fake or not (validity). We used an advanced GAN model that provides not only the validity of the data but also classification. The D of GANs is trained to maximize the log-likelihood of validity L_S and classification L_C, which can be represented as follows:

L_S = E[log P(S = real | X_real)] + E[log P(S = fake | X_fake)]    (1)

L_C = E[log P(C = c | X_real)] + E[log P(C = c | X_fake)]    (2)

where D is trained to maximize L_S + L_C, and G is trained to maximize L_C − L_S.

All data were high-pass filtered at 0.5 Hz and epoched into the time interval between –200 ms and 800 ms based on the trigger. For the reconstructed EEG signals, we selected one channel at Pz, where the amplitude of the ERP is apparent. To divide the dataset, we performed cross-validation in which one subject was left out as the test set and the remaining subjects' data were used for training; this leave-one-subject-out scheme is cross-validation not across trials but across subjects [32]. For the pre-trained model, we trained neural networks with the walking dataset and tested with the standing dataset. To manage the imbalanced training data, we reduced the number of non-target trials to match the number of target trials. We trained the GANs with a batch size of 32, using cross-entropy loss for the validity and the ERP classification, and mean squared error, calculated between the original input signals and the reconstructed signals, for the generative model loss. We trained the model using the Adam optimizer [33] with a learning rate of 0.0002 and an exponential decay rate for the 1st moment estimates of the gradient of 0.5 [16].

III. RESULTS AND DISCUSSION

We performed the training in subject-independent sessions and present quantitative metrics indicating the training quality and diversity. The generative model and the discriminative model were evaluated by visual inspection, classification performance, and signal-to-noise ratio (SNR). Statistical analysis with a two-tailed paired t-test was performed at the 95% and 99% confidence levels.

A. Discriminative Model

1) Visual inspection: Fig. 3 shows the grand average of noisy EEG signals during standing, EEG signals during walking, and reconstructed EEG signals derived from the noisy signals. Although the EEG signals during standing had apparent N200 and P300 components, the signals during walking in the mobile environment had lower amplitudes at N200 and P300 because of the large amplitude of the artifacts. The reconstructed EEG data also had strong N200 and P300 components, as in the standing signals. The figure shows that the generator produced data similar to the standing data, with high SNR.

Fig. 4. The performance in the subject-independent session. Standing and walking refer to raw EEG signals during standing and walking, respectively. Reconstructed refers to the signals regenerated through the proposed framework based on EEG signals during walking. One and two asterisks indicate the 5% and 1% significance levels, respectively.

2) Classification performance: Fig. 4 shows the accuracy for EEG signals during standing, during walking at 1.6 m/s, and for the reconstructed data. The classification performance of each dataset was calculated as the area under the receiver operating characteristic curve (AUC). The AUC of EEG signals during walking decreased relative to the standing condition because of the large amplitude of distortion in the mobile environment. However, the averaged AUC of the reconstructed data was similar to that of the walking condition.

3) Signal quality: The SNR is an essential indicator of noise reduction. It is calculated as the average amplitude of the peaks from 0 ms to the P300 divided by the root mean square (RMS) of the amplitude at the pre-stimulus baseline (–200 ms to 0 ms) at channel Pz for the standing, walking, and reconstructed data [34]. Fig. 4 shows the SNR of the standing, walking, and reconstructed data. The SNR during walking decreased compared with the standing data because motion artifacts distorted the data. However, applying GANs to the data led to a significant increase in SNR (p < 0.01). This is because the amplitude of noise in the pre-stimulus interval of the reconstructed signals was reduced far more than the P300. The SNR of the reconstructed EEG signals is even higher than that of the EEG signals during standing.

B. Generative Model

Fig. 5 shows samples of the reconstructed EEG signals. As can be seen, each sample contains ERP-related components, meaning that the signals had N200 and P300 components. Moreover, the generative model also tried to reconstruct signals similar to the original raw signals during walking, which were the origins of the reconstructed signals. A few samples seem to retain artifacts in the pre-stimulus time, so they still contain information from the EEG signals during walking compared with those during standing. Fig. 5 also demonstrates that the samples of reconstructed EEG signals had different shapes, which means the model was trained with diversity and each input of noisy EEG impacted the model output.

Fig. 5. Samples of time-series plots of the reconstructed EEG signals between –200 ms and 800 ms. These signals were generated through the proposed framework based on EEG signals during walking.

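The SNR measure used in Section III (peak amplitude after stimulus onset divided by the RMS of the pre-stimulus baseline [34]) can be sketched as follows. The exact peak-averaging rule is not fully specified in the text, so the helper below takes the simplest reading (a single post-stimulus maximum) and is our illustration, not the authors' code.

```python
import numpy as np

FS = 500  # sampling rate [Hz]

def erp_snr(epoch, fs=FS, pre=0.2):
    """SNR of one ERP epoch spanning -200..800 ms (e.g., at channel Pz).

    Numerator: peak amplitude after stimulus onset (covering the P300 range);
    denominator: RMS of the -200..0 ms pre-stimulus baseline.
    """
    n_pre = int(pre * fs)  # 100 baseline samples at 500 Hz
    baseline, post = epoch[:n_pre], epoch[n_pre:]
    rms = np.sqrt(np.mean(baseline ** 2))
    return np.max(post) / rms

# Toy epoch: weak deterministic "noise" plus a clear P300-like bump near 300 ms
t = np.linspace(-0.2, 0.8, FS)
noise = 0.1 * np.sin(2 * np.pi * 50 * t)
p300 = np.exp(-((t - 0.3) ** 2) / 0.002)
snr = erp_snr(noise + p300)
print(snr > 1.0)  # the post-stimulus peak stands well above the baseline RMS
```

Under this definition, shrinking the pre-stimulus noise (as the reconstruction does) raises the SNR even when the P300 amplitude itself is unchanged, which matches the interpretation given in the signal-quality results.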
the EEG, classifiers neural developing deep and signals, using brain extracting from methods, features en- removal mobile critical many artifact the been developing in by performance have much BMI vironment there is the enhance Therefore, EEG which to intention. artifacts, challenges the user of running, than amplitude or huger the great walking In contain as GANs. signals using such walking environment during mobile signals noisy from work to comparing value generative low the quite of is loss. loss training which MSE other 0.6301, The was output. model the impact signals - ] 200 Sample 16 Sample 10 Sample 4 600 h eosrce aatruhGN a infiatcom- significant had GANs through data reconstructed The frame- reconstruction ERP the proposed we paper, this In 0 800 Time Time [ 200 400 ms ] 600 800 - 200 Sample 17 Sample 11 Sample 5 V C IV. 0 Time Time [ 200 400 ONCLUSION ms ] 600 800 - 200 Sample 18 Sample 12 Sample 6 0 Time Time [ 200 400 ms ] 600 800 - 200 Sample 21 environment to reduct noise in terms of SNR. In the future, [20] R. T. Schirrmeister, J. T. Springenberg, L. D. J. Fiederer, M. Glasstetter, we would advance the model to improve the classification K. Eggensperger, M. Tangermann, F. Hutter, W. Burgard, and T. Ball, “Deep learning with convolutional neural networks for EEG decoding performance. Moreover, we will train all 32 channels of EEG and visualization,” Hum. Brain Mapp., vol. 38, no. 11, pp. 5391–5420, signals to reconstruct ERP signals. Aug. 2017. [21] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, REFERENCES and B. J. Lance, “Eegnet: a compact convolutional neural network for eeg-based brain–computer interfaces,” J. Neural Eng., vol. 15, no. 5, p. [1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. 056013, Jul. 2018. Vaughan, “Brain-computer interfaces for communication and control,” [22] K. G. Hartmann, R. T. Schirrmeister, and T. Ball, “EEG-GAN: Gen- Clin. Neurophysiol., vol. 113, no. 6, pp. 
767–791, Jun. 2002. erative adversarial networks for electroencephalograhic (EEG) brain [2] M. Lee, C.-H. Park, C.-H. Im, J.-H. Kim, G.-H. Kwon, L. Kim, W. H. signals,” arXiv preprint arXiv:1806.01875, 2018. Chang, and Y.-H. Kim, “Motor imagery learning across a sequence of [23] S. Panwar, P. Rad, J. Quarles, E. Golob, and Y. Huang, “A semi- trials in stroke patients,” Restor. Neurol. Neurosci., vol. 34, no. 4, pp. supervised wasserstein generative adversarial network for classifying 635–645, Aug. 2016. driving fatigue from EEG signals,” in IEEE Int. Conf. Systems, Man [3] X. Zhu, H.-I. Suk, S.-W. Lee, and D. Shen, “Canonical feature selection and Cybernetics (SMC), 2019, pp. 3943–3948. for joint regression and multi-class identification in alzheimers disease [24] J. M. Wolterink, T. Leiner, M. A. Viergever, and I. Isgum,ˇ “Generative diagnosis,” Brain Imaging Behav., vol. 10, no. 3, pp. 818–828, Aug. adversarial networks for noise reduction in low-dose CT,” IEEE Trans. 2016. Med. Imaging, vol. 36, no. 12, pp. 2536–2545, Dec. 2017. [4] X. Ding and S.-W. Lee, “Changes of functional and effective connectiv- [25] G. Choi, D. Ryu, Y. Jo, Y. S. Kim, W. Park, H.-s. Min, and Y. Park, ity in smoking replenishment on deprived heavy smokers: a resting-state “Cycle-consistent deep learning approach to coherent noise reduction in fmri study,” PLoS One, vol. 8, no. 3, p. e59331, Mar. 2013. optical diffraction tomography,” Opt. Express, vol. 27, no. 4, pp. 4927– [5] M.-H. Lee, S. Fazli, J. Mehnert, and S.-W. Lee, “Subject-dependent clas- 4943, Feb. 2019. sification for robust idle state detection using multi-modal neuroimaging [26] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image and data-fusion techniques in BCI,” Pattern Recognit., vol. 48, no. 8, translation using cycle-consistent adversarial networks,” in Proc. IEEE pp. 2725–2737, Aug. 2015. Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 2223–2232. [6] Y. Chen, A. D. Atnafu, I. Schlattner, W. 
T. Weldtsadik, M.-C. Roh, H. J. [27] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation Kim, S.-W. Lee, B. Blankertz, and S. Fazli, “A high-security EEG-based with conditional adversarial networks,” in Proc. IEEE Conf. Comput. Vis. login system with RSVP stimuli and dry electrodes,” IEEE Trans. Inf. Pattern Recognit. (CVPR), 2017, pp. 1125–1134. Forensic Secur., vol. 11, no. 12, pp. 2635–2647, Jun. 2016. [28] M.-H. Lee, O.-Y. Kwon, Y.-J. Kim, H.-K. Kim, Y.-E. Lee, J. Williamson, [7] K. Gramann, J. T. Gwin, N. Bigdely-Shamlo, D. P. Ferris, and S. Makeig, S. Fazli, and S.-W. Lee, “EEG dataset and OpenBMI toolbox for “Visual evoked responses during standing and walking,” Front. Hum. three BCI paradigms: An investigation into BCI illiteracy,” GigaScience, Neurosci., vol. 4, no. 202, Oct. 2010. vol. 8, no. 5, p. giz002, Jan. 2019. [8] T. P. Luu, S. Nakagome, Y. He, and J. L. Contreras-Vidal, “Real-time [29] R. Krepki, B. Blankertz, G. Curio, and K.-R. Muller,¨ “The Berlin Brain- EEG-based brain-computer interface to a virtual avatar enhances cortical Computer Interface (bbci)–towards a new communication channel for involvement in human treadmill walking,” Sci. Rep., vol. 7, no. 1, p. online control in gaming applications,” Multimed. Tools. Appl., vol. 33, 8895, Aug. 2017. no. 1, pp. 73–90, Feb. 2007. [9] B. R. Malcolm, J. J. Foxe, J. S. Butler, W. B. Mowrey, S. Molholm, and [30] M. Kleiner, D. Brainard, and D. Pelli, “What’s new in Psychtoolbox-3?” P. De Sanctis, “Long-term test-retest reliability of event-related potential Percept., vol. 36, no. 1 suppl., p. 14, Aug. 2007. (ERP) recordings during treadmill walking using the mobile brain/body [31] M.-H. Lee, J. Williamson, D.-O. Won, S. Fazli, and S.-W. Lee, “A high imaging (MoBI) approach,” Brain Res., vol. 1716, pp. 62–69, Aug. 2019. performance spelling system based on EEG-EOG signals with visual [10] E.-R. Symeonidou, A. D. Nordin, W. D. Hairston, and D. P. Ferris, feedback,” IEEE Trans. 
Neural Syst. Rehabil. Eng., vol. 26, no. 7, pp. “Effects of cable sway, electrode surface area, and electrode mass on 1443–1459, May 2018. electroencephalography signal quality during motion,” Sens., vol. 18, [32] F. Fahimi, Z. Zhang, W. B. Goh, T.-S. Lee, K. K. Ang, and C. Guan, no. 4, p. 1073, Apr. 2018. “Inter-subject transfer learning with an end-to-end deep convolutional [11] A. D. Nordin, W. D. Hairston, and D. P. Ferris, “Dual-electrode motion neural network for eeg-based bci,” J. Neural Eng., vol. 16, no. 2, p. artifact cancellation for mobile electroencephalography,” J. Neural Eng., 026007, Jan. 2019. vol. 15, no. 5, p. 056024, Aug. 2018. [33] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” [12] S. Blum, N. Jacobsen, M. G. Bleichner, and S. Debener, “A Rieman- arXiv preprint arXiv:1412.6980, 2014. nian modification of artifact subspace reconstruction for EEG artifact [34] H. Schimmel, “The (±) reference: Accuracy of estimated mean com- handling,” Front. Hum. Neurosci., vol. 13, p. 141, Apr. 2019. ponents in average response studies,” Science, vol. 157, no. 3784, pp. [13] N.-S. Kwak, K.-R. Muller,¨ and S.-W. Lee, “A lower limb exoskeleton 92–94, Jul. 1967. control system based on steady state visual evoked potentials,” J. Neural Eng., vol. 12, no. 5, p. 056009, Aug. 2015. [14] Y.-E. Lee and M. Lee, “Decoding visual responses based on deep neural networks with ear-EEG signals,” in Proc. 8th Int. Conf. IEEE Brain- Computer Interface (BCI), 2020, pp. 1–6. [15] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Adv. Neural. Inf. Process. Syst. (NIPS), 2014, pp. 2672–2680. [16] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015. [17] A. Odena, C. Olah, and J. 
Shlens, “Conditional image synthesis with auxiliary classifier gans,” in Proc. 34th Int. Conf. Machine Learning (ICML), 2017, pp. 2642–2651. [18] N.-S. Kwak, K.-R. Muller,¨ and S.-W. Lee, “A convolutional neural network for steady state visual evoked potential classification under ambulatory environment,” PloS One, vol. 12, no. 2, p. e0172578, Feb. 2017. [19] M. Lee, S.-K. Yeom, B. Baird, O. Gosseries, J. O. Nieminen, G. Tononi, and S.-W. Lee, “Spatio-temporal analysis of eeg signal during conscious- ness using convolutional neural network,” in Proc. 6th Int. Conf. IEEE Brain-Computer Interface (BCI), 2018, pp. 1–3.