REAL TIME CLASSIFICATION OF EMOTIONS TO CONTROL STAGE LIGHTING
DURING DANCE PERFORMANCE
A Thesis
Presented to
The Faculty of the Department of Biomedical Engineering
University of Houston
In Partial Fulfillment
of the Requirements for the Degree
Master of Science
In Biomedical Engineering
By
Shruti Ray
August 2016

REAL TIME CLASSIFICATION OF EMOTIONS TO CONTROL STAGE LIGHTING
DURING DANCE PERFORMANCE
______________________________
Shruti Ray

Approved:

______________________________
Chair of the Committee
Dr. Jose Luis Contreras-Vidal, Professor,
Department of Electrical and Computer Engineering

Committee Members:

______________________________
Dr. Ahmet Omurtag, Associate Professor,
Department of Biomedical Engineering

______________________________
Dr. Saurabh Prasad, Assistant Professor,
Department of Electrical and Computer Engineering

______________________________
Dr. Suresh K. Khator, Associate Dean,
Cullen College of Engineering

______________________________
Dr. Metin Akay, Founding Chair,
John S. Dunn Cullen Endowed Professor,
Department of Biomedical Engineering

Acknowledgement
I would like to express my deepest gratitude to my advisor, Dr. Jose Luis Contreras-Vidal, for his continuous guidance, encouragement and support throughout this research project.

I would also like to thank my colleagues from the Laboratory for Noninvasive Brain-Machine Interface Systems for their immense support, encouragement and help with data collection. I would like to thank Ms. Rebecca B. Valls and Ms. Anastasiya Kopteva for performing with EEG caps on to make the data collection possible. Additionally, I would like to thank my friends Su Liu, Thomas Potter and Dr. Kinjal Dhar Gupta, and my sister Shreya Ray, who have supported me in both happy and adverse times.

Last, but not least, I would like to thank my parents and family for believing in my dreams and supporting my quest for higher education. Without their love and support, it would not have been possible to achieve the success that I have.
REAL TIME CLASSIFICATION OF EMOTIONS TO CONTROL STAGE LIGHTING
DURING DANCE PERFORMANCE
An Abstract
of a
Thesis
Presented to
The Faculty of the Department of Biomedical Engineering
University of Houston
In Partial Fulfillment
of the Requirements for the Degree
Master of Science
In Biomedical Engineering
By
Shruti Ray
August 2016
Abstract
Recently, there has been growing research interest in Electroencephalography (EEG)-based recognition of emotions, known as affective computing, in which subjects are typically shown pictures to elicit an emotional response or asked to imagine a particular situation to produce the desired emotion. Research has shown that different emotions affect brain waves differently, motivating further work on the computerized recognition of human emotions [1] [2] [3]. In this master's thesis, I analyzed neural (EEG) recordings from two trained dancers during emotional dance performances. The processed data was used to control the color of the stage lighting as the performer's emotion changed. Data from subjects 1 and 2 was used to train the classifier offline. Classification was performed with an artificial neural network. Four musical pieces (detailed in the methods section) were selected by the dancers, each representing a particular emotion: "Anger", "Fear", "Neutral" and "Happy". These emotions were chosen to cover the range of positive, negative and neutral emotions. Features of type ASM12 [4] were extracted with a temporal resolution of one second using a 50%-overlapping Hamming window. The sub-band frequency ranges delta (1-3 Hz), theta (4-7 Hz), alpha (8-12 Hz) and beta (14-30 Hz) were used for each symmetric electrode pair.

The results showed that accuracies of 72.1% for subject 1 and 75.7% for subject 2 were obtained during offline training and testing of a multilayer neural network with one hidden layer of 32 units. The real-time accuracy was lower, and the system could reliably classify mainly two of the emotional classes.
Table of Contents
Acknowledgement ...... iv
Abstract ...... vi
Table of Contents…………………………………………………………………………………………………vii
List of Figures ...... ix
List of Tables ...... xii
Introduction ...... 1
1.1.1 Problem Statement ...... 1
1.2 Contribution ...... 4
1.3 Thesis Organization ...... 4
Background and Related Work ...... 6
2.1 Introduction to Affective Computing ...... 6
2.1.2 Applications of Affective Computing ...... 7
2.2 Emotions ...... 8
2.2.1 How do we define emotion?...... 8
2.2.2 Brain and Emotions ...... 9
2.3 Machine Learning Background ...... 10
2.3.1 Methods in machine learning ...... 10
2.3.2 Algorithms – ...... 11
2.4 Current Technology for emotion classification ...... 13
2.4.1 My Contribution ...... 15
Brain Computer Interface ...... 16
3.1 What is Brain Computer Interface? ...... 16
3.2 Brain Machine Learning ...... 18
Experimental setup ...... 19
4.1 Subjects ...... 20
4.2 Equipment used...... 20
4.3 Experimental Protocol ...... 20
Data Processing and Analysis ...... 26
5.1 Data Preprocessing ...... 26
5.2 Data Analysis ...... 27
5.3 Feature Matrix –...... 28
5.4 Feature matrix classification – ...... 28
5.5 Mapping of classified data to stage lights – ...... 29
Results and Discussion ...... 31
6.1 Power spectral density of the four emotions ...... 31
6.2 Classification results ...... 38
6.3 Confusion Matrix - ...... 42
6.4 Discussion ...... 43
Future Work and Conclusion ...... 46
7.1 Limitations and Future work ...... 46
7.2 Conclusion ...... 47
References ...... 48
List of Figures
Figure 1: Wheel of Emotions created by Robert Plutchik.………………………………………….…2
Figure 2: Classification of emotions in a 2 dimensional scale ……………………….…………..…3
Figure 3: Components of BCI ……………………………………….…………………………………………17
Figure 4: BCI Signal Processing ………………………………………………………………………………17
Figure 5: Experimental setup and data collection …….…………………………………………….…19
Figure 6: Data collection: Dance protocol for subject 1 Trial 1 and Trial 2 ……….………….…22
Figure 7: Data collection: Dance protocol for subject 2 Trial 1 and Trial 2…………………..23
Figure 8: Amplitude and Time frequency map of the musical pieces used during the dance
performance [ (A) – Neutral, (B) – Fear, (C) – Happy, (D) – Anger]………….…24
Figure 9: EEG signal Data Processing ……………………………………………….………………..…..26
Figure 10: Scalp map of the ASM12 electrodes. ………………………………………………………..28
Figure 11: Color wheel scheme used for lighting ……………………………………………………….29
Figure 12: Power Spectrum for electrode channel number 1, 2, 3, 4 (Shown in red on scalp
map) for subjects 1 and 2…………………………………………………………………..……32
Figure 13: Power Spectrum for electrode channel number 6, 7, 12, 13 (shown in red on
scalp map) for subjects 1 and 2………………………………………………………………..33
Figure 14: Power Spectrum for electrode channel number 15, 16, 23, 24 (shown in red on
scalp map) for subjects 1 and 2……………………..…………………………………………34
Figure 15: Power Spectrum for electrode channel number 26, 27, 29, 31 (shown in red on
scalp map) for subjects 1 and 2……………………………..…………………………….….35
Figure 16: Power Spectrum for electrode channel number 42, 43, 44, 45 (shown in red on
scalp map) for subjects 1 and 2………………………………………………………….……36
Figure 17: Power Spectrum for electrode channel number 51, 52, 54, 55 (shown in red on
scalp map) for subjects 1 and 2………………………………………………………………..37
Figure 18: ASM12 classification results for subject 1. A) Feature 1 vs. Feature 38. B) Feature 1 vs. Feature 42. C) Feature 6 vs. Feature 45. D) Feature 13 vs. Feature 14. E) Feature 24 vs. Feature 47. F) Feature 37 vs. Feature 38. G) Feature 45 vs. Feature 39.……………………….….…………………………38
Figure 19: ASM12 classification results for subject 2. A) Feature 1 vs. Feature 38. B) Feature 1 vs. Feature 42. C) Feature 6 vs. Feature 45. D) Feature 13 vs. Feature 14. E) Feature 24 vs. Feature 47. F) Feature 37 vs. Feature 38. G) Feature 45 vs. Feature 39………………………………………………………………………39
Figure 20: Scalp Map showing the feature regions shown on table 5 (in
red).…………………………………………………………………………………………….………41
Figure 21: Confusion matrix for subject 1 using MNN classifier with 32 hidden layer
nodes……………………………………………………………………………………………………42
Figure 22: Confusion matrix for subject 2 using MNN classifier with 32 hidden layer
nodes……………………………………………………………………………….…………...……43
Figure 23: Dance performance during online testing.………………………………………………45
List of Tables
Table 1: Light colors assigned to each emotion.……………………………….……………………..…15
Table 2: List of music associated with each emotion during the dancer’s performance
…………………………………………………………………………………….………………………………………21
Table 3 ASM12 electrode distribution list…………………………………………………………………27
Table 4: Table showing the color mapping scheme……………………………………………….…..30
Table 5: List of Features, ASM12 electrode pair, sub-band frequency and electrode
location for classification results showed in figure 12 to figure 17
…………………………………………………………………………………………………………………………….41
Table 6: Results of online testing……………………………………………………………………………..45
Chapter 1
Introduction
1.1.1 Problem Statement
Human-computer interfacing has seen tremendous advancement in the past few years. Machines have become increasingly productive for their users, and in some instances surpass humans in efficiency. Machines can now respond in a desired manner to human commands. One of the major challenges now lies in a machine's ability to read and understand human emotions and respond accordingly. As rightly put forth by Picard and Klein:

"Recognizing affect should greatly facilitate the ability of computers to heed the rules of human – human communication" [4].

There has been increasing research on the ability of computers to estimate human emotions through facial recognition [5, 6, 7], voice recognition [8, 6, 9], or a fusion of both [10]. However, it must be kept in mind that, from a psychological standpoint, there is an explicit separation between physiological arousal, the behavioral experience (affect) and the conscious experience of the emotion (feeling) [11]. Facial recognition and voice recognition fall under the category of behavioral experience, i.e., expression. Expression may vary from person to person, so recognition should also take into account a number of other signals such as heart rate, skin conductance and pupil dilation [12, 13].

To study human emotions, two theoretical perspectives are prevalent:
Darwin – The basic emotions evolved as a result of natural selection. A number of emotion taxonomies have been derived on this basis, with Plutchik [14] proposing eight basic human emotions: anger, fear, sadness, disgust, surprise, curiosity, acceptance and joy. Ekman chose a different set of basic emotions and concluded that these emotions and their expression are universal: anger, fear, sadness, happiness, disgust and surprise. In the 1980s, Robert Plutchik introduced a new concept of emotion, the "Wheel of Emotions", which shows how emotions tend to blend into one another, creating new kinds of emotions. Figure 1 below shows how the emotions relate to each other [15]:

Figure 1: Wheel of Emotions created by Robert Plutchik.

Cognition – This approach, proposed by Lang, classifies emotions on a 2D scale, mapping them according to their valence (positive vs. negative) and arousal (calm vs. excited) [16].
Figure 2: Classification of emotions on a 2-dimensional scale (valence on the horizontal axis, from "negative" emotions such as anger and sadness to "positive" emotions such as happiness and contentment; arousal on the vertical axis, with anger and happiness high-arousal and sadness and contentment low-arousal).
The study of neural data via EEG and fMRI helps us better understand how the brain elicits a particular emotion, and this understanding can help us train a machine to meet the needs of the user without explicit commands. The main disadvantage of EEG is that the electrodes are placed outside the skull, which leads to signal artifacts. It is also widely known that EEG recordings are not readings from a single spot, but the result of the skull spreading out activity from the entire brain. Fortunately, there has been growing research interest in neural decoding and emotion recognition using EEG (affective computing). One important result concerns the role of the brain's alpha waves in different emotions: M.B. Kostyunina et al. [17] showed that different emotions produce different peaks in the alpha frequency band. Yuan-Pin Lin et al. [4] concluded from classification with an SVM that the spectral power asymmetry index (ASM12) is sensitive to brain activation related to emotional responses. This feature extraction technique, shown in [4] to give high classification accuracy for emotions, was tested and used here in real time to classify emotions and control the color of the stage lighting as a closed-loop BCI.
1.2 Contribution
The main goal of this thesis is to devise a methodology for the online classification of four different emotions (happy, anger, fear, neutral) based on the performer's emotional state of mind, and to change the color of the stage lighting accordingly. With the necessary data analysis and machine learning techniques, it was possible to develop a methodology for the programmed control of the stage-light colors for the four emotional responses. I propose the use of the ASM12 electrode pairing system in real time, with the short-time Fourier transform extracting the sub-band frequencies of four brain waves, delta (1-3 Hz), theta (4-7 Hz), alpha (8-12 Hz) and beta (14-30 Hz), as input features to a multilayer neural network with one hidden layer of 32 units, for the classification of emotions and subsequent control of the stage lighting, both offline and online.
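As a concrete illustration of this feature pipeline, the sketch below computes band powers per one-second, 50%-overlapping Hamming window and a left-minus-right asymmetry for a single symmetric electrode pair. The sampling rate and the exact asymmetry definition (a simple power difference) are illustrative assumptions; the methods chapter gives the actual parameters used in this thesis.

```python
import numpy as np
from scipy.signal import stft

FS = 250  # assumed sampling rate in Hz (the equipment section gives the real value)
BANDS = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 12), "beta": (14, 30)}

def band_powers(x, fs=FS):
    """Band power per 1 s Hamming window with 50% overlap, for one EEG channel."""
    f, t, Z = stft(x, fs=fs, window="hamming", nperseg=fs, noverlap=fs // 2)
    psd = np.abs(Z) ** 2  # power at each (frequency, window)
    return {name: psd[(f >= lo) & (f <= hi)].sum(axis=0)
            for name, (lo, hi) in BANDS.items()}

def asymmetry_features(left, right, fs=FS):
    """Left-minus-right band-power asymmetry for one symmetric electrode pair.
    Repeating this over all 12 ASM12 pairs would yield 48 features per window
    (12 pairs x 4 bands)."""
    pl, pr = band_powers(left, fs), band_powers(right, fs)
    return np.stack([pl[b] - pr[b] for b in BANDS])  # shape: (4, n_windows)
```

Stacking such vectors over time gives the feature matrix fed to the classifier.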
1.3 Thesis Organization
Chapter 1 contains the problem statement and our contribution to its solution. To provide the background needed to understand the problem and the technical concepts behind the methodology, Chapter 2 and Chapter 3 cover affective computing, emotions and the brain regions responsible for them, and closed-loop brain-computer interfacing. Chapter 4 explains the methodology in detail. Chapter 5 describes the data processing and analysis techniques. Chapter 6 presents and discusses the results. Chapter 7 concludes the thesis with its limitations, future work and a summary.
Chapter 2
Background and Related Work
2.1 Introduction to Affective Computing
With the shift of Human-Computer Interaction (HCI) and interaction design from design- and work-oriented applications toward leisure-oriented applications such as games, social computing, art and tools for creativity, it has become necessary to consider the constituents of experience, how to deal with users' experiences, and an understanding of aesthetic practices. During the early 1990s there was a wave of new research on emotion and its diverse roles in psychology (e.g., Ellsworth and Scherer, 2003) [45], neurology (e.g., LeDoux, 1996) [46], medicine (e.g., Damasio, 1995) [47], and sociology (e.g., Katz, 1999) [49]. Prior to this, emotions were not of much research interest, and researchers mainly focused on how emotions got in the way of rational thinking: how scared pilots suffered from tunnel vision, how anger could sabotage business meetings, how nervousness could negatively affect the outcome of a presentation. Emotional expression is not restricted to our brains but involves whole-body experiences: hormonal changes in the blood stream, nervous signals tensing or relaxing muscles, blood rushing to different parts of the body, body postures, movements and facial expressions (Davidson et al., 2002) [48]. Our bodily reactions in turn feed back to our minds, creating experiences that regulate our thinking, which in turn feeds back to our bodies. Thus, emotional experiences can start with body movements, for example dancing wildly when you are happy. Neurologists have studied how the brain works and how emotional processes are a key part of cognition; emotional processing sits in the middle of most processing, running from frontal-lobe processing in the brain via the brain stem to the body and back (e.g., LeDoux, 1996) [46].
As part of this new wave of research on emotions, artificial intelligence considers emotion to be an important regulatory process that determines behavior in autonomous systems of various kinds, e.g., robots. The artificial intelligence field picked up the idea of human rational thinking and its connection to emotional processing. Rosalind Picard's "Affective Computing" had a major effect on both the artificial intelligence and human-computer interfacing fields (Picard, 1997) [24]. Her idea centered on the creation of machines that would relate to, arise from, or deliberately influence emotion or other affective phenomena. The roots of affective computing can be traced back to neurology, medicine and psychology; it mainly takes a biological perspective on emotion processes in the brain and body and in interaction with others and with machines. The most interesting application from Rosalind Picard's group deals with training autistic children to recognize emotional states in others and in themselves and to act accordingly. Recently, affective computing has been put to commercial use, branching into recognizing interest in commercials and dealing with stress in call centers.

To sum up the main essence of affective computing:

"Today's computers are cold, logical machines. They needn't be." – Hal's Legacy.
2.1.2 Applications of Affective Computing

a. Emotion monitoring – detecting anger in disgruntled users to provide consumer feedback.
b. Emotion monitoring – self-training for speakers or during a presentation.
c. Understanding tutor – giving tutors feedback on audience interest; another application is providing feedback in e-learning.
d. Believable agents – producing intelligent artificial beings with lifelike emotional responses, as in video games.
e. Emotion in computer-mediated communication – for example, mobile phones that pick up the user's emotions to automatically generate emoticons.
f. Medical applications – treatment to help autistic patients recognize and express emotions more effectively.
g. Wearables – the recently launched Google app mindRDR, where strong focus on a particular scene takes a picture and still stronger focus posts it to Facebook or Instagram.
2.2 Emotions
2.2.1 How do we define emotion?
The first question that arises when one speaks of emotions is: what exactly is emotion? How can we define it? Philosophers have been concerned with the nature of emotion since Socrates and the pre-Socratics who preceded him. A definition can take a behavioral, philosophical or scientific approach. Emotion cannot be completely defined by a person's emotional experience, nor by any electrophysiological measure of occurrences in the brain, nervous, circulatory, respiratory or endocrine systems. So, coming back to our first question: how exactly do we define emotion?

Plutchik defines emotion as a patterned bodily reaction corresponding to one of the underlying adaptive biological processes common to all living organisms [3]. The Greek philosopher Aristotle thought of emotion as a stimulus that evaluates experiences based on their potential for gain or pleasure. Kleinginna and Kleinginna gathered and analyzed 92 definitions of emotion from the literature of their day [18], arriving at a comprehensive definition:
Emotion is a complex set of interactions among subjective and objective factors, mediated
by neural/hormonal systems, which can [19]:
1. Give rise to affective experiences such as feelings of arousal and pleasure/displeasure;
2. Generate cognitive processes such as emotionally relevant perceptual effects, appraisals and labelling processes;
3. Activate widespread physiological adjustments to the arousing conditions;
4. Lead to behavior that is often, but not always, expressive, goal-directed and adaptive.
2.2.2 Brain and Emotions
The work of Paul D. MacLean [20] and others [21, 22] suggested a role for the limbic system of the brain in emotion elicitation. The main limbic structures involved in emotion are:

1. Amygdala – Its connections to regions of the temporal lobe, the prefrontal cortex and the medial dorsal nucleus of the thalamus allow it to play a major role in the mediation and control of activities such as friendship, love and affection. It is mainly involved in the expression of negative emotions such as fear, rage and aggression.
2. Thalamus – This region is associated with changes in emotional reactivity.
3. Hypothalamus – The hypothalamus's major functions include thermal regulation, sexuality, combativeness, hunger and thirst, and it also plays a major role in emotion. The lateral parts of this structure are involved in pleasure and rage, while the medial parts are involved in aversion and displeasure.
4. Fornix – An important connecting pathway within the limbic system.
5. Cingulate gyrus – The frontal part of this gyrus coordinates smells and sights with pleasant memories of previous emotions. The region is also involved in the emotional reaction to pain and in the regulation of aggressive behavior.
Other parts of the brain involved in emotion:

1. Brainstem – In lower vertebrates this region is responsible for "emotional reactions". In humans it is involved in alerting mechanisms and maintenance of the sleep-wake cycle.
2. Ventral tegmental area – This region is responsible for pleasurable sensations such as orgasm. Certain brainstem structures, such as the nuclei of the cranial nerves, stimulated by impulses from the cortex and the striatum, are responsible for physiognomic expressions of anger, joy, sadness and tenderness.
3. Septum – Found anterior to the thalamus, this region contains the centers of the pleasure of orgasm (four for women and one for men) and is largely responsible for sexual experience.
4. Prefrontal area – This region plays a critical role in the regulation of emotion and behavior by anticipating the consequences of our actions. It also plays a role in delayed gratification by maintaining emotions over time and organizing them toward a specific goal.
2.3 Machine Learning Background
2.3.1 Methods in machine learning
Machine learning is a subfield of computer science and artificial intelligence in which computers are trained to learn without being explicitly programmed. Machine learning has been successfully applied to a variety of real-world problems. The use of machine learning algorithms for the recognition and interpretation of human emotions is termed affective computing. The origins of this field can be traced back to philosophical enquiries into emotion [23], while its more modern branch of computer science began with Rosalind Picard's 1995 paper on affective computing [24]. With further advances, machines will soon be able to interpret the emotional state of humans and adapt their behavior accordingly, giving the most appropriate response to a particular emotion.

As rightly put forth by Marvin Minsky in "The Emotion Machine":

"emotions are not especially different from the processes that we call thinking."

Affective computing can work from features such as emotional speech, facial expressions or, more recently, neurological data.
2.3.2 Algorithms
A number of machine learning algorithms are used for this purpose; a few are listed below:

k-NN: the k-nearest-neighbor algorithm classifies an object by locating it in the feature space and comparing it with its k nearest neighbors.

GMM: the Gaussian mixture model is a probabilistic model that estimates subpopulations within the overall population.

SVM: the support vector machine (mostly a binary classifier) assigns its input to one of two classes.

ANN: the artificial neural network, the technique used for emotion classification in this thesis, takes its inspiration from biological neural networks and can better classify non-linear structure in the feature space. ANNs can learn with or without a supervisor and are mainly used when a solution is difficult to obtain due to:

a lack of physical or statistical understanding of the problem,
statistical variations in the observable data, or
a complex mechanism responsible for generating the data.
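To make the first of these algorithms concrete, here is a minimal k-NN classifier of the kind described above; the Euclidean distance and majority vote used here are the standard choices, not necessarily those of any particular cited study.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # most common label among them
```

For example, a point near a cluster of class-0 training samples is assigned class 0.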
2.3.2.1 Artificial Neurons
As already stated, this kind of machine learning algorithm takes its inspiration from the biological neuron, where the dendrites act as the input vector. The dendrites receive signals from a large number of adjacent neurons, and each input is "multiplied" by the dendrite's "weight" value. An artificial neuron has one or more inputs (the dendrites), which are summed together (the soma) to give the output (the axon). The sum at each node is weighted and then passed through a non-linear function called the activation function (for example a sigmoid, step or linear function).

Basic structure:

The basic structure of an artificial neural network consists of a set of connections (synapses) to other neurons in the network, each characterized by a synaptic weight w_kj; a function that sums the incoming signals multiplied by their corresponding synaptic weights; and an activation function that limits the range of the neuron's output, usually to [-1, 1] or [0, 1]. The network also has a bias, which acts as a threshold. Consider a neuron k with an input vector x of (m+1) components and weights w_k0 through w_km, where the input x_0 is fixed at +1 and functions as the threshold. The output of the kth neuron is then given as

    y_k = φ(u_k),   where   u_k = Σ_{j=0}^{m} w_kj · x_j,

where φ is the activation (transfer) function described above and u_k is the weighted sum of all inputs to the kth neuron.
The simplest form of artificial neural network is the perceptron, which works as a binary classifier.

The multilayer neural network, or multilayer perceptron (MLP), is a feedforward artificial neural network model that maps sets of input data onto sets of outputs. It consists of multiple layers of nodes: an input layer, an output layer and one or more hidden layers. Every node other than the input nodes has a nonlinear activation function associated with it.
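A forward pass through such a network, with one hidden layer of 32 units as used later in this thesis, can be sketched as follows. The sigmoid hidden activation and softmax output are illustrative assumptions, and the weights are random rather than trained; the feature dimension of 48 assumes the 12 ASM12 pairs times 4 bands.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: input -> hidden units -> class probabilities."""
    h = sigmoid(W1 @ x + b1)       # hidden layer activations (e.g., 32 units)
    z = W2 @ h + b2                # raw output scores, one per emotion class
    e = np.exp(z - z.max())        # numerically stable softmax
    return e / e.sum()

# Example dimensions: 48 features in, 32 hidden units, 4 emotion classes.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((32, 48)), np.zeros(32)
W2, b2 = rng.standard_normal((4, 32)), np.zeros(4)
p = mlp_forward(rng.standard_normal(48), W1, b1, W2, b2)
```

The output is a probability over the four emotion classes; training would adjust W1, b1, W2, b2, typically by backpropagation.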
2.4 Current Technology for emotion classification
The current technology used to detect a person's emotion involves passive sensors that capture the raw behavioral or physical state of the person without any prior interpretation. This may include cameras capturing facial expressions, microphones recording audio, skin temperature sensors, galvanic sensors recording skin impedance, and neural signals recorded by electroencephalography (EEG) or functional magnetic resonance imaging (fMRI).

For proper detection of an individual's emotions, these raw signals must be meaningfully interpreted. For this purpose, machine learning techniques such as facial expression detection, speech recognition and neural networks are used to process the signal and produce a precise result from the input data.
EEG signal processing techniques
In [41], 20 subjects were shown 5 video clips for each emotion: disgust, happiness, surprise, fear and neutral. EEG data was collected using a 64-channel EEG device. The raw EEG was processed using surface Laplacian filtering and decomposed into three frequency bands (alpha, beta and gamma) using the discrete wavelet transform (DWT). For classification of the 5 emotions, k-nearest neighbor (KNN) and linear discriminant analysis (LDA) were used, together with an energy-based feature extraction technique called Recoursing Energy Efficiency (REE) and its modified versions, logarithmic REE and absolute REE. The maximum classifier accuracy, 83.26%, was obtained using a combination of KNN and the ALREE feature.
In [42], 5 subjects were exposed to 3 neural stimuli (one auditory, one visual and one audiovisual). A linear Fisher's discriminant analysis classifier was trained for each class (audio/visual/audiovisual, positive/negative, aroused/calm). For feature selection, only the alpha and beta waves were considered for the channels Fpz and F3/F4 (alpha power, beta power, and the beta-to-alpha ratio), classifying arousal and valence states with classification results of 80%.
In [43], EEG data was recorded from 3 women and 3 men. Emotions were elicited with movie clips of 4 minutes each, and the paper focused on classifying emotions as positive or negative. A Self-Assessment Manikin (SAM) was used to measure the emotional content of the movies. After extraction of the sub-band frequencies delta (1-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz) and gamma (36-40 Hz) with a resolution of 1 s and a non-overlapping window, the log of the band energy was taken as the input feature. For dimensionality reduction, correlation coefficients between features and labels were calculated for each channel and band on the training set; the coefficients were ranked in descending order, and the features corresponding to the top N coefficients were used with a linear SVM. An average accuracy of 87.53% was obtained.
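The correlation-based ranking step described in [43] can be sketched as below; the use of the absolute Pearson coefficient and the tie handling are my own illustrative assumptions.

```python
import numpy as np

def top_n_by_correlation(X, y, n):
    """Rank features (columns of X) by |correlation with the labels y|
    and return the indices of the top n, as in the selection scheme above."""
    corrs = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.argsort(corrs)[::-1][:n]  # descending order, keep top n
```

The selected columns would then be fed to a linear SVM or any other classifier.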
In [44], dance performances were recorded from 5 dancers and emotions (e.g., anger, fear, grief, joy) were classified based on Laban's Effort movement qualities, using a one-way ANOVA for each motion cue followed by regression analysis.
In [34], Yuan-Pin Lin uses the ASM12 electrode system to classify 4 emotional states (joy, anger, sadness and pleasure) using a support vector machine, achieving a high accuracy of 82.29% during music listening.
2.4.1 My Contribution
Yuan-Pin Lin et al. [34], in their paper "Support Vector Machine for EEG Signal Classification during Listening to Emotional Music", used the asymmetry index of the EEG signal to classify four emotions (joy, anger, sadness and pleasure) with a support vector machine, at an average classification accuracy of 92.73%. In this research, I tested the use of the asymmetry index with a resolution of 1 second to classify four emotions in dancers: happy (positive valence, high arousal), fear (negative valence, low arousal), anger (negative valence, high arousal) and neutral (zero valence, zero arousal). Classification was performed using a multilayer neural network (1 hidden layer with 32 nodes). The network was trained offline on pre-recorded data and tested both offline and in real time during the dancers' performances, and also while the subject was stationary (but imagining a particular emotion), to control the stage lighting with the following color mapping.
Table 1: Light colors assigned to each emotion

Emotion    Color
Anger      Red
Happy      Yellow
Fear       Blue
Neutral    White
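In code, this color mapping is a simple lookup; the fallback to white for an unrecognized label is my own defensive assumption for illustration, not part of the experimental protocol.

```python
# Stage-light color assigned to each classified emotion (Table 1).
EMOTION_COLOR = {"Anger": "Red", "Happy": "Yellow", "Fear": "Blue", "Neutral": "White"}

def color_for(emotion):
    """Return the light color for a classified emotion (default: White)."""
    return EMOTION_COLOR.get(emotion, "White")
```

Each classifier output per one-second window would be passed through this lookup to drive the lighting controller.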
Chapter 3
Brain Computer Interface
3.1 What is Brain Computer Interface?
A brain-computer interface (BCI), also called a brain-machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device (a computer or machine). Research in BCI began in the early 1970s at the University of California, Los Angeles (UCLA), under grants from the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA). This work also marked the first appearance of the expression "brain-computer interface" in the scientific literature.
The history of BCI began with the discovery of the electrical activity of the human brain by Hans Berger and the development of electroencephalography (EEG). Berger recorded the first human EEG in 1924. He was the first to identify oscillatory electrical activity of the brain, such as Berger's wave, more widely known as the alpha wave (8–13 Hz).
Jacques Vidal coined the term BCI and produced the first peer-reviewed publications on the topic [36] [37]. His early work focused on visually evoked potentials for the control of cursor direction, an approach still widely used in BCIs [38] [39] [40].
BCI technology holds great promise for people who cannot use their arms or hands because of spinal cord injury, amyotrophic lateral sclerosis (ALS) or cerebral palsy. BCIs could help them control computers, wheelchairs, televisions or other devices
with their brain's electrical activity. The components of a BCI and the signal processing steps involved in its development are shown in figure 3 and figure 4, respectively.
A BCI comprises three components: (1) a way to measure neural signals from the human brain, (2) algorithms for decoding brain states or intentions from that activity, and (3) a mapping from the decoded activity to the intended behavior or task.
Figure 3: Components of BCI
Classification - Preprocessing Feature translating the - removal of Extraction - specific fearues into noise and detect specific useful control artifacts to target patterns in signals to be sent to enhance the SNR brain activity an external device
Figure 4: BCI Signal Processing
Types of BCI:
Invasive techniques – Electrodes for brain signal acquisition are implanted directly onto or into a patient's brain. These techniques are riskier and require surgery. They include electrocorticography (ECoG) and local field potential (LFP) recordings.
Noninvasive techniques – Medical scanning devices or sensors mounted on caps or headbands read brain signals. Noninvasive techniques are less intrusive but have a lower signal-to-noise ratio.
3.2 Brain Machine Learning
For a BCI to interpret the task a user wants to perform, both the user and the BCI must undergo a certain amount of learning. The user needs to learn to modulate their brain activity in a way that maximizes the performance of the BCI, and/or the device must itself learn to identify, interpret and adapt to the key neural signals that best decode the intended action of the user.
There are two key paradigms by which the BCI is adapted to the user's brain activity:
Open Loop BCI:
In this paradigm, the computer is adapted for better decoding of the brain activity. During the initial training trials, the subject is unaware of how the computer is interacting with the recorded brain activity; only after the task is completed is the recorded activity matched to a specific task.
Closed Loop BCI:
In this paradigm, the brain activity is modified so as to better drive the computer, and the subject is provided with real-time feedback of how their brain activity is performing. An optimal level of brain activity must be reached for optimal computer functioning to occur, so the user is an active participant in learning how to best control the computer through their brain activity. The closed-loop approach is generally preferred over the open-loop approach and is the one used in this project.
Chapter 4
Experimental setup
Figure 5: Experimental setup and data collection
4.1 Subjects
The initial EEG data was collected from one trained professional female dancer (Subject 1) and one trained dancer (Subject 2). Both data sets were used offline for emotion classification (training and testing of the model). Subject 2 was later tested for online classification of emotion.
4.2 Equipment used
Brain activity was acquired noninvasively using a 64-channel wireless active EEG system sampled at 1000 Hz (BrainAmp DC with actiCAP, Brain Products GmbH). Electrodes were labelled in accordance with the international 10–20 system, using FCz as reference and AFz as ground (figure 5). Two electrodes served as vertical EOG channels (VEOG R-L) and two as horizontal EOG channels (HEOG R-L), useful for eye-blink artifact removal. Six inertial measurement units (IMUs; OPAL, APDM Inc.), sampled at 128 Hz, recorded the kinematics of the head, left wrist, right wrist, torso, left ankle and right ankle during the performance. Each sensor contains a triaxial magnetometer, gyroscope and accelerometer. Three high-definition video recording devices (Nest Labs) captured the dance performance in real time.
4.3 Experimental Protocol
There were two trial sessions. Scalp EEG and whole-body kinematics were recorded during a 2-minute performance each for subject 1 and subject 2. Four musical pieces expressing four different emotions were selected; each is described in table 2.
Table 2: List of music associated with each emotion during the dancer’s performance
S.No.  Emotion           Music
1.     Neutral Emotion   Clogs by Kapsburger
2.     Anger Emotion     The Octopus Project by Porno Disaster
3.     Fear Emotion      Istvan Masta – Doom. A Sigh, performed by Kronos Quartet
4.     Happy Emotion     The Four Seasons recomposed by Max Richter – Vivaldi – Spring 0,1 in Malinec Slovakia
Each musical piece was cut to a 2-minute segment for data collection. The pieces were selected by the dancer and choreographed by her to represent each specific emotion. For subject 1 (figure 6), the order of the music played in trial session 1 was Anger, Neutral, Happy, Fear; in trial session 2 it was Neutral, Fear, Anger, Happy. For subject 2 (figure 7), the order was changed to avoid any bias due to the order in which the emotional music was played: trial 1 used Happy, Fear, Neutral, Anger and trial 2 used Fear, Happy, Anger, Neutral. The dancer was given a 1-minute break between dance sections so that the emotion of one section would not carry over to the next. Baseline data (eyes closed and eyes open, 1 minute each) was collected at the beginning and the end of the experiment.
Figure 6: Data collection: Dance protocol for subject 1, Trial 1 and Trial 2
Figure 7: Data collection: Dance protocol for subject 2 Trial 1 and Trial 2
The time-frequency maps of the four musical pieces (each 2 minutes in duration) are shown in figure 8; the corresponding list of musical extracts is given in table 2.
Figure 8: Amplitude and Time frequency map of the musical pieces used during the dance performance [(A) – Neutral, (B) – Fear, (C) – Happy, (D) – Anger].
Chapter 5
Data Processing and Analysis
Figure 9: EEG signal Data Processing
5.1 Data Preprocessing
The EEG data was first band-pass filtered in the range 1–35 Hz using a 2nd-order Butterworth filter to remove power-line noise, higher-frequency EMG (electromyography) artifacts and lower-frequency eye-blink artifacts.
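As a sketch of this preprocessing step (the thesis toolchain is not specified here; SciPy is assumed, and `bandpass_eeg` is a hypothetical helper name):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(data, fs=1000.0, low=1.0, high=35.0, order=2):
    """Zero-phase 2nd-order Butterworth band-pass (1-35 Hz), applied along
    the last axis; `data` has shape (n_channels, n_samples)."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, data, axis=-1)

# Example on synthetic data: 64 channels, 2 s at 1000 Hz.
eeg = np.random.randn(64, 2000)
filtered = bandpass_eeg(eeg)
```

Using `filtfilt` (forward-backward filtering) keeps the pass band phase-neutral, which matters when band powers are later aligned to 1-second windows.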
5.2 Data Analysis
After preprocessing, the filtered EEG data for the different emotions was short-time Fourier transformed (STFT) using a Hamming window of 1-second length and 50% overlap to extract power spectral values in four bands: delta (1–3 Hz), theta (4–7 Hz), alpha (8–13 Hz) and beta (14–30 Hz). The eyes-closed EEG recorded at the start served as baseline. The baseline data was Fourier transformed and subtracted from the STFT power spectral values of the emotion data for the asymmetrical electrodes (table 3) before extraction of the four frequency bands. The scalp map of the ASM12 electrode montage is shown in figure 10, and all steps of the EEG signal processing, from data acquisition to feature extraction and light control, are shown in figure 9.
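The STFT band-power extraction described above can be sketched as follows (a simplified illustration assuming SciPy; the baseline-subtraction step is noted in a comment rather than implemented, and `band_powers` is a hypothetical name):

```python
import numpy as np
from scipy.signal import stft

# Sub-band definitions as stated in the text (Hz).
BANDS = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 13), "beta": (14, 30)}

def band_powers(signal, fs=1000.0):
    """STFT with a 1-second Hamming window and 50% overlap, returning the
    mean spectral power per band for every time window.
    Baseline correction (subtracting the eyes-closed spectrum, as described
    above) would be applied to `power` before the band averaging."""
    f, t, Z = stft(signal, fs=fs, window="hamming",
                   nperseg=int(fs), noverlap=int(fs) // 2)
    power = np.abs(Z) ** 2
    return {name: power[(f >= lo) & (f <= hi)].mean(axis=0)
            for name, (lo, hi) in BANDS.items()}
```

With `nperseg` equal to the sampling rate, each STFT frequency bin is 1 Hz wide, so the band masks line up directly with the band edges above.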
The list of the 12 asymmetrical electrodes (ASM12) is given in table 3 below.
Table 3: ASM12 electrode distribution list
S.No.  ASM12 (channel name)  ASM12 (channel number)
1.     FP1 – FP2             1 – 2
2.     F3 – F4               3 – 7
3.     FC3 – FC4             4 – 6
4.     C3 – C4               42 – 45
5.     CP3 – CP4             43 – 44
6.     P3 – P4               12 – 16
7.     O1 – O2               23 – 27
8.     F7 – F8               13 – 15
9.     FT7 – FT8             51 – 55
10.    T7 – T8               52 – 54
11.    TP7 – TP8             24 – 26
12.    P7 – P8               29 – 31
The scalp map of the electrode location used in ASM12 is shown in figure 10 below.
Figure 10: Scalp map of the ASM12 electrodes. [4]
5.3 Feature Matrix
The feature matrix consisted of the differential asymmetric power spectral values of the 12 ASM12 electrode pairs in each of the four frequency bands (delta, theta, alpha and beta), giving a total of 48 features. Each 2-minute recording (120 seconds, 120,000 sample points) with a 1-second window and 50% overlap yields 239 time windows, so the two trials together provided a 478 × 48 block of data samples for each of the four emotions, which was fed into the multilayer neural network for training. The corresponding classes were assigned based on the experimental protocol.
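The assembly of the 48-dimensional differential asymmetry feature matrix might look like the following sketch (the 0-based channel indices and the `asymmetry_features` helper are hypothetical placeholders for the pairs in table 3):

```python
import numpy as np

# Hypothetical 0-based (left, right) channel indices for the 12 ASM12 pairs
# of Table 3; the real cap layout should be substituted here.
ASM12_PAIRS = [(0, 1), (2, 6), (3, 5), (41, 44), (42, 43), (11, 15),
               (22, 26), (12, 14), (50, 54), (51, 53), (23, 25), (28, 30)]
N_BANDS = 4  # delta, theta, alpha, beta

def asymmetry_features(band_power):
    """band_power: (n_bands, n_channels, n_windows) array of baseline-corrected
    band powers. Returns an (n_windows, 48) matrix of left-minus-right
    differential power for each pair in each band."""
    feats = [band_power[b, l] - band_power[b, r]
             for b in range(N_BANDS) for (l, r) in ASM12_PAIRS]
    return np.stack(feats, axis=-1)
```

For the 239 windows per trial described above, this produces a 239 × 48 matrix per trial, matching the feature-matrix dimensions stated in the text.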
5.4 Feature matrix classification
A multilayer neural network was used to classify the four emotions: Happy, Anger, Fear and Neutral. The network had one hidden layer with 32 nodes. Once the model was built, it was used offline to test subject 2 on trial 3. Subject 2 was also tested in real time during the dance performance to control the stage lights.
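A comparable classifier can be sketched with scikit-learn (an assumption; the thesis does not name its neural network implementation here), using one hidden layer of 32 nodes and synthetic stand-in data of the dimensions described:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the real feature matrix:
# 478 windows (2 trials x 239) with 48 asymmetry features each.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(478, 48))
y_train = rng.integers(0, 4, size=478)  # 0=Neutral, 1=Anger, 2=Fear, 3=Happy

# One hidden layer with 32 nodes, mirroring the architecture in the text.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(rng.normal(size=(10, 48)))
```

In practice the trained model would be applied to each incoming 1-second feature vector during the performance.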
5.5 Mapping of classified data to stage lights
The class labels obtained from the classifier were communicated to the DMX stage lights via the Processing software. The hue and saturation values were used to control the lighting, following the generalized color wheel scheme shown in figure 11 below.
Figure 11: Color wheel scheme used for lighting. [35]
Table 4: Table showing the color mapping scheme used
S.No.  Emotion           Color
1.     Neutral Emotion   White
2.     Anger Emotion     Red
3.     Fear Emotion      Blue
4.     Happy Emotion     Yellow
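The emotion-to-color mapping of tables 1 and 4 can be expressed as hue angles on the HSV wheel; the sketch below is illustrative (the hue values and the `emotion_to_rgb` helper are assumptions, and the actual DMX output went through the Processing software):

```python
import colorsys

# Hypothetical hue angles (degrees on the HSV color wheel of figure 11).
EMOTION_HUE = {"Anger": 0, "Happy": 60, "Fear": 240}  # red, yellow, blue

def emotion_to_rgb(emotion):
    """Map a predicted emotion label to an 8-bit RGB triple for the lights.
    Neutral is rendered as white (zero saturation)."""
    if emotion == "Neutral":
        return (255, 255, 255)
    h = EMOTION_HUE[emotion] / 360.0
    r, g, b = colorsys.hsv_to_rgb(h, 1.0, 1.0)
    return (int(r * 255), int(g * 255), int(b * 255))
```

Keeping the mapping in hue space makes it easy to add intermediate colors later, e.g. to blend between emotions with uncertain classifier output.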
Chapter 6
Results and Discussion
6.1 Power spectral density of the four emotions
The power spectral densities (dB) of the four emotions and of the baseline eyes-closed data were obtained for subjects 1 and 2 after band-pass filtering (2nd-order Butterworth, 1–35 Hz cut-off) for the 24 electrodes forming the ASM12 pairs (figure 10). The plots for all 24 electrodes are shown in figures 12 to 17; the corresponding electrode locations are marked in red on the scalp map above each power spectrum. The sub-bands mainly considered were the alpha band (8–12 Hz), beta band (12–30 Hz), delta band (2–4 Hz) and theta band (4–8 Hz). Figures 18 and 19 show the classification results for the feature matrices obtained for the four emotions (classes). Details of the features shown in figures 18 and 19 are given in table 5, and their locations are shown on the scalp map in figure 20.
Subject 1 -
Subject 2 -
Figure 12: Power Spectrum for electrode channel number 1, 2, 3, 4 (Shown in red on scalp map) for subjects 1 and 2.
Subject 1 -
Subject 2 -
Figure 13: Power Spectrum for electrode channel number 6, 7, 12, 13 (shown in red on scalp map) for subjects 1 and 2.
Subject 1 -
Subject 2 –
Figure 14: Power Spectrum for electrode channel number 15, 16, 23, 24 (shown in red on scalp map) for subjects 1 and 2.
Subject 1 -
Subject 2 -
Figure 15: Power Spectrum for electrode channel number 26, 27, 29, 31 (shown in red on scalp map) for subjects 1 and 2.
Subject 1 -
Subject 2 –
Figure 16: Power Spectrum for electrode channel number 42, 43, 44, 45 (shown in red on scalp map) for subjects 1 and 2.
Subject 1 -
Subject 2 -
Figure 17: Power Spectrum for electrode channel number 51, 52, 54, 55 (shown in red on scalp map) for subjects 1 and 2
6.2 Classification results
Subject 1
A) B)
C) D)
E) F)
G)
Figure 18: ASM12 classification results for subject 1. A) Feature 1 vs. Feature 38. B) Feature 1 vs. Feature 42. C) Feature 6 vs. Feature 45. D) Feature 13 vs. Feature 14. E) Feature 24 vs. Feature 47. F) Feature 37 vs. Feature 38. G) Feature 45 vs. Feature 39.
Subject 2
A) B)
C) D)
E) F)
G)
Figure 19: ASM12 classification results for subject 2. A) Feature 1 vs. Feature 38. B) Feature 1 vs. Feature 42. C) Feature 6 vs. Feature 45. D) Feature 13 vs. Feature 14. E) Feature 24 vs. Feature 47. F) Feature 37 vs. Feature 38. G) Feature 45 vs. Feature 39.
Feature list
Table 5: List of features, ASM12 electrode pair, sub-band frequency and electrode location for the classification results shown in figures 18 and 19.
Feature Number  Electrode Pair  Sub-band Frequency       Electrode Location
Feature 1       Fp1 – Fp2       Alpha Wave (8 – 12 Hz)   Frontal Lobe
Feature 6 T7 – T8 Alpha Wave (8 – 12 Hz) Temporal Lobe
Feature 13 Fp1 – Fp2 Beta Wave (12 – 30 Hz) Frontal Lobe
Feature 14 F7 – F8 Beta Wave (12 – 30 Hz) Frontal Lobe
Feature 24 O1 – O2 Beta Wave (12 – 30 Hz) Occipital Lobe
Feature 37 Fp1 – Fp2 Theta Wave (4 – 8 Hz) Frontal Lobe
Feature 38 F7 – F8 Theta Wave (4 – 8 Hz) Frontal Lobe
Feature 39 F3 – F4 Theta Wave (4 – 8 Hz) Frontal Lobe
Feature 42 T7 – T8 Theta Wave (4 – 8 Hz) Temporal Lobe
Feature 45 Tp7 – Tp8 Theta Wave (4 – 8 Hz) Temporal Lobe
Figure 20: Scalp map showing the feature regions listed in table 5 (in red).
6.3 Confusion Matrix
Figure 21: Confusion matrix for subject 1 using MNN classifier with 32 hidden layer nodes.
Figure 22: Confusion matrix for subject 2 using MNN classifier with 32 hidden layer nodes.
6.4 Discussion
The power spectral densities in figures 12 to 17 show the distribution of power for the four emotions. These values were baseline-subtracted and then differenced between paired electrodes according to the ASM12 system in table 3, giving 12 features per frequency band and a total of 12 × 4 = 48 features as the input matrix to the classifier. The features that best distinguish the four classes (the four emotions) are shown in figures 18 and 19; more detailed information is given in table 5, and the electrode locations are shown on the scalp map in figure 20. After training the input matrices with the multilayer neural network (48 features, with 1948 instances for subject 1 and 1914 instances for subject 2), the confusion matrices in figures 21 and 22 were obtained. For both subjects, trial 1 was used to train the model and trial 2 to test it. An accuracy of 72.1% was obtained for subject 1 and 75.7% for subject 2.
Online Testing
The model was tested online on subject 2, and the online performance was quantified by how well the lights responded to the changing emotions of the dancer. Initial baseline data was collected before each test. Figure 23 shows the online testing during the dance performance. The musical pieces were played in random order and the resulting lighting was observed; table 6 below summarizes the performance of the lights:
Table 6: List of music producing a definite emotion in the dancer and the corresponding light color.
True Class Predicted Class True Color Real Time Color
Happy Happy Yellow Yellow
Fear Happy Blue Yellow
Fear Anger Blue Red
Anger Anger Red Red
Fear Happy Blue Yellow
Neutral Anger White Red
Figure 23: Dance performance during online testing
Chapter 7
Future Work and Conclusion
7.1 Limitations and Future work
During real-time control of the lights, the model trained for subject 2 correctly predicted mainly two emotions, Anger and Happy, corresponding to red and yellow lights respectively; the other emotions were almost always classified as one of these two. One likely reason is the location of the emotion feature clusters and the distance between clusters for the same emotion across the two offline trials. In addition, the entire 48-feature matrix was fed into the multilayer classifier; testing the offline model using only the 10 major features might improve classification accuracy. In this experiment the model was trained using a neural network; other commonly used classifiers such as a Gaussian mixture model might increase the accuracy of offline emotion classification, but were not preferred because they might not be suitable for online applications. The high degree of subject motion during the dance performance causes motion artifacts that may affect the band powers and reduce the accuracy of the light control; real-time artifact removal techniques for EEG data might help solve this issue. In this research I used the ASM12 electrode montage for feature extraction; other suitable techniques mentioned in the background might also improve the results. Finally, adding facial expression recognition to the ASM12 features might further improve the accuracy of stage light control during dance performance.
7.2 Conclusion
The primary focus of this research was to devise a neural-network-based method for feature extraction, model building and classification of four emotions: 'Anger', 'Happy', 'Fear' and 'Neutral'. A multilayer neural network with one hidden layer of 32 nodes, operating on 48 features, gave the best classification results.
The presence of motion artifacts may have contributed to the low accuracy during online testing. Increasing the number of trial sessions may increase the accuracy further, and the use of other classifiers such as a Gaussian mixture model could improve emotion classification during real-time control; both are left as further scope of the project. The offline classification accuracy obtained was 72.1% for subject 1 and 75.7% for subject 2.
References
[1] Yisi Liu, Olga Sourina and Minh Khoa Nguyen. “Real – Time EEG-based Human
Emotion Recognition and Visualization.” Nanyang Technological University, Singapore.
2010 International Conference on Cyberworlds.
[2] Dan Nie, Xiao-Wei Wang, Li-Chen Shi, and Bao-Liang Lu. ‘EEG-based Emotion
Recognition during watching movies.” Proceedings of the 5th International IEEE EMBS
Conference on Neural Engineering Cancun, Mexico, April 27 – May 1, 2011.
[3] Klaus R. Scherer. “What are emotions? And how can they be measured?” Trends and developments: research on emotions. Social Science information. SAGE publications.
2005.
[4] Yuan – Pin Lin, Chi – Hong Wang, Tzyy-Ping Jung, Tien-Lin Wu, Shyh-Kang Jeng,
Jeng-Ren Duann and Jyh-Horng Chen. “EEG-Based Emotion Recognition in Music
Listening.” IEEE Transactions on Biomedical Engineering, Vol. 57, No. 7, July 2010.
[5] F. Dellaert, T. Polzin, and A. Waibel. “Recognizing emotion in speech.” In ICSLP 96.
Proceedings, volume 3, pages 1970–1973, October 1996.
[6] R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S. Kollias, W. Fellenz, and J.G.
Taylor. “Emotion recognition in human-computer interaction.” Signal Processing
Magazine, IEEE, 18:32–80, 2001.
[7] M.W. Bhatti, Y. Wang, and L. Guan. “A neural network approach for human emotion recognition in speech.” In ISCAS ’04. Proceedings of the 2004 International Symposium on Circuits and Systems, volume 2, pages II– 181–4, 2004.
[8] M Pantic and L.J.M. Rothkrantz. “Automatic analysis of facial expressions: The state of the art.” In IEEE Transactions on Pattern Analysis and Machine Intelligence, volume
22, pages 1424–1445, December 2000.
[9] B. Fasel and J. Luettin. “Automatic facial expression analysis: A survey.” Pattern recognition, 36:148–275, 2003.
[10] C. Busso, Z. Deng, S. Yildirim, M. Bulut, C.M. Lee, A. Kazemzadeh, S. Lee, U.
Neumann, and S. Narayanan. “Analysis of emotion recognition using facial expressions, speech and multimodal information.” In ICMI ’04: Proceedings of the 6th international conference on Multimodal interfaces, pages 205–211, New York, NY, USA, 2004. ACM
Press.
[11] Danny Oude Bos. “EEG-based Emotion Recognition: the influence of visual and auditory stimuli.” Department of Computer Science, University of Twente, The Netherlands.
[12] P. He, G. Wilson, and C. Russell. “Removal of ocular artifacts from electroencephalogram by adaptive filtering.” Medical and Biological Engineering and
Computing, 42:407–412, 2004.
[13] I. Guyon and A. Elisseeff. “An introduction to variable and feature selection.” Journal of Machine Learning Research, 3:1157–1182, 2003.
[14] P. Ekman. “Basic emotions. Handbook of Cognition and Emotion.”, pages 45–60,
1999.
[15] Robert Plutchik. “The Nature of Emotions.” American Scientist, Volume 89. July –
August 2001.
[16] P. J. Lang. “The Emotion Probe: Studies of Motivation and Attention.” American
Psychologist, 50(5):372–385, 1995.
[17] M.B. Kostyunina and M.A. Kulikov. “Frequency characteristics of EEG spectra in the emotions.” Neuroscience and Behavioral Physiology, 26(4):340–343, 1996.
[18] P.R. Kleinginna Jr. and A.M. Kleinginna. “A categorized list of emotion definitions, with suggestions for a consensual definition.” Motivation and Emotion, 5:345–379, 1981.
[19] Robert Holdings. “Emotion recognition using brain activity.” Man–Machine Interaction Group, TU Delft, 27 March 2008.
[20] MacLean, P.D. (1952). “Psychiatric implications of physiological studies on frontotemporal portion of limbic system (visceral brain).” Electroencephalography and Clinical Neurophysiology, Supplement 4(4): 407–18. doi:10.1016/0013-4694(52)90073-4. PMID 12998590.
[21] Broca, P. (1878). "Anatomie comparée des circonvolutions cérébrales: le grand lobe limbique." Rev. Anthropol 1: 385–498.
[22] Papez, J.W. (1937). "A proposed mechanism of emotion." J Neuropsychiatry Clin
Neurosci 7 (1): 103–12. PMID 7711480.
[23] James, William (1884). “What is Emotion.” Mind 9: 188-205. Doi: 10.1093/mind/os-
IX 34.188. Cited by Tao and Tan.
[24] R.W. Picard. “Affective Computing.” M.I.T Media Laboratory Perceptual Computing
Section Technical Report No. 321. Revised November 26, 1995.
[25] “Issues and assumptions on the road from raw signals to metrics of frontal EEG asymmetry in emotion” by John J.B. Allen.
[26] Wheeler, R.E., Davidson, R.J. and Tomarken, A.J. (1993) “Frontal brain asymmetry and emotional reactivity: A biological substrate of affective style.” Psychophysiology, 30,
82-89.
[27] James A. Coan, John J.B. allen. “Frontal EEG asymmetry as a moderator and mediator of emotion.” Biological Psychology 67 (2004) 7 – 49.
[28] K. Takahashi. “Remarks on emotion recognition from bio-potential signals.” 2nd
International Conference on Autonomous Robots and Agents, pages 186 – 191, 2004.
[29] J. A. Russell, “Affective space is bipolar,” Journal of Personality and Social
Psychology, vol. 37, 1979, pp. 345 – 356.
[30] J. A. Russell, “Affective space is bipolar,” Journal of Personality and Social
Psychology, vol. 37, 1979, pp. 345 – 356.
[31] V. Kulish, A. Sourin, and O. Sourina, “Human electroencephalograms seen as fractal time series: Mathematical analysis and visualization,” Computers in Biology and
Medicine, vol. 36, 2006, pp. 291-302.
[32] Yisi Liu, Olga Sourina and Minh Khoa Nguyen, “Real – time EEG – based Human
Emotion Recognition and Visualization,” Nanyang Technological University, Singapore.
[33] D. Nie, X. W. Wang, L. C. Shi and B. L. Lu, "EEG-based emotion recognition during watching movies," Neural Engineering (NER), 2011 5th International IEEE/EMBS
Conference on, Cancun, 2011, pp. 667-670.
[34] Y. P. Lin, C. H. Wang, T. L. Wu, S. K. Jeng and J. H. Chen, "Support vector machine for EEG signal classification during listening to emotional music," Multimedia Signal
Processing, 2008 IEEE 10th Workshop on, Cairns, Qld, 2008, pp. 127-130.
[35] https://en.wikipedia.org/wiki/HSL_and_HSV
[36] Vidal, JJ (1973). “Toward direct brain-computer communication”. Annual Review of
Biophysics and Bioengineering 2 (1): 157 – 80.
Doi:10.1146/annurev.bb.02.060173.001105. PMID 4583653.
[37] J. Vidal (1977). “Real-Time Detection of Brain Events in EEG.” Proceedings of the IEEE 65(5): 633–641. doi:10.1109/PROC.1977.10542.
[38] Allison BZ, Brunner C, Altstatter C, Wagner IC, Grissmann S, Neuper C. “A hybrid ERD/SSVEP BCI for continuous simultaneous two dimensional cursor control.” Journal of Neuroscience Methods, June 22, 2012.
[39] Tobias Kaufmann, Stefan Volker, Laura Gunesch and Andrea Kubler. “Spelling is just a click away – a user – centered brain – computer interface including auto – calibration and predictive text entry.” Front. Neurosci., 23 May 2012.
[40] Christoph Kapeller, Kyousuke Kamada, Hiroshi Ogawa, Robert Prueckl, Josef
Scharinger and Christoph. “An electrocorticographic BCI using code-based VEP for control in video applications: a single-subject study”. Frontiers in Systems Neuroscience.
August 7, 2014. Doi: 10.3389/fnsys.2014.00139.
[41] Murugappan Murugappan, Nagarajan Ramachandran, Yaacov Sazali. “Classification of Human Emotion from EEG using discrete wavelet transform.” Journal of Biomedical Science and Engineering, 2010, 3, 390–396.
[42] Danny Oude Bos. “EEG – based Emotion Recognition – The Influence of Visual and
Auditory Stimuli.” Department of Computer Science, University of Twente, The
Netherlands.
[43] Dan Nie, Xiao – Wei Wang, Li – Chen Shi, and Bao – Liang Lu. “EEG-based
Emotion Recognition during Watching Movies.” Proceedings of the 5th international
IEEE EMBS Conference on Neural Engineering, Cancun, Mexico, April 27 – May 1, 2011.
[44] Antonio Camurri, Ingrid Lagerlof, Gualtiero Volpe. “Recognizing emotion from dance movement: comparison of spectator recognition and automated techniques.”
International Journal of Human – Computer Studies 59 (2003) 213 – 225.
[45] Ellsworth, P.C., & Scherer, K. R. (2003). Appraisal processes in emotion. In R. J.
Davidson, H. Goldsmith, & K.R. Scherer (Eds.), Handbook of Affective Sciences. New
York and Oxford: Oxford University Press.
[46] Joseph LeDoux. “The Emotional Brain, Fear, and the Amygdala.” Cellular and
Molecular Neurobiology. Vol. 23, Nos. 4/5, October 2003.
[47] Antonio R. Damasio. “Descartes’ Error: Emotion, Reason, and the Human Brain.”
Avon Books, New York, USA (1994).
[48] Davidson RJ. “Anxiety and affective style: role of prefrontal cortex and amygdala.”
Biol Psychiatry. 2002 Jan 1, 51 (1) 68 – 80.
[49] Katz LF, Autor DH. “Changes in the Wage Structure and Earnings Inequality.” In: Ashenfelter O, Card D (eds.), Handbook of Labor Economics, vol. 3A; 1999. pp. 1463–1555.