Multimodal Emotion Recognition Using Deep Learning Architectures


Hiranmayi Ranganathan, Shayok Chakraborty and Sethuraman Panchanathan
Center for Cognitive Ubiquitous Computing (CUbiC), Arizona State University
{hiranmayi.ranganathan, shayok.chakraborty, [email protected]}

Abstract

Emotion analysis and recognition has become an interesting topic of research among the computer vision research community. In this paper, we first present the emoFBVP database of multimodal (face, body gesture, voice and physiological signals) recordings of actors enacting various expressions of emotions. The database consists of audio and video sequences of actors displaying three different intensities of expressions of 23 different emotions, along with facial feature tracking, skeletal tracking and the corresponding physiological data. Next, we describe four deep belief network (DBN) models and show that these models generate robust multimodal features for emotion classification in an unsupervised manner. Our experimental results show that the DBN models perform better than state-of-the-art methods for emotion recognition. Finally, we propose convolutional deep belief network (CDBN) models that learn salient multimodal features of expressions of emotions. Our CDBN models give better recognition accuracies when recognizing low-intensity or subtle expressions of emotions compared to state-of-the-art methods.

1. Introduction

In recent years, there has been a growing interest in the development of technology to recognize an individual's emotional state. There is also an increase in the use of multimodal data (facial expressions, body expressions, vocal expressions and physiological signals) to build such technologies. Each of these modalities has very distinct statistical properties, and fusing these modalities helps us learn useful representations of the data. Emotion recognition is a process that uses low-level signal cues to predict high-level emotion labels. The literature describes various techniques for generating robust multimodal features for emotion recognition tasks [1]-[4]. The high dimensionality of the data and the non-linear interactions across the modalities, along with the fact that the way an emotion is expressed varies across people, complicate the process of generating emotion-specific features [5],[6]. Deep architectures and learning techniques have been shown to overcome these limitations by capturing complex non-linear feature interactions in multimodal data [7].

In this paper, as our first contribution, we present the emoFBVP database of multimodal recordings (facial expressions, body gestures, vocal expressions and physiological signals) of actors enacting various expressions of emotions. The database consists of audio and video sequences of actors enacting 23 different emotions in three varying intensities of expression, along with facial feature tracking, skeletal tracking and the corresponding physiological data. This is one of the first emotion databases with recordings of varying intensities of expressions of emotions in multiple modalities recorded simultaneously. We strongly believe that the affective computing community will greatly benefit from the large collection of modalities recorded. Our second contribution investigates the use of deep learning architectures, DBNs and CDBNs, for multimodal emotion recognition. We describe four deep belief network (DBN) models and show that they generate robust multimodal features for emotion classification in an unsupervised manner. This is done to validate the use of our emoFBVP database for multimodal emotion recognition studies. The DBN models used are extensions of the models proposed by [7] for audio-visual emotion classification. Finally, we propose convolutional deep belief network (CDBN) models that learn salient multimodal features of low-intensity expressions of emotions.
2. Related Work

Previous research has shown that deep architectures effectively generate robust features by exploiting the complex non-linear interactions in the data [8]. Deep architectures and learning techniques are very popular in the speech and language processing community [9]-[11]. Ngiam et al. [12] report impressive results on audio-visual speech classification, using sparse Restricted Boltzmann Machines (RBMs) for cross-modal learning, shared representation learning and multimodal fusion on the CUAVE and AVLetters datasets. Srivastava et al. [13] applied multimodal deep belief networks to learn joint representations that outperformed SVMs; they used multimodal deep Boltzmann machines to learn a generative model of images and text for image retrieval tasks. Kahou et al. used an ensemble of deep learning models to perform emotion recognition from video clips [14]; this was the winning submission to the Emotion Recognition in the Wild Challenge [15]. Deep learning has also been applied in many visual recognition studies [16]-[20]. Our research is motivated by these recent approaches in multimodal deep learning. In this paper, we focus on applying deep architectures for multimodal emotion recognition using face, body, voice and physiological signal modalities. We apply extensions of known DBN models for multimodal emotion recognition using the emoFBVP database and investigate recognition accuracies to validate the utility of the database for emotion recognition tasks. To the best of our knowledge, the use of DBNs for multimodal emotion recognition on data comprising all the above-mentioned modalities (facial expressions, body gestures, vocal expressions and physiological signals) has not been explored by the affective research community.

Recent developments in deep learning techniques exploit the use of single-layer building blocks called Restricted Boltzmann Machines (RBMs) [21] to build DBNs in an unsupervised manner. DBNs are constructed by greedy layer-wise training of stacked RBMs to learn hierarchical representations from the multimodal data [22]. RBMs are undirected graphical models that use binary latent variables to represent the input. Like [7], we also use Gaussian RBMs for training the first layer of the network; the visible units of the first layer are real-valued. The deeper layers are trained using Bernoulli-Bernoulli RBMs, whose visible and hidden units are both binary-valued. The joint probability distribution for a Gaussian RBM with visible units $\mathbf{v}$ and hidden units $\mathbf{h}$ is given as follows:

$$P(\mathbf{v}, \mathbf{h}) = \frac{1}{Z} \exp\bigl(-E(\mathbf{v}, \mathbf{h})\bigr) \tag{1}$$

The corresponding energy function, with $\mathbf{q} \in \mathbb{R}^{D}$ and $\mathbf{r} \in \mathbb{R}^{K}$ as the biases of the visible and hidden units and $\mathbf{W} \in \mathbb{R}^{D \times K}$ as the weights between visible and hidden units, is given as:

$$E(\mathbf{v}, \mathbf{h}) = \frac{1}{2\sigma^2} \sum_i v_i^2 - \frac{1}{\sigma^2} \Bigl( \sum_i q_i v_i + \sum_j r_j h_j + \sum_{i,j} v_i W_{i,j} h_j \Bigr) \tag{2}$$

Here $\sigma$ is a hyperparameter and $Z$ is a normalization constant. These parameters are learned using a technique called contrastive divergence, explained in [23]. The conditional probability distributions of the Gaussian RBM are as follows:

$$P(h_j = 1 \mid \mathbf{v}) = \operatorname{sigmoid}\Bigl( \frac{1}{\sigma^2} \Bigl( \sum_i W_{i,j} v_i + r_j \Bigr) \Bigr) \tag{3}$$

$$P(v_i \mid \mathbf{h}) = \mathcal{N}\Bigl( v_i;\; \sum_j W_{i,j} h_j + q_i,\; \sigma^2 \Bigr) \tag{4}$$

We include a regularization penalty as in [16], given as:

$$\lambda \sum_{j=1}^{k} \Bigl( p - \frac{1}{m} \sum_{l=1}^{m} \mathbb{E}\bigl[ h_j^{(l)} \mid \mathbf{v}^{(l)} \bigr] \Bigr)^{2} \tag{5}$$

Here, $\mathbb{E}[\cdot]$ is the conditional expectation given the data, $\lambda$ is a regularization parameter, and $p$ is a constant that specifies the target activation of the hidden units.
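To make equations (1)-(5) concrete, here is a minimal NumPy sketch of a Gaussian-Bernoulli RBM trained with one step of contrastive divergence and the sparsity penalty of equation (5). It illustrates the formulas above rather than reproducing the authors' implementation; the layer sizes, learning rate, sigma, lambda and target activation p are hypothetical values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussianRBM:
    """Gaussian-Bernoulli RBM: real-valued visible units, binary hidden units.

    Implements the conditionals of Eqs. (3)-(4) and a CD-1 update with the
    sparsity penalty of Eq. (5). The 1/sigma^2 factors in the gradients are
    folded into the learning rate for brevity.
    """

    def __init__(self, n_visible, n_hidden, sigma=1.0):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))  # W in R^{D x K}
        self.q = np.zeros(n_visible)  # visible biases, q in R^D
        self.r = np.zeros(n_hidden)   # hidden biases, r in R^K
        self.sigma = sigma            # hyperparameter sigma of Eq. (2)

    def hidden_probs(self, v):
        # Eq. (3): P(h_j = 1 | v) = sigmoid((1/sigma^2)(sum_i W_ij v_i + r_j))
        return sigmoid((v @ self.W + self.r) / self.sigma ** 2)

    def sample_visible(self, h):
        # Eq. (4): P(v_i | h) = N(v_i; sum_j W_ij h_j + q_i, sigma^2)
        mean = h @ self.W.T + self.q
        return mean + self.sigma * rng.standard_normal(mean.shape)

    def cd1_update(self, v0, lr=1e-3, lam=0.1, p=0.05):
        """One contrastive-divergence step [23] on a minibatch v0 of shape (m, D)."""
        m = v0.shape[0]
        ph0 = self.hidden_probs(v0)                   # positive phase
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        v1 = self.sample_visible(h0)                  # one Gibbs step
        ph1 = self.hidden_probs(v1)                   # negative phase

        dW = (v0.T @ ph0 - v1.T @ ph1) / m            # CD-1 gradient estimates
        dq = (v0 - v1).mean(axis=0)
        dr = (ph0 - ph1).mean(axis=0)
        # Sparsity penalty of Eq. (5): its gradient w.r.t. r_j (up to a
        # positive factor) nudges the mean hidden activations toward p.
        dr += 2.0 * lam * (p - ph0.mean(axis=0))

        self.W += lr * dW
        self.q += lr * dq
        self.r += lr * dr

# Hypothetical usage: random data standing in for real-valued multimodal features.
rbm = GaussianRBM(n_visible=64, n_hidden=32)
batch = rng.standard_normal((16, 64))
for _ in range(100):
    rbm.cd1_update(batch)
```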
Convolutional deep belief networks (CDBNs) [24] are similar to DBNs and can be trained in a greedy layer-wise fashion. Lee et al. [24] used CDBNs to show good performance on many visual recognition tasks. Convolutional restricted Boltzmann machines (CRBMs) [24]-[26] are the building blocks of CDBNs. In a CRBM [22], the weights between the hidden units and visible units are shared among all locations in the hidden layer. The CRBM consists of two layers: an input (visible) layer V and a hidden layer H. The hidden units are binary-valued, and the visible units are binary-valued or real-valued. Please refer to Lee et al. [24] for the expressions for the energy function and the conditional and joint probabilities. In this paper, we use CRBMs with probabilistic max-pooling as building blocks for convolutional deep belief networks. For training the CRBMs, we use contrastive divergence [23] to effectively approximate the gradient of the log-likelihood term. As in [16], we add a sparsity penalty term as well. After training, we stack the CRBMs to form a CDBN.

The rest of the paper is organized as follows. Section 3 describes the emoFBVP database and its salient properties. The descriptions of the experimental setup for deep learning, feature extraction techniques and baseline models are in Section 4. Section 5 introduces our DemoDBN models and investigates their usage for multimodal emotion recognition in an unsupervised context. Section 6 describes our CDBN models and investigates their usability to recognize subtle or low intensities of expressions of emotions. Finally, we share our conclusions and future work in Section 7.

3. emoFBVP Database

To study human emotional experience and expression in more detail and to develop benchmark methods for automatic emotion recognition, researchers need rich sets of data. We have recorded responses of actors to affective emotion labels using four different modalities: facial expressions, body expressions, vocal expressions and physiological signals. Along with the multimodal recordings, we provide facial feature tracking and skeletal tracking data. The recordings of all the data are rated through an evaluation form completed by the actors immediately after each excerpt of acting emotions. The recordings of this database are synchronized to enable the study of simultaneous emotional responses using all the modalities.

Ten participants (who are professional actors) were involved in data capture, and every participant displayed 23 different emotions.
Recommended publications
  • Neuroticism and Facial Emotion Recognition in Healthy Adults
Early Intervention in Psychiatry 2015; ••: ••–••. doi:10.1111/eip.12212. Brief Report.

Neuroticism and facial emotion recognition in healthy adults

Sanja Andric,1 Nadja P. Maric,1,2 Goran Knezevic,3 Marina Mihaljevic,2 Tijana Mirjanic,4 Eva Velthorst5,6 and Jim van Os7,8

1School of Medicine, University of Belgrade; 2Clinic for Psychiatry, Clinical Centre of Serbia; 3Faculty of Philosophy, Department of Psychology, University of Belgrade, Belgrade; 4Special Hospital for Psychiatric Disorders Kovin, Kovin, Serbia; 5Department of Psychiatry, Academic Medical Centre University of Amsterdam, Amsterdam; 6Departments of Psychiatry and Preventive Medicine, Icahn School of Medicine, New York, USA; 7South Limburg Mental Health Research and Teaching Network, EURON, Maastricht

Abstract

Aim: The aim of the present study was to examine whether healthy individuals with higher levels of neuroticism, a robust independent predictor of psychopathology, exhibit altered facial emotion recognition performance.

Methods: Facial emotion recognition accuracy was investigated in 104 healthy adults using the Degraded

Results: A significant negative correlation between the degree of neuroticism and the percentage of correct answers on DFAR was found only for happy facial expression (significant after applying Bonferroni correction).

Conclusions: Altered sensitivity to the emotional context represents a useful and easy way to obtain a cognitive phenotype that correlates strongly with
  • How Deep Neural Networks Can Improve Emotion Recognition on Video Data
HOW DEEP NEURAL NETWORKS CAN IMPROVE EMOTION RECOGNITION ON VIDEO DATA

Pooya Khorrami1, Tom Le Paine1, Kevin Brady2, Charlie Dagli2, Thomas S. Huang1
1 Beckman Institute, University of Illinois at Urbana-Champaign
2 MIT Lincoln Laboratory
1 {pkhorra2, paine1, [email protected]}  2 {kbrady, [email protected]}

ABSTRACT

There have been many impressive results obtained using deep learning for emotion recognition tasks in the last few years. In this work, we present a system that performs emotion recognition on video data using both convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We present our findings on videos from the Audio/Visual+Emotion Challenge (AV+EC2015). In our experiments, we analyze the effects of several hyperparameters on overall performance while also achieving superior performance to the baseline and other competing methods.

Index Terms — Emotion Recognition, Convolutional Neural Networks, Recurrent Neural Networks, Deep Learning, Video Processing

An alternative way to model the space of possible emotions is to use a dimensional approach [11], where a person's emotions can be described using a low-dimensional signal (typically 2 or 3 dimensions). The most common dimensions are (i) arousal and (ii) valence. Arousal measures how engaged or apathetic a subject appears, while valence measures how positive or negative a subject appears.

Dimensional approaches have two advantages over categorical approaches. The first is that dimensional approaches can describe a larger set of emotions. Specifically, the arousal and valence scores define a two-dimensional plane, while the six basic emotions are represented as points in said plane. The second advantage is that dimensional approaches can output time-continuous labels, which allows for more realistic modeling of emotion over time.
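The arousal-valence plane described above lends itself to a compact illustration. The sketch below places the six basic emotions at hypothetical (valence, arousal) coordinates (not values taken from the paper) and labels a time-continuous trajectory by nearest neighbour, which is one simple way to relate the dimensional and categorical views.

```python
import numpy as np

# Hypothetical (valence, arousal) coordinates in [-1, 1]^2 for the six basic
# emotions; dimensional models treat them as points in this plane.
BASIC_EMOTIONS = {
    "happiness": ( 0.8,  0.5),
    "surprise":  ( 0.4,  0.8),
    "anger":     (-0.6,  0.7),
    "fear":      (-0.7,  0.6),
    "disgust":   (-0.7,  0.2),
    "sadness":   (-0.7, -0.4),
}

def nearest_emotion(valence: float, arousal: float) -> str:
    """Map a point in the valence-arousal plane to the closest basic emotion."""
    point = np.array([valence, arousal])
    return min(BASIC_EMOTIONS,
               key=lambda e: np.linalg.norm(point - np.array(BASIC_EMOTIONS[e])))

# A time-continuous (valence, arousal) trajectory can be labeled frame by frame.
trajectory = [(0.7, 0.4), (0.1, 0.9), (-0.5, 0.6)]
print([nearest_emotion(v, a) for v, a in trajectory])
```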
  • Emotion Recognition Depends on Subjective Emotional Experience and Not on Facial
Emotion recognition depends on subjective emotional experience and not on facial expressivity: evidence from traumatic brain injury

Travis Wearne1, Katherine Osborne-Crowley1, Hannah Rosenberg1, Marie Dethier2 & Skye McDonald1
1 School of Psychology, University of New South Wales, Sydney, NSW, Australia
2 Department of Psychology: Cognition and Behavior, University of Liege, Liege, Belgium
Correspondence: Dr Travis A Wearne, School of Psychology, University of New South Wales, NSW, Australia, 2052. Email: [email protected]. Number: (+612) 9385 3310

Abstract

Background: Recognising how others feel is paramount to social situations and commonly disrupted following traumatic brain injury (TBI). This study tested whether problems identifying emotion in others following TBI are related to problems expressing or feeling emotion in oneself, as theoretical models place emotion perception in the context of accurate encoding and/or shared emotional experiences.

Methods: Individuals with TBI (n = 27; 20 males) and controls (n = 28; 16 males) were tested on an emotion recognition task and asked to adopt facial expressions and relay emotional memories according to the presentation of stimuli (words & photos). After each trial, participants were asked to self-report their feelings of happiness, anger and sadness. Judges who were blind to the presentation of stimuli assessed emotional facial expressivity.

Results: Emotional experience was a unique predictor of affect recognition across all emotions, while facial expressivity did not contribute to any of the regression models. Furthermore, difficulties in recognising emotion for individuals with TBI were no longer evident after cognitive ability and experience of emotion were entered into the analyses.

Conclusions: Emotion perceptual difficulties following TBI may stem from an inability to experience affective states and may tie in with alexithymia in clinical conditions.
  • Emotion Detection in Twitter Posts: a Rule-Based Algorithm for Annotated Data Acquisition
2020 International Conference on Computational Science and Computational Intelligence (CSCI)

Emotion detection in Twitter posts: a rule-based algorithm for annotated data acquisition

Maria Krommyda, Anastatios Rigos, Kostas Bouklas and Angelos Amditis — Institute of Communication and Computer Systems, Athens, Greece
[email protected], [email protected], [email protected], [email protected]

Abstract—Social media analysis plays a key role in understanding the public's opinion regarding recent events and decisions, in the design and management of advertising campaigns, and in the planning of next steps and mitigation actions for public relations initiatives. Significant effort has been dedicated recently to the development of data analysis algorithms that will perform, in an automated way, sentiment analysis over publicly available text. Most of the available research work focuses on binarily categorizing text as positive or negative without further investigating the emotions leading to that categorization. The current needs, however, for in-depth analysis of the available content combined with the complexity and multi-

… the person that provided the text, and it is categorized as positive, negative, or neutral. Such analysis is mainly used for movies, products and persons, when measuring their appeal to the public is of interest [4], and requires extended quantities of text collected over a long period of time from different sources to ensure an unbiased and objective output. While this technique is very popular and well established, with many important use cases and applications, there are other equally important use cases where such analysis fails to provide the needed information.
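As a toy illustration of the rule-based annotation the title describes, the sketch below labels posts using hypothetical keyword rules; the paper's actual rule set and preprocessing are not reproduced here.

```python
# Hypothetical keyword rules; a real system would use a curated lexicon.
RULES = {
    "joy": {"happy", "great", "love"},
    "anger": {"furious", "hate", "outraged"},
    "fear": {"afraid", "scared", "worried"},
}

def annotate(post: str) -> str:
    """Label a post with the first emotion whose keywords it contains."""
    tokens = set(post.lower().split())
    for emotion, keywords in RULES.items():
        if tokens & keywords:
            return emotion
    return "neutral"

print(annotate("I love this!".strip("!")))  # joy
```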
  • Facial Emotion Recognition
Issue 1 | 2021

Facial Emotion Recognition

Facial Emotion Recognition (FER) is the technology that analyses facial expressions from both static images and videos in order to reveal information on one's emotional state. The complexity of facial expressions, the potential use of the technology in any context, and the involvement of new technologies such as artificial intelligence raise significant privacy risks.

I. What is Facial Emotion Recognition?

Facial Emotion Recognition is a technology used for analysing sentiments from different sources, such as pictures and videos. It belongs to the family of technologies often referred to as 'affective computing', a multidisciplinary field of research on computers' capabilities to recognise and interpret human emotions and affective states, and it often builds on Artificial Intelligence technologies.

Facial expressions are forms of non-verbal communication, providing hints for human emotions. For decades, decoding such emotion expressions has been a research interest in the field of psychology (Ekman and Friesen 2003; Lang et al.

FER analysis comprises three steps: a) face detection, b) facial expression detection, c) expression classification to an emotional state (Figure 1). Emotion detection is based on the analysis of facial landmark positions (e.g. end of nose, eyebrows). Furthermore, in videos, changes in those positions are also analysed, in order to identify contractions in a group of facial muscles (Ko 2018). Depending on the algorithm, facial expressions can be classified to basic emotions (e.g. anger, disgust, fear, joy, sadness, and surprise) or compound emotions (e.g. happily sad, happily surprised, happily disgusted, sadly fearful, sadly angry, sadly surprised) (Du et al. 2014). In other cases, facial expressions could be linked to physiological or mental state of mind
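The three-step pipeline described above (face detection, expression detection, classification) can be sketched as follows. OpenCV's bundled Haar cascade stands in for stage (a), and `classify_expression` is a placeholder for a trained landmark- or CNN-based model, not a real FER system.

```python
import cv2
import numpy as np

# Stage (a): face detection, here with OpenCV's bundled Haar cascade.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_expression(face_patch: np.ndarray) -> str:
    """Stage (c): expression classification - a stand-in for a trained model."""
    # A real system would extract landmark or CNN features (stage b) and run a
    # trained classifier; a fixed label keeps this sketch self-contained.
    return "joy"

def recognise(image_bgr: np.ndarray) -> list[tuple[tuple[int, int, int, int], str]]:
    """Run the FER stages on one image and return (bounding box, label) pairs."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]  # stage (b) input: the detected face region
        results.append(((x, y, w, h), classify_expression(face)))
    return results
```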
  • Deep-Learning-Based Models for Pain Recognition: a Systematic Review
Applied Sciences — Article

Deep-Learning-Based Models for Pain Recognition: A Systematic Review

Rasha M. Al-Eidan*, Hend Al-Khalifa and AbdulMalik Al-Salman
College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia; [email protected] (H.A.-K.); [email protected] (A.A.-S.)
* Correspondence: [email protected]
Received: 31 July 2020; Accepted: 27 August 2020; Published: 29 August 2020

Abstract: Traditional standards employed for pain assessment have many limitations. One such limitation is reliability linked to inter-observer variability. Therefore, there have been many approaches to automate the task of pain recognition. Recently, deep-learning methods have appeared to solve many challenges, such as feature selection and cases with a small number of data sets. This study provides a systematic review of pain-recognition systems that are based on deep-learning models for the last two years. Furthermore, it presents the major deep-learning methods used in the reviewed papers. Finally, it provides a discussion of the challenges and open issues.

Keywords: pain assessment; pain recognition; deep learning; neural network; review; dataset

1. Introduction

Deep learning is an important branch of machine learning used to solve many applications. It is a powerful way of achieving supervised learning. The history of deep learning has undergone different naming conventions and developments [1]. It was first referred to as cybernetics in the 1940s-1960s. Then, in the 1980s-1990s, it became known as connectionism. Finally, the phrase deep learning began to be utilized in 2006. There are also some names that are based on human knowledge.
  • A Review of Alexithymia and Emotion Perception in Music, Odor, Taste, and Touch
MINI REVIEW published: 30 July 2021. doi: 10.3389/fpsyg.2021.707599

Beyond Face and Voice: A Review of Alexithymia and Emotion Perception in Music, Odor, Taste, and Touch

Thomas Suslow* and Anette Kersting
Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany

Alexithymia is a clinically relevant personality trait characterized by deficits in recognizing and verbalizing one's emotions. It has been shown that alexithymia is related to an impaired perception of external emotional stimuli, but previous research focused on emotion perception from faces and voices. Since sensory modalities represent rather distinct input channels, it is important to know whether alexithymia also affects emotion perception in other modalities and expressive domains. The objective of our review was to summarize and systematically assess the literature on the impact of alexithymia on the perception of emotional (or hedonic) stimuli in music, odor, taste, and touch. Eleven relevant studies were identified. On the basis of the reviewed research, it can be preliminarily concluded that alexithymia might be associated with deficits in the perception of primarily negative but also positive emotions in music, and a reduced perception of aversive taste. The data available on olfaction and touch are inconsistent or ambiguous and do not allow drawing conclusions. Future investigations would benefit from a multimethod assessment of alexithymia and control of negative affect. Multimodal research seems necessary to advance our understanding of emotion perception deficits in alexithymia and to clarify the contribution of modality-specific and supramodal processing impairments.

Edited by: Mathias Weymar, University of Potsdam, Germany. Reviewed by: Khatereh Borhani, Shahid Beheshti University, Iran; Kristen Paula Morie, Yale University, United States; Jan Terock,
  • Emotion Recognition in Conversation: Research Challenges, Datasets, and Recent Advances
Emotion Recognition in Conversation: Research Challenges, Datasets, and Recent Advances

Soujanya Poria1, Navonil Majumder2, Rada Mihalcea3, Eduard Hovy4
1 School of Computer Science and Engineering, NTU, Singapore
2 CIC, Instituto Politécnico Nacional, Mexico
3 Computer Science & Engineering, University of Michigan, USA
4 Language Technologies Institute, Carnegie Mellon University, USA
[email protected], [email protected], [email protected], [email protected]

Abstract—Emotion is intrinsic to humans, and consequently emotion understanding is a key part of human-like artificial intelligence (AI). Emotion recognition in conversation (ERC) is becoming increasingly popular as a new research frontier in natural language processing (NLP) due to its ability to mine opinions from the plethora of publicly available conversational data on platforms such as Facebook, YouTube, Reddit, Twitter, and others. Moreover, it has potential applications in health-care systems (as a tool for psychological analysis), education (understanding student frustration) and more. Additionally, ERC is also extremely important for generating emotion-aware dialogues that require an understanding of the user's emotions. Catering to these needs calls for effective and scalable conversational emotion-recognition algorithms.

… conversational data. ERC can be used to analyze conversations that take place on social media. It can also aid in analyzing conversations in real time, which can be instrumental in legal trials, interviews, e-health services and more. Unlike vanilla emotion recognition of sentences/utterances, ERC ideally requires context modeling of the individual utterances. This context can be attributed to the preceding utterances, and relies on the temporal sequence of utterances. Compared to the recently published works on ERC [10, 11, 12], both lexicon-based [13, 8, 14] and modern deep learning-based [4, 5] vanilla emotion recognition approaches
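A minimal sketch of the context modeling the abstract describes: each utterance is classified together with a window of preceding utterances. The `emotion_stub` classifier and the window size are placeholders, not the models surveyed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str
    text: str

def emotion_stub(text: str) -> str:
    """Stand-in for a trained classifier over the context-augmented input."""
    return "anger" if "!" in text else "neutral"

def classify_in_context(dialogue: list[Utterance], i: int, window: int = 2) -> str:
    """ERC differs from vanilla emotion recognition by feeding the preceding
    utterances (the conversational context) into the prediction."""
    context = dialogue[max(0, i - window):i] + [dialogue[i]]
    model_input = " [SEP] ".join(f"{u.speaker}: {u.text}" for u in context)
    return emotion_stub(model_input)

chat = [Utterance("A", "You forgot the meeting."),
        Utterance("B", "I am so sorry!"),
        Utterance("A", "This keeps happening!")]
print(classify_in_context(chat, 2))
```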
  • Machine Learning Based Emotion Classification Using the Covid-19 Real World Worry Dataset
Anatolian Journal of Computer Sciences, Volume 6, No 1, 2021, pp. 24-31. ISSN 2548-1304. © Anatolian Science

Machine Learning Based Emotion Classification Using the COVID-19 Real World Worry Dataset

Hakan ÇAKAR1*, Abdulkadir ŞENGÜR2
1,2 Fırat Üniversitesi, Teknoloji Fak., Elektrik-Elektronik Müh. Bölümü, Elazığ, Türkiye ([email protected], [email protected])
Received: Sep. 27, 2020; Accepted: Jan. 1, 2021; Published: Mar. 1, 2021

Abstract—The COVID-19 pandemic has a dramatic impact on economies and communities all around the world. With social distancing in place and various measures of lockdown, it becomes significant to understand emotional responses on a great scale. In this paper, a study is presented that determines human emotions during COVID-19 using various machine learning (ML) approaches. To this end, techniques such as Decision Trees (DT), Support Vector Machines (SVM), k-nearest neighbour (k-NN), Neural Networks (NN) and Naïve Bayes (NB) are used to determine human emotions. The mentioned techniques are applied to the Real World Worry dataset (RWWD), which was collected during COVID-19 from 2,500 participants and covers eight emotions on a 9-point scale, with participants grading their anxiety levels about the COVID-19 situation. The performance of the ML techniques on emotion prediction is evaluated using the accuracy score, and a five-fold cross-validation technique is adopted in the experiments. The experiments show that the ML approaches are promising in determining emotion from the COVID-19 RWWD.
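A minimal scikit-learn sketch of the study's protocol: the five classifier families compared on accuracy under five-fold cross-validation. Synthetic features stand in for the Real World Worry dataset, which is not bundled here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for text features extracted from RWWD statements.
X, y = make_classification(n_samples=500, n_features=40, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "NN": MLPClassifier(max_iter=1000, random_state=0),
    "NB": GaussianNB(),
}

# Accuracy under five-fold cross-validation, as in the paper's protocol.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```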
  • Emotion Classification Based on Biophysical Signals and Machine Learning Techniques
Symmetry — Article

Emotion Classification Based on Biophysical Signals and Machine Learning Techniques

Oana Bălan 1,*, Gabriela Moise 2, Livia Petrescu 3, Alin Moldoveanu 1, Marius Leordeanu 1 and Florica Moldoveanu 1
1 Faculty of Automatic Control and Computers, University POLITEHNICA of Bucharest, Bucharest 060042, Romania; [email protected] (A.M.); [email protected] (M.L.); fl[email protected] (F.M.)
2 Department of Computer Science, Information Technology, Mathematics and Physics (ITIMF), Petroleum-Gas University of Ploiesti, Ploiesti 100680, Romania; [email protected]
3 Faculty of Biology, University of Bucharest, Bucharest 030014, Romania; [email protected]
* Correspondence: [email protected]; Tel.: +40722276571
Received: 12 November 2019; Accepted: 18 December 2019; Published: 20 December 2019

Abstract: Emotions constitute an indispensable component of our everyday life. They consist of conscious mental reactions towards objects or situations and are associated with various physiological, behavioral, and cognitive changes. In this paper, we propose a comparative analysis between different machine learning and deep learning techniques, with and without feature selection, for binarily classifying the six basic emotions, namely anger, disgust, fear, joy, sadness, and surprise, into two symmetrical categorical classes (emotion and no emotion), using the physiological recordings and subjective ratings of valence, arousal, and dominance from the DEAP (Dataset for Emotion Analysis using EEG, Physiological and Video Signals) database. The results showed that the maximum classification accuracies for each emotion were: anger 98.02%, joy 100%, surprise 96%, disgust 95%, fear 90.75%, and sadness 90.08%. In the case of four emotions (anger, disgust, fear, and sadness), the classification accuracies were higher without feature selection.
  • Recognition of Emotional Face Expressions and Amygdala Pathology*
Recognition of Emotional Face Expressions and Amygdala Pathology*

Chiara Cristinzio 1,2, David Sander 2,3, Patrik Vuilleumier 1,2
1 Laboratory for Behavioural Neurology and Imaging of Cognition, Department of Neuroscience and Clinic of Neurology, University of Geneva
2 Swiss Center for Affective Sciences, University of Geneva
3 Department of Psychology, University of Geneva

Abstract

The amygdala is often damaged in patients with temporal lobe epilepsy, either because of the primary epileptogenic disease (e.g. sclerosis or encephalitis) or because of secondary effects of surgical interventions (e.g. lobectomy). In humans, the amygdala has been associated with a range of important emotional and social functions, in particular with deficits in emotion recognition from faces. Here we review data from recent neuropsychological research illustrating the amygdala's role in processing facial expressions. We describe behavioural findings subsequent to focal lesions and possible factors that may influence the nature and severity of deficits in patients.
  • DEAP: a Database for Emotion Analysis Using Physiological Signals
IEEE Trans. Affective Computing

DEAP: A Database for Emotion Analysis using Physiological Signals

Sander Koelstra, Student Member, IEEE, Christian Mühl, Mohammad Soleymani, Student Member, IEEE, Jong-Seok Lee, Member, IEEE, Ashkan Yazdani, Touradj Ebrahimi, Member, IEEE, Thierry Pun, Member, IEEE, Anton Nijholt, Member, IEEE, Ioannis Patras, Member, IEEE

Abstract—We present a multimodal dataset for the analysis of human affective states. The electroencephalogram (EEG) and peripheral physiological signals of 32 participants were recorded as each watched 40 one-minute long excerpts of music videos. Participants rated each video in terms of the levels of arousal, valence, like/dislike, dominance and familiarity. For 22 of the 32 participants, frontal face video was also recorded. A novel method for stimuli selection is proposed using retrieval by affective tags from the last.fm website, video highlight detection and an online assessment tool. An extensive analysis of the participants' ratings during the experiment is presented. Correlates between the EEG signal frequencies and the participants' ratings are investigated. Methods and results are presented for single-trial classification of arousal, valence and like/dislike ratings using the modalities of EEG, peripheral physiological signals and multimedia content analysis. Finally, decision fusion of the classification results from the different modalities is performed. The dataset is made publicly available and we encourage other researchers to use it for testing their own affective state estimation methods.

Index Terms—Emotion classification, EEG, Physiological signals, Signal processing, Pattern classification, Affective computing.

1 INTRODUCTION

Emotion is a psycho-physiological process triggered by conscious and/or unconscious perception of an

… information retrieval. Affective characteristics of multimedia are important features for describing multimedia content and can be presented by such emotional tags.
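The single-trial classification the abstract mentions can be sketched roughly as follows: band-power features are extracted from each EEG trial and a classifier predicts a binarized arousal rating. The synthetic signals, band choices and logistic-regression classifier are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for DEAP-style data: 40 one-minute trials, 32 EEG
# channels sampled at 128 Hz (the real dataset is publicly available).
n_trials, n_channels, fs = 40, 32, 128
eeg = rng.standard_normal((n_trials, n_channels, 60 * fs))
high_arousal = rng.integers(0, 2, n_trials)  # binarized self-ratings

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(trial: np.ndarray) -> np.ndarray:
    """Mean spectral power per channel and band - one common EEG feature set."""
    freqs, psd = welch(trial, fs=fs, nperseg=fs * 2)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.log(np.concatenate(feats))

X = np.stack([band_powers(t) for t in eeg])
scores = cross_val_score(LogisticRegression(max_iter=1000), X, high_arousal, cv=5)
print(f"single-trial arousal accuracy: {scores.mean():.2f}")
```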