Facial Micro-Expression Analysis Jingting Li


To cite this version: Jingting Li. Facial Micro-Expression Analysis. Computer Vision and Pattern Recognition [cs.CV]. CentraleSupélec, 2019. English. NNT: 2019CSUP0007. tel-02877766.

HAL Id: tel-02877766, https://tel.archives-ouvertes.fr/tel-02877766. Submitted on 22 Jun 2020.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Doctoral thesis of CentraleSupélec, COMUE Université Bretagne Loire, Doctoral School No. 601 (Mathematics and Information and Communication Sciences and Technologies), speciality: Signal, Image, Vision. By Jingting LI, "Facial Micro-Expression Analysis". Thesis presented and defended in Rennes on 02/12/2019. Research unit: IETR. Thesis No.: 2019CSUP0007.

Jury:
  • Olivier ALATA, Professor, Université Jean Monnet, Saint-Étienne (President / Reviewer)
  • Fan YANG-SONG, Professor, Université de Bourgogne, Dijon (Reviewer)
  • Catherine SOLADIÉ, Associate Professor, CentraleSupélec, Rennes (Examiner)
  • Pascal BOURDON, Associate Professor, Université de Poitiers, Poitiers (Examiner)
  • Wassim HAMIDOUCHE, Associate Professor, INSA de Rennes, Rennes (Examiner)
  • Renaud SÉGUIER, Professor (HDR), CentraleSupélec, Rennes (Thesis supervisor)
Title: Facial Micro-Expression Analysis

Keywords: Micro-expression, Spotting, Local temporal pattern, Data augmentation, Hammerstein model

Abstract: Micro-expressions (MEs) are very important nonverbal communication clues. However, due to their local and short nature, spotting them is challenging. In this thesis, we address this problem by using a dedicated local temporal pattern (LTP) of facial movement. This pattern has a specific shape (S-pattern) when an ME is displayed. Thus, by using a classical classification algorithm (SVM), MEs are distinguished from other facial movements. We also propose a global final fusion analysis on the whole face to improve the distinction between ME (local) and head (global) movements.

However, the learning of S-patterns is limited by the small number of ME databases and the low volume of ME samples. Hammerstein models (HMs) are known to be a good approximation of muscle movements. By approximating each S-pattern with an HM, we can both filter outliers and generate new, similar S-patterns. In this way, we perform data augmentation on the S-pattern training dataset and improve the ability to differentiate MEs from other facial movements.

In the first ME spotting challenge of MEGC2019, we took part in building the new result evaluation method. In addition, we applied our method to spotting MEs in long videos and provided the baseline result for the challenge.

The spotting results, obtained on CASME I, CASME II, SAMM and CAS(ME)2, show that our proposed LTP outperforms the most popular spotting method in terms of F1-score. Adding the fusion process and data augmentation improves the spotting performance even more.

Acknowledgements

There are many people that have earned my gratitude for their contribution to my PhD study. Firstly, I would like to express my sincere gratitude to my advisors Prof. Renaud Séguier and Dr.
Catherine Soladié for the continuous support of my PhD study and related research, and for their patience, motivation, and immense knowledge. The guidance of Dr. Catherine Soladié helped me throughout the research and writing of this thesis. She is the best!

Besides my advisors, I would like to thank the rest of my thesis committee: Prof. Olivier Alata, Prof. Fan Yang-Song, Dr. Pascal Bourdon, and Dr. Wassim Hamidouche, for their great support and invaluable advice, which encouraged me to widen my research from various perspectives.

I would also like to thank my lab mates, including Siwei Wang, Sarra Zaied, Sen Yan, Nam Duong Duong, Dai Xiang, Amanda Abreu, Raphael Weber, Salah Eddine Kabbour, Corentin Guezenoc, Adrien Llave, Eloïse Berson, and Lara Younes, for making my experience on our small but lovely campus exciting and fun during the last three years.

I am also grateful to the following university staff: Karine Bernard, Bernard Jouga, Cecile Dubois, Catherine Piednoir, and Grégory Cadeau, for their unfailing support and assistance.

Thanks are also due to the China Scholarship Council (CSC) and ANR Reflet for the financial support without which I would not have been able to develop my scientific discoveries.

Last but not least, I would like to express my deepest gratitude to my big and warm family: my grandparents, my parents, my sister, and my baby nieces, and also to my dear friends Chunmei Xie and Yi Hui. This dissertation would not have been possible without their warm love, continued patience, and endless support.

French Summary

Chapter 1: Introduction

Facial expression is one of the most important external indicators of a person's emotion and psychological state [12]. Among facial expressions, micro-expressions (MEs) [28] are local and brief expressions that appear involuntarily, notably under strong emotional pressure. Their duration ranges from 1/25 to 1/5 of a second [28].
Their involuntary character often supports the claim that they represent a person's genuine emotions [28]. ME spotting has many applications, notably in national security [26], medical care [30], political psychology [108], and educational psychology [17].

The existence of MEs was first discovered by Haggard and Isaacs in 1966 [39]; Ekman and Friesen [28] then named them in 1969. Several years later, they developed the Micro-Expression Training Tool (METT) to train people in micro-expression detection [25]. Yet the overall recognition rate for the 6 basic emotions with the naked eye is below 50%, even for a trained expert [31].

To code MEs, the Facial Action Coding System (FACS) [27] is often used. It was created to analyze the relationship between facial muscle deformation and emotional expression. Action Units (AUs) are the facial components of FACS and represent local muscle movements. The AU label on the face identifies the region(s) where the ME occurs. Consequently, FACS can help annotate the appearance and dynamics of an ME in a video.

Since the 2000s, research on automatic micro-expression spotting and recognition (MESR) has developed. Figure 0-1 shows the evolution of the number of MESR research articles. The number of articles remains low, and the results are not yet very satisfactory, owing both to the very nature of MEs (micro) and to the limited number of public ME databases. However, more and more studies have emerged in recent years. We can note that there are many more papers on ME recognition than on spotting.
Recognition rates are starting to be satisfactory, for example 86.35% accuracy for 5 classes in [20]. In contrast, ME spotting results are far from good, most likely because of the nature of MEs and also the limited number of public ME databases.

Figure 0-1: MESR research trend. The number of MESR articles increases year by year, mainly in the field of ME recognition (bottom column).
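The spotting idea from the abstract, training an SVM to separate S-pattern-like local temporal profiles from other facial movements, can be sketched as follows. This is a minimal illustration on synthetic data: the `s_like`/`other` generators, the 30-frame window, and the RBF kernel are assumptions for the sketch, not the thesis's actual LTP features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, length = 200, 30
t = np.linspace(0, 1, length)

def s_like(rng):
    # Rapid onset followed by a slower relaxation (S-pattern-like shape).
    onset = rng.uniform(0.2, 0.4)
    rise = np.clip((t - onset) * 8, 0, 1)
    decay = np.exp(-4 * np.maximum(t - onset, 0))
    return rise * decay + rng.normal(0, 0.05, length)

def other(rng):
    # Slow monotone drift, e.g. a global head movement.
    return np.linspace(0, rng.uniform(0.5, 1.0), length) + rng.normal(0, 0.05, length)

# Each sample is one local temporal profile; label 1 = micro-expression pattern.
X = np.array([s_like(rng) for _ in range(n)] + [other(rng) for _ in range(n)])
y = np.array([1] * n + [0] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```

On these easily separable synthetic profiles the classifier is near-perfect; the hard part in practice is that real S-patterns are rare and noisy, which is what motivates the data augmentation below.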
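A Hammerstein model chains a static nonlinearity with a linear dynamic system, which is why it approximates muscle activation curves well. The sketch below fits nothing from real data: the polynomial nonlinearity, the second-order transfer function, and the parameter-jitter ranges used to generate new S-pattern variants are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import lsim, TransferFunction

def hammerstein_response(u, t, poly_coeffs, wn, zeta):
    """Static polynomial nonlinearity followed by a 2nd-order linear system."""
    v = np.polyval(poly_coeffs, u)                       # static nonlinearity
    sys = TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])
    _, y, _ = lsim(sys, U=v, T=t)                        # linear dynamics
    return y

t = np.linspace(0, 1, 100)
u = (t > 0.1).astype(float)                              # step "activation" input
s_pattern = hammerstein_response(u, t, [0.3, 1.0, 0.0], wn=12.0, zeta=0.5)

# Data augmentation: jitter the model parameters to create new,
# physically plausible S-pattern variants for training.
rng = np.random.default_rng(0)
variants = [
    hammerstein_response(u, t, [0.3, 1.0, 0.0],
                         wn=12.0 * rng.uniform(0.9, 1.1),
                         zeta=0.5 * rng.uniform(0.9, 1.1))
    for _ in range(5)
]
```

The same fit can be run in reverse: an S-pattern candidate that a Hammerstein model fits poorly is likely an outlier, which is the filtering role mentioned in the abstract.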
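Spotting evaluation of the MEGC2019 kind scores predicted [onset, offset] intervals against ground-truth intervals and reports an F1-score. The sketch below assumes a greedy one-to-one matching at a temporal-IoU threshold of 0.5; the challenge's exact matching protocol may differ in details.

```python
def iou(a, b):
    """Temporal IoU of two inclusive [onset, offset] frame intervals."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    union = (a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter
    return inter / union

def spotting_f1(predicted, ground_truth, thresh=0.5):
    """Each ground-truth interval may be claimed by at most one prediction."""
    unmatched = list(ground_truth)
    tp = 0
    for p in predicted:
        hit = next((g for g in unmatched if iou(p, g) >= thresh), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    fp = len(predicted) - tp
    fn = len(ground_truth) - tp
    precision = tp / (tp + fp) if predicted else 0.0
    recall = tp / (tp + fn) if ground_truth else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

gt = [(100, 115), (300, 312)]
pred = [(98, 114), (500, 510)]
print(spotting_f1(pred, gt))  # one TP, one FP, one FN -> F1 = 0.5
```

F1 is preferred over accuracy here because long videos are dominated by non-ME frames, so a trivial "never spot" detector would otherwise score highly.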