A Multi-task Neural Approach for Emotion Attribution, Classification and Summarization

Guoyun Tu†, Yanwei Fu†, Boyang Li, Jiarui Gao, Yu-Gang Jiang and Xiangyang Xue

arXiv:1812.09041v2 [cs.LG] 24 Jul 2019

Guoyun Tu, Yanwei Fu, Jiarui Gao, Yu-Gang Jiang and Xiangyang Xue are with Fudan University, Shanghai, China. † indicates equal contribution. Boyang Li is with the Big Data Lab of Baidu Research, Sunnyvale, California, US 94089. Email: [email protected]. Yanwei Fu is with the School of Data Science, Fudan University, Shanghai, China. Email: [email protected]. Y.-G. Jiang (corresponding author) is with the School of Computer Science, Fudan University, and Jilian Technology Group (Video++), Shanghai, China, 200082. Email: [email protected].

Abstract—Emotional content is a crucial ingredient in user-generated videos. However, the sparsity of emotional expressions in the videos poses an obstacle to visual emotion analysis. In this paper, we propose a new neural approach, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), to solve three related emotion analysis tasks: emotion recognition, emotion attribution, and emotion-oriented summarization, in a single integrated framework. BEAC-Net has two major constituents, an attribution network and a classification network. The attribution network extracts the main emotional segment that classification should focus on in order to mitigate the sparsity issue. The classification network utilizes both the extracted segment and the original video in a bi-stream architecture. We contribute a new dataset for the emotion attribution task with human-annotated ground-truth labels for emotion segments. Experiments on two video datasets demonstrate the superior performance of the proposed framework and the complementary nature of the dual classification streams.

I. INTRODUCTION

The explosive growth of user-generated video has created great demand for computational understanding of visual data and attracted significant research attention in the multimedia community. Research from the past few decades shows that the cognitive processes responsible for emotion appraisals and coping play important roles in human cognition [1], [2]. It follows that computational understanding of emotional content in video will help predict how human audiences will interact with video content and help answer questions like:

• Will a video recently posted on social media go viral in the next few hours [3]?
• Will a commercial break disrupt the emotion of the video and ruin the viewing experience [4]?

As an illustrative example, a food commercial probably should not accompany a video that elicits the emotion of disgust. Proper commercial placement would benefit from the precise location of emotions, in addition to identification of the overall emotion.

Significant successes have been achieved on the problem of video understanding, such as the recognition of activities [5], [6] and participants [7]. Nevertheless, computational recognition and structured understanding of video emotion remains largely an open problem. In this paper, we focus on the emotion perceived by the audience. The emotion may be expressed by facial expressions, event sequences (e.g., a wedding ceremony), nonverbal language, or even just abstract shapes and colors. This differs from works that focus on one particular channel such as the human face [8]–[11], abstract paintings [12], or music [13]. Although it is possible for the perceived emotion to differ from the intended expression in the video, such as jokes falling flat, we find such cases to be uncommon in the datasets we used.

We identify three major challenges faced by video emotion understanding. First, usually only a small subset of video frames directly depicts emotions, whereas other frames provide context that is necessary for understanding the emotions. Thus, the recognition method must be sufficiently sensitive to sparse emotional content. Second, there is usually one dominant emotion for every video, but other emotions could make interspersed appearances. Therefore, it is important to distinguish the video segments that contribute the most to the video's overall emotion, a problem known as video emotion attribution [14]. Third, in comparison to commercial productions, user-generated videos are highly variable in production quality and contain diverse objects, scenes, and events, which hinders computational understanding.

Observing these challenges, we argue that it is crucial to extract, from the videos, feature representations that are sensitive to emotions and invariant under conditions irrelevant to emotions. In previous work, this is achieved by combining low-level and middle-level features [15], or by using an auxiliary image sentiment dataset to encode video frames [14], [16]. The effectiveness of these features has been demonstrated on three emotion-related vision tasks: emotion recognition, emotion attribution, and emotion-oriented video summarization. However, a major drawback of previous work is that the three tasks were tackled separately and could not inform each other.

Extending our earlier work [17], we propose a multi-task neural architecture, the Bi-stream Emotion Attribution-Classification Network (BEAC-Net), which tackles emotion attribution and classification at the same time, thereby allowing the related tasks to reinforce each other. BEAC-Net is composed of an attribution network (A-Net) and a classification network (C-Net). The attribution network learns to select a segment from the entire video that captures the main emotion. The classification network processes the segment selected by the A-Net as well as the entire video in a bi-stream architecture in order to recognize the overall emotion. In this setup, both the content information and the emotional information are retained to achieve high accuracy with a small number of convolutional layers. Empirical evaluations on the Ekman-6 and Emotion6 Video datasets demonstrate clear benefits of the joint approach and the complementary nature of the two streams.
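Purely as an illustration of the bi-stream idea sketched above, the following PyTorch-style snippet shows one way such an architecture could be wired: a stand-in attribution head scores per-frame features and picks the highest-scoring fixed-length window, and a stand-in classifier consumes the concatenation of features pooled over that window (the emotion stream) and over the whole video (the content stream). The module names, dimensions, pooling, and hard window selection are assumptions made for this sketch, not the implementation described later in the paper.

```python
import torch
import torch.nn as nn

class BiStreamSketch(nn.Module):
    """Illustrative sketch of an attribution + bi-stream classification model.

    Input: per-frame features of shape (batch, T, D), e.g., CNN features.
    A linear scorer (stand-in for the A-Net) rates each frame; the window of
    length seg_len with the highest total score is treated as the emotion
    segment. Assumes T >= seg_len.
    """

    def __init__(self, feat_dim=2048, seg_len=8, num_emotions=6):
        super().__init__()
        self.seg_len = seg_len
        self.frame_scorer = nn.Linear(feat_dim, 1)           # A-Net stand-in
        self.classifier = nn.Sequential(                     # C-Net stand-in
            nn.Linear(2 * feat_dim, 512), nn.ReLU(),
            nn.Linear(512, num_emotions),
        )

    def forward(self, frames):                               # frames: (B, T, D)
        scores = self.frame_scorer(frames).squeeze(-1)       # (B, T)
        # Total score of every window of length seg_len; pick the best start.
        window = scores.unfold(1, self.seg_len, 1).sum(-1)   # (B, T - seg_len + 1)
        start = window.argmax(dim=1)                         # (B,)
        seg_feats = torch.stack([
            frames[b, s:s + self.seg_len].mean(0)            # emotion stream
            for b, s in enumerate(start.tolist())
        ])
        vid_feats = frames.mean(dim=1)                        # content stream
        logits = self.classifier(torch.cat([seg_feats, vid_feats], dim=-1))
        return logits, start

# Example: 2 videos, 32 frames each, 2048-d features per frame.
model = BiStreamSketch()
logits, start = model(torch.randn(2, 32, 2048))
```

Note that the hard argmax selection above is not differentiable; an end-to-end trainable attribution network, as proposed in this work, would have to replace it with a soft, differentiable selection mechanism or train the scorer with a separate supervision signal.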
The contributions of this work can be summarized as follows: (1) We propose BEAC-Net, an end-to-end trainable neural architecture that tackles emotion attribution and classification simultaneously with significant performance improvements. (2) We propose an efficient dynamic programming method for video summarization based on the output of the A-Net. (3) To establish a good benchmark for emotion attribution, we re-annotate the Ekman-6 dataset with the most emotion-oriented segments, which can be used as the ground truth for the emotion attribution task.
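The dynamic programming summarization method mentioned in contribution (2) is specified later in the paper. Only to illustrate the general shape such a procedure can take, the sketch below assumes per-shot emotion scores (e.g., derived from A-Net outputs) and shot lengths, and uses a standard knapsack-style DP to pick shots that maximize total score within a summary-length budget; the function summarize_by_score, its inputs, and the budget formulation are hypothetical and not the authors' algorithm.

```python
def summarize_by_score(scores, lengths, budget):
    """Pick shots maximizing total emotion score within a length budget.

    scores  : per-shot emotion scores (assumed to come from an A-Net-like model)
    lengths : integer shot lengths (frames or seconds)
    budget  : maximum total length of the summary (integer)
    Returns indices of selected shots. Standard 0/1-knapsack DP, shown only
    to illustrate the kind of optimization involved.
    """
    n = len(scores)
    # best[i][b] = max score using the first i shots with total length <= b
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]                      # skip shot i-1
            if lengths[i - 1] <= b:                          # or keep it
                cand = best[i - 1][b - lengths[i - 1]] + scores[i - 1]
                if cand > best[i][b]:
                    best[i][b] = cand
    # Backtrack to recover which shots were kept.
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= lengths[i - 1]
    return sorted(chosen)

# Example: shots of lengths 4, 2, 3 with scores 0.9, 0.5, 0.7 and budget 5.
# The DP keeps shots 1 and 2 (total length 5, total score 1.2).
print(summarize_by_score([0.9, 0.5, 0.7], [4, 2, 3], 5))
```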
II. BACKGROUND AND RELATED WORK

A. Psychological Theories and Implications

Most works on recognizing emotions from visual content follow psychological theories that lay out a fixed number of emotion categories, such as Ekman's six pan-cultural basic emotions [18], [19] and Plutchik's wheel of emotion [20]. These emotions are considered to be "basic" because they are associated with prototypical facial expressions, verbal and non-verbal language, distinct antecedent events, and physiological responses. The emotions constantly affect our expression and perception via appraisal-coping cycles throughout our daily activities [21], including video production and consumption.

Recent psychological theories [22], [23] suggest the range of emotions is far more varied than prescribed by basic emotion theories. The psychological constructionist view argues that emotions emerge from other, more basic cognitive and affective ingredients, such as bodily sensation patterns.

B. Multimodal Emotion Recognition

Researchers have explored features for visual emotion recognition, such as features inspired by psychological and art theories [34] and shape features [35]. Jou et al. [36] focused on animated GIF files, which are similar to short video clips. Sparse coding [37], [38] also proves to be effective for emotion recognition.

Facial expressions have been used as a main source of information for emotion recognition [8], [9]. Zhen et al. [10] create features by localizing facial muscular regions. Liu et al. [11] construct the expressionlet, a mid-level representation for dynamic facial expression recognition.

Combining multimodal information with visual input is another promising direction. A number of works recognize emotions and/or affects from speech [39]–[41]. Wang et al. [42] adapted audio-visual features to classify 2,040 frames of 36 Hollywood movies into 7 emotions. [43] jointly uses speech and facial expressions. [44] extracts mid-level audio-visual features. [45] employs the visual, auditory, and textual modalities for video retrieval. [46] provides a comprehensive technique that exploits audio, facial expressions, spatial-temporal information, and mouth movements.

Deep neural networks have also been used for visual sentiment analysis [47], [48]. A large-scale visual sentiment dataset was proposed in SentiBank [48] and DeepSentiBank [26]. SentiBank is composed of 1,533 adjective-noun pairs, such as "happy dog" and "beautiful sky". Subsequently, the authors used deep convolutional neural networks (CNNs) to deal with images of strong sentiment and achieved improved performance. For a recent survey on understanding emotions and affects from video, we refer readers to [49].

Most existing works on video emotion understanding focus on classification. As emotional content is sparsely expressed in user-generated videos, the task of identifying emotional segments in the video [14], [16], [50] may facilitate the classification task.
