Affective Image Content Analysis: A Comprehensive Survey∗

Sicheng Zhao†, Guiguang Ding‡, Qingming Huang♮, Tat-Seng Chua§, Björn W. Schuller♦ and Kurt Keutzer†
†Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, USA
‡School of Software, Tsinghua University, China
♮School of Computer and Control Engineering, University of Chinese Academy of Sciences, China
§School of Computing, National University of Singapore, Singapore
♦Department of Computing, Imperial College London, UK

Abstract

Images can convey rich semantics and induce strong emotions in viewers. Recently, with the explosive growth of visual data, extensive research efforts have been dedicated to affective image content analysis (AICA). In this paper, we comprehensively review the state-of-the-art methods with respect to two main challenges: the affective gap and perception subjectivity. We begin with an introduction to the key emotion representation models that have been widely employed in AICA, and briefly describe the existing datasets available for evaluation. We then summarize and compare the representative approaches to emotion feature extraction, personalized emotion prediction, and emotion distribution learning. Finally, we discuss some future research directions.

1 Introduction

With the rapid development of photography technology and the wide popularity of social networks, people have become used to sharing their experiences and expressing their opinions online using images and videos together with text. This gives rise to a huge volume of multimedia data and an urgent demand for processing and understanding visual content. Compared to low-level visual appearance, humans can better perceive and understand images at the higher cognitive and affective levels of semantics [Zhao et al., 2014a]. Existing work on image content analysis in the computer vision community mainly focuses on understanding the cognitive aspects of images, such as object detection. Recently, considerable research attention has been paid by the psychology, affective computing, and multimedia communities to affective-level analysis of image content. Because what people feel may directly determine their decision making, affective image content analysis (AICA) is of great importance, enabling a wide range of applications [Chen et al., 2014], from human-computer interaction to image retrieval.

Specifically, the task of AICA is often composed of three steps: human annotation, visual feature extraction, and learning of the mapping between visual features and perceived emotions [Zhao et al., 2017a]. One main challenge for AICA is the affective gap, which can be defined as "the lack of coincidence between the features and the expected affective state in which the user is brought by perceiving the signal" [Hanjalic, 2006]. Recently, various hand-crafted or learning-based features have been designed to bridge this gap. Current AICA methods mainly assign an image the dominant (average) emotion category (DEC), under the assumption that different viewers react similarly to the same image. This task can be performed as a traditional single-label learning problem.
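To make the single-label formulation concrete, here is a minimal sketch that fine-tunes a pretrained CNN to predict one of Mikels's eight categories as the DEC; the backbone, hyperparameters, and training loop are illustrative assumptions rather than the setup of any particular surveyed method.

```python
# Minimal sketch: dominant-emotion (DEC) prediction as single-label
# classification over Mikels's eight categories. The backbone and
# hyperparameters are illustrative assumptions, not a surveyed method.
import torch
import torch.nn as nn
from torchvision import models

MIKELS = ["amusement", "anger", "awe", "contentment",
          "disgust", "excitement", "fear", "sadness"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(MIKELS))  # 8-way head

criterion = nn.CrossEntropyLoss()  # standard single-label objective
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, dec_labels):
    """One step: images -> logits -> cross-entropy against the DEC."""
    optimizer.zero_grad()
    loss = criterion(model(images), dec_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def predict_dec(image_batch):
    """The arg-max category is taken as the image's dominant emotion."""
    model.eval()
    with torch.no_grad():
        return [MIKELS[int(i)] for i in model(image_batch).argmax(dim=1)]
```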
However, the emotion labels obtained in ground-truth generation are in fact highly inconsistent. Different viewers may have totally different emotional reactions to the same image, caused by many personal and situational factors, such as cultural background, personality, and social context [Peng et al., 2015; Zhao et al., 2016; Yang et al., 2017b]. This phenomenon gives rise to the so-called subjective perception problem, as shown in Figure 1. In such cases, predicting only the DEC is insufficient for this highly subjective variable.

To tackle the subjectivity issue, we can conduct two kinds of AICA tasks [Zhao et al., 2016]: for each viewer, we can predict personalized emotion perceptions; for each image, we can assign multiple emotion labels. For the latter, we can employ multi-label learning methods, which associate one instance with multiple emotion labels. However, the importance or extent of the different labels is in fact unequal. In such cases, emotion distribution learning makes more sense, since it aims to learn the degree to which each emotion describes the instance [Yang et al., 2017b].

In this paper, we concentrate on reviewing the state-of-the-art AICA methods and outlining research trends. First, we introduce the widely used emotion representation models and the datasets available for AICA evaluation. Second, we summarize and compare the representative approaches to emotion feature extraction, personalized emotion prediction, and emotion distribution learning, corresponding to the affective gap and perception subjectivity challenges. Finally, we discuss potential research directions to pursue.

∗Corresponding author: Sicheng Zhao ([email protected])

[Figure 1 image omitted. Panels: (a) original image, (b) image metadata, (c) expected emotion, (d) personalized emotion labels, (e) emotion distribution.]

Figure 1: Illustration of the image emotion's perception subjectivity phenomenon [Zhao et al., 2017c]. The expected emotion in (c) and the personalized emotions in (d) are assigned using the keywords in red, based on the metadata (b) from the uploader and the comments from different viewers. (e) shows the differences among the expected, personalized, and average emotions; the contour lines are the continuous emotion distributions estimated by the expectation-maximization algorithm based on Gaussian mixture models.
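The continuous distributions in Figure 1(e) are obtained by fitting a Gaussian mixture model with expectation maximization to per-viewer valence-arousal points. Below is a minimal sketch of that estimation step using scikit-learn's EM-based GaussianMixture; the rating values and the component count are invented for illustration.

```python
# Minimal sketch: estimate a continuous emotion distribution in
# valence-arousal (VA) space from per-viewer ratings, via a Gaussian
# mixture model fit by expectation maximization (as in Figure 1(e)).
# The sample ratings and component count are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-viewer (valence, arousal) ratings for one image,
# on a 9-point scale.
va_ratings = np.array([[7.1, 4.5], [8.0, 7.0], [2.6, 5.8],
                       [7.5, 6.3], [3.1, 5.2], [6.8, 4.9]])

# EM-fitted mixture; two components capture a bimodal reaction pattern
# (e.g. awe vs. fear). In practice the number of components can be
# model-selected, e.g. with BIC.
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(va_ratings)

# Evaluate the density on a VA grid; its iso-lines correspond to the
# contours shown in Figure 1(e).
v, a = np.meshgrid(np.linspace(1, 9, 100), np.linspace(1, 9, 100))
grid = np.column_stack([v.ravel(), a.ravel()])
density = np.exp(gmm.score_samples(grid)).reshape(v.shape)
```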
2 Emotion Models from the Cognitive Science Community

Psychologists mainly employ two types of models to represent emotions: categorical emotion states (CES) and dimensional emotion spaces (DES). CES models consider emotions to be one of a few basic categories, such as fear and joy. Typical models include Ekman's six basic emotions [Ekman, 1992] and Mikels's eight emotions [Mikels et al., 2005]. In the coarsest case, emotions are simply classified into positive and negative [Zhao et al., 2018b], sometimes with an additional neutral class; in this case, "emotion" is usually called "sentiment". DES models usually employ a 3D or 2D Cartesian space to represent emotions, such as valence-arousal-dominance (VAD) [Schlosberg, 1954] and activity-temperature-weight [Lee and Park, 2011]. VAD is the most widely used DES: valence represents pleasantness, ranging from happy to unhappy; arousal represents the intensity of emotion, ranging from excited to calm; and dominance represents the degree of control, ranging from controlled to in control. Dominance is difficult to measure and is often omitted, leading to the commonly used two-dimensional VA space [Hanjalic, 2006], in which V is easier to recognize in AICA than A. The representative emotion models employed in AICA are listed in Table 1.

| Model | Reference | Type | Emotion states/dimensions |
| Ekman | [Ekman, 1992] | CES | happiness, sadness, anger, disgust, fear, surprise |
| Mikels | [Mikels et al., 2005] | CES | amusement, anger, awe, contentment, disgust, excitement, fear, sadness |
| Plutchik | [Plutchik, 1980] | CES | (× 3 scales) anger, anticipation, disgust, joy, sadness, surprise, fear, trust |
| Sentiment | - | CES | positive, negative (or neutral) |
| VA(D) | [Schlosberg, 1954] | DES | valence-arousal(-dominance) |
| ATW | [Lee and Park, 2011] | DES | activity-temperature-weight |

Table 1: Representative emotion models employed in AICA.

The relationship between CES and DES and the transformation from one to the other are studied in [Sun et al., 2009]. For example, positive valence relates to a happy state, while negative valence relates to a sad or angry state. CES is easier for users to understand and label, but may not reflect the subtlety of emotions.
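To illustrate one simple way a CES-DES transformation can be realized, the sketch below assigns each Mikels category a prototype point in VA space and maps a VA rating back to the nearest prototype; the prototype coordinates are invented placeholders, not the empirically derived mapping of [Sun et al., 2009].

```python
# Minimal sketch of a CES <-> DES round trip: each Mikels category is
# given a prototype point in valence-arousal space, and a VA rating is
# mapped back to the nearest prototype. The coordinates below are
# invented placeholders, NOT the empirical mapping of [Sun et al., 2009].
import math

VA_PROTOTYPES = {           # (valence, arousal) on a 9-point scale
    "amusement":   (7.5, 5.5),
    "awe":         (7.0, 5.0),
    "contentment": (7.5, 3.5),
    "excitement":  (8.0, 7.0),
    "anger":       (2.5, 6.5),
    "disgust":     (3.0, 5.0),
    "fear":        (2.5, 6.0),
    "sadness":     (2.0, 3.5),
}

def ces_to_va(category):
    """CES -> DES: look up the category's prototype VA point."""
    return VA_PROTOTYPES[category]

def va_to_ces(valence, arousal):
    """DES -> CES: nearest prototype, e.g. positive valence maps to
    happy states, negative valence to sad or angry states."""
    return min(VA_PROTOTYPES,
               key=lambda c: math.dist((valence, arousal), VA_PROTOTYPES[c]))

print(va_to_ces(7.9, 6.8))  # -> "excitement" (positive, high arousal)
```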

3 Datasets

The early datasets for AICA mainly come from the psychology community and are small in scale; recent large-scale datasets are constructed using images from social networks. The released datasets are summarized in Table 2.

The International Affective Picture System (IAPS) is an emotion-evoking image set from psychology [Lang et al., 1997]. It consists of 1,182 documentary-style natural color images depicting complex scenes, such as portraits, babies, animals, and landscapes. Each image is associated with an empirically derived mean and standard deviation (STD) of VAD ratings on a 9-point rating scale, collected from about 100 college students.

The IAPSa dataset includes 246 images [Mikels et al., 2005] selected from IAPS, which are labeled by 20 undergraduate participants with a mean age of 19.55 years.

The Abstract dataset includes 279 peer-rated abstract paintings without contextual content, while the Artistic database …
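As a small worked example of how IAPS-style ground truth is formed, the sketch below aggregates hypothetical per-subject 9-point VAD ratings into the per-image mean and STD that the dataset releases; the rating values themselves are invented.

```python
# Minimal sketch: aggregate per-subject 9-point VAD ratings into the
# per-image mean and standard deviation that IAPS-style datasets
# release. The ratings below are invented for illustration.
import numpy as np

# rows = subjects, columns = (valence, arousal, dominance), 9-point scale
ratings = np.array([
    [7.0, 4.0, 6.5],
    [8.0, 5.0, 7.0],
    [6.5, 4.5, 6.0],
    [7.5, 4.0, 7.5],
])

mean_vad = ratings.mean(axis=0)        # empirically derived mean
std_vad = ratings.std(axis=0, ddof=1)  # sample STD across subjects

# Optional: rescale the 9-point scale to [-1, 1] for model training.
normalized = (mean_vad - 5.0) / 4.0

print("mean VAD:", mean_vad, "STD:", std_vad, "normalized:", normalized)
```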
