LARGE-SCALE AFFECTIVE COMPUTING FOR VISUAL MULTIMEDIA

Brendan Jou

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences

COLUMBIA UNIVERSITY
2016

© 2016 Brendan Jou
All Rights Reserved

ABSTRACT

LARGE-SCALE AFFECTIVE COMPUTING FOR VISUAL MULTIMEDIA

Brendan Jou

In recent years, Affective Computing has arisen as a prolific interdisciplinary field for engineering systems that integrate human affect. While human-computer relationships have long revolved around cognitive interactions, it is becoming increasingly important to account for human affect, that is, feelings and emotions, to avert user-experience frustration, provide disability services, predict the virality of social media content, and so on. In this thesis, we specifically focus on Affective Computing as it applies to large-scale visual multimedia, and in particular, still images, animated image sequences and video streams, above and beyond the traditional approaches of face expression and gesture recognition. By taking a principled, psychology-grounded approach, we seek to paint a more holistic and colorful view of computational affect in the context of visual multimedia. For example, should emotions like 'surprise' and 'fear' be assumed to be orthogonal output dimensions? Or does a 'positive' image in one culture's view elicit the same feelings of positivity in another culture? We study affect frameworks and ontologies to define, organize and develop machine learning models with such questions in mind to automatically detect affective visual concepts.

In the push for what we call "Big Affective Computing," we focus on two dimensions of scale for affect – scaling up and scaling out – which we propose are both imperative if we are to scale the Affective Computing problem successfully. Intuitively, simply increasing the number of data points corresponds to "scaling up." Less intuitively, problems like Affective Computing can also "scale out," or diversify. We show that this latter dimension of introducing data variety, alongside the former of introducing data volume, can yield particular insights, since human affections naturally depart from traditional Machine Learning and Computer Vision problems where there is an objectively truthful target. While no one might debate that a picture of a 'dog' should be tagged as a 'dog,' not all may agree that it looks 'ugly.' We present extensive discussions on why scaling out is critical and how it can be accomplished in the context of large-volume visual data.

At a high level, the main contributions of this thesis include:

Multiplicity of Affect Oracles. Prior to the work in this thesis, little consideration had been paid to the affective label-generating mechanism when learning functional mappings between inputs and labels. Throughout this thesis, but first in Chapter 2 (starting in §2.1.2), we make a case for a conceptual partitioning of the affect oracle governing the label generation process in Affective Computing problems, resulting in a multiplicity of oracles, whereas prior works assumed a single universal oracle. In Chapter 3, the differences between intended, expressed, induced and perceived emotion are discussed, where we argue that perceived emotion is particularly well-suited for scaling up because it reduces label variance owing to its more objective nature compared to other affect states.
In Chapters 4 and 5, a division of the affect oracle along cultural lines, with manifestations in both language and geography, is explored. We accomplish all this without sacrificing the 'scale up' dimension, and tackle significantly larger-volume problems than prior comparable visual Affective Computing research.

Content-driven Visual Affect Detection. Traditionally, most Affective Computing work has used psycho-physiological signals from subjects viewing the stimuli of interest, e.g., a video advertisement, as the system inputs for its prediction tasks. In essence, this means that the machine learns to label a proxy signal rather than the stimuli itself. In this thesis, with the rise of strong Computer Vision and Multimedia techniques, we focus on learning to label the stimuli directly, without a human-subject-provided biometric proxy signal (except in the unique circumstances of Chapter 7). This shift toward learning from the stimuli directly is important because it allows us to scale up with much greater ease, given that biometric measurement acquisition is both low-throughput and somewhat invasive while stimuli are often readily available. In addition, learning directly from the stimuli will allow researchers to determine precisely which low-level features in the stimuli are actually coupled with affect states, e.g., which set of frames caused viewer discomfort rather than a broad sense that a video was discomforting. In Part I of this thesis, we illustrate an emotion prediction task with a psychology-grounded affect representation. In particular, in Chapter 3, we develop a prediction task over semantic emotional classes, e.g., 'sad,' 'happy' and 'angry,' using animated image sequences given annotations from over 2.5 million users. Subsequently, in Part II, we develop visual sentiment and adjective-based semantics models from million-scale digital imagery mined from a social multimedia platform.

Mid-level Representations for Visual Affect. While discrete semantic emotions and sentiment are classical representations of affect with decades of psychology grounding, the interdisciplinary nature of Affective Computing, now only about two decades old, allows for new avenues of representation. Mid-level representations have been proposed in numerous Computer Vision and Multimedia problems as an intermediary, and often more computable, step toward bridging the semantic gap between low-level system inputs and high-level semantic label abstractions. In Part II, inspired by this work, we adapt it for vision-based Affective Computing and adopt a semantic construct called adjective-noun pairs. Specifically, in Chapter 4, we explore the use of such adjective-noun pairs in the context of a social multimedia platform and develop a multilingual visual sentiment concept ontology with over 15,000 affective mid-level visual concepts across 12 languages, associated with over 7.3 million images and representation from over 235 countries, resulting in the largest affective digital image corpus to date in both depth and breadth. In Chapter 5, we develop computational methods to predict such adjective-noun pairs and also explore their usefulness in traditional sentiment analysis, but from a previously unexplored cross-lingual perspective.
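To make the adjective-noun pair construct concrete, the sketch below shows one way such mid-level concepts could be represented and used to aggregate a rough image-level sentiment estimate from per-concept detector scores. It is a minimal sketch: the class layout, example pairs, sentiment values and detector outputs are illustrative assumptions, not the actual contents of the ontology or the detection models developed in Chapters 4 and 5.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AdjectiveNounPair:
    """A mid-level affective visual concept such as 'beautiful sky'."""
    adjective: str
    noun: str
    language: str     # language community the concept was mined from
    sentiment: float  # polarity in [-1, 1], fixed when the ontology is built

def image_sentiment(detector_scores: dict) -> float:
    """Aggregate image sentiment as a detector-score-weighted average of the
    sentiment values of the adjective-noun pairs detected in the image."""
    total = sum(detector_scores.values())
    if total == 0.0:
        return 0.0
    return sum(score * anp.sentiment
               for anp, score in detector_scores.items()) / total

# Hypothetical concepts and detector outputs for a single image.
scores = {
    AdjectiveNounPair("beautiful", "sky", "en", +0.80): 0.72,
    AdjectiveNounPair("abandoned", "house", "en", -0.55): 0.15,
    AdjectiveNounPair("hermoso", "cielo", "es", +0.75): 0.40,  # cross-lingual counterpart
}
print(image_sentiment(scores))  # > 0 implies an overall positive visual sentiment
```

The weighted-average aggregation here is only one plausible design choice; the thesis's actual sentiment prediction pipelines are developed in Chapter 5.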
In Chapter 6, we propose a new learning setting called cross-residual learning, building off recent successes in deep neural networks, and specifically in residual learning. We show that cross-residual learning can be used effectively to jointly learn across multiple related tasks in object detection (nouns), more traditional affect modeling (adjectives), and affective mid-level representations (adjective-noun pairs), giving us a framework for better grounding the adjective-noun pair bridge in both vision and affect simultaneously; a minimal illustrative sketch of this idea appears after the table of contents below.

Table of Contents

List of Figures
List of Tables
List of Abbreviations

1 Introduction
1.1 Motivations
1.1.1 Large-Scale and Ubiquitous Visual Data
1.1.2 The Affective Gap
1.1.3 Computer Vision and Affective Science
1.2 Overview of the Thesis

2 Overview and Frameworks of Affective Computing
2.1 Affective Science
2.1.1 Affective Mechanisms and Models
2.1.2 Affective Representations
2.2 Aspects of Affective Gaps
2.2.1 Affective Computing Paradigms
2.2.2 Affect Oracles and Targets
2.3 Visual Affect Detection
2.3.1 Face Expression and Gesture Recognition
2.3.2 Visual Affective Concept Detection
2.4 Big Affective Computing

I Content-driven Visual Affect Detection

3 Perceived Emotion Prediction in Animated GIFs
3.1 Introduction
3.2 Related Work
3.3 The Case for Perceived Affect Detection
3.4 Perceived Emotions in Animated GIF Images
3.5 GIFGIF Dataset
3.6 Multitask Emotion Regression
3.6.1 Feature Representations of Emotional Animated GIFs
3.6.2 Animated GIF Emotion Regression Experiments
3.7 Conclusions

II Mid-level Representations for Visual Affect

4 Multicultural Visual Affective Computing
4.1 Introduction
4.2 Related Work
4.3 Multilingual Visual Sentiment Ontology (MVSO)
4.3.1 Adjective-Noun Pair Discovery
4.3.2 Filtering Candidate Adjective-Noun Pairs
4.3.3 Crowdsourcing Validation
4.3.4 Ontology-structured Image Mining
4.4 Ontology Analysis and Statistics
4.4.1 Comparison with Other Visual Sentiment Ontologies
4.4.2 Sentiment Distributions
4.4.3 Emotion Distributions
4.5 Cross-lingual Matching
4.5.1 Exact Alignment
4.5.2 Approximate Alignment
4.6 Geographical Variety
4.6.1 GPS Coordinate Data
4.6.2 Metadata-inferred Location Data
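As a rough illustration of the cross-residual learning setting referenced in the contributions above, the sketch below wires residual-style shortcut connections across several task branches so that the noun, adjective and adjective-noun-pair heads can share information. It is a minimal sketch under assumed layer sizes, branch structure and mixing weights, written against the PyTorch API for illustration; it is not the exact formulation developed in Chapter 6.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossResidualBlock(nn.Module):
    """Per-task residual branches whose shortcut paths are 'crossed':
    each task's output combines a learned mixture of all tasks' inputs
    with its own residual branch, encouraging joint multitask learning."""

    def __init__(self, dim: int, num_tasks: int = 3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(num_tasks)
        )
        # Cross-task mixing weights, initialized near per-task identity shortcuts.
        self.mix = nn.Parameter(
            torch.eye(num_tasks) + 0.01 * torch.randn(num_tasks, num_tasks)
        )

    def forward(self, xs):
        # xs: one feature tensor of shape (batch, dim) per task.
        residuals = [branch(x) for branch, x in zip(self.branches, xs)]
        outputs = []
        for i in range(len(xs)):
            # Task i's shortcut is a learned combination of all tasks' inputs.
            shortcut = sum(self.mix[i, j] * xs[j] for j in range(len(xs)))
            outputs.append(F.relu(shortcut + residuals[i]))
        return outputs

# Hypothetical usage: shared backbone features feeding noun, adjective and ANP heads.
features = torch.randn(8, 256)  # batch of 8 images, 256-d backbone features
block = CrossResidualBlock(dim=256, num_tasks=3)
noun_feat, adj_feat, anp_feat = block([features, features, features])
```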