Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild

The MIT Faculty has made this article openly available.

Citation: McDuff, Daniel; el Kaliouby, Rana; Senechal, Thibaud; Amr, May; Cohn, Jeffrey F.; Picard, Rosalind W. "Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild." Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2013.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Version: Author's final manuscript
Citable link: http://hdl.handle.net/1721.1/80733
Terms of Use: Creative Commons Attribution-NonCommercial-ShareAlike 3.0, http://creativecommons.org/licenses/by-nc-sa/3.0/

Daniel McDuff†‡, Rana el Kaliouby†‡, Thibaud Senechal‡, May Amr‡, Jeffrey F. Cohn§, Rosalind Picard†‡
‡ Affectiva Inc., Waltham, MA, USA
† MIT Media Lab, Cambridge, MA, USA
§ Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA

Abstract

Computer classification of facial expressions requires large amounts of data, and this data needs to reflect the diversity of conditions seen in real applications. Public datasets help accelerate the progress of research by providing researchers with a benchmark resource. We present a comprehensively labeled dataset of ecologically valid spontaneous facial responses recorded in natural settings over the Internet. To collect the data, online viewers watched one of three intentionally amusing Super Bowl commercials and were simultaneously filmed using their webcam. They answered three self-report questions about their experience. A subset of viewers additionally gave consent for their data to be shared publicly with other researchers. This subset consists of 242 facial videos (168,359 frames) recorded in real-world conditions. The dataset is comprehensively labeled for the following: 1) frame-by-frame labels for the presence of 10 symmetrical FACS action units, 4 asymmetric (unilateral) FACS action units, 2 head movements, smile, general expressiveness, feature tracker fails and gender; 2) the location of 22 automatically detected landmark points; 3) self-report responses of familiarity with, liking of, and desire to watch again for the stimuli videos; and 4) baseline performance of detection algorithms on this dataset. This data is available for distribution to researchers online; the EULA can be found at: http://www.affectiva.com/facial-expression-dataset-am-fed/.

1. Introduction

The automatic detection of naturalistic and spontaneous facial expressions has many applications, ranging from medical applications, such as pain detection [1], monitoring of depression [4] and helping individuals on the autism spectrum [10], to commercial use cases, such as advertising research and media testing [14], to understanding non-verbal communication [19]. With the ubiquity of cameras on computers and mobile devices, there is growing interest in bringing these applications to the real world. To do so, spontaneous data collected from real-world environments is needed. Public datasets truly help accelerate research in an area, not just because they provide a benchmark, or a common language, through which researchers can communicate and compare their different algorithms in an objective manner, but also because compiling such a corpus and getting it reliably labeled is tedious work, requiring a great deal of effort that many researchers may not have the resources to expend.

There are a number of publicly available labeled databases for automated facial analysis, which have helped accelerate research in automated facial analysis tremendously. Databases commonly used for facial action unit and expression recognition include Cohn-Kanade (in its extended edition known as CK+) [11], MMI [23], RU-FACS [2], Genki-4K [24] and the UNBC-McMaster Pain archive [12]. These datasets are reviewed in Section 2. However, all (except the Genki-4K and UNBC-McMaster Pain archives) were captured in controlled environments which do not reflect the type of conditions seen in real-life applications. Computer-based machine learning and pattern analysis depends hugely on the number of training examples [22]. To date, much of the work automating the analysis of facial expressions and gestures has had to make do with limited datasets for training and testing. As a result, this often leads to over-fitting.

Inspired by other researchers who made an effort to share their data publicly with researchers in the field, we present a database of spontaneous facial expressions that was collected in naturalistic settings as viewers watched video content online. Many viewers watched from the comfort of their homes, which meant that the facial videos contained a range of challenging situations, from nonuniform lighting and head movements to subtle and nuanced expressions. To collect this large dataset, we leverage Internet crowdsourcing, which allows for distributed collection of data very efficiently. The data presented are natural spontaneous responses to ecologically valid online media (video advertising), together with labels of self-reported liking, desire to watch again and familiarity. The inclusion of self-reported labels is especially important as it enables systematic research around the convergence or divergence of self-report and facial expressions, and allows us to build models that predict behavior (e.g., watching again).

While data collection is a major undertaking in and of itself, labeling that data is by far a much grander challenge. The Facial Action Coding System (FACS) [7] is the most comprehensive catalogue of unique facial action units (AUs) that correspond to each independent motion of the face. FACS enables the measurement and scoring of facial activity in an objective, reliable and quantitative way, and is often used to discriminate between subtle differences in facial motion. One strength of FACS is the high level of detail contained within the coding scheme; this has been useful in identifying new behaviors [8] that might have been missed if a coarser coding scheme were used.

Typically, two or more FACS-certified labelers code for the presence of AUs, and inter-observer agreement is computed. There are a number of methods of evaluating the reliability of inter-observer agreement in a labeling task. As the AUs differ in how easily they are identified, it is important to report agreement for each individual label [3]. To give a more complete perspective on the reliability of each AU label, we report two measures of inter-observer agreement for the dataset described in this paper.
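As an illustration of the kind of computation involved, the sketch below scores two coders' frame-by-frame binary labels for a single AU using raw percentage agreement and Cohen's kappa. These are common choices for categorical labeling tasks and are shown purely as examples; they are not necessarily the two measures reported for AM-FED, and the list-of-ints layout is assumed for illustration.

```python
# Illustrative sketch: per-AU inter-observer agreement between two coders.
# Assumes each coder's labels are a binary vector (1 = AU present) over
# the same frames; this layout is hypothetical, not the AM-FED file format.
from collections import Counter

def percent_agreement(a, b):
    """Fraction of frames on which the two coders give the same label."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement for two binary label sequences."""
    n = len(a)
    p_obs = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Expected agreement if both coders labeled at their marginal rates.
    p_exp = sum((ca[k] / n) * (cb[k] / n) for k in (0, 1))
    return (p_obs - p_exp) / (1 - p_exp) if p_exp < 1 else 1.0

coder1 = [0, 0, 1, 1, 1, 0, 0, 1]   # toy frame-by-frame labels for one AU
coder2 = [0, 1, 1, 1, 0, 0, 0, 1]
print(f"agreement = {percent_agreement(coder1, coder2):.2f}")
print(f"kappa     = {cohens_kappa(coder1, coder2):.2f}")
```

In practice, such measures would be computed separately for every AU label, since, as noted above, the AUs differ in how easily they are identified.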
The main contribution of this paper is to present the first dataset of labeled facial videos, recorded over the Internet, of people naturally viewing online media. The AM-FED dataset contains:

1. Facial Videos: 242 webcam videos recorded in real-world conditions.

2. Labeled Frames: 168,359 frames labeled for the presence of 10 symmetrical FACS action units, 4 asymmetric (unilateral) FACS action units, 2 head movements, smile, expressiveness, feature tracker fails and gender.

3. Tracked Points: Automatically detected landmark points for 168,359 frames.

4. Self-report responses: Familiarity with, liking of, and desire to watch again for the stimuli videos.
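To make the structure of these contents concrete, the following minimal sketch reads per-frame labels for one video. The file name, CSV layout, and column names (AU02, AU04, AU12, Smile) are hypothetical stand-ins introduced here for illustration; the files distributed under the EULA define their own schema.

```python
# Minimal sketch of consuming AM-FED-style frame labels.
# The file name and column names below are hypothetical; the actual
# distribution defines its own schema.
import csv

def load_frame_labels(path, au_columns=("AU02", "AU04", "AU12", "Smile")):
    """Read per-frame binary labels from a CSV with one row per frame."""
    frames = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            frames.append({au: int(float(row[au]) > 0) for au in au_columns})
    return frames

labels = load_frame_labels("video_0001_labels.csv")  # hypothetical file
smile_rate = sum(fr["Smile"] for fr in labels) / len(labels)
print(f"{len(labels)} frames, smile present in {smile_rate:.1%} of them")
```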
To the authors' knowledge, this dataset is the largest set labeled for the asymmetric facial action units AU12 and AU14. In the remainder of this paper we describe the data collection, the labeling and label reliability calculation, and the training, testing and performance of smile, AU2 and AU4 detection on this dataset.

2. Existing Databases

The Cohn-Kanade database (in its extended edition known as CK+) [11] has been one of the most widely used resources in the development of facial action unit and expression recognition systems. The CK+ database contains 593 recordings (10,708 frames) of posed and non-posed sequences, which are FACS coded as well as coded for the six basic emotions. The sequences are recorded in a lab setting under controlled conditions of light and head motion.

The MMI database contains a large collection of FACS coded facial videos [23]. The database consists of 1,395 manually AU-coded video sequences, 300 of which also have onset-apex-offset annotations. A majority of these are posed and all are recorded in laboratory conditions.

The RU-FACS database [2] contains data from 100 participants each engaging in a 2.5-minute task. In the task, the participants had to act to hide their true position, and therefore one could argue that the RU-FACS dataset is not fully spontaneous. The RU-FACS dataset is not publicly available at this time.

The Genki-4K dataset [24] contains 4,000 images labeled as either "smiling" or "non-smiling". These images were collected from images available on the Internet and mostly reflect naturalistic smiles. However, they are static images rather than video sequences, making it impossible to use the data to train systems that use temporal information. In addition, the labels are limited to the presence or absence of smiles, limiting their usefulness.

The UNBC-McMaster Pain archive [12] is one of the largest databases of AU-coded videos of naturalistic and spontaneous facial expressions. It is labeled for 10 action units, and the action units are coded with levels of intensity, making it very rich. However, although the expressions are naturalistic and spontaneous, the videos were recorded with control over the lighting, camera position, frame rate and resolution.

Multi-PIE [9] is a dataset of static facial expression images using 15 cameras in different locations and 18 flashes to create various lighting conditions. The dataset includes 6 expressions plus neutral.
