EmotiW 2018: Audio-Video, Student Engagement and Group-Level Affect Prediction

Abhinav Dhall, Indian Institute of Technology Ropar, India, [email protected]
Amanjot Kaur, Indian Institute of Technology Ropar, India, [email protected]
Roland Goecke, University of Canberra, Australia, [email protected]
Tom Gedeon, Australian National University, Australia, [email protected]

ABSTRACT
This paper details the sixth Emotion Recognition in the Wild (EmotiW) challenge. EmotiW 2018 is a grand challenge at the ACM International Conference on Multimodal Interaction 2018, Colorado, USA. The challenge aims to provide a common platform for researchers in the affective computing community to benchmark their algorithms on 'in the wild' data. This year EmotiW contains three sub-challenges: a) audio-video based emotion recognition; b) student engagement prediction; and c) group-level emotion recognition. The databases, protocols and baselines are discussed in detail.

KEYWORDS
Emotion Recognition; Affective Computing

ACM Reference Format:
Abhinav Dhall, Amanjot Kaur, Roland Goecke, and Tom Gedeon. 2018. EmotiW 2018: Audio-Video, Student Engagement and Group-Level Affect Prediction. In 2018 International Conference on Multimodal Interaction (ICMI '18), October 16-20, 2018, Boulder, CO, USA. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3242969.3264972

1 INTRODUCTION
The sixth Emotion Recognition in the Wild (EmotiW, https://sites.google.com/view/emotiw2018) challenge is a series of benchmarking efforts focusing on different problems in affective computing in real-world environments. This year's EmotiW is part of the ACM International Conference on Multimodal Interaction (ICMI) 2018. EmotiW is organised annually as a grand challenge at ICMI conferences, with the aim of providing a competition platform for researchers in affective computing. For details about the earlier EmotiW challenges, please refer to the EmotiW 2017 baseline paper [3]. There are other efforts in the affective computing community that focus on different problems, such as depression analysis (Audio/Video Emotion Challenge [14]) and continuous emotion recognition (Facial Expression Recognition and Analysis [16]). Our focus is affective computing in 'in the wild' environments. Here, 'in the wild' means different real-world conditions in which subjects show head pose changes, have varied illumination on the face, and show spontaneous facial expressions, and where there is background noise, occlusion, etc. Examples of the data captured in different environments can be seen in Figure 1.

Figure 1: Images of the videos in the student engagement recognition sub-challenge [9]. Note the varied backgrounds, environments and illumination.

EmotiW 2018 contains three sub-challenges: a) Student Engagement Prediction (EngReco); b) Audio-Video Emotion Recognition (VReco); and c) Group-level Emotion Recognition (GReco). EngReco is a new problem introduced this year. In total, there were over 100 registrations in the challenge. Below, we discuss the three sub-challenges, their baselines, data, evaluation protocols and results.
2 STUDENT ENGAGEMENT RECOGNITION
Predicting student engagement in MOOCs is a challenging task. Engagement is an affective state that links a subject to a resource, and it has emotional, cognitive and behavioral aspects. A challenge in detecting a user's engagement level is that it does not stay constant while the user watches MOOC material. To help students retain their attention, or to track the parts of a video where they lose attention, it is necessary to track student engagement from social cues such as looking away from the screen, feeling drowsy, yawning, being restless in the chair, and so on. User engagement tracking is also vital for other applications, such as detecting a vehicle driver's attention level while driving or gauging customer engagement while reviewing a new product. With the advent of e-learning environments in the education domain, automatic detection of students' engagement levels based on computer vision and machine learning is the need of the hour. An algorithmic approach to automatic engagement detection requires a student engagement dataset. Due to the unavailability of 'in the wild' datasets for student engagement detection, a new dataset was created in this work. It addresses the need for automatic student engagement tracking software and sets a performance evaluation benchmark for student engagement detection algorithms. In the literature, various experiments have been conducted for student engagement detection in constrained environments, with features based on Action Units, facial landmark points, eye movement, mouse clicks, and motion of the head and body.

2.1 Data Collection and Baseline
The database collection details are discussed in Kaur et al. [9]. Student participants were asked to watch a five-minute-long MOOC video. The recordings were made with different methods: through Skype, using a webcam on a laptop or computer, and using a mobile phone camera. We endeavored to capture data in different scenarios in order to simulate the different environments in which students watch learning material. The environments used during recording include a computer lab, playground, canteen, hostel rooms, etc. To introduce the effect of unconstrained environments, different lighting conditions were also included, as the dataset was recorded at different times of the day. Figure 1 shows the different environments represented in the engagement database.

The data was divided into three subsets: Train, Validation and Test. In total, 149 videos were released for training and 48 videos for validation; the Test set contains 67 videos. The dataset split is subject independent, i.e. no subject is repeated among the three splits. The class-wise distribution of the data is as follows: 9 videos belong to engagement level 0, 45 videos to level 1, 100 videos to level 2, and the remaining 43 videos to level 3. The dataset has 91 subjects (27 females and 64 males) in total, with an age range of 19-27 years. The dataset was annotated by 6 annotators.

For the baseline, nine features per frame are extracted with OpenFace. Each video is then divided into segments with 25% overlap, and for each segment statistical features, such as the standard deviation of the nine OpenFace features, are computed. As a result, each video has 100 segments, where each segment is represented with the help of the nine features. A long short-term memory (LSTM) network learned on these sequences gives a Mean Square Error (MSE) of 0.10 and 0.15 on the Validation and Test sets, respectively. The performance of the competing teams on the Test set can be viewed in Table 1. A total of 6 teams submitted labels for evaluation during the testing phase. Please note that this list is preliminary, as the evaluation of the code of the top three teams is underway. The same applies to the other two sub-challenges.
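To make the segment pipeline concrete, the following is a minimal sketch, not the organisers' code: a helper that turns per-frame features into 100 overlapping segment descriptors, and a small LSTM regressor trained with the challenge's MSE metric. The window arithmetic, network size, optimizer and the [0, 1] label scaling are assumptions for illustration.

import numpy as np
import torch
import torch.nn as nn

def segment_std_features(frame_feats, n_segments=100, overlap=0.25):
    """Turn a (T, 9) per-frame feature matrix into (n_segments, 9)
    per-segment standard deviations, with 25% overlap between windows.
    Assumes T >= n_segments."""
    t, _ = frame_feats.shape
    win = t / ((n_segments - 1) * (1 - overlap) + 1)  # window length in frames
    hop = win * (1 - overlap)                         # 25% overlap => 75% hop
    segs = []
    for i in range(n_segments):
        s = int(round(i * hop))
        e = min(t, max(s + 1, int(round(s + win))))
        segs.append(frame_feats[s:e].std(axis=0))
    return np.stack(segs)

class EngagementLSTM(nn.Module):
    """LSTM over the 100 segment descriptors, regressing one engagement value."""
    def __init__(self, n_features=9, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, 100, 9)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)

# Toy training loop on random stand-in data; labels assumed scaled to [0, 1].
feats = np.stack([segment_std_features(np.random.randn(1500, 9))
                  for _ in range(32)])
x = torch.tensor(feats, dtype=torch.float32)
y = torch.rand(32)

model, loss_fn = EngagementLSTM(), nn.MSELoss()   # MSE is the challenge metric
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    optim.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optim.step()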
3 GROUP-LEVEL EMOTION RECOGNITION
This sub-challenge is the continuation of EmotiW 2017's GReco sub-challenge [3]. The primary motivation is to be able to predict the emotion/mood of a group of people. Given the large increase in the number of images and videos posted on social networking platforms, there is an opportunity to analyze the affect conveyed by a group of people. The task of the sub-challenge is to classify a group's perceived emotion as Positive, Neutral or Negative, where the labels represent the Valence axis. The images in this sub-challenge are from the Group Affect Database 3.0 [5].

The data is distributed into three sets: Train, Validation and Test, which contain 9815, 4346 and 3011 images, respectively. Compared to EmotiW 2017, the amount of data has increased three-fold.

For computing the baseline, we trained the Inception V3 network followed by three fully connected layers (each having 4096 nodes) for the three-class classification task. We use a stochastic gradient descent optimizer without any learning rate decay to train the model. The classification accuracy on the Validation and Test sets is 65.00% and 61.00%, respectively. The performance of the competing teams in this sub-challenge is reported in Table 2. A total of 12 teams submitted labels for evaluation during the testing phase.
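In the same spirit, here is a minimal sketch of a classifier matching this description, not the organisers' implementation: an Inception V3 backbone whose classifier is replaced with three 4096-node fully connected layers plus a three-way output, trained with plain SGD and no learning-rate decay. The input size, learning rate, momentum and random stand-in data are assumptions.

import torch
import torch.nn as nn
from torchvision import models

net = models.inception_v3(weights=None, aux_logits=False)
net.fc = nn.Sequential(                   # replace the stock classifier head
    nn.Linear(net.fc.in_features, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 3),                   # Positive / Neutral / Negative
)

# Plain SGD without learning-rate decay, as in the baseline description.
optim = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 299, 299)           # Inception V3 expects 299x299 inputs
y = torch.randint(0, 3, (4,))             # random stand-in labels

net.train()
loss = loss_fn(net(x), y)
loss.backward()
optim.step()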

4 AUDIO-VIDEO BASED EMOTION RECOGNITION
The VReco sub-challenge is the oldest running task in the EmotiW challenge series. The task is based on the Acted Facial Expressions in the Wild (AFEW) database [4]. The AFEW database has been collected from movies and TV serials using a keyword search: subtitles for the hearing impaired contain keywords that may correspond to the emotion of the scene, and short sequences whose subtitles contain emotion-related words were used as candidate samples. The database was then curated from these candidate audio-video samples. Like the other two databases in EmotiW, it is divided into three subsets: Train, Validation and Test.
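As a rough illustration of this keyword-search protocol (not the authors' tooling), the sketch below scans a subtitle file in SRT format and reports the time spans of blocks containing emotion-related words; the keyword list and file name are hypothetical.

import re

EMOTION_KEYWORDS = {"cries", "crying", "laughs", "laughing", "screams",
                    "shouts", "sobs", "gasps", "angry", "smiles"}  # assumed list

# SRT blocks: "HH:MM:SS,mmm --> HH:MM:SS,mmm" followed by the subtitle text.
BLOCK_RE = re.compile(
    r"(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\s*\n(.*?)(?:\n\s*\n|\Z)",
    re.DOTALL,
)

def candidate_clips(srt_text):
    """Yield (start, end, text) for subtitle blocks containing emotion words."""
    for start, end, text in BLOCK_RE.findall(srt_text):
        words = set(re.findall(r"[a-z]+", text.lower()))
        if words & EMOTION_KEYWORDS:
            yield start, end, " ".join(text.split())

if __name__ == "__main__":
    with open("movie.srt", encoding="utf-8") as f:   # hypothetical subtitle file
        for start, end, text in candidate_clips(f.read()):
            print(f"{start} --> {end}  {text}")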
