ChaLearn Looking at People 2015 Challenges: Action Spotting and Cultural Event Recognition
Xavier Baró (Universitat Oberta de Catalunya - CVC), Jordi Gonzàlez (Computer Vision Center, UAB), Junior Fabian (Computer Vision Center, UAB), Miguel A. Bautista (Universitat de Barcelona), Marc Oliu (Universitat de Barcelona), Hugo Jair Escalante (INAOE), Isabelle Guyon (ChaLearn), Sergio Escalera (Universitat de Barcelona - CVC)

Abstract

Following the previous series of Looking at People (LAP) challenges [6, 5, 4], ChaLearn ran two competitions to be presented at CVPR 2015: action/interaction spotting and cultural event recognition in RGB data. We ran a second round on human activity recognition on RGB data sequences. For cultural event recognition, tens of categories have to be recognized; this involves both scene understanding and human analysis. This paper summarizes the two challenges and the obtained results. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/.

1. Introduction

The automatic analysis of the human body in still images and image sequences, also known as Looking at People, keeps making rapid progress with the constant improvement of newly published methods that push the state of the art. Applications are countless: human-computer interaction, human-robot interaction, communication, entertainment, security, commerce and sports, with an important social impact in assistive technologies for the handicapped and the elderly.

In 2015, ChaLearn (www.chalearn.org) organized new competitions and a CVPR workshop on action/interaction spotting and cultural event recognition. The recognition of continuous, natural human signals and activities is very challenging due to the multimodal nature of the visual cues (e.g., movements of fingers and lips, facial expression, body pose), as well as technical limitations such as spatial and temporal resolution. In addition, images of cultural events constitute a very challenging recognition problem due to the high variability of garments, objects, human poses and context. How to combine and exploit all this knowledge from pixels therefore constitutes a challenging problem.

This motivated our choice to organize a new workshop and competition on this topic to sustain the effort of the computer vision community. These new competitions come as a natural evolution of our previous workshops at CVPR 2011, CVPR 2012, ICPR 2012, ICMI 2013, and ECCV 2014. We continued using our website, http://gesture.chalearn.org, for promotion, while challenge entries in the quantitative competitions were scored online on the Codalab Microsoft-Stanford University platform (http://codalab.org/), on which we have already organized international challenges related to computer vision and machine learning problems.

In the rest of this paper, we describe in more detail the organized challenges and the results obtained by the participants.

2. Challenge tracks and schedule

The ChaLearn LAP 2015 challenge featured two quantitative evaluations: action/interaction spotting on RGB data and cultural event recognition in still images. The characteristics of the two competition tracks are the following:

• Action/interaction recognition: in total, 235 action samples performed by 14 actors were provided. The selected actions involve the motion of most of the limbs and include interactions among various actors.

• Cultural event recognition: inspired by the Action Classification challenge of PASCAL VOC 2011-12, successfully organized by Everingham et al. [8], we ran a competition in which 50 categories corresponding to different worldwide cultural events were considered. In all the image categories, garments, human poses, objects, illumination and context constitute the possible cues to be exploited for recognizing the events, while preserving the inherent inter- and intra-class variability of this type of images. Thousands of images were downloaded and manually labeled, corresponding to cultural events like Carnival (Brazil, Italy, USA), Oktoberfest (Germany), San Fermín (Spain), Holi Festival (India) and Gion Matsuri (Japan), among others.

The challenge was managed using the Microsoft Codalab platform (https://www.codalab.org/competitions/). The schedule of the competition was as follows:

• December 1st, 2014: Beginning of the quantitative competition for the action/interaction recognition track; release of development and validation data.

• January 2nd, 2015: Beginning of the quantitative competition for the cultural event recognition track; release of development and validation data.

• February 15th, 2015: Beginning of the registration procedure for accessing the final evaluation data.

• March 13th, 2015: Release of the encrypted final evaluation data and of the validation labels. Participants started training their methods on the whole dataset.

• March 13th, 2015: Release of the decryption key for the final evaluation data. Participants started predicting results on the final evaluation data. This date was also the deadline for code submission.

• March 20th, 2015: End of the quantitative competition; deadline for submitting predictions on the final evaluation data. The organizers started the code verification by running it on the final evaluation data.

• March 25th, 2015: Deadline for submitting the fact sheets.

• March 27th, 2015: Publication of the competition results.

3. Competition data

This section describes the datasets provided for each competition and their main characteristics.

3.1. Action and interaction dataset

We provided the HuPBA 8K+ dataset [15], with annotated begin and end frames for every action and interaction. A key frame example for each action/interaction category is shown in Figure 1. The characteristics of the dataset are:

• The images are obtained from 9 videos (RGB sequences), and a total of 14 different actors appear in the sequences. The image sequences were recorded using a stationary camera with the same static background.

• 235 action/interaction samples performed by 14 actors.

• Each video (RGB sequence) was recorded at 15 fps, and each RGB image was stored at a resolution of 480 × 360 in BMP file format.

• 11 action categories, containing isolated and collaborative actions: Wave, Point, Clap, Crouch, Jump, Walk, Run, Shake Hands, Hug, Kiss, Fight. There is high intra-class variability among the action samples.

• The actors appear in a wide range of poses and perform different actions/gestures that vary the visual appearance of the human limbs, so there is large variability in human poses and self-occlusions, and many variations in clothing and skin color.

• Large differences in the length of the performed actions and interactions. Several distractor actions outside the 11 categories are also present.

The data attributes for this track are summarized in Table 1. Example images from the dataset are shown in Figure 1.

Table 1. Action and interaction data characteristics.
  Training actions:       150
  Validation actions:     90
  Test actions:           95
  Sequence duration:      9 videos of 1-2 min
  FPS:                    15
  Modalities:             RGB
  Num. of users:          14
  Action categories:      7
  Interaction categories: 4
  Labeled sequences:      235

Figure 1. Key frames of the HuPBA 8K+ dataset used in the action/interaction recognition track, showing actions ((a) Wave, (b) Point, (c) Clap, (d) Crouch, (e) Jump, (f) Walk, (g) Run), interactions ((h) Shake hands, (i) Hug, (j) Kiss, (k) Fight) and the idle pose ((l) Idle).

3.2. Cultural event recognition dataset

In this work, we introduce the first dataset of cultural events and the first cultural event recognition challenge. In this section, we discuss the works most closely related to it.

Action Classification Challenge [8]. This challenge belongs to the PASCAL VOC challenge, a benchmark in visual object category recognition and detection. The Action Classification challenge was introduced in 2010 with 10 categories and consisted of predicting the action(s) being performed by a person in a still image. In 2012 there were two variations of this competition, depending on how the person whose actions are to be classified was identified in a test image: (i) by a tight bounding box around the person; (ii) by a single point located somewhere on the body.

Social Event Detection [11]. This work is composed of three challenges and a common test dataset of images with their metadata (timestamps, tags, and geotags for a small subset of them). The first challenge consists of finding, in the test collection, technical events that took place in Germany. In the second challenge, the task is to find all soccer events taking place in Hamburg (Germany) and Madrid (Spain). The third challenge aims at finding demonstration and protest events of the Indignados movement occurring in public places in Madrid.

Event Identification in Social Media [1]. In this work, the authors introduce the problem of event identification in social media and present an incremental clustering algorithm.

Table 2. Comparison between our cultural event dataset and others present in the state of the art.
  Dataset                             #Images   #Classes   Year
  Action Classification Dataset [8]   5,023     10         2010
  Social Event Dataset [11]           160,000   149        2012
  Event Identification Dataset [1]    594,000   24,900     2010
  Cultural Event Dataset (ours)       11,776    50         2015

Figure 2. Cultural events by country; dark green represents a greater number of events.

To build the dataset, candidate image URLs were collected using queries related to each cultural event (festival, parade, event, etc.). Then, we removed duplicated URLs and downloaded the raw images. To ensure that the downloaded images belonged to each cultural event, every image was manually filtered. Next, all exact and near-duplicate images were removed from the downloaded set using the method described in [3]. While we attempted to remove all duplicates from the dataset, some remaining duplicates may not have been found.
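The near-duplicate removal step above uses the method of [3], which is not detailed here. As an illustration only, the sketch below implements a common generic alternative, perceptual average hashing (aHash): each image is reduced to a small binary fingerprint, and two images are treated as near-duplicates when the Hamming distance between their fingerprints is small. The function names and the distance threshold are our own assumptions for this sketch, not part of the actual dataset-construction pipeline.

```python
# Illustrative near-duplicate filter using a simple average hash (aHash).
# NOTE: a generic sketch, NOT the method of [3]; the threshold of 5 bits
# is an assumption chosen for illustration only.

def average_hash(gray, size=8):
    """Hash a grayscale image (2D list of 0-255 ints) into a size*size bitstring.

    The image is downscaled by block-averaging into a size x size grid;
    each cell becomes 1 if it is brighter than the mean of all cells, else 0.
    """
    h, w = len(gray), len(gray[0])
    cells = []
    for by in range(size):
        for bx in range(size):
            # Average the pixel block that maps to grid cell (bx, by).
            y0, y1 = by * h // size, max((by + 1) * h // size, by * h // size + 1)
            x0, x1 = bx * w // size, max((bx + 1) * w // size, bx * w // size + 1)
            block = [gray[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return ''.join('1' if c > mean else '0' for c in cells)

def hamming(a, b):
    """Number of differing bits between two equal-length bitstrings."""
    return sum(x != y for x, y in zip(a, b))

def filter_near_duplicates(images, max_dist=5):
    """Keep one representative per group of near-duplicate images."""
    kept, hashes = [], []
    for img in images:
        h = average_hash(img)
        # Keep the image only if it is far from every fingerprint seen so far.
        if all(hamming(h, prev) > max_dist for prev in hashes):
            kept.append(img)
            hashes.append(h)
    return kept
```

A lower `max_dist` keeps more borderline images (risking residual duplicates, as the text acknowledges), while a higher value may merge distinct photos of the same scene; any real pipeline would tune this on a labeled sample.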