An Efficient Human Activity Recognition Technique Based on Deep Learning Aziz Khelalef, Fakhreddine Ababsa, Nabil Benoudjit

To cite this version: Aziz Khelalef, Fakhreddine Ababsa, Nabil Benoudjit. An Efficient Human Activity Recognition Technique Based on Deep Learning. Pattern Recognition and Image Analysis: Advances in Mathematical Theory and Applications, MAIK Nauka/Interperiodica, 2019, 29 (4), pp. 702-715. ⟨10.1134/s1054661819040084⟩. ⟨hal-02519700⟩

HAL Id: hal-02519700, https://hal.archives-ouvertes.fr/hal-02519700. Submitted on 26 Mar 2020.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

An Efficient Human Activity Recognition Technique Based on Deep Learning

A. Khelalef a,*, F. Ababsa b,**, and N. Benoudjit a,***

a Laboratoire d'Automatique Avancée et d'Analyse des Systèmes (LAAAS), University Batna-2, Batna, Algeria
b Ecole Nationale d'Arts et Métiers, Institut Image-Le2i, Paris, France
* e-mail: [email protected]
** e-mail: [email protected]
*** e-mail: [email protected]

Abstract—In this paper, we present a new deep learning-based human activity recognition technique. First, we track and extract the human body from each frame of the video stream. Next, we abstract human silhouettes and use them to create binary space-time maps (BSTMs), which summarize human activity within a defined time interval.
Finally, we use a convolutional neural network (CNN) to extract features from the BSTMs and classify the activities. To evaluate our approach, we carried out several tests using three public datasets: the Weizmann, Keck Gesture and KTH databases. Experimental results show that our technique outperforms conventional state-of-the-art methods in terms of recognition accuracy and provides comparable performance against recent deep learning techniques. It is simple to implement, requires less computing power, and can be used for multi-subject activity recognition.

Keywords: human activity recognition, deep learning, convolutional neural network (CNN), features extraction, classification

DOI: 10.1134/S1054661819040084

1. INTRODUCTION

Nowadays, human activity recognition is one of the most important fields in computer vision research; it has many applications in industrial and everyday-life settings, being used in video surveillance, human-machine interaction, monitoring systems, virtual reality and many other applications.

The challenge in human activity recognition is to efficiently recognize various actions in complex situations, to provide a high recognition accuracy, and to simplify implementation in real-time applications while using less computing power.

View-based human activity recognition techniques use space-time information in the video stream to recognize human actions by extracting specific features. Generally, this consists of two steps: (1) pre-processing, whose aim is to prepare the data for the second step by applying operations such as resizing, background subtraction, extraction of silhouettes or skeletons, or transforms such as the DCT (discrete cosine transform) or FT (Fourier transform); and (2) features extraction, consisting of computing features from the pre-processed data. Features extraction techniques can be classified into three categories: methods using global features, local features, and body modeling techniques.

Many view-based human activity recognition methods have been proposed in the literature. Earlier works developed several methods using global features [1]. In [2] Blank et al. used silhouettes to create a space-time volume from which space-time saliency, shape structure and orientation are extracted. In [3] Dollar et al. proposed to extract local regions of interest from the space-time volume to create distinguishable features used for recognition. In [4] Kumari and Mitra proposed a transform-based technique using discrete Fourier transforms (DFTs) of small image blocks as features. Furthermore, in [5] Tasweer et al. used motion history images (MHI) to extract features by means of a blocked discrete cosine transform (DCT). In [6] Hafiz et al. used the Fourier transform domain of the frames to extract spectral features, and principal component analysis (PCA) to reduce the features dimension.

Local features are also widely used in human activity recognition. In [7] Lowe introduced the SIFT (scale-invariant feature transform) descriptor, which enables the extraction of robust local features invariant to image scaling, translation and rotation. In [8] Dalal et al. proposed the histogram of oriented gradients (HOG) descriptor for human activity recognition, calculating the gradient orientation in portions of the image as features for recognition. In [9] Lu and Little proposed PCA-HOG, an improvement of the HOG descriptor that uses principal component analysis (PCA) to create local descriptors invariant to illumination, pose and viewpoint. In [10, 11] Matti Pietikäinen et al. introduced Local Binary Patterns (LBP) for texture classification, which consist of extracting histograms of quantized local binary patterns in a local region of the image. In [12] Lin et al. proposed a nonparametric weighted feature extraction (NWFE) approach using PCA (principal component analysis) and K-means clustering to build histogram vectors from pose contours.

Body modeling human activity recognition techniques are also widely used; here the human body is modeled in order to be tracked and recognized. In [13] Nakazawa et al. represented and tracked the human body using an ellipse. In [14] Iwasawa et al. proposed to create human skeleton models using sticks. In [15] Huo et al. proposed to model the human head, shoulders and upper body for recognition. In [16] Sedai et al. used a 3D human body model with 10 body parts (torso, head, arms, legs, ...).

Recently, deep learning has become a revolutionary tool in computer vision research. The capability of convolutional neural networks to create distinguishable features directly from the input images using multiple hidden layers makes the introduction of this tool quite interesting in the domain of human activity recognition. Most recent applications of deep learning in human activity recognition have relied on wearable sensors [17-21]. However, research using view-based approaches remains scarce.

In this paper, we present a new deep learning-based human activity recognition technique. The objective is to recognize human activities in a video stream using extracted binary space-time maps (BSTMs) as the input of a convolutional neural network (CNN). The main contributions of our paper are summarized as follows:

• We propose a simple deep learning-based method consisting of two steps: (1) binary space-time maps (BSTMs) extraction, and (2) features extraction and action classification using a convolutional neural network (CNN).

• The proposed technique offers the capability to recognize multiple actions in the same video frame, because the BSTMs are extracted only from the silhouettes of the segmented human bodies.

• Experimental investigations using multiple benchmark databases (the Weizmann, Keck Gesture and KTH databases) show that our technique is efficient, outperforms conventional human activity recognition methods, and gives comparable performance against recent deep learning-based techniques.

This paper is organized as follows: in Section 2 we present the state of the art of deep learning-based techniques. Next, we give a brief introduction to CNNs. We present the proposed method in Section 4. Experimental results are given in the section that follows. Finally, Section 6 contains concluding remarks and future perspectives.

2. DEEP LEARNING-BASED TECHNIQUES – RELATED WORK

The capability of deep learning to self-extract distinguishable features has opened a new era in the human activity recognition field; in this section we review recent approaches.

In [22] Tushar D. et al. proposed a deep learning-based technique using binary motion images (BMIs): the authors used Gaussian mixture models (GMMs) to subtract the background and create the BMIs (Fig. 1), and three (3) CNN layers to extract features and classify activities. The BMIs are extracted from the whole frames, which makes this approach unusable for multi-human recognition.

Fig. 1. (Color online) Overview of the proposed technique in [22]: the video V(x, y, t) is obtained; binary foreground images I_xy(t) are extracted using a GMM; the binary images are combined to form binary motion images; then the convolutional neural network is trained and tested.

Moez B. et al. proposed in [23] a two-step neural recognition method (Fig. 2) using an extension of convolutional neural networks to 3D to learn spatio-temporal features. The authors proposed to extract the features using 10 CNN layers (input layer, two combinations of convolution/rectification/sub-sampling layers, a third convolution layer and two neuron layers).
