Hierarchical Recurrent Neural Network for Skeleton Based Action Recognition


Yong Du, Wei Wang, Liang Wang
Center for Research on Intelligent Perception and Computing, CRIPAC
Nat'l Lab of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
{yong.du, wangwei, wangliang}@nlpr.ia.ac.cn

Abstract

Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.

[Figure 1: An illustrative sketch of the proposed hierarchical recurrent neural network. The whole skeleton is divided into five parts, which are fed into five bidirectional recurrent neural networks (BRNNs). As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. A fully connected layer and a softmax layer are performed on the final representation to classify the actions.]

1. Introduction

As an important branch of computer vision, action recognition has a wide range of applications, e.g., intelligent video surveillance, robot vision, human-computer interaction, game control, and so on [15, 36]. Traditional studies about action recognition mainly focus on recognizing actions from videos recorded by 2D cameras. But actually, human actions are generally represented and recognized in the 3D space. Human body can be regarded as an articulated system including rigid bones and hinged joints which are further combined into four limbs and a trunk [31]. Human actions are composed of the motions of these limbs and trunk which are represented by the movements of human skeleton joints in the 3D space [37]. Currently, reliable joint coordinates can be obtained from the cost-effective depth sensor using the real-time skeleton estimation algorithms [27, 28]. Effective approaches should be investigated for skeleton based action recognition.

Human skeleton based action recognition is generally considered as a time series problem [5, 17], in which the characteristics of body postures and their dynamics over time are extracted to represent a human action. Most of the existing skeleton based action recognition methods explicitly model the temporal dynamics of skeleton joints by using Temporal Pyramids (TPs) [19, 31, 33] and Hidden Markov Models (HMMs) [20, 34, 35]. The TPs methods are generally restricted by the width of the time windows and can only utilize limited contextual information. As for HMMs, it is very difficult to obtain the temporally aligned sequences and the corresponding emission distributions. Recently, recurrent neural networks (RNNs) with Long-Short Term Memory (LSTM) [8, 10] neurons have been used for action recognition [1, 11, 16]. All this work just uses single layer RNN as a sequence classifier without part-based feature extraction and hierarchical fusion.
In this paper, taking full advantage of deep RNN in modelling the long-term contextual information of temporal sequences, we propose a hierarchical RNN for skeleton based action recognition. Fig. 1 shows the architecture of the proposed network, in which the temporal representations of low-level body parts are modeled by bidirectional recurrent neural networks (BRNNs) and combined into the representations of high-level parts.

Human body can be roughly decomposed into five parts, i.e., two arms, two legs and one trunk, and human actions are composed of the movements of these body parts. Given this fact, we divide the human skeleton into the five corresponding parts, and feed them into five bidirectionally recurrently connected subnets (BRNNs) in the first layer. To model the movements from the neighboring skeleton parts, we concatenate the representation of the trunk subnet with those of the other four subnets, respectively, and then input these concatenated results to four BRNNs in the third layer as shown in Fig. 1. With a similar procedure, the representations of the upper body and the lower body are obtained in the fifth layer, and that of the whole body in the seventh layer. Up to now, we have finished the representation learning of skeleton sequences. Finally, a fully connected layer and a softmax layer are performed on the obtained representation to classify the actions. It should be noted that, to overcome the vanishing gradient problem when training RNN [8, 12], we adopt LSTM neurons in the last BRNN layer.
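To make the layer-by-layer fusion concrete, below is a minimal sketch of how such a part-based hierarchy could be wired up. It is written in PyTorch purely as an illustration, not as the authors' implementation: the hidden size, the plain tanh BRNNs in the lower layers, the bidirectional LSTM appearing only in the final recurrent layer, and the simple mean over time standing in for the temporally accumulated perceptron output are all assumptions made for brevity.

```python
import torch
import torch.nn as nn

class HierarchicalBRNN(nn.Module):
    """Sketch of the part-based hierarchy: five bidirectional subnets
    (trunk, two arms, two legs) whose outputs are fused stage by stage."""

    def __init__(self, part_dims, hidden=64, num_classes=60):
        super().__init__()
        # layer 1: one bidirectional RNN per body part; part_dims lists the
        # per-frame feature size of each part, ordered as
        # (trunk, left arm, right arm, left leg, right leg)
        self.part_rnns = nn.ModuleList(
            [nn.RNN(d, hidden, batch_first=True, bidirectional=True) for d in part_dims])
        # layer 3: the trunk representation is concatenated with each limb's
        self.limb_fuse = nn.ModuleList(
            [nn.RNN(4 * hidden, hidden, batch_first=True, bidirectional=True)
             for _ in range(4)])
        # layers 5 and 7: upper body, lower body, then whole body
        self.upper = nn.RNN(4 * hidden, hidden, batch_first=True, bidirectional=True)
        self.lower = nn.RNN(4 * hidden, hidden, batch_first=True, bidirectional=True)
        self.whole = nn.LSTM(4 * hidden, hidden, batch_first=True, bidirectional=True)  # LSTM in the last BRNN layer
        self.classifier = nn.Linear(2 * hidden, num_classes)  # fully connected layer

    def forward(self, parts):
        # parts: five tensors of shape (batch, time, part_dims[i])
        trunk, arm_l, arm_r, leg_l, leg_r = [
            rnn(x)[0] for rnn, x in zip(self.part_rnns, parts)]
        fused = [rnn(torch.cat([trunk, limb], dim=-1))[0]
                 for rnn, limb in zip(self.limb_fuse, (arm_l, arm_r, leg_l, leg_r))]
        upper, _ = self.upper(torch.cat(fused[:2], dim=-1))   # trunk-arm pairs
        lower, _ = self.lower(torch.cat(fused[2:], dim=-1))   # trunk-leg pairs
        whole, _ = self.whole(torch.cat([upper, lower], dim=-1))
        # accumulate the per-frame class scores over time for the final decision
        return self.classifier(whole).mean(dim=1)
```

In use, `parts` would hold the 3D joint coordinates of each body part per frame, and a cross-entropy loss on the returned scores plays the role of the softmax layer described above.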
Us- resentation to classify the actions. It should be noted that, ing five joints coordinates and their temporal differences as to overcome the vanishing gradient problem when training inputs, Cho and Chen [4] perform action recognition with a RNN [8, 12], we adopt LSTM neurons in the last BRNN hybrid multi-layer perceptron. In the above methods, the lo- layer. cal temporal dynamics is generally represented within a cer- In the experiments, we compare with five other deep tain time window or differential quantities, it cannot glob- RNN architectures derived from our proposed model to ver- ally capture the temporal evolution of actions. ify the effectiveness of the proposed network, and compare Approaches with sequential state transitions Lv et with several methods on three publicly available datasets. al.[20] extract local features of individual and partial com- Experimental results demonstrate that our method achieves binations of joints, and train HMMs to capture the ac- the state-of-the-art performance with high computational tion dynamics. Based on skeletal joints features, Wu and efficiency. The main contributions of our work can be sum- Shao [34] adopt a deep forward neural network to estimate marized as follows. Firstly, to the best of our knowledge, the emission probabilities of the hidden states in HMM, we are the first to provide an end-to-end solution for skele- and then infer action sequences. To accurately calculate the ton based action recognition by using hierarchical recurrent similarity between two sequences with Dynamic Manifold neural network. Secondly, by comparing with other five de- Warping, Gong et al.[5] perform both temporal segmen- rived deep RNN architectures, we verify the effectiveness tation and alignment with structured time series represen- of the necessary parts of the proposed network, e.g., bidi- tations. Though HMM can model the temporal evolution rectional network, LSTM neurons in the last BRNN layer, of actions, the input sequences have to be segmented and hierarchical skeleton part fusion. Finally, we demonstrate aligned, which in itself is a very difficult task. that our proposed model can handle skeleton based action Approaches with RNN The combination of RNN and recognition very well without sophisticated preprocessing. perceptron can directly classify sequences without any seg- The remainder of this paper is organized as follows. In mentation. By obtaining sequential representations with a Section 2, we introduce the related work on skeleton based 3D convolutional neural network, Baccouche et al.[1] pro- action recognition. In Section 3, we first review the back- pose a LSTM-RNN to recognize actions. Regarding the ground of RNN and LSTM, and then illustrate the details of histograms of optical flow as inputs, Grushin et al.[11] use the proposed network. Experimental results and discussion LSTM-RNN for robust action recognition and achieve good are presented in Section 4. Finally, we conclude the paper results on KTH dataset. Considering that LSTM-RNNs em- in Section 5. ployed in [1] and [11] are both unidirectional with only one hidden layer, Lefebvre et al.[16] propose a bidirectional xt xt LSTM-RNN with one forward hidden layer and one back- t t ward hidden layer for gesture classification.
Recommended publications
  • PATTERN RECOGNITION LETTERS an Official Publication of the International Association for Pattern Recognition
    Pattern Recognition Letters: an official publication of the International Association for Pattern Recognition. ISSN: 0167-8655.

    DESCRIPTION. Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition. Subject areas include all the current fields of interest represented by the Technical Committees of the International Association for Pattern Recognition, and other developing themes involving learning and recognition. Examples include:
    • Statistical, structural, syntactic pattern recognition;
    • Neural networks, machine learning, data mining;
    • Discrete geometry, algebraic, graph-based techniques for pattern recognition;
    • Signal analysis, image coding and processing, shape and texture analysis;
    • Computer vision, robotics, remote sensing;
    • Document processing, text and graphics recognition, digital libraries;
    • Speech recognition, music analysis, multimedia systems;
    • Natural language analysis, information retrieval;
    • Biometrics, biomedical pattern analysis and information systems;
    • Special hardware architectures, software packages for pattern recognition.
    We invite contributions as research reports or commentaries. Research reports should be concise summaries of methodological inventions and findings, with strong potential of wide applications. Alternatively, they can describe significant and novel applications.
  • A Wavenet for Speech Denoising
    A Wavenet for Speech Denoising. Dario Rethage, Jordi Pons, Xavier Serra (Music Technology Group, Universitat Pompeu Fabra).

    Abstract. Currently, most speech processing techniques use magnitude spectrograms as front-end and are therefore by default discarding part of the signal: the phase. In order to overcome this limitation, we propose an end-to-end learning method for speech denoising based on Wavenet. The proposed model adaptation retains Wavenet's powerful acoustic modeling capabilities, while significantly reducing its time-complexity by eliminating its autoregressive nature. Specifically, the model makes use of non-causal, dilated convolutions and predicts target fields instead of a single target sample. The discriminative adaptation of the model we propose learns in a supervised fashion via minimizing a regression loss. These modifications make the model highly parallelizable during both training and inference. Both computational and perceptual evaluations indicate that the proposed method is preferred to Wiener filtering, a common method based on processing the magnitude spectrogram.

    1 Introduction. Over the last several decades, machine learning has produced solutions to complex problems that were previously unattainable with signal processing techniques [4, 12, 38]. Speech recognition is one such problem where machine learning has had a very strong impact. However, until today it has been standard practice not to work directly in the time-domain, but rather to explicitly use time-frequency representations as input [1, 34, 35], for reducing the high-dimensionality of raw waveforms. Similarly, most techniques for speech denoising use magnitude spectrograms as front-end [13, 17, 21, 34, 36].
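As a rough illustration of the two ingredients highlighted above, non-causal dilated convolutions and a regression loss on a predicted target field, here is a toy sketch in PyTorch; the channel count, depth, and the absence of WaveNet's gated and skip connections are simplifications, so this is not the authors' architecture.

```python
import torch
import torch.nn as nn

class DilatedDenoiser(nn.Module):
    """Toy stack of non-causal dilated 1-D convolutions mapping a noisy
    waveform to a denoised target field (all output samples at once)."""

    def __init__(self, channels=32, layers=6):
        super().__init__()
        blocks, in_ch = [], 1
        for i in range(layers):
            d = 2 ** i  # exponentially growing dilation widens the receptive field
            # symmetric padding makes the layer non-causal: it sees past and future samples
            blocks += [nn.Conv1d(in_ch, channels, kernel_size=3, dilation=d, padding=d),
                       nn.ReLU()]
            in_ch = channels
        blocks.append(nn.Conv1d(channels, 1, kernel_size=1))  # project back to one waveform channel
        self.net = nn.Sequential(*blocks)

    def forward(self, noisy):  # noisy: (batch, 1, samples)
        return self.net(noisy)

# training sketch: minimize a plain regression loss against the clean signal
model, loss_fn = DilatedDenoiser(), nn.L1Loss()
noisy, clean = torch.randn(4, 1, 16000), torch.randn(4, 1, 16000)
loss = loss_fn(model(noisy), clean)
loss.backward()
```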
  • An Empirical Comparison of Pattern Recognition, Neural Nets, and Machine Learning Classification Methods
    An Empirical Comparison of Pattern Recognition, Neural Nets, and Machine Learning Classification Methods. Sholom M. Weiss and Ioannis Kapouleas, Department of Computer Science, Rutgers University, New Brunswick, NJ 08903.

    Abstract. Classification methods from statistical pattern recognition, neural nets, and machine learning were applied to four real-world data sets. Each of these data sets has been previously analyzed and reported in the statistical, medical, or machine learning literature. The data sets are characterized by statistical uncertainty; there is no completely accurate solution to these problems. Training and testing or resampling techniques are used to estimate the true error rates of the classification methods. Detailed attention is given to the analysis of performance of the neural nets using back propagation. For these problems, which have relatively few hypotheses and features, the machine learning procedures for rule induction or tree induction clearly performed best.

    … distinct production rule. Unlike decision trees, a disjunctive set of production rules need not be mutually exclusive. Among the principal techniques of induction of production rules from empirical data are Michalski's AQ15 system [Michalski, Mozetic, Hong, and Lavrac, 1986] and recent work by Quinlan in deriving production rules from a collection of decision trees [Quinlan, 1987b]. Neural net research activity has increased dramatically following many reports of successful classification using hidden units and the back propagation learning technique. This is an area where researchers are still exploring learning methods, and the theory is evolving. Researchers from all these fields have all explored similar problems using different classification models. Occasionally, some classical discriminant methods are cited in comparison with results for a newer technique such as a …
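For readers who want the flavour of such a comparison, the snippet below estimates error rates by resampling for two classifier families; the dataset and model settings are placeholders, not those of the study.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = [("back-propagation net", MLPClassifier(max_iter=2000, random_state=0)),
          ("decision tree", DecisionTreeClassifier(random_state=0))]
for name, clf in models:
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold resampling
    print(f"{name}: estimated true error rate {1 - scores.mean():.3f}")
```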
  • Unsupervised Speech Representation Learning Using Wavenet Autoencoders
    Unsupervised speech representation learning using WaveNet autoencoders. Jan Chorowski, Ron J. Weiss, Samy Bengio, Aäron van den Oord.

    Abstract. We consider the task of unsupervised extraction of meaningful latent representations of speech by applying autoencoding neural networks to speech waveforms. The goal is to learn a representation able to capture high level semantic content from the signal, e.g. phoneme identities, while being invariant to confounding low level details in the signal such as the underlying pitch contour or background noise. Since the learned representation is tuned to contain only phonetic content, we resort to using a high capacity WaveNet decoder to infer information discarded by the encoder from previous samples. Moreover, the behavior of autoencoder models depends on the kind of constraint that is applied to the latent representation. We compare three variants: a simple dimensionality reduction bottleneck, a Gaussian Variational Autoencoder (VAE), and a discrete Vector Quantized VAE (VQ-VAE). We analyze the quality of learned representations in terms of speaker independence, the ability to predict phonetic content, and the ability to accurately re- …

    … speaker gender and identity, from phonetic content, properties which are consistent with internal representations learned by speech recognizers [13], [14]. Such representations are desired in several tasks, such as low resource automatic speech recognition (ASR), where only a small amount of labeled training data is available. In such scenario, limited amounts of data may be sufficient to learn an acoustic model on the representation discovered without supervision, but insufficient to learn the acoustic model and a data representation in a fully supervised manner [15], [16]. We focus on representations learned with autoencoders applied to raw waveforms and spectrogram features and investigate the quality of learned representations on LibriSpeech [17].
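Of the three bottlenecks compared above, the discrete VQ-VAE one is the least standard, so here is a minimal sketch of the quantization step with a straight-through gradient; the codebook size and tensor shapes are illustrative, and this is not the paper's code.

```python
import torch

def vector_quantize(z_e, codebook):
    """Map each encoder output vector to its nearest codebook entry.

    z_e:      (batch, time, dim) continuous encoder outputs
    codebook: (K, dim) learnable discrete codes
    """
    # squared Euclidean distance to every code
    dist = (z_e.unsqueeze(-2) - codebook).pow(2).sum(-1)  # (batch, time, K)
    idx = dist.argmin(-1)                                 # discrete token per frame
    z_q = codebook[idx]                                   # quantized latents
    # straight-through estimator: the decoder sees z_q, gradients flow back to z_e
    return z_e + (z_q - z_e).detach(), idx

codebook = torch.randn(512, 64, requires_grad=True)
z_q, tokens = vector_quantize(torch.randn(2, 100, 64), codebook)
```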
  • Training Generative Adversarial Networks in One Stage
    Training Generative Adversarial Networks in One Stage. Chengchao Shen, Youtan Yin, Xinchao Wang, Xubin Li, Jie Song, Mingli Song (Zhejiang University, National University of Singapore, Alibaba Group, Zhejiang Lab, Stevens Institute of Technology).

    Abstract. Generative Adversarial Networks (GANs) have demonstrated unprecedented success in various image generation tasks. The encouraging results, however, come at the price of a cumbersome training process, during which the generator and discriminator are alternately updated in two stages. In this paper, we investigate a general training scheme that enables training GANs efficiently in only one stage. Based on the adversarial losses of the generator and discriminator, we categorize GANs into two classes, Symmetric GANs and Asymmetric GANs, and introduce a novel gradient decomposition method to unify the two, allowing us to train both classes in one stage and hence alleviate the training effort. We also computationally analyze the efficiency of the proposed method, and empirically demonstrate that the proposed method yields a solid 1.5× acceleration across various datasets and network architectures. Furthermore, we show that the proposed method is readily applicable to other adversarial-training scenarios, such as …

    [Figure 1: Comparison of the conventional Two-Stage GAN training scheme (TSGANs) and the proposed One-Stage strategy (OSGANs).]
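For contrast with the one-stage scheme proposed in the paper, the snippet below spells out the conventional alternating ("two-stage") update it improves upon; the gradient-decomposition trick itself is specific to the paper and not reproduced here, and the discriminator is assumed to return one logit per sample.

```python
import torch
import torch.nn.functional as F

def two_stage_step(G, D, opt_g, opt_d, real, z):
    """One iteration of vanilla GAN training: D and G are updated alternately."""
    ones = torch.ones(real.size(0), 1, device=real.device)
    zeros = torch.zeros(real.size(0), 1, device=real.device)

    # stage 1: fix G, update D
    opt_d.zero_grad()
    d_loss = (F.binary_cross_entropy_with_logits(D(real), ones)
              + F.binary_cross_entropy_with_logits(D(G(z).detach()), zeros))
    d_loss.backward()
    opt_d.step()

    # stage 2: fix D, update G
    opt_g.zero_grad()
    g_loss = F.binary_cross_entropy_with_logits(D(G(z)), ones)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```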
  • Deep Neural Networks for Pattern Recognition
    Chapter: Deep Neural Networks for Pattern Recognition. Kyongsik Yun, PhD, Alexander Huyen and Thomas Lu, PhD (Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, US).

    Abstract. In the field of pattern recognition research, the method of using deep neural networks based on improved computing hardware has recently attracted attention because of its superior accuracy compared to conventional methods. Deep neural networks simulate the human visual system and achieve human equivalent accuracy in image classification, object detection, and segmentation. This chapter introduces the basic structure of deep neural networks that simulate human neural networks. Then we identify the operational processes and applications of conditional generative adversarial networks, which are being actively researched based on the bottom-up and top-down mechanisms, the most important functions of the human visual perception process. Finally, recent developments in training strategies for effective learning of complex deep neural networks are addressed.

    1. PATTERN RECOGNITION IN HUMAN VISION. In 1959, Hubel and Wiesel inserted microelectrodes into the primary visual cortex of an anesthetized cat [1]. They projected bright and dark patterns on the screen in front of the cat and found that cells in the visual cortex were driven by a "feature detector", a finding that won the Nobel Prize. For example, when we recognize a human face, each cell in the primary visual cortex (V1 and V2) handles the simplest features such as lines, curves, and points. The higher-level visual cortex along the ventral object recognition path (V4) handles target components such as eyes, eyebrows and noses.
  • Mixed Pattern Recognition Methodology on Wafer Maps with Pre-Trained Convolutional Neural Networks
    Mixed Pattern Recognition Methodology on Wafer Maps with Pre-trained Convolutional Neural Networks. Yunseon Byun and Jun-Geol Baek, School of Industrial Management Engineering, Korea University, Seoul, South Korea.

    Keywords: Classification, Convolutional Neural Networks, Deep Learning, Smart Manufacturing.

    Abstract: In the semiconductor industry, the defect patterns on the wafer bin map are related to yield degradation. Most companies control the manufacturing processes in which critical defects occur by identifying the maps, so it is important to classify the patterns accurately. Engineers inspect the maps directly; however, it is difficult to check many wafers one by one because of the increasing demand for semiconductors. Although many studies on automatic classification have been conducted, it is still hard to classify when two or more patterns are mixed on the same map. In this study, we propose an automatic classifier that identifies whether a map shows a single pattern or a mixed pattern and which types are mixed. Convolutional neural networks are used for the classification model, and a convolutional autoencoder is used for initializing the convolutional neural networks. After being trained with single-type defect map data, the model is tested on single-type or mixed-type patterns. Whether a map is a mixed-type pattern is determined by calculating the probability that the model assigns to each class and comparing it with a threshold. The proposed method is evaluated using wafer bin map data with eight defect patterns. The results show that single defect pattern maps and mixed-type defect pattern maps are identified accurately without prior knowledge.
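The decision rule described above, comparing the classifier's per-class probabilities against a threshold to tell single from mixed maps, can be sketched as follows; the threshold value and tensor shapes are assumptions, not the paper's tuned settings.

```python
import torch

def classify_map(logits, threshold=0.3):
    """Return ('single' or 'mixed', list of defect classes) for one wafer map.

    logits: raw scores from the CNN for one map, shape (num_classes,).
    The threshold is illustrative; a value well below 0.5 lets several
    classes clear it simultaneously, which is what flags a mixed-type map.
    """
    probs = torch.softmax(logits, dim=-1)
    hits = (probs > threshold).nonzero(as_tuple=True)[0].tolist()
    if not hits:                       # nothing confident: fall back to the top class
        hits = [int(probs.argmax())]
    return ("single" if len(hits) == 1 else "mixed"), hits
```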
  • Introduction to Reinforcement Learning: Q Learning Lecture 17
    Lecture 17: Introduction to Reinforcement Learning: Q Learning. Thursday 29 October 2002. William H. Hsu, Department of Computing and Information Sciences, KSU. http://www.kddresearch.org, http://www.cis.ksu.edu/~bhsu
    Readings: Sections 13.3-13.4, Mitchell; Sections 20.1-20.2, Russell and Norvig.

    Lecture Outline
    • Readings: Chapter 13, Mitchell; Sections 20.1-20.2, Russell and Norvig. Today: Sections 13.1-13.4, Mitchell. Review: "Learning to Predict by the Method of Temporal Differences", Sutton.
    • Suggested Exercises: 13.2, Mitchell; 20.5, 20.6, Russell and Norvig.
    • Control Learning: control policies that choose optimal actions; the MDP framework, continued; issues (delayed reward, active learning opportunities, partial observability, reuse requirement).
    • Q Learning: dynamic programming algorithm; deterministic and nondeterministic cases; convergence properties. A minimal sketch of the update rule follows this outline.

    Control Learning
    • Learning to Choose Actions: performance element, applying a policy in an uncertain environment (last time); control and optimization objectives belong to the intelligent agent. Applications: automation (including mobile robotics), information retrieval.
    • Examples: a robot learning to dock on a battery charger; learning to choose actions to optimize factory output; learning to play Backgammon.

    (Kansas State University, CIS 732: Machine Learning and Pattern Recognition, Department of Computing and Information Sciences)
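The Q-learning item in the outline boils down to one tabular update rule; a minimal sketch follows, with the learning rate and discount factor chosen only for illustration.

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)].
    Q is a dict keyed by (state, action); unseen pairs default to 0."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]
```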
  • Image Classification by Reinforcement Learning with Two-State Q-Learning
    Image Classification by Reinforcement Learning with Two-State Q-Learning. Abdul Mueed Hafiz and Ghulam Mohiuddin Bhat, Department of Electronics and Communication Engineering, Institute of Technology, University of Kashmir, Srinagar, J&K, India, 190006. (EasyChair Preprint № 4052, August 18, 2020.)

    Abstract. In this paper, a simple and efficient hybrid classifier is presented which is based on deep learning and reinforcement learning. Q-Learning has been used with two states and 'two or three' actions. Other techniques found in the literature use feature maps extracted from convolutional neural networks and use these in the Q-states along with past history. This leads to technical difficulties in those approaches because the number of states is high due to the large dimensions of the feature map. Because our technique uses only two Q-states it is straightforward and consequently has a much smaller number of optimization parameters, and thus also has a simple reward function. Also, the proposed technique uses novel actions for processing images as compared to other techniques found in the literature. The performance of the proposed technique is compared with other recent algorithms like ResNet50, InceptionV3, etc. on popular databases including ImageNet, the Cats and Dogs Dataset, and the Caltech-101 Dataset. Our approach outperforms the other techniques on all the datasets used.

    Keywords: Image Classification; ImageNet; Q-Learning; Reinforcement Learning; ResNet50; InceptionV3; Deep Learning.

    1 Introduction. Reinforcement Learning (RL) [1-4] has garnered much attention [4-13].
  • Chapter 1 Pattern Recognition in Time Series
    Chapter 1: Pattern Recognition in Time Series. Jessica Lin, Sheri Williamson, Kirk Borne and David DeBarr; George Mason University and Microsoft Corporation.

    Abstract. Massive amounts of time series data are generated daily, in areas as diverse as astronomy, industry, sciences, and aerospace, to name just a few. One obvious problem of handling time series databases concerns their typically massive size: gigabytes or even terabytes are common, with more and more databases reaching the petabyte scale. Most classic data mining algorithms do not perform or scale well on time series data. The intrinsic structural characteristics of time series data, such as high dimensionality and feature correlation, combined with the measurement-induced noise that besets real-world time series data, pose challenges that render classic data mining algorithms ineffective and inefficient for time series. As a result, time series data mining has attracted an enormous amount of attention in the past two decades. In this chapter, we discuss the state-of-the-art techniques for time series pattern recognition, the process of mapping an input representation for an entity or relationship to an output category. Approaches to pattern recognition tasks vary by the representation for the input data, the similarity/distance measurement function, and the pattern recognition technique. We will discuss the various pattern recognition techniques, representations, and similarity measures commonly used for time series. While the majority of work has concentrated on univariate time series, we will also discuss the applications of some of the techniques to multivariate time series.
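As a concrete instance of the representation and similarity-measure pairing discussed above, the snippet below z-normalizes two series and compares them with Euclidean distance, one of the simplest measures covered in such surveys; DTW or other measures would slot into the same place.

```python
import numpy as np

def znorm(ts):
    """Z-normalize a series so comparisons ignore offset and scale."""
    ts = np.asarray(ts, dtype=float)
    return (ts - ts.mean()) / (ts.std() + 1e-8)

def euclidean_distance(a, b):
    """Distance between two equal-length, z-normalized time series."""
    return float(np.linalg.norm(znorm(a) - znorm(b)))

print(euclidean_distance([1, 2, 3, 4], [10, 20, 30, 40]))  # ~0: same shape, different scale
```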
  • Training Multilayered Perceptrons for Pattern Recognition: a Comparative Study of Four Training Algorithms
    International Journal of Machine Tools & Manufacture 41 (2001) 419–430.

    Training multilayered perceptrons for pattern recognition: a comparative study of four training algorithms. D.T. Pham (Intelligent Systems Laboratory, Manufacturing Engineering Centre, Cardiff University, P.O. Box 688, Cardiff CF24 3TE, UK) and S. Sagiroglu (Intelligent Systems Research Group, Computer Engineering Department, Faculty of Engineering, Erciyes University, Kayseri 38039, Turkey). Received 25 May 2000; accepted 18 July 2000.

    Abstract. This paper presents an overview of four algorithms used for training multilayered perceptron (MLP) neural networks and the results of applying those algorithms to teach different MLPs to recognise control chart patterns and classify wood veneer defects. The algorithms studied are Backpropagation (BP), Quickprop (QP), Delta-Bar-Delta (DBD) and Extended-Delta-Bar-Delta (EDBD). The results show that, overall, BP was the best algorithm for the two applications tested. © 2001 Elsevier Science Ltd. All rights reserved.

    Keywords: Multilayered perceptrons; Training algorithms; Control chart pattern recognition; Wood inspection.

    1. Introduction. Artificial neural networks have been applied to different areas of engineering such as dynamic system identification [1], complex system modelling [2,3], control [1,4] and design [5]. An important group of applications has been classification [4,6-13]. Both supervised neural networks [8,11-13] and unsupervised networks [9,14] have proved good alternatives to traditional pattern recognition systems because of features such as the ability to learn and generalise, smaller training set requirements, fast operation, ease of implementation and tolerance of noisy data. However, of all available types of network, multilayered perceptrons (MLPs) are the most commonly adopted.
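A minimal backpropagation-trained MLP of the kind compared in the paper can be set up in a few lines; the dataset, layer size, and learning rate below are placeholders rather than the paper's control-chart or wood-veneer data.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# a single hidden layer trained with stochastic gradient descent (plain backpropagation)
mlp = MLPClassifier(hidden_layer_sizes=(32,), solver="sgd",
                    learning_rate_init=0.1, max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print("test accuracy:", mlp.score(X_te, y_te))
```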
  • Clustering Driven Deep Autoencoder for Video Anomaly Detection
    Clustering Driven Deep Autoencoder for Video Anomaly Detection. Yunpeng Chang (Wuhan University), Zhigang Tu (Wuhan University), Wei Xie (Central China Normal University), and Junsong Yuan (State University of New York at Buffalo).

    Abstract. Because of the ambiguous definition of anomaly and the complexity of real data, video anomaly detection is one of the most challenging problems in intelligent video surveillance. Since abnormal events are usually different from normal events in appearance and/or in motion behavior, we address this issue by designing a novel convolutional autoencoder architecture to separately capture spatial and temporal informative representations. The spatial part reconstructs the last individual frame (LIF), while the temporal part takes consecutive frames as input and RGB difference as output to simulate the generation of optical flow. Abnormal events which are irregular in appearance or in motion behavior lead to a large reconstruction error. Besides, we design a deep k-means cluster to force the appearance and the motion encoder to extract common factors of variation within the dataset. Experiments on several publicly available datasets demonstrate the effectiveness of our method, with state-of-the-art performance.

    Keywords: video anomaly detection; spatio-temporal dissociation; deep k-means cluster.

    1 Introduction. Video anomaly detection refers to the identification of events which deviate from the expected behavior. Due to the complexity of realistic data and the limited amount of labelled effective data, a promising solution is to learn the regularity in normal videos in an unsupervised setting.
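The scoring idea above, large reconstruction errors from the appearance and motion branches marking abnormal events, can be sketched as follows; the two autoencoders are stand-ins, the RGB difference is used as the motion target as described in the abstract, and the equal weighting of the two errors is an assumption.

```python
import torch

def anomaly_score(frames, spatial_ae, temporal_ae):
    """frames: (batch, time, C, H, W) consecutive video frames.
    Returns one score per clip; higher means more anomalous."""
    last = frames[:, -1]                          # last individual frame (appearance target)
    rgb_diff = frames[:, 1:] - frames[:, :-1]     # frame differences (motion target)
    # appearance branch reconstructs the last frame
    err_app = (spatial_ae(last) - last).pow(2).mean(dim=(1, 2, 3))
    # motion branch maps consecutive frames to a predicted RGB difference
    pred_diff = temporal_ae(frames)               # assumed shape: (batch, time-1, C, H, W)
    err_mot = (pred_diff - rgb_diff).pow(2).mean(dim=(1, 2, 3, 4))
    return err_app + err_mot                      # abnormal clips reconstruct poorly
```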