Neural Decision Tree and Forest Classifiers: With Application to Electromyogram (EMG) Gesture Classification
Neural Decision Tree and Forest Classifiers: With Application to Electromyogram (EMG) Gesture Classification

A thesis presented to the faculty of San Francisco State University in partial fulfillment of the requirements for the degree

Master of Arts
in
Mathematics

by
David Rodriguez
San Francisco, California
May 2016

Copyright by
David Rodriguez
2016

CERTIFICATION OF APPROVAL

I certify that I have read Neural Decision Tree and Forest Classifiers: With Application to Electromyogram (EMG) Gesture Classification by David Rodriguez and that in my opinion this work meets the criteria for approving a thesis submitted in partial fulfillment of the requirements for the degree: Master of Arts in Mathematics at San Francisco State University.

Alexandra Piryatinska
Associate Professor of Mathematics

Xiaorong Zhang
Assistant Professor, School of Engineering

Arek Goetz
Professor of Mathematics

Neural Decision Tree and Forest Classifiers: With Application to Electromyogram (EMG) Gesture Classification

David Rodriguez
San Francisco State University
2016

As the amount of data from sensors, networks, and other sources stored in vector form increases, so does the need to detect anomalies in it and classify it. My research goal in this thesis is to develop learning models that can be applied to raw signals and that assist in and automate the process of feature generation and selection. The goal of such models is to synthesize multiple signals sampled simultaneously for the purpose of classifying human movement.

There has been great success in using multi-resolution techniques such as wavelet decomposition and hand-crafted features to classify human movement from electromyogram (EMG) signals. However, these methods either make simplifying assumptions about the features or introduce redundancies that require subsequent dimensionality reduction before the data can be used as input to train a model. Additionally, the emphasis is usually on tuning a single model. My goal is to develop an ensemble of learners (i.e., models) capable of taking raw signals as input and finding the most defining features, an ensemble that, when aggregated, performs better than any single individual model. The models in this thesis therefore address previous shortcomings by providing effective signal processing and generalization to unseen data not used in training.

The new model in this thesis is a variant of the Neural Decision Forests introduced by Bulo and Kontschieder. The model is also a generalization of the more widely known random forest algorithm introduced by Breiman, because it is a multivariate decision tree capable of embedding feature generation and selection. This variant of neural decision trees and forests is competitive with its CART tree and random forest counterparts on classification problems from the UCI machine learning repository.

Chapter 2 is an introductory chapter that presents decision trees and neural networks, which form the building blocks of neural decision trees and forests. Chapter 3 focuses on neural decision trees with varying neural network architectures compared against a baseline CART-style decision tree. Here it is shown that the neural decision tree improves upon the classification performance of the CART decision tree on the iris dataset and is competitive on the image-segmentation dataset. Chapter 4 focuses on neural decision forests.
The classification accuracy is compared among a variety of neural decision forests with varying neural network architectures, this time with randomization induced so that the neural networks within a tree are not uniform. Here it is shown that the neural decision forests are competitive with the classification performance of the random forest algorithm on the iris, image-segmentation, and handwritten-digits datasets.

Chapter 5 focuses on applying neural decision forests to EMG signals and classifying the hand gestures recorded in those signals. The highlight is that the neural decision forest can be trained on raw signals by incorporating a random-window technique, effectively embedding the feature generation and selection. It is shown that neural decision forests trained on raw signals, and on signals with features extracted beforehand, outperform the random forest algorithm at smaller forest sizes and are competitive at larger forest sizes. In addition, the neural decision forest proves robust under a variety of classification tasks in which signals are sampled at high and low frequencies, and under gesture recognition both between and within individuals.

I certify that the Abstract is a correct representation of the content of this thesis.

Chair, Thesis Committee          Date

ACKNOWLEDGMENTS

This thesis would not have been possible without the inspiration, review, and support from my peers, advisors, and role models. Thank you, Dr. Piryatinska, for your expertise in time series and your belief in me down the many paths where this research began. And Dr. Zhang, thank you for your expertise in neural machine interfaces and for helping hone my ability to research topics methodically and to iterate and refine code and writing over time. I also want to thank all my professors at San Francisco State University; you have trained me to think hard and to articulate questions in writing in the pursuit of research. Of course, without my family and friends I would not have had the energy or the passion to undertake the steps toward this culminating experience. I love you.

TABLE OF CONTENTS

1 Introduction
  1.1 Overview
  1.2 Contributions and Outline of This Thesis
2 Decision Tree / Neural Network Background
  2.1 Neural Decision Trees and Deep Learning
    2.1.1 The Need for Feature Generation and Selection
    2.1.2 Trees of Different Features
    2.1.3 Ensembles and Feature Generation
    2.1.4 Computational Challenges
  2.2 Decision Trees
  2.3 Neural Networks
  2.4 Loss Functions and Optimization
  2.5 Error Backpropagation Optimizations
  2.6 Conclusions
3 Neural Decision Trees
  3.1 Introduction
    3.1.1 Related Work
  3.2 Neural Decision Trees
    3.2.1 Inference
    3.2.2 Neural Split Function
    3.2.3 Training Neural Networks
  3.3 Learning in Neural Decision Trees
  3.4 Neural Network Topology Selection
  3.5 Neural Decision Tree Experiments
    3.5.1 Iris Dataset
    3.5.2 Image Segmentation Dataset
    3.5.3 Artificial Pattern
  3.6 Conclusion
4 Neural Decision Forests
  4.1 Introduction
    4.1.1 Related Work
  4.2 Neural Decision Forests
    4.2.1 Inference
    4.2.2 Neural Split Function
    4.2.3 Training rNets
  4.3 Neural Decision Tree and rNet Topology
  4.4 Learning in Neural Decision Forests
  4.5 Neural Decision Forest Experiments