A Thesis entitled

A Supervised Machine Learning Approach Using Object-Oriented Programming Principles

by Merl J. Creps Jr

Submitted to the Graduate Faculty as partial fulfillment of the requirements for the Master of Science Degree in Computer Science and Engineering

Dr. Jared Oluoch, Committee Chair
Dr. Weiqing Sun, Committee Member
Dr. Henry Ledgard, Committee Member
Dr. Amanda C. Bryant-Friedrich, Dean, College of Graduate Studies

The University of Toledo
May 2018

Copyright 2018, Merl J. Creps Jr

This document is copyrighted material. Under copyright law, no part of this document may be reproduced without the express permission of the author.

An Abstract of

A Supervised Machine Learning Approach Using Object-Oriented Programming Principles

by Merl J. Creps Jr

Submitted to the Graduate Faculty as partial fulfillment of the requirements for the Master of Science Degree in Computer Science and Engineering

The University of Toledo
May 2018

Artificial Neural Networks (ANNs) can be defined as a collection of interconnected layers, neurons, and weighted connections. Each layer is connected to the previous layer by weighted connections that couple each neuron in one layer to the neurons in the adjacent layer. An ANN resembles a human brain, in which multiple interconnected synapses allow a signal to flow easily from one neuron to the next. An ANN can assume one of two distinct constructs: Supervised or Unsupervised. Supervised neural networks have known outcomes, while Unsupervised neural networks have unknown outcomes. This thesis focuses primarily on Supervised neural networks. The contributions of this thesis are two-fold. One, it offers a comprehensive study of Object-oriented Programming (OOP) design principles that have been employed to design ANNs. Two, it designs and implements a scalable OOP solution to ANNs.
The ANN solution presented in this thesis provides an efficient and accurate statistical prediction model. A multi-layer feed-forward neural network using back-propagation is used to demonstrate OOP design techniques. The neural network consists of three layers: one input layer, one hidden layer, and one output layer. The input layer consists of two neurons; the hidden layer consists of three neurons; and the output layer consists of one neuron. The neural network uses the Sigmoid function as the activation function so that all data points are normalized to values in the range [0,1]. Compared to two existing models (Encog and Neuroph), the approach in this work produces a more accurate prediction.

To my family for their love, support, and endless dedication, and also in loving memory of my mother, Carol Creps.

Acknowledgments

First and foremost, I would like to express my sincere gratitude and appreciation to my advisor, Dr. Jared Oluoch, for the guidance, patience, motivation, and support he provided during the course of my graduate studies. I would like to thank Dr. Henry Ledgard for the encouragement to pursue my graduate studies, and Dr. Weiqing Sun for all the support and guidance that he provided during my studies. I would also like to extend a special thank you to Linda Beal, Richard Springman, Myrna Rudder, and Christie Hennen for the exceptional dedication to student success and mentorship that they graciously and willingly provided during my graduate studies. Secondly, I would like to thank the College of Engineering at The University of Toledo for providing me access to the equipment, laboratory, and supplies that were required for my research. Most importantly, I would like to thank my wife, who has supported me throughout this entire journey. Along with my wife, I would like to thank my children for their understanding and support. Finally, I would like to thank my mother and father for their unwavering love and support.
Contents

Abstract
Acknowledgments
Contents
List of Tables
List of Figures
List of Abbreviations
List of Symbols

1 Introduction
  1.1 Inheritance
  1.2 Polymorphism
    1.2.1 Static or Compile-time Polymorphism
    1.2.2 Dynamic Polymorphism
  1.3 Encapsulation
    1.3.1 Access Modifiers
    1.3.2 Public Access Modifier
      1.3.2.1 Protected Access Modifier
      1.3.2.2 Default Access Modifier
      1.3.2.3 Private Access Modifier
  1.4 Abstraction
    1.4.1 Objects
    1.4.2 Classes
  1.5 Neural Network Using Back-propagation
    1.5.1 Back-propagation with Object-oriented Programming
  1.6 Other Object-oriented Programming Approaches for Artificial Neural Networks
  1.7 OOP Neural Network Tools

2 Proposed Object-oriented Programming Solution for Artificial Neural Networks
  2.1 Artificial Neural Network
  2.2 Supervised Learning
  2.3 Unsupervised Learning
  2.4 Multilayer Perceptron
    2.4.1 Classifier
    2.4.2 Input Layer
    2.4.3 Hidden Layer
    2.4.4 Output Layer
  2.5 Activation Function
    2.5.1 Sigmoid Function
    2.5.2 tanH Function
    2.5.3 Rectified Linear Unit (ReLU) Function
    2.5.4 Leaky ReLU Function
  2.6 Gradient
  2.7 Back-propagation
    2.7.1 Problem Definition
  2.8 Neural Network Feedforward General Notations
  2.9 Backpropagation Algorithm
  2.10 Feedforward ANN with Back-propagation Illustration
    2.10.1 Incoming Hidden Layer Calculations
    2.10.2 Applying f(x)
    2.10.3 Hidden Layer Sum of Products
    2.10.4 Apply Activation Function
    2.10.5 Compute Output Margin of Error
    2.10.6 Compute the Rate of Change
    2.10.7 Compute Delta Output Weight Changes
    2.10.8 Compute Delta Hidden Sum
    2.10.9 Calculate Hidden Layer Incoming Weight Changes
    2.10.10 Update Incoming Hidden Layer Weights
  2.11 Contributions

3 Results
  3.1 Framework Setup
    3.1.1 Learning Rate vs MSE
    3.1.2 Sample Size vs MSE
    3.1.3 Max Error vs MSE
    3.1.4 Pseudo-code

4 Conclusion and Future Work

References

A Artificial Neural Network Java Code
B Activation Function Interface and Implementation
C Neuron Utility Class
D Display Utility Class

List of Tables

1.1 Encapsulation Access Modifiers

List of Figures

1-1 Object-Oriented Programming Concepts
1-2 Unified Modeling Language Diagram of OOP
1-3 Neural Network Model for XOR Gate
2-1 Multilayer Perceptron Neural Network Model
2-2 Linear Separability
2-3 Non-linearly Separable
2-4 Sigmoid non-linearity squashes numbers to the range [0,1]
2-5 tanh non-linearity squashes real numbers to the range [-1,1]
2-6 Rectified Linear Unit (ReLU) activation function, which is zero when x < 0 and linear with slope 1 when x > 0
2-7 A plot indicating the 6x improvement in convergence with the ReLU unit compared to the tanh unit
2-8 Gradient Descent
2-9 XOR Neural Network with weighted connections, for input [1,1]
3-1 Learning Rate vs MSE
3-2 Sample Size vs MSE
3-3 Maximum Error vs MSE

List of Abbreviations

ANN: Artificial Neural Network
DeeBNet: Deep Belief Networks
DT: Decision Trees
GRNN: General Regression Neural Network
GUI: Graphical User Interface
HDOD: High Dimension Omics Data
LM: Levenberg-Marquardt algorithm
LVM: Latent Variable Models
MLP: Multilayer Perceptron
MSE: Mean Squared Error, MSE = Σ(target − actual)² / (number of data points)
NN: Neural Network
NPC: Nasopharyngeal Carcinoma
OOP: Object-oriented Programming
POJO: Plain Old Java Object
QP: Quick Propagation
SOM: Self-Organizing Map
SVM: Support Vector Machine
ReLU: Rectified Linear Unit
RP: Resilient Propagation
ROI: Return on Investment
TTM: Time to Market
UML: Unified Modeling Language

List of Symbols

f(x): A relation between a set of inputs and a set of permissible outputs
f′(x):
The derivative of a given function with respect to x

Chapter 1

Introduction

Object-oriented Programming (OOP) is a software design paradigm that utilizes objects to produce meaningful programs. Programs are organized into classes, which specify the behaviour and properties of objects. These behaviours, known as methods, control the actions of the program. Figure 1-1 is a visual representation of the OOP concepts. An Object-oriented program is organized in classes, and one class can have multiple objects. Objects are entities in the real world that can be distinctly identified; for example, a smart phone, car, pen, table, and so on. A class is the building block for creating objects of the same type [1]. Figure 1-2 illustrates the relationship between OOP concepts. Consider a cow class. A cow has many objects: legs, eyes, and a mouth. A cow class can manipulate its various objects to produce some kind of behaviour. This behaviour is a method. For instance, a cow class can use the mouth object to make a sound (moo), the eyes objects to see, and the legs objects to walk. The three design principles that undergird OOP are inheritance, polymorphism, and encapsulation. Figure 1-1 illustrates these features. These features of OOP can be exploited to build Artificial Neural Networks (ANNs).

Figure 1-1: Object-Oriented Programming Concepts

1.1 Inheritance

Inheritance is a feature in OOP that makes it possible for a child class, or sub-class, to take over the attributes and methods of its parent class without having to create an entirely new class. The child class inherits common or generalized features from the parent class but can also add its own specialized features.
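As a minimal sketch of this idea (the class and method names here are illustrative only, not taken from the thesis code in Appendix A), the cow example could be expressed in Java as a child class that inherits a generalized method from its parent, overrides it, and adds a specialized method of its own:

```java
// Parent class: generalized features shared by all sub-classes.
class Animal {
    // A common behaviour every Animal provides.
    String describe() {
        return "An animal";
    }

    // Inherited as-is by child classes that do not override it.
    String walk() {
        return "walking";
    }
}

// Child class: inherits walk() from Animal without redefining it,
// overrides describe(), and adds its own specialized behaviour moo().
class Cow extends Animal {
    @Override
    String describe() {
        return "A cow";
    }

    String moo() {
        return "moo";
    }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        Cow cow = new Cow();
        System.out.println(cow.describe()); // overridden in Cow
        System.out.println(cow.moo());      // specialized to Cow
        System.out.println(cow.walk());     // inherited from Animal
    }
}
```

Here Cow did not have to redefine walk(); it took that behaviour over from Animal, which is precisely the reuse that inheritance provides.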