Boosting of Neural Networks Over MNIST Data

Eva Volna, Vaclav Kocian and Martin Kotyrba
Department of Informatics and Computers, University of Ostrava, 30. dubna 22, Ostrava, Czech Republic

Keywords: Boosting, AdaBoost, MNIST Data, Pattern Recognition.

Abstract: The methods proposed in the article build on a technique called boosting, which is based on the principle of combining a large number of so-called weak classifiers into a strong classifier. The article focuses on the possibility of increasing the efficiency of the algorithms through their appropriate combination, and particularly on increasing their reliability and reducing their time exigency. Time exigency here does not mean the time exigency of the algorithm itself, nor of its development, but the time exigency of applying the algorithm to a particular problem domain. Simulations and experiments of the proposed processes were performed in the designed and created application environment. The experiments have been conducted over the MNIST database of handwritten digits, which is commonly used for training and testing in the field of machine learning. Finally, a comparative experimental study with other approaches is presented. All achieved results are summarized in the conclusion.

1 BOOSTING REVIEW

The two most popular methods for creating ensembles are boosting (Schapire, 1999) and bagging (Breiman, 1996). Boosting is reported to give better results than bagging (Quinlan, 1996). Both of them modify a set of training examples to achieve diversity of weak learners in the ensemble. As alternative methods we can mention randomization, based on a random modification of the base decision algorithm (Dietterich, 2000), or Bayesian model averaging, which can even outperform boosting in some cases (Davidson and Fan, 2006). Boosting has its roots in a theoretical framework for studying machine learning called the 'PAC' learning model, due to Valiant (Valiant, 1984). Schapire (Schapire, 1990) came up with the first provable polynomial-time boosting algorithm in 1989. A year later, Freund (Freund, 1995) developed a much more efficient boosting algorithm which, although optimal in a certain sense, nevertheless suffered from certain practical drawbacks. The first experiments with these early boosting algorithms were carried out by (Drucker, Schapire, and Simard, 1993) on an OCR task.

Boosting is a general method for improving the accuracy of any given learning algorithm. The algorithm takes as input a training set (x_1, y_1), ..., (x_m, y_m), where each x_i belongs to some domain or instance space X, and each label y_i belongs to some label set Y; here Y = {-1, +1}. AdaBoost calls a given weak learning algorithm repeatedly in a series of rounds t = 1, ..., T. One of the main ideas of the algorithm is to maintain a distribution of weights over the training set. The weight of this distribution on the i-th training example on round t is denoted D_t(i). Initially, all weights are set equally, but on each round the weights of incorrectly classified examples are increased, so that the weak learner is forced to focus on the hard examples in the training set. The weak learner's job is to find a weak hypothesis h_t : X → {-1, +1} appropriate for the distribution D_t. The goodness of a weak hypothesis is measured by its error (1):

    \epsilon_t = \Pr_{i \sim D_t}[h_t(x_i) \neq y_i] = \sum_{i:\, h_t(x_i) \neq y_i} D_t(i)    (1)

Notice that the error is measured with respect to the distribution D_t on which the weak learner was trained. In practice, the weak learner may be an algorithm that can use the weights D_t on the training examples. Alternatively, when this is not possible, a subset of the training examples can be sampled according to D_t, and these (unweighted) resampled examples can be used to train the weak learner.

The most basic theoretical property of AdaBoost concerns its ability to reduce the training error. Let us write the error ε_t of h_t as 1/2 - γ_t. Since a hypothesis that guesses each instance's class at random has an error rate of 1/2 (on binary problems), γ_t thus measures how much better than random h_t's predictions are. Freund and Schapire (Freund and Schapire, 1997) proved that the training error (the fraction of mistakes on the training set) of the final hypothesis is at most (2):

    \prod_t \big[ 2\sqrt{\epsilon_t (1 - \epsilon_t)} \big] = \prod_t \sqrt{1 - 4\gamma_t^2} \leq \exp\Big(-2 \sum_t \gamma_t^2\Big)    (2)

Thus, if each weak hypothesis is slightly better than random, so that γ_t ≥ γ for some γ > 0, then the training error drops exponentially fast. However, earlier boosting algorithms required that such a lower bound γ be known a priori before boosting begins, and in practice knowledge of such a bound is very difficult to obtain. AdaBoost, on the other hand, is adaptive in that it adapts to the error rates of the individual weak hypotheses. This is the basis of its name: 'Ada' is short for 'adaptive' (Freund and Schapire, 1999).
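
To make the weight-update loop concrete, the following Python sketch implements generic binary AdaBoost as summarized above. It is a minimal illustration rather than the authors' implementation: the weak_learner callable (assumed here to accept the per-example weights D directly) is a placeholder, and the hypothesis weight α_t = ½ ln((1 − ε_t)/ε_t) is the standard choice from Freund and Schapire (1997), not a detail taken from this paper.

```python
import numpy as np

def adaboost(X, y, weak_learner, T):
    """Minimal binary AdaBoost; labels y are in {-1, +1}.

    weak_learner(X, y, D) must return a callable h with h(X) in {-1, +1}^m.
    It is assumed (not taken from the paper) that it accepts the weights D.
    """
    m = len(y)
    D = np.full(m, 1.0 / m)               # initial distribution: all weights equal
    hypotheses, alphas = [], []
    for t in range(T):
        h = weak_learner(X, y, D)         # train a weak hypothesis on distribution D_t
        pred = h(X)
        eps = D[pred != y].sum()          # weighted training error, as in eq. (1)
        eps = min(max(eps, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - eps) / eps)   # weight of this weak hypothesis
        # increase weights of misclassified examples, decrease the rest
        D *= np.exp(-alpha * y * pred)
        D /= D.sum()                      # renormalize to a distribution
        hypotheses.append(h)
        alphas.append(alpha)

    def final_hypothesis(X_new):
        # sign of the weighted vote of all weak hypotheses
        votes = sum(a * h(X_new) for a, h in zip(alphas, hypotheses))
        return np.sign(votes)

    return final_hypothesis
```
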
2 ENSEMBLES OF CLASSIFIERS

The goal of ensemble learning methods is to construct a collection (an ensemble) of individual classifiers that are diverse and yet accurate. If this is achieved, highly accurate classification decisions can be obtained by voting the decisions of the individual classifiers in the ensemble.

We have proposed a method of learning based on a property of neural networks that we noticed during earlier work (Kocian and Volná, 2012): a major part of the adaptation is performed during the first pass. We used neural networks as generators of weak classifiers only, i.e. classifiers that are only slightly better than a random variable with uniform distribution. For weak classifiers, their diversity is more important than their accuracy. Therefore, it seemed appropriate to use a greedy approach to building the classifiers. This approach uses only the power of the neural network adaptation rule in the early stages of its work, so no time is lost on a full adaptation of the classifier.

2.1 Performance and Diversity

The most significant experimental part of the article focuses on dealing with machine-readable text in particular; however, we were not limited to printed text. The experiment has been conducted over the MNIST database (LeCun et al., 2014). The MNIST database is a large database of handwritten digits that is commonly used for training and testing in the field of machine learning.

Several approaches to improving the performance of boosting have been proposed (Iwakura et al., 2010). In our experiment, we try to increase the diversity of classifiers by the following methods, which specifically relate to modifications of the training sets (a short sketch of these modifications follows below):

• filtering of inputs;
• randomly changing the order of the training examples;
• doubling the occurrences of incorrectly classified examples in a training set.

All these neural networks used the algorithm for the elimination of irrelevant inputs proposed in (Kocian et al., 2011).
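
The three training-set modifications listed above can be sketched as small preprocessing helpers. This is an illustrative sketch only: the function names and the NumPy array representation of the patterns are assumptions, and filter_inputs merely stands in for the input-elimination algorithm of (Kocian et al., 2011), which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng()

def filter_inputs(X, mask):
    """Keep only the input components selected by a boolean mask
    (a stand-in for the elimination of irrelevant inputs)."""
    return X[:, mask]

def shuffle_order(X, y):
    """Randomly change the order of the training examples."""
    idx = rng.permutation(len(y))
    return X[idx], y[idx]

def double_misclassified(X, y, predictions):
    """Duplicate the occurrences of incorrectly classified examples."""
    wrong = predictions != y
    return np.concatenate([X, X[wrong]]), np.concatenate([y, y[wrong]])
```
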
2.1.1 Proposed Neural-Networks-based Classifiers

In the subsequent text, we use the following nomenclature for neural networks:

• x – input value;
• t – required (expected) output value;
• y_in – input of neuron y;
• y_out – output of neuron y;
• α – learning parameter;
• φ – formula for calculating a neuron output value (activation function), y_out = φ(y_in);
• Δw – formula for calculating a change of a weight value.

We have used a total of five types of neural networks in the study: the codes N1–N4 denote single-layer networks, and N5 a two-layer network. The proposed ensembles of neural-networks-based classifiers are basically sets of m classifiers. All the m classifiers work with the same set of n inputs, and each of the m classifiers tries to learn to recognize objects of one class in the input patterns of size n. Details about the parameters of the networks are shown in Table 1. All the neural networks used the winner-takes-all strategy for the output neurons (Y_1, ..., Y_n) when they worked in the active mode, so only one output neuron, the one with the highest value, could be active. Y_i is considered the winner if and only if ∀j: (y_out of Y_i > y_out of Y_j) ∨ (y_out of Y_i = y_out of Y_j ∧ i ≤ j), i.e. the winner is the neuron with the highest output value y_out; in the case that more neurons have the same output value, the winner is the first one in the order (a sketch of this decision rule is given at the end of the section). Since Adaline did not perform well with the basic learning rule (Fausett, 1994), we assume that the cause lies in the relatively large number of patterns and inputs, and therefore possibly in the large value of y_in. That is why we have normalized the value of y_in by the sigmoid function φ(x) = 1 / (1 + exp(-x)).

Table 1: Parameters of classifiers.

    Type    Weight update Δw
    N1      Modified Adaline rule
    N2      Delta rule
    N3      Hebb rule
    N4      Perceptron rule
    N5      Back Propagation rule

A classifier is an alias for an instance of a neural network. Each classifier was created separately and adapted by only one pass through the training set. After that, the test set was presented and all achieved results were recorded in detail:

• lists of correctly and incorrectly classified patterns;
• time of adaptation.

An ensemble is a set of 100 classifiers, i.e. ... diversity of classifiers that were used in the proposal of a specific ensemble. We used 6 bases of algorithms in total: N1 represents Adaline, N2 the delta rule, N3 a Hebbian network, N4 a perceptron, and N5 a Back Propagation network; the sixth base, N1–N5, represents all of them; ensembles contain 20 specific instances of a specific type. In the test, each combination of the three methods was tested on 50 ensembles that were composed from classifiers formed over a given base of algorithms. Figure 1 briefly shows the logical structure of the experiment.

Figure 1: The logical structure of the experiment: bases of algorithms 1–6 and diversity-enhancing configurations 1–12 define ensembles 1–50 (each with its base of algorithms and enhancing configuration), which consist of classifiers 1–100 (each of a given network type).
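
To illustrate how the pieces described in this section fit together, the sketch below shows a single-layer classifier adapted by a single pass through the training set, a winner-takes-all decision over its output neurons, and an ensemble built from such classifiers. It is an assumption-laden outline rather than the authors' code: the delta-rule update, the one-hot targets, and the plain majority vote used for the ensemble decision are illustrative choices.

```python
import numpy as np

def sigmoid(x):
    # sigmoid normalization of y_in, as described for the modified Adaline rule
    return 1.0 / (1.0 + np.exp(-x))

class OnePassClassifier:
    """A weak classifier: a single-layer network adapted by exactly one pass
    through the training set (a delta-rule update is assumed for illustration)."""

    def __init__(self, n_inputs, n_classes, alpha=0.1, rng=None):
        self.rng = rng or np.random.default_rng()
        self.W = self.rng.normal(0.0, 0.01, size=(n_classes, n_inputs))
        self.alpha = alpha

    def adapt(self, X, targets):
        # one pass only: a major part of the adaptation happens early anyway
        for x, t in zip(X, targets):            # t is a one-hot target vector
            y_out = sigmoid(self.W @ x)
            self.W += self.alpha * np.outer(t - y_out, x)

    def classify(self, x):
        # winner-takes-all: only the output neuron with the highest value is
        # active; argmax picks the first one in the order in case of ties
        return int(np.argmax(sigmoid(self.W @ x)))

class Ensemble:
    """A set of weak classifiers combined by plain majority voting
    (the paper's exact combination scheme is not reproduced here)."""

    def __init__(self, classifiers):
        self.classifiers = classifiers

    def classify(self, x):
        votes = np.bincount([c.classify(x) for c in self.classifiers])
        return int(np.argmax(votes))
```
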
