IOSR Journal of Engineering May. 2012, Vol. 2(5) pp: 990-993

Pattern Recognition System using MLP Neural Networks

Sarvda Chauhan¹, Vikas Goel² and Shalini Dhingra³
¹EIE Deptt., IET, Bhaddal, Ropar, Punjab, INDIA
³ECE Deptt., IET, Bhaddal, Ropar, Punjab, INDIA

ABSTRACT
Pattern recognition can be used to recognize and/or locate specific objects in an image. Pattern recognition approaches are based on the analysis of statistical parameters computed using image processing tools. The parameters may be compared with standard pattern parameters to identify the pattern, or a neural network may be trained on the statistical parameters to identify a given pattern. In the presented work, a neural network approach to identifying a pattern has been worked out. The neural approach applies biological concepts to machines to recognize patterns. A neural network is an information processing system. It consists of many simple processing units with a high degree of interconnection between them. The processing units work cooperatively with each other and achieve massive parallel distributed processing. The design and function of neural networks simulate some of the functionality of biological brains and neural systems. The advantages of neural networks are their adaptive-learning, self-organization and fault-tolerance capabilities. For these outstanding capabilities, neural networks are used for pattern recognition applications. In the presented work, a multi-layer perceptron (MLP) neural network is used for pattern recognition. Here, each neuron computes a so-called "net input" from the outputs of the previous neurons and from the weights of the connections. Typically, such a net input is a weighted sum, and a numerical value, called the "bias", is added to the net input. In MLPs, a function called the "activation function" is applied to the net input. In our experiments, we used a hyperbolic tangent activation in the hidden layer and a sigmoid in the output layer.
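The net-input computation described above can be sketched in a few lines of Python (a minimal illustration; the input values, weights and bias below are made-up numbers, not taken from the paper):

```python
import math

def neuron(inputs, weights, bias):
    """Compute a neuron's output: activation applied to the net input.

    The net input is the weighted sum of the inputs plus a bias term;
    a sigmoid is used here as the activation function.
    """
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))  # sigmoid activation

# Example: two inputs with hand-picked weights and bias (illustrative values).
y = neuron([0.5, -0.2], [0.8, 0.4], bias=0.1)
```

Because the sigmoid squashes the net input, the output always lies in (0, 1), which is convenient when the target values are the 0.1/0.9 labels used later in the paper.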

I. INTRODUCTION
Layered neural networks involve a set of input cells connected to a collection of output cells by means of one or more layers of modifiable intermediate connections [1]. The most natural use of this architecture is to implement association: an input pattern, the representation of a concept or a situation, is associated with another item, either of the same kind or totally different. For this reason, networks with multiple layers are described as associative networks. Pattern recognition is the primary use of this mechanism, and it is a popular application because it allows human recognition capability to be acquired by a machine. In the context of image processing, the different stages are: acquisition, concerned with digitizing the data coming from a sensor such as a camera or scanner; localizing, which involves extracting the object from its background; and representation. Some features are extracted from the input pattern, and these become the input to the neural network. The final stage, the decision stage, consists of dividing the set of object representations into a number of classes. The last two phases, representation and decision, are the associative phases. Layered neural networks can play a part in each of these phases.

II. BRIEF LITERATURE SURVEY
Multilayer Feed-forward Neural Networks (MFNNs) are an ideal means of tackling a whole range of difficult tasks in pattern recognition and regression because of their highly adaptable non-linear structure. The trend of using the multi-layer perceptron neural network [1] for the solution of the pattern recognition problem is understandable due to its capacity to accommodate the nature of human brain learning, and the fact that its structure can be formulated mathematically. The functionality and topology of the MLP are determined by a learning algorithm able to modify the parameters of the net. The back-propagation algorithm, based on the method of steepest descent [4] for updating the connection weights, is the one most commonly used by the scientific community. Multi-Layer Perceptrons (MLPs) are fully connected feed-forward nets with one or more layers of nodes between the input and the output nodes. Each layer is composed of one or more artificial neurons in parallel.

III. FEATURE EXTRACTION
Feature extraction is a process of studying and deriving useful information from the input patterns. In image recognition, the extracted features contain information about statistical parameters and are computed about the centre of gravity of the input pattern. The statistical parameters used in the neural network training process are: Perimeter, Area, Maximum and Minimum radii in each quadrant, Intercepts on axes, Mean Radius, Standard Deviation and Figure Aspect. All the parameters are normalized with respect to the mean radius; this normalization step makes the parameters size independent. The statistical parameters so computed are also made rotation independent using an orthogonal transformation of the coordinates.
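The centre-of-gravity and mean-radius normalization described above can be sketched as follows (an illustrative Python sketch, not the authors' code; it computes only a subset of the statistical parameters listed above, on a binary pixel grid):

```python
import math

def extract_features(pattern):
    """Derive size-normalized statistical features from a binary pattern.

    `pattern` is a list of rows of 0/1 values. All features are computed
    about the centre of gravity and normalized by the mean radius so that
    they become independent of the pattern's size.
    """
    pixels = [(r, c) for r, row in enumerate(pattern)
              for c, v in enumerate(row) if v]
    area = len(pixels)
    # Centre of gravity of the pattern.
    cy = sum(r for r, _ in pixels) / area
    cx = sum(c for _, c in pixels) / area
    # Radii of all object pixels measured from the centroid.
    radii = [math.hypot(r - cy, c - cx) for r, c in pixels]
    mean_r = sum(radii) / len(radii)
    std_r = math.sqrt(sum((d - mean_r) ** 2 for d in radii) / len(radii))
    # Dividing by the mean radius makes the features size independent.
    return {
        "mean_radius": mean_r,
        "std_radius": std_r / mean_r,
        "norm_area": area / mean_r ** 2,
        "max_radius": max(radii) / mean_r,
        "min_radius": min(radii) / mean_r,
    }

# A filled square: its normalized features barely change if it is rescaled.
square = [[1] * 8 for _ in range(8)]
feats = extract_features(square)
```

Rescaling the square (say to 16×16) leaves the normalized features almost unchanged, which is exactly the size independence the normalization step is meant to provide.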

ISSN: 2250-3021 www.iosrjen.org

IV. METHODOLOGY
An input pattern is presented to the first stage of the forward paths, the input layer K, which consists of a two-dimensional array of receptor cells. The second stage of the forward paths is called the hidden layer J. The third stage of the forward paths is the recognition layer I, as shown in the figure below. After the process of learning ends, the final result of the pattern recognition shows in the response of the cells of FA. Let x1, x2, …, xn be the input signals, w1, w2, …, wn the synaptic weights, u the activation potential, θ the threshold, y the output signal and f the activation function:

u = w1·x1 + w2·x2 + … + wn·xn,    y = f(u − θ)

MLPs learn through an iterative process of adjustments applied to their free parameters. The most common learning algorithms are the standard back-propagation and its faster-learning variations. They use a gradient search technique to minimize a cost function equal to the mean square error (MSE) between the desired and the actual net outputs. The net is trained by initially selecting small random weights and internal thresholds, and presenting all the training data repeatedly. Weights are adjusted after every trial using information specifying the correct class until the weights converge and the cost function is reduced to an acceptable value.

V. EXPERIMENT
The learning process takes place through adjustment of the weight connections between the input and hidden layers and between the hidden and output layers. The number of input nodes at the input layer was set at 17 according to the statistical features discussed in section III. The input nodes are based on:
1. Mean Radius
2. Standard Deviation of Radii
3. Normalized Area
4. Perimeter
5. Mean of Max. Radii
6. Mean of Min. Radii

The number of output nodes in this study was set at 6: circle, half circle, equilateral triangle, square, regular hexagon and regular pentagon. The number of nodes in the hidden layer was chosen based on trial and error. Each pattern presentation is tagged with its respective label as shown in the table below. The maximum value in each row (0.9) identifies the node expected to secure the highest output for a pattern to be considered correctly classified. The output values are denoted as O1, O2, O3, O4, O5 and O6.

Table 1: Target values at the designated output nodes for each class

Class              O1    O2    O3    O4    O5    O6
1  Square          0.9   0.1   0.1   0.1   0.1   0.1
2  Circle          0.1   0.9   0.1   0.1   0.1   0.1
3  Half Circle     0.1   0.1   0.9   0.1   0.1   0.1
4  Eq. Triangle    0.1   0.1   0.1   0.9   0.1   0.1
5  Reg. Hexagon    0.1   0.1   0.1   0.1   0.9   0.1
6  Reg. Pentagon   0.1   0.1   0.1   0.1   0.1   0.9
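The training procedure just described, small random initial weights, repeated presentation of the training data, and gradient descent on the MSE against 0.9/0.1 targets, can be sketched in Python. This is an illustrative toy with made-up two-dimensional inputs and two classes; the paper's own experiments used MATLAB's ANN toolbox with 17 inputs and 6 outputs, which would only change the layer sizes here:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W1, b1, W2, b2):
    # Hidden layer uses tanh, output layer sigmoid (as in the text).
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    o = [sigmoid(sum(w * hi for w, hi in zip(row, h)) + b)
         for row, b in zip(W2, b2)]
    return h, o

def train(data, n_in, n_hid, n_out, lr=0.5, epochs=2000):
    rnd = random.Random(0)
    # Small random initial weights, zero initial biases.
    W1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
    b1 = [0.0] * n_hid
    W2 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_out)]
    b2 = [0.0] * n_out
    for _ in range(epochs):
        for x, t in data:
            h, o = forward(x, W1, b1, W2, b2)
            # Output deltas: derivative of the MSE through the sigmoid.
            do = [(oi - ti) * oi * (1 - oi) for oi, ti in zip(o, t)]
            # Hidden deltas: back-propagate through tanh (derivative 1 - h^2).
            dh = [(1 - h[j] ** 2) * sum(do[k] * W2[k][j] for k in range(n_out))
                  for j in range(n_hid)]
            for k in range(n_out):
                for j in range(n_hid):
                    W2[k][j] -= lr * do[k] * h[j]
                b2[k] -= lr * do[k]
            for j in range(n_hid):
                for i in range(n_in):
                    W1[j][i] -= lr * dh[j] * x[i]
                b1[j] -= lr * dh[j]
    return W1, b1, W2, b2

# Two made-up 2-D "feature vectors" per class; targets use the 0.9/0.1 scheme.
data = [([0.0, 0.0], [0.9, 0.1]), ([0.1, 0.2], [0.9, 0.1]),
        ([1.0, 1.0], [0.1, 0.9]), ([0.9, 0.8], [0.1, 0.9])]
params = train(data, n_in=2, n_hid=4, n_out=2)
```

After training, the output node with the highest value identifies the predicted class, mirroring the row-maximum (0.9) convention of the target table above.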

The activation functions used were the hyperbolic tangent for the hidden layer and the sigmoid for the output layer. The hyperbolic tangent function is given by:

f(x) = tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x))

The sigmoid function is given by:

f(x) = 1 / (1 + e^(−x))
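As a direct transcription of the two formulas (plain Python, for reference):

```python
import math

def tanh(x):
    """Hyperbolic tangent: (e^x - e^-x) / (e^x + e^-x), range (-1, 1)."""
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def sigmoid(x):
    """Logistic sigmoid: 1 / (1 + e^-x), range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))
```

The two are closely related: tanh(x) = 2·sigmoid(2x) − 1, so the hidden layer's tanh is simply a sigmoid rescaled to the range (−1, 1).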

The input node parameters are normalized to the range [0, 1]. Thirty patterns (5 from each category) are used to train the NN, with a target of MSE < 0.01 as the acceptance criterion for stopping the training process.

[Figure: Typical Architecture of an MLP Neural Network System]

VI. RESULTS
The NN is trained using the ANN toolbox in MATLAB. The set of statistical parameters used is extracted using the feature extraction process. Table 2 gives the set of parameters for the respective shapes. The following results are obtained after the training is complete and new test pattern data sets are provided for testing the classifier:

Target Class        Percentage Correct Classification    MSE
1 - Square          96.78                                0.0099
2 - Circle          97.45                                0.0098
3 - Half Circle     98.34                                0.0101
4 - Eq. Triangle    96.64                                0.0090
5 - Reg. Hexagon    97.34                                0.0092
6 - Reg. Pentagon   98.56                                0.0097

Conclusion
Pattern recognition can be done both on conventional computers and with neural networks. Conventional computers use arithmetic algorithms to detect whether the given pattern matches an existing one. It is a straightforward method: it will answer either yes or no, and it does not tolerate noisy patterns. Neural networks, on the other hand, can tolerate noise and, if trained properly, will respond correctly to unknown patterns. Neural networks may not perform miracles, but if constructed with the proper architecture and trained correctly with good data, they give impressive results, not only in pattern recognition but also in other scientific and commercial applications.

REFERENCES
[1] "A Versatile System for Complex Industrial Parts", IEEE Trans., Vol. C-27, No. 9, Sep. 1977.
[2] M. A. Wani and D. T. Pham, "Efficient control chart pattern recognition through synergistic and distributed artificial neural networks", Proceedings of the Institution of Mechanical Engineers: Part B, 213, pp. 157–169, 1999.
[3] Anil K. Jain, Robert P. W. Duin and Jianchang Mao, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 1, January 2000.
[4] Enrique Frias-Martinez, Angel Sanchez and Jose Velez, "Support Vector Machines vs Multi-Layer Perceptrons for Efficient Off-Line Signature", Biometry & Artificial Vision Group.
[5] Aki Vehtari and Jouko Lampinen, "Bayesian MLP Neural Networks", Laboratory of Computational Engineering, Helsinki University of Technology, Finland, 2000.
[6] Yas Abbas Alsultanny and Musbah M. Aqel, "Pattern recognition using multilayer neural-genetic algorithm", Neurocomputing 51 (2003), pp. 237–247, Elsevier Science B.V.
[7] Richard Bridge, "Image Processing", Tessella Support Services PLC, Oxon, England, Issue V1.R2.M0, June 2003.
[8] Raul C. Muresan, "Pattern Recognition Using Pulse-Coupled Neural Networks and Discrete Fourier Transforms", personal research.
[9] "NI Vision Based Automatic Optical Inspection (AOI)", Proceedings of the 2009 IEEE International Conference on Applied Superconductivity and Electromagnetic Devices, Chengdu, China, September 25–27, 2009.


Table 2: Input Statistical Parameters
