GENERAL | ARTICLE

Artificial Neural Networks: A Brief Introduction

Jitendra R Raol and Sunilkumar S Mankame

Artificial neural networks are 'biologically' inspired networks. They have the ability to learn from empirical data/information. They find use in computer science and control engineering fields.

[J R Raol is a scientist with NAL. His research interests are parameter estimation, neural networks, fuzzy systems, genetic algorithms and their applications to aerospace problems. He writes poems in English.]

[Sunilkumar S Mankame is a graduate research student from Regional Engineering College, Calicut, working on direct neural network based adaptive control.]

In recent years artificial neural networks (ANNs) have fascinated scientists and engineers all over the world. They have the ability to learn and recall - the main functions of the (human) brain. A major reason for this fascination is that ANNs are 'biologically' inspired. They have the apparent ability to imitate the brain's activity to make decisions and draw conclusions when presented with complex and noisy information. However, there are vast differences between the biological neural networks (BNNs) of the brain and ANNs.

A thorough understanding of biologically derived NNs requires knowledge from other sciences: biology, mathematics and artificial intelligence. However, to understand the basics of ANNs, a knowledge of neurobiology is not necessary. Yet, it is a good idea to understand how ANNs have been derived from real biological neural systems (see Figures 1, 2 and the accompanying boxes). The soma of the cell body receives inputs from other neurons via adaptive synaptic connections to the dendrites, and when a neuron is excited, the nerve impulses from the soma are transmitted along an axon to the synapses of other neurons. The artificial neurons are called neuron cells, processing elements or nodes. They attempt to simulate the structure and function of biological (real) neurons.
Artificial neural models are loosely based on biology since a complete understanding of the behaviour of real neuronal systems is lacking. The point is that only a part of the behaviour of real neurons is necessary for their information capacity. Also, it is easier to implement/simulate simplified models rather than complex ones.

RESONANCE | February 1996

[Figure 1: Biological and artificial neuron models have certain features in common. Input signals arrive at dendritic spines, where synapses are formed; in the model of the artificial neuron, the 'soma' forms a weighted summation of the inputs and applies a threshold. Weights in the ANN model represent the synapses of the BNN.]

Biological Neuron System

• Dendrites: input branching tree of fibres - connect to a set of other neurons - receptive surfaces for input signals
• Soma: cell body - all the logical functions of the neuron are realised here
• Synapse: specialized contacts on a neuron - interfaces some axons to the spines of the input dendrites - can increase/dampen the neuron excitation
• Axon: nerve fibre - final output channel - signals converted into nerve pulses (spikes) to target cells

Artificial Neuron System

• Input layer: the layer of nodes for data entering the ANN
• Hidden layer: the layer between the input and output layers
• Output layer: the layer of nodes that produce the network's output responses
• Weights: the strength or (gain) value of the connection between nodes

The first model of an elementary neuron was outlined by McCulloch and Pitts in 1943. Their model included the elements
needed to perform the necessary computations, but they could not realise the model using the bulky vacuum tubes of that era. McCulloch and Pitts must nevertheless be regarded as the pioneers in the field of neural networks.

[Figure 2: A typical (feed forward) artificial neural network does not usually have feedback - the flow of signal/information is only in the forward direction, from the inputs and bias through the input layer, hidden layer and output layer. A multi-layer feed forward neural network (FFNN) is shown here. The non-linear characteristics of ANNs are due to the non-linear activation function f. This is useful for accurate modelling of non-linear systems.]

ANNs are designed to realise very specific computational tasks/problems. They are highly interconnected, parallel computational structures with many relatively simple individual processing elements (Figure 2). A biological neuron either excites or inhibits all the neurons to which it is connected, whereas in ANNs either excitatory or inhibitory neural connections are possible. There are many kinds of neurons in BNNs, whereas in ANNs only certain types are used. It is possible to implement and study various ANN architectures using simulations on a personal computer (PC).

Types of ANNs

There are mainly two types of ANNs: feed forward neural networks (FFNNs) and recurrent neural networks (RNNs). In an FFNN there are no feedback loops. The flow of signals/information is only in the forward direction. The behaviour of an FFNN does not depend on past input. The network responds only to its present input.

Characteristics of non-linear systems

In non-linear systems, the output variables do not depend on the input variables in a linear manner. The dynamic characteristics of the system itself would depend on either
one or more of the following: the amplitude of the input signal, its wave form, its frequency. This is not so in the case of a linear system.

In an RNN there are feedback loops (essentially an FFNN with output fed back to input). Different types of neural network architectures are briefly described next.

[Figure 3: A recurrent neural network has feedback loops, with output fed back to input through a unit delay.]

• Single-layer feed forward networks: It has only one layer of computational nodes. It is a feed forward network since it does not have any feedback.

• Multi-layer feed forward networks: It is a feed forward network with one or more hidden layers. The source nodes in the input layer supply inputs to the neurons of the first hidden layer. The outputs of the first hidden layer neurons are applied as inputs to the neurons of the second hidden layer, and so on (Figure 2). If every node in each layer of the network is connected to every other node in the adjacent forward layer, then the network is called fully connected. If, however, some of the links are missing, the network is said to be partially connected. Recall is instantaneous in this type of network (we will discuss this later in the section on the uses of ANNs). These networks can be used to realise complex input/output mappings.

• Recurrent neural networks: A recurrent neural network is one in which there is at least one feedback loop. There are different kinds of recurrent networks depending on the way in which the feedback is used. In a typical case it has a single layer of neurons, with each neuron feeding its output signal back to the inputs of all other neurons.
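The typical recurrent case just described - each neuron's output fed back, through a unit delay, to the inputs of the other neurons - can be sketched in a few lines of Python. This is only an illustrative sketch: the two-neuron size, the weight values and the sigmoid choice of the non-linear activation function f are assumptions, not taken from the article.

```python
import math

def sigmoid(x):
    """A common choice for the non-linear activation function f."""
    return 1.0 / (1.0 + math.exp(-x))

def rnn_step(state, inputs, w_rec, w_in):
    """One time step: each neuron sums the (delayed) outputs of all
    neurons and the external inputs, each through a weight, then
    applies the activation function."""
    new_state = []
    for i in range(len(state)):
        total = sum(w_rec[i][j] * state[j] for j in range(len(state)))
        total += sum(w_in[i][k] * inputs[k] for k in range(len(inputs)))
        new_state.append(sigmoid(total))
    return new_state

# A two-neuron recurrent network driven by a single constant input.
w_rec = [[0.0, 0.5], [-0.5, 0.0]]   # feedback weights (no self-feedback here)
w_in = [[1.0], [1.0]]               # input weights
state = [0.0, 0.0]                  # neuron outputs at the previous time step
for _ in range(5):                  # the unit delay: state updates step by step
    state = rnn_step(state, [1.0], w_rec, w_in)
print([round(s, 3) for s in state])
```

Replacing the feedback weights `w_rec` with zeros removes the feedback loops, and the same step function then behaves like a single feed forward layer.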
Other kinds of recurrent networks may have self-feedback loops and also hidden neurons (Figure 3).

• Lattice networks: A lattice network is a feed forward network with the output neurons arranged in rows and columns. It can have one-dimensional, two-dimensional or higher dimensional arrays of neurons, with a corresponding set of source nodes that supply the input signals to the array. A one-dimensional network of three neurons fed from a layer of three source nodes and a two-dimensional lattice of 2-by-2 neurons fed from a layer of two source nodes are shown in Figure 4. Each source node is connected to every neuron in the lattice network.

[Figure 4: Lattice networks are FFNNs with output neurons arranged in rows and columns. The one-dimensional network of three neurons shown here is fed from an input layer of three source nodes. The two-dimensional lattice is fed from a layer of two source nodes.]

There are also other types of ANNs: Kohonen nets, adaptive resonance theory (ART) networks, radial basis function (RBF) networks, etc., which we will not consider now.

Training an ANN

A learning scheme for updating a neuron's connections (weights) was proposed by Donald Hebb in 1949. A new powerful learning law called the Widrow-Hoff learning rule was developed by Bernard Widrow and Marcian Hoff in 1960.

How does an ANN achieve 'what it achieves'? In general, an ANN structure is trained with known samples of data. As an example, if a particular pattern is to be recognised, then the ANN is first trained with the known pattern/information (in the form of digital signals). Then the ANN is ready to recognise a similar pattern when it is presented to the network.
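The Widrow-Hoff rule mentioned above adjusts each weight in proportion to the output error. A minimal sketch in Python, assuming a single linear neuron; the training data, learning rate and epoch count are illustrative, not from the article:

```python
def train_widrow_hoff(samples, n_weights, rate=0.1, epochs=200):
    """Widrow-Hoff (least-mean-square) rule: for each known sample,
    nudge every weight by rate * (desired - actual) * its input."""
    weights = [0.0] * n_weights
    for _ in range(epochs):
        for inputs, desired in samples:
            actual = sum(w * x for w, x in zip(weights, inputs))
            error = desired - actual
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Teach a single linear neuron the mapping y = 2*x1 - x2
# (the bias is carried as a constant 1.0 third input).
samples = [([1.0, 0.0, 1.0], 2.0),
           ([0.0, 1.0, 1.0], -1.0),
           ([1.0, 1.0, 1.0], 1.0),
           ([2.0, 1.0, 1.0], 3.0)]
weights = train_widrow_hoff(samples, n_weights=3)
print(weights)  # after training, close to [2.0, -1.0, 0.0]
```

After training on the known samples, the neuron also responds sensibly to inputs it has not seen, which is the sense in which the network 'recognises' similar data.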
If an ANN is trained pattern recognition, with a character 'H' , then it must be able to recognise this image processing, character when some noisy/fuzzy H is presented to it. A Japanese signal processing/ optical character recognition (OCR) system has an accuracy of prediction, adaptive about 99% in recognising characters from thirteen fonts used to control and other train the ANN.