Radial Basis Function Artificial Neural Networks and Fuzzy Logic


PERIODICA POLYTECHNICA SER. EL. ENG. VOL. 42, NO. 1, PP. 155-172 (1998)

N. C. Steele* and J. Godjevac**

* Control Theory and Application Centre, Coventry University, email: [email protected]
** EPFL Microcomputing Laboratory, IN-F Ecublens, CH-1015 Lausanne, Switzerland

Received: May 8, 1997; Revised: June 11, 1998

Abstract

This paper examines the underlying relationship between radial basis function artificial neural networks and a type of fuzzy controller. The major advantage of this relationship is that the methodology developed for training such networks can be used to develop 'intelligent' fuzzy controllers, and an application in the field of robotics is outlined. An approach to rule extraction is also described. Much of Zadeh's original work on fuzzy logic made use of the MAX/MIN form of the compositional rule of inference. A trainable/adaptive network which is capable of learning to perform this type of inference is also developed.

Keywords: neural networks, radial basis function networks, fuzzy logic, rule extraction.

1. Introduction

In this paper we examine the Radial Basis Function (RBF) artificial neural network and its application in the approximate reasoning process. The paper opens with a brief description of this type of network and its origins, and then goes on to show one way in which it can be used to perform approximate reasoning. We then consider the relation between a modified form of RBF network and a fuzzy controller, and conclude that they can be identical. We also consider the problem of rule extraction, and we discuss ideas which were developed for obstacle avoidance by mobile robots. The final part of the paper will consider a novel type of network, the Artificial Neural Inference (ANI) network, which is related to the RBF network and can perform max/min compositional inference.
Making use of some new results in analysis, it is also possible to propose an adaptive form of this network, and it is on this network that current work is focused.

2. The RBF Network

The radial basis function network has its origins in approximation theory, and such an approach to multi-variable approximation has been developed as a neural network paradigm notably by BROOMHEAD and LOWE [2]. Surprisingly perhaps, techniques of multi-variable approximation have only recently been studied in any great depth. A good account of the state of the art at the start of this decade is given by POWELL in [1]. In the application to the approximation of functions of a single variable, it is assumed that values of the function to be approximated are known at a number, n, of distinct sample points, and the approximation task is to construct a continuous function through these points. If n centres are chosen to coincide with the known data points, then an approximation can be constructed in the form

f(x) = \sum_{i=1}^{n} w_i \phi_i(r_i),

where r_i = \|x - c_i\| is the radial distance of x from the centre c_i, i = 1...n, and w_i, i = 1...n are constants. Each of the n centres is associated with one of the n radial basis functions \phi_i, i = 1...n. The method extends easily to higher dimensions, when r is usually taken as the Euclidean distance. With the centres chosen to coincide with the sample or data points, and with one centre corresponding to each such point, the approximation generated will be a function which reproduces the function values at the sample points exactly. In the absence of 'noise' this is desirable, but in situations when noise is present, steps have to be taken to prevent the noise being modelled at the expense of the underlying data. In fact,
in most neural network applications, it is usual to work with a smaller number of centres than of data points in order to produce a network with good 'generalisation' as opposed to noise modelling properties, and the problem of how to choose the centres then has to be addressed. Some users take the view that this is part of the network training process, but to take this view from the outset means that possible advantages may be lost. The 'default' method for 'prior' centre selection is the use of the k-means algorithm, although other methods are available. In a number of applications, notably those where the data occur in clusters, this use of a set of fixed centres is adequate for the task in hand. In other situations, this approach is either not successful or is not appropriate, and the network has to be given the ability to adapt these locations, either as part of the training process or in a later tuning operation. The exact nature of the radial basis functions \phi_i, i = 1...n is not usually of major importance, and good results have been achieved in a number of applications using Gaussian basis functions, when

\phi_i(r_i) = e^{-r_i^2 / a_i^2}, \quad r_i = \|x - c_i\|,

where a_i, i = 1...n are constants.

Fig. 1. The Radial Basis Function network

This type of set of basis functions, with the property that \phi \to 0 as r \to \infty, is known as a set of localised basis functions, and although this appears to be a natural choice, there is no requirement for this to be the case. There are a number of examples of non-localised basis functions in use, including the thin-plate spline function, \phi(r) = r^2 \ln(r), and the multi-quadric function. A major advantage of the RBF neural network, when the centres have been pre-selected, is the speed at which it can be trained. In Fig. 1, we show a network with two output nodes, and we note that, with the centres fixed,
the only parameters which have to be determined are the weights on the connections between the single hidden layer and the output layer. This means that the parameters can be found by the solution of a system of linear equations, and this can be made both efficient and robust by use of the Singular Value Decomposition (SVD) algorithm. The algorithm has further advantages when the number of centres is less than the number of data points, since a least squares approximation is then found automatically. If centres are not pre-selected, then some procedure for selection, perhaps based on gradient descent, must be incorporated into the training algorithm. In this case, either all parameters are determined by gradient descent, or a hybrid algorithm may be used, with the parameters of the centres being updated by this means and new sets of weights found (not necessarily at each iteration), again as the solution of a system of linear equations.

3. A Fuzzy Controller

Fig. 2. A fuzzy controller

In Fig. 2 we show the architecture of a fuzzy controller based on the Takagi-Sugeno design, with a crisp output. This controller is a variant of that discussed by JANG and SUN [4]. In general, this fuzzy controller has m crisp inputs x_1, x_2, ..., x_m and, in this case, one output y. There are n linguistic rules of the form:

R_i: IF x_1 is A_{i,1} AND x_2 is A_{i,2} AND ... AND x_m is A_{i,m} THEN y is y_i, i = 1, ..., n,

where i is the index of the rule, A_{i,j} is a fuzzy set for the i-th rule and j-th linguistic variable defined over the universe of discourse for the j-th variable, and w_i is a real number. The fuzzy sets A_{i,j}, i = 1, ..., n, j = 1, ..., m are each defined by Gaussian membership functions \mu_{i,j}(x_j), where

\mu_{i,j}(x_j) = e^{-(x_j - c_{i,j})^2 / a_{i,j}^2}.    (1)

Here c_{i,j} and a_{i,j} are the constant Gaussian parameters. In the first layer of the structure shown in Fig. 2, fuzzification of the crisp inputs x_1 and x_2 takes place.
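As a concrete illustration, the Gaussian fuzzification of Eq. (1) can be sketched in a few lines of NumPy; the rule parameters below are hypothetical values chosen for the example, not taken from the paper:

```python
import numpy as np

def fuzzify(x, c, a):
    """Gaussian membership grades per Eq. (1):
    mu[i, j] = exp(-(x[j] - c[i, j])**2 / a[i, j]**2).

    x: crisp input vector, shape (m,)
    c, a: Gaussian centres and widths, shape (n, m) -- one set per rule i and input j.
    """
    return np.exp(-((x - c) ** 2) / a ** 2)

# Two rules (n = 2) over two inputs (m = 2); illustrative parameters only.
c = np.array([[0.0, 0.0], [1.0, 1.0]])
a = np.full((2, 2), 0.5)
mu = fuzzify(np.array([0.0, 0.0]), c, a)
print(mu)  # first rule's sets peak at this input, so mu[0] = [1.0, 1.0]
```

Broadcasting evaluates all n x m membership grades at once, which is convenient later when the grades are combined rule by rule.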
This means that the degree of membership of the inputs x_1 and x_2 is calculated for each of the corresponding fuzzy sets A_{i,j}. These values are then supplied to the nodes in the second layer, where a t-norm is applied, and here we use the algebraic product. The output of the k-th unit in this layer is then the firing strength u_k of rule k, and in this case, with two inputs, it is given by u_k = \mu_{k,1}(x_1) \mu_{k,2}(x_2), k = 1, ..., n. In the general case this is

u_k = \prod_{j=1}^{m} \mu_{k,j}(x_j), \quad k = 1, ..., n.    (2)

The rule firing strength may also be taken as a measure of the degree of compliance of the current input state with a rule in the rule base, taking the value 1 when compliance is total. The overall network output at the fourth layer has to be a crisp value, and this is generated by the height method. The w_i, i = 1, ..., n are the weights between the third and fourth layers and are in some sense 'partial consequents'. The output y is finally formed as

y = \frac{\sum_{i=1}^{n} w_i u_i}{\sum_{i=1}^{n} u_i}.    (3)

This can be written as

y = w_1 \frac{u_1}{\sum_{i=1}^{n} u_i} + w_2 \frac{u_2}{\sum_{i=1}^{n} u_i} + ... + w_n \frac{u_n}{\sum_{i=1}^{n} u_i}.    (4)

Eq. (4) suggests that layer three should perform a normalisation function, and if this is the case the output of unit l in layer three is given by

\bar{u}_l = \frac{u_l}{\sum_{i=1}^{n} u_i}.
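Putting Eqs. (1)-(4) together, the complete forward pass of such a controller can be sketched as follows. This is a minimal illustration under the paper's assumptions (Gaussian sets, product t-norm, height defuzzification); the rule parameters are hypothetical:

```python
import numpy as np

def ts_controller(x, c, a, w):
    """Crisp output of the Takagi-Sugeno controller of Fig. 2.

    Layer 1: Gaussian fuzzification, Eq. (1).
    Layer 2: product t-norm firing strengths, Eq. (2).
    Layer 3: normalisation of the firing strengths.
    Layer 4: weighted sum of partial consequents -- Eqs. (3)/(4).
    """
    mu = np.exp(-((x - c) ** 2) / a ** 2)  # (n, m) membership grades
    u = np.prod(mu, axis=1)                # (n,) rule firing strengths
    u_bar = u / np.sum(u)                  # normalised strengths, layer 3
    return float(np.dot(w, u_bar))         # crisp output y

# Two rules over two inputs; illustrative parameters only.
c = np.array([[0.0, 0.0], [1.0, 1.0]])  # centres c_{i,j}
a = np.full((2, 2), 0.5)                # widths a_{i,j}
w = np.array([-1.0, 1.0])               # partial consequents w_i
y = ts_controller(np.array([0.5, 0.5]), c, a, w)
print(y)  # both rules fire equally at the midpoint, so y = 0.0
```

Read against Fig. 1, this computation is exactly an RBF network with Gaussian units and a normalising third layer, which is the correspondence the paper goes on to develop.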