

Article Air Condition’s PID Controller Fine-Tuning Using Artificial Neural Networks and Genetic Algorithms

Maryam Malekabadi, Majid Haghparast * ID and Fatemeh Nasiri Department of Computer Engineering, Yadegar-e-Imam Khomeini (RAH) Shahre Rey Branch, Islamic Azad University, Tehran, Iran; [email protected] (M.M.); [email protected] (F.N.) * Correspondence: [email protected] or [email protected]; Tel.: +98-912-646-9737

 Received: 9 February 2018; Accepted: 11 May 2018; Published: 21 May 2018 

Abstract: In this paper, a Proportional–Integral–Derivative (PID) controller is fine-tuned through the use of artificial neural networks and evolutionary algorithms. In particular, the PID coefficients are adjusted online using a multi-layer perceptron. We used a feed-forward multi-layer perceptron with one hidden layer; the activation functions were sigmoid functions, and the weights of the network were optimized using a genetic algorithm, which is one type of evolutionary algorithm. The data for validation were derived from the desired results of the system. The proposed methodology was evaluated against other well-known techniques of PID parameter tuning.

Keywords: genetic algorithms; optimization; artificial neural network

1. Introduction

In general, optimization is a set of methods and techniques used to achieve the minimum and maximum values of mathematical functions, including linear and nonlinear functions. Essentially, optimization methods are divided into two forms: evolutionary methods and gradient-based methods. Nowadays, due to advances in science and technology, many methods have been developed, so that the traditional gradient-based optimization methods have become obsolete. Among evolutionary optimization methods, we introduce a genetic algorithm optimization method that can be applied to neural network and fuzzy-neural training.

The first simulation efforts were conducted by McCulloch and Walter Pitts using a logical model of neuronal function that has formed the basic building block of most of today's artificial neural networks [1–4]. The performance of this model is based on inputs and outputs: if the sum of the inputs is greater than the threshold value, the so-called neuron is stimulated. The result of this model was the implementation of simple functions such as AND and OR [5]. The adaptive linear neuron model is another system, and was the first neural network used by Widrow and Hoff (Stanford University) in 1960 for real problems [6]. In 1969, Minsky and Papert wrote a book arguing that the perceptron was unable to solve certain classes of problems, and research in this field stopped for several years [5]. In 1974, Werbos established learning with backward propagation for multi-layered networks with even stronger learning rules [7].

There are many studies on the tuning of PID controller parameters. For example, [8,9] are used for parameter tuning of a PID controller, and the IMC method has been used for the tuning of PID controllers [10]. The PID controller has also been used in studies involving air conditioning systems. The CARLA algorithm has been used in the tuning of a PID controller in an air controller [11]. This algorithm is an online algorithm that has a good performance level in practical applications. Nowadays, many changes have occurred in artificial neural network technology. The main contribution of this work is to create an online neural network for control; since the controller is based on an online network, it is normal to expect better results.


In Section 2, we describe the system model and related works. In Section 3, we present the proposed methodology. Finally, in Section 4, we present the simulation results, and we provide a general conclusion in Section 5. A complete list of references is also provided.

2. Related Work

The Earth system model always displays many changes in the Earth's climate. International agreements such as the Kyoto Protocol and the Copenhagen Accord emphasize the reduction of carbon dioxide emissions as one of the main ways to deal with the risk of climate change [12]. The suggestions made by the Chinese Government's Electric and Mechanical Services Organization (25.5 °C), EU expansion and Innovation of China (26 °C), and the Ministry of Economy and Knowledge, Republic of Korea (26–28 °C) are related to the thermal environment and its optimization [13]. It is also possible to refer to the suggestion of the Carrier 1 software (25–26.2 °C) in relation to the internal temperature [14]. The result of this research is logical and presents an acceptable increase in the average temperature of the air in summer.

On the other hand, one of the most important human environmental requirements is the provision of thermal comfort conditions. The most common definition of thermal comfort is given by the ASHRAE Standard 55, which defines thermal comfort as that condition of mind that expresses satisfaction with the thermal environment [15]. Furthermore, researchers have concluded that humans spend between 80 and 90 percent of their lives in an indoor environment [16]. Hence, indoor air quality, like thermal comfort, has a direct impact on the health, productivity, and morale of humans [17]. An increase in the average indoor air temperature in summer can improve the energy consumption of air conditioning systems and is consequently a solution against undesirable climate changes [18]. However, this action may disturb the residents' thermal comfort conditions, especially in the summer. Therefore, optimizing energy consumption in air conditioning systems is a useful action to maintain thermal comfort conditions [19].

Among the most important studies in this area is the research undertaken by Tian et al. [20]. By assessing the mean local air age, Tian et al. reported the mean vote indicator of occupants relative to the thermal conditions of the environment and the percentage of dissatisfied individuals. The "Layer Air Conditioning System" is able to meet the requirements of an increased internal air temperature by providing thermal comfort conditions only in the inhalation area of individuals [13]. Morovvat [21] examined the "Layer Air Conditioning System" in conjunction with the hydronic radiant cooler, comprehensively in terms of thermal comfort, air quality, and energy consumption, and as part of his findings, he confirmed the conclusions of Tian et al. [20]. The combination of genetic algorithms with neural networks has been utilized and has provided prominent results in many real-life applications [22–24]. A genetic algorithm is used by the strategy to solve the on-line optimization problem of multiple parameters [25]. Marefat and Morovvat [26] introduced the combination of the "Layer Air Conditioning System" with ceiling radiant cooling as a new way to provide thermal comfort. Lane et al. used a numerical simulation to compare the thermal comfort conditions and indoor air quality of displacement and mixing air conditioning systems in a room [27–29]. The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing [30]. Lane et al. compared the annual energy consumption of layer, displacement, and mixing air conditioning systems and concluded that, for a case study, the energy savings of layer air conditioning systems were 25% and 44% higher than those of the displacement and mixing air conditioning systems, respectively. The proposed dynamic model is given as Equation (1):

y(k) = a(k)·y(k − 1)/(1 + y²(k − 1)) + u(k − 1) + 0.2·u(k − 2)    (1)

In Equation (1), a(k) is a parameter that changes with time and is given as Equation (2):

a(k) = 1.4(1 − 0.9e^(−0.3k))    (2)
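For illustration only, the plant of Equations (1) and (2) can be simulated with the short Python sketch below; the paper's own simulations were built in MATLAB/Simulink, and the unit-step input, horizon, and zero initial state used here are assumptions of ours.

```python
import math

def a(k):
    # Time-varying plant parameter of Equation (2)
    return 1.4 * (1.0 - 0.9 * math.exp(-0.3 * k))

def simulate_plant(u, y0=0.0):
    """Simulate y(k) from Equation (1) for an input sequence u."""
    y_prev = y0                                   # y(k-1), assumed zero initial state
    outputs = []
    for k in range(len(u)):
        u1 = u[k - 1] if k >= 1 else 0.0          # u(k-1)
        u2 = u[k - 2] if k >= 2 else 0.0          # u(k-2)
        y_k = a(k) * y_prev / (1.0 + y_prev ** 2) + u1 + 0.2 * u2
        outputs.append(y_k)
        y_prev = y_k
    return outputs

# Example: response of the plant to an assumed unit-step input over 50 samples.
print(simulate_plant([1.0] * 50)[:5])
```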

3. The Proposed Methodology

Our work maintains an emphasis on closed-loop control systems. In particular, the proposed approach updates an air conditioner's PID parameters in real time using an optimized ANN. In this section, the following points should be considered:

1. The input data to the network is designed so that the neural network is trained on the error, an error derived from the input and output.
2. The designed network has four inputs and three outputs.
3. Simulations were executed using MATLAB software and the neural network toolbox.
4. The performance criterion of the neural network is the mean squared system error, displayed as Equation (3):

F = (1/N) ∑_{i=1}^{N} (e_i)² = (1/N) ∑_{i=1}^{N} (t_i − a_i)²    (3)

In the above equation, N is the number of tested samples, t_i represents the neural network outputs, and a_i represents the actual and available outputs. Of all the types of activation functions available in the MATLAB software, three activation functions (a sigmoid tangent, a sigmoid log function, and a linear function) are displayed in Figure 1.

Figure 1. (a) Sigmoid tangent; (b) sigmoid log function; and (c) purelin function charts.
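The three activation functions of Figure 1 and the performance criterion of Equation (3) can be written compactly as follows. This is an illustrative Python sketch (the paper used the MATLAB neural network toolbox); the function names simply mirror MATLAB's tansig, logsig, and purelin.

```python
import math

def tansig(x):
    # Sigmoid (hyperbolic) tangent: output in (-1, 1)
    return math.tanh(x)

def logsig(x):
    # Log-sigmoid: output in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def purelin(x):
    # Linear (identity) activation
    return x

def performance(targets, outputs):
    # Equation (3): F = (1/N) * sum((t_i - a_i)^2)
    n = len(targets)
    return sum((t - a) ** 2 for t, a in zip(targets, outputs)) / n
```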


3.1. Control Systems

After expressing the structure and the genetic algorithm, in this section our intention was to use a powerful optimization method to train the neural network and extract the PID controller parameters. First, we explain the structure of the PID controller.

3.1.1. Expression of the Performance of Classical PID Controllers

Control systems are available in both open-loop control and closed-loop control methods. Both methods of control depend on the system architecture and the control method used for that architecture. Control systems with feedback can be divided into two types: the first of these is single-input and single-output systems, and the second one is multi-input and multi-output control systems, often referred to as multi-variable control systems.

Open-Loop Control Systems

An open-loop control system is designed to produce an optimal output value without receiving feedback. In this case, the reference signal is given to the actuator and the actuator controls the process. Figure 2 shows the structure of such a control system.

Figure 2. Open-loop control systems.

Closed-Loop Control Systems

In closed-loop control systems, the difference between the actual output and the desired output is sent to the controller, and this difference is often amplified and is known as an error signal. The structure of such a control system is shown in Figure 3.

Figure 3. Closed loop control system.
Multi-Input, Multi-Output Control Systems

By increasing the complexity of the control systems as well as increasing the number of control variables, it was necessary to use the structure of a multi-variable control system, which is the same as that shown in Figure 4.

Figure 4. Multi-input and multi-output control systems.

Proportional–Integral–Derivative Control (PID)

The PID control logic is widely used in controlling industrial processes. Due to the simplicity and flexibility of these controllers, control engineers are using more of them in industrial processes and systems. A PID controller has proportional, integral, and derivative terms, and its transfer function can be represented as Equation (4):

K(s) = Kp + Ki/s + Kd·s    (4)

where Kp represents the proportional gain, Ki represents the integral gain, and Kd represents the derivative gain. By adjusting the controller gains, the controller can perform the desired control, which is the same as that shown in Figure 5.


Figure 5. Closed loop control system with PID controller.
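To make Equation (4) concrete, a discrete positional form of the PID law is sketched below in Python; the sampling time, the rectangular integration, and the backward-difference derivative are our assumptions and are not taken from the paper.

```python
class PositionalPID:
    """Discrete approximation of Equation (4): u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.e_prev = 0.0

    def step(self, e):
        self.integral += e * self.dt                 # rectangular integration of the error
        derivative = (e - self.e_prev) / self.dt     # backward-difference derivative
        self.e_prev = e
        return self.kp * e + self.ki * self.integral + self.kd * derivative

# Example usage with illustrative gains and a 0.1 s sampling time.
pid = PositionalPID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
u = pid.step(e=1.0)
```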

3.2. Neural Network

The structure of the artificial neural network model is inspired by biological neurons. According to the characteristics of biological neurons, we can mention nonlinearity, simplicity of computational units, and learning capabilities. In an artificial neuron, each of the input values is influenced by a weight, similar to the synaptic connection in a biological neuron. The processor elements consist of two parts: the first part adds the weighted inputs, and the second part is a non-linear filter called the activation function of the neuron. This function compresses the output values of an artificial neuron between asymptotic values. We use a multi-layer perceptron with one hidden layer; the inputs of the network are derived from the error of the system, the output of the network is the controller coefficients, and the performance loss function is the error from the desired value in the next step.

This compression allows the output of the processor elements to remain within a reasonable range. Among the features of artificial neural networks, we can note learning capability, information dispersion, generalizability, parallel processing, and resistance. Applications of this type of network include classification, identification and pattern recognition, signal processing, time series prediction, modeling and control, optimization, financial issues, insurance, security, the stock market, and entertainment. Neural networks have the ability to change online; however, with the genetic algorithm, which is essentially an offline algorithm, this is not possible on its own. It is normal that if the controller is online, it will provide better answers.
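A minimal sketch of such a network is given below. It assumes the four inputs (r, y, e, de) and three outputs (Kp, Ki, Kd) of Figure 7, together with the logsig activations and hidden-layer size of 5 selected later in Section 4; the weight vector is only a placeholder that, in the proposed method, the genetic algorithm of Section 3.3 would optimize.

```python
import math
import random

def logsig(x):
    return 1.0 / (1.0 + math.exp(-x))

class GainMLP:
    """One-hidden-layer perceptron mapping (r, y, e, de) to the PID gains (Kp, Ki, Kd)."""

    def __init__(self, n_in=4, n_hidden=5, n_out=3, weights=None):
        self.n_in, self.n_hidden, self.n_out = n_in, n_hidden, n_out
        n_w = n_hidden * (n_in + 1) + n_out * (n_hidden + 1)      # weights plus biases
        # Placeholder random weights; the GA would supply and optimize this flat vector.
        self.w = weights if weights is not None else [random.uniform(-1, 1) for _ in range(n_w)]

    def forward(self, inputs):
        w = iter(self.w)
        hidden = [logsig(sum(next(w) * x for x in inputs) + next(w))
                  for _ in range(self.n_hidden)]
        return [logsig(sum(next(w) * h for h in hidden) + next(w))
                for _ in range(self.n_out)]                       # (Kp, Ki, Kd), each in (0, 1)

# Example: gains computed from the current reference, output, error, and error derivative.
kp, ki, kd = GainMLP().forward([1.0, 0.8, 0.2, 0.05])
```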
In the design of neural networks, and especially the delayed-time neural network, the designer deals with the problem of selecting the right network for the design. In general, if a network with the least complexity and the minimum number of parameters has the greatest accuracy in identifying the input patterns, it is called an appropriate network.


The varieties of network training are supervised learning, unsupervised learning, and reinforcement learning; we introduce a number of ways of learning in the following:

(A) Perceptron training

For example, the perceptron learning algorithm includes:

1. Assigning random values to the weights
2. Applying the perceptron to each training example
3. Ensuring that all training examples have been evaluated correctly [Yes: the end of the algorithm; No: go back to step 2]
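As a side illustration, steps 1–3 above can be written as the following Python sketch; the learning rate, the number of passes, and the AND example are our assumptions, not values from the paper.

```python
import random

def train_perceptron(samples, n_inputs, lr=0.1, max_passes=100):
    """samples: list of (input_vector, target) pairs with target in {0, 1}."""
    # Step 1: assign random values to the weights (last entry is the bias).
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs + 1)]
    for _ in range(max_passes):
        errors = 0
        # Step 2: apply the perceptron to each training example.
        for x, target in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            predicted = 1 if activation >= 0 else 0
            if predicted != target:
                errors += 1
                update = lr * (target - predicted)
                w = [wi + update * xi for wi, xi in zip(w[:-1], x)] + [w[-1] + update]
        # Step 3: if all training examples were evaluated correctly, stop; otherwise repeat.
        if errors == 0:
            break
    return w

# Example: learning the AND function mentioned in the Introduction.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights = train_perceptron(and_data, n_inputs=2)
```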

(B) Back propagation algorithm

Perceptrons have been found in artificial neural network designs that are characterized by the formation of a multi-layered feed-forward structure. The neural synaptic coefficients are configured to be able to estimate ideal processing through the method of back propagation of error.

(C) Back-propagation algorithm

To obtain the weights of a multi-layer network, the back propagation method is conducted by using the gradient descent method to minimize the squared error between the outputs of the network and the objective function.

The proposed control structure in this designed method is presented as the block diagram of Figure 6, and its control equation is Equation (5):

u(k) = u(k − 1) + kp[ey(k) − ey(k − 1)] + ki·ey(k) + kd[ey(k) − 2ey(k − 1) + ey(k − 2)]    (5)

where u is the controller output in the PID controller method designed using the neural network. The overall shape of the closed loop structure with the controller and the system conversion function is shown in Figures 6–8.
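A minimal Python sketch of the incremental update in Equation (5) follows; the gains kp, ki, and kd would be supplied by the neural network at each time step, while the example values here are only placeholders.

```python
class IncrementalPID:
    """Incremental (velocity-form) PID controller implementing Equation (5)."""

    def __init__(self):
        self.u_prev = 0.0            # u(k-1)
        self.e_prev = 0.0            # e_y(k-1)
        self.e_prev2 = 0.0           # e_y(k-2)

    def step(self, e, kp, ki, kd):
        """e = e_y(k); kp, ki, kd would come from the neural network at time k."""
        u = (self.u_prev
             + kp * (e - self.e_prev)
             + ki * e
             + kd * (e - 2.0 * self.e_prev + self.e_prev2))
        self.u_prev = u
        self.e_prev2, self.e_prev = self.e_prev, e
        return u

# Example usage with fixed (illustrative) gains.
pid = IncrementalPID()
u0 = pid.step(e=1.0, kp=0.5, ki=0.1, kd=0.05)
```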

Figure 6. The structure of the Simulink closed loop with the neural network and the PID controller.

Figure 7. MLP (Multilayer Perceptron) neural network structure with three layers (an input layer receiving r, y, e, and de; one hidden layer; and an output layer producing KP, KI, and KD).

Figure 8. Plant dynamic structure.

3.3. Genetic Algorithms

We introduce the genetic algorithm optimization method and its basic concepts in this section. The genetic algorithm is an evolutionary solution for optimizing the problem. In the neural network design, we used the genetic algorithm to optimize the weights of the neural network. According to the case study, our network is an on-line network that is trained as time passes. The cost function of the GA is based on the system's error from the desired value, and this algorithm tries to update the weights of the neural network in a manner that this error reaches the minimum amount.

3.3.1. Genetic Algorithm Optimization Method

The genetic algorithm is based on a biological process to solve optimization problems with and without constraints. The basis of the genetic algorithm's work is the population, which includes the factors of population generation and optimization. In this algorithm, an initial population is generated, then the individuals that have the lowest cost are selected randomly from the first generated population. The genetic algorithm is based on the reproduction process of creatures: it is obtained from the generation of chromosomes and the chromosomal changes made in the reproduction process, and, with the production and variation of chromosomes, it tries to find an optimal value within the defined interval.

In this method, the first values of the variables are randomly generated as the initial population. Then, by using mutation and intersection (crossover), the parents are generated from the initial population, and new numbers are produced as new populations. The new population, together with the initial population, is placed in the cost function, and the values are extracted and arranged according to the minimum and maximum values.

Finally, after proper algorithmic repetition, the optimal value of the function is extracted. In this section, we deal with the definitions and common terminology of the genetic algorithm as follows:

• Individual: In genetics, each individual is a member of the population who can participate in the reproduction process and create new members, thereby increasing the population of these individuals.
• Population and production: In genetics, the population is the collection of all individuals present in the population; it is capable of generating a generation, creating a new one and, consequently, producing a new population.
• Parents: This refers to the individuals that intend to participate in the reproduction process and produce a new individual.

3.3.2. The Mathematical Structure of the Genetic Algorithm

Genetic algorithms can be used to solve optimization problems using the following steps:

Step 1: At this stage, an initial population is created randomly in proportion to the number of variables of the function being optimized. After creating the initial population, the function value is calculated for each individual in the first population.

Step 2: In this step, the intersection (crossover) operation is run to create new chromosomes from the population of the preceding step. First, the intersection rate Pc is chosen (the selection of this rate is optional) to perform the intersection; then, appropriate chromosomes are selected in proportion to the selected intersection rate before the intersection operation is run according to Equation (6):

V1new = V1 + (1 − µ)·V2
V2new = V2 + (1 − µ)·V1    (6)

In the above equations, V2new and V1new are new chromosomes that have been created by the intersection operation, V2 and V1 are the parent chromosomes, and µ is a random number.

Step 3: In this step, the mutation action is run. First, a random number between zero and one is created according to the size or number of genes in the population (number of genes = population size × number of variables), then the mutation rate Pm is set (the selection of a random value). Then, the mutation action is run according to the following equation, where Vknew is a new chromosome that has been caused by the mutation action and Xknew is a new gene that has been altered by the mutation action. Equation (7) is:

Vknew = [X1, ..., Xknew, ..., Xn],
Xknew = Xk + ∆(t, Xk^U − Xk)   OR   Xknew = Xk − ∆(t, Xk − Xk^L),
∆(t, y) = y·r·(1 − t/T)^b    (7)

In Equation (7), Xk^U and Xk^L are the upper and lower borders of Xk, respectively; r is a random number between zero and one; ∆(t, y) tends towards zero as t increases; T is the maximum number of generations in the algorithm; and t is the current generation number.

Step 4: In this step, the chromosomes in the initial population and the chromosomes in the new population are arranged in ascending order of their function values. In proportion to the size of the population, they enter the production cycle. Finally, the minimum of the function is selected and introduced as the minimum.

Step 5: The algorithm is a repetition-based algorithm, so it must be repeated up to the maximum desired number of generations so that the function moves to the optimal point (maximum or minimum) or the stopping conditions of the algorithm are met.
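The steps above can be condensed into the Python sketch below; the crossover follows Equation (6) exactly as printed, the mutation follows Equation (7), and the population size, rates Pc and Pm, shape parameter b, and the quadratic example cost are assumptions of ours rather than values from the paper.

```python
import random

def crossover(v1, v2):
    # Equation (6), as written in the text: blend two parent chromosomes with a random mu.
    mu = random.random()
    v1_new = [a + (1.0 - mu) * b for a, b in zip(v1, v2)]
    v2_new = [b + (1.0 - mu) * a for a, b in zip(v1, v2)]
    return v1_new, v2_new

def mutate(v, lower, upper, t, T, b=2.0):
    # Equation (7): non-uniform mutation of one randomly chosen gene (b is an assumed shape parameter).
    delta = lambda y: y * random.random() * (1.0 - t / T) ** b
    k = random.randrange(len(v))
    child = list(v)
    if random.random() < 0.5:
        child[k] = v[k] + delta(upper[k] - v[k])      # move toward the upper border
    else:
        child[k] = v[k] - delta(v[k] - lower[k])      # move toward the lower border
    return child

def ga_minimize(cost, lower, upper, pop_size=20, T=100, pc=0.8, pm=0.1):
    """Steps 1-5: evolve a population and return the best chromosome found."""
    pop = [[random.uniform(lo, hi) for lo, hi in zip(lower, upper)] for _ in range(pop_size)]
    for t in range(T):
        children = []
        for i in range(0, pop_size - 1, 2):           # Step 2: crossover on selected pairs
            if random.random() < pc:
                children.extend(crossover(pop[i], pop[i + 1]))
        children += [mutate(v, lower, upper, t, T)    # Step 3: mutation
                     for v in pop if random.random() < pm]
        pop = sorted(pop + children, key=cost)[:pop_size]   # Step 4: rank and keep the best
    return pop[0]                                     # Step 5: best chromosome after T generations

# Example: minimize a simple quadratic cost over two variables within [-5, 5].
best = ga_minimize(lambda v: sum(x * x for x in v), lower=[-5.0, -5.0], upper=[5.0, 5.0])
```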

4. Experimental Results

4.1. Dataset Description

In general, the studies and the results obtained with different numbers of hidden layers for the neural network are summarized in the table below. It is worth mentioning that the chosen training algorithm was the genetic algorithm. Furthermore, the size of the hidden layers in all cases was a single layer. The size of the database was initially 10 samples. As the network was online, the data changed over time. The criterion for selecting the training and testing data was the lowest error. The hidden layers are the layers between the input and output layers. The 10 samples are the initial samples for training; new samples are then generated by the system as time passes and are used in training. The genetic algorithm selected the best weights, which avoided unwanted conditions such as overfitting.

4.2. Optimal Networks Topology

By examining Table 1, it is clear that a single hidden layer led to better results. For this reason, and to continue the design of the neural network, the number of hidden layers was considered to be one. In the next step, selecting the best training algorithm for designing the neural network is considered.

Table 1. Consideration of the results of the neural network with a different number of hidden layers.

Neural Network Performance Criterion    The Number of Hidden Layers
21.56    No hidden layer
1.34     One hidden layer
14.38    Two hidden layers
7.66     Three hidden layers
5.21     Four hidden layers

According to the different training algorithms in MATLAB software, neural network training is considered with two hidden layers. The activation function used in the hidden layers in all tested networks is the tansig activation function. The activation function of the output layer is also considered to be linear. According to the results presented in Table 2, the best training method for the existing data was the training method with the genetic algorithm. Therefore, the training algorithm was finally considered to be the genetic algorithm.

In the next step, the choice of the desired neural network parameters, namely the types of activation functions in the hidden layers as well as the output layer of the designed neural network, was considered. According to the results shown in Table 3, the lowest error rate and best performance occurred when the activation function of the hidden layer and that of the output layer were both of the logsig type.

The final stage of selecting the neural network parameters was the selection of the size of the hidden layers. Using the results from the previous section, there were two layers with logsig activation functions; furthermore, the output activation function was the logsig function. Now, by searching through the results for different sizes of hidden layers, we reviewed and selected the best available options. In this step, we simulated the system for hidden layers with different sizes from 1 to 10, and we obtained the results in Table 4.

According to the results obtained in Table 4, the best solution for the designed neural network occurred when the hidden layer size was 5. It is notable that the proposed method for choosing the best mode for the neural network was merely a local choice, and it was possible to obtain better results by changing and selecting parameters outside the range investigated.

Table 2. Consideration of the results of testing different training algorithms on the desired neural network.

Training Algorithm    Neural Network Performance Criterion
Trainlm              1.384
Trainbfg             2.214
Trainrp              2.562
Trainscg             4.651
Traincgb             3.563
Traincgf             2.201
Traincgp             1.493
Trainoss             3.452
Traingdx             2.856
Traingd              1.132
Traingdm             1.081
Traingda             2.342
Genetic algorithm    0.596

Table 3. The results of the consideration of different activator functions in the proposed design of the neural network.

Neural Network Performance Criterion    Output Layer    Hidden Layer
6.371    purelin    purelin
6.526    logsig     purelin
4.411    tansig     purelin
2.321    purelin    logsig
0.931    logsig     logsig
2.504    tansig     logsig
0.956    purelin    tansig
1.732    logsig     tansig
3.043    tansig     tansig

Table 4. The neural network performance results with different hidden layer size.

Hidden Layer Size    Neural Network Performance Criterion
1     0.932
2     1.215
3     1.368
4     1.116
5     0.874
6     2.788
7     1.546
8     1.012
9     0.994
10    1.016

4.3. Optimal PID Outputs

According to the discussion in the previous section, the final designed neural network is as follows. The number of hidden layers of the designed network was considered to be two layers. The hidden layer consisted of a logsig activation function, and the activation function of the output layer was also the logsig function. Additionally, the training algorithm was the genetic algorithm. Using the values obtained in Table 4, the layer size was 5; therefore, there were 5 neurons in the hidden layer. The output waveforms, along with the output of the PID controller, are shown in Figures 9–12.


Figure 9. Output of the controlled system with the PID controller designed with the neural network.

Figure 10. PID controller output designed with the neural network and genetic training.

Figure 11. Minimization chart of the objective function (sum of squared errors) by the genetic algorithm.

Figure 12. Convergence of the PID controller parameters designed by the neural network.


As we can see in Figure 11, the objective function was the sum of the squares of the error. This objective function had some defects that consequently caused overshoot in the transient response of the system. To solve this problem, the objective function was chosen as the sum of the squares of the error plus a weighted overshoot term, as in Equation (8).

Fitness function = ∑_{i=1}^{n} error(i)² + W·Mp    (8)
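A brief Python sketch of Equation (8) is given below; the weight W and the way the overshoot Mp is measured from the step response are our assumptions.

```python
def fitness(errors, response, reference, w=10.0):
    # Equation (8): sum of squared errors plus a weighted overshoot penalty (Mp).
    sse = sum(e ** 2 for e in errors)
    overshoot = max(0.0, max(response) - reference)   # assumed definition of Mp: peak above the reference
    return sse + w * overshoot

# Example with an illustrative step response toward a reference of 1.0.
errs = [1.0, 0.6, 0.3, 0.1, -0.05, 0.0]
resp = [0.0, 0.4, 0.7, 0.9, 1.05, 1.0]
value = fitness(errs, resp, reference=1.0)
```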

The output waveform of the closed-loop system with the genetic algorithm training, in comparison with the BP algorithm, is shown in Figures 13 and 14.

Figure 13. The output waveform of the controlled system with controllers designed by the neural networks BPNN (Back Propagation Neural Network) and GANN (Genetic Algorithm coupled with Neural Network).

Figure 14. Neural network controller outputs designed by BPNN and GANN.

5. Conclusions

In this paper, we discussed the design of a PID controller with a neural network and the use of genetic algorithms for an air conditioning system. First, the modeling of the air conditioning system was carried out. At this stage, based on the available information, the system equations were obtained. In the second stage, a discussion about the PID controller and how to determine its coefficients was completed. After that, we examined the neural networks and how to determine the PID controller coefficients with this method. Different conditions for the neural network were considered, and in these conditions, we examined the designed network. Finally, we determined the best network, which has the lowest operating system error criterion.



Out of all of the training algorithms, the genetic algorithm was selected to most effectively train the neural network according to the obtained results. The objective function minimized by the genetic algorithm, the sum of the squares of the errors, was considered to be defective; these defects caused overshoot in the transient response of the system. To solve this problem, the objective function was defined as the sum of the squares of the error plus an overshoot weighting function. By defining the appropriate cost function, we tried to obtain values that were acceptable with a high ratio. Finally, by simulating the final controlled system, we examined the system response and control signal, and compared the results of the trained neural network with those of the back propagation algorithm. By comparing the results, it could be seen that the proposed method had advantages over other methods such as the back-propagation algorithm.

Author Contributions: The authors contributed equally to the reported research and writing of the paper. All authors have read and approved the final manuscript. Conflicts of Interest: The authors declare no conflict of interest.

References

1. Winter, R.; Widrow, B. MADALINE RULE II: A training algorithm for neural networks (PDF). In Proceedings of the IEEE International Conference on Neural Networks, San Diego, CA, USA, 24–27 July 1988; pp. 401–408. 2. Waibel, A.; Hanazawa, T.; Hinton, G.; Shikano, K.; Lang, K.J. Phoneme Recognition Using Time-Delay Neural Networks. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 328–339. [CrossRef] 3. Krauth, W.; Mezard, M. Learning algorithms with optimal stabilty in neural networks. J. Phys. A Math. Gen. 2008, 20, L745–L752. [CrossRef] 4. Guimarães, J.G.; Nóbrega, L.M.; da Costa, J.C. Design of a Hamming neural network based on single-electron tunneling devices. Microelectron. J. 2006, 37, 510–518. [CrossRef] 5. Mui, K.W.; Wong, L.T. Evaluation of neutral criterion of indoor air quality for air-conditioned offices in subtropical climates. Build. Serv. Eng. Res. Technol. 2007, 28, 23–33. [CrossRef] 6. ASHRAE. Design for Acceptable Indoor Air Quality; ANSI/ASHRAE Standard 62-2007; American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.: Atlanta, GA, USA, 2007. 7. Levermore, G.J. Building Energy Management Systems: An Application to Heating and Control; E & FN Spon: London, UK; New York, NY, USA, 1992. 8. Rahmati, V.; Ghorbani, A. A Novel Low Complexity Fast Response Time Fuzzy PID Controller for Antenna Adjusting Using Two Direct Current Motors. Indian J. Sci. Technol. 2018, 11.[CrossRef] 9. Dounis, A.I.; Bruant, M.; Santamouris, M.J.; Guarracino, G.; Michel, P. Comparison of conventional and fuzzy control of indoor air quality in buildings. J. Intell. Fuzzy Syst. 1996, 4, 131–140. 10. Tan, W. Unified tuning of PID load frequency controller for power systems via IMC. IEEE Trans. Power Syst. 2010, 25, 341–350. [CrossRef] 11. Howell, M.N.; Best, M.C. On-line PID tuning for engine idle- control using continuous action reinforcement learning automata. Control Eng. Pract. 2000, 8, 147–154. [CrossRef] 12. European Union. The New Directive on the Energy Performance of Buildings, 3rd ed.; European Union: Brussels, Belgium, 2002. 13. Tian, L.; Lin, Z.; Liu, J.; Yao, T.; Wang, Q. The impact of temperature on mean local air age and thermal comfort in a stratum ventilated office. Build. Environ. 2011, 46, 501–510. [CrossRef] 14. Hui, P.S.; Wong, L.T.; Mui, K.W. Feasibility study of an express assessment protocol for the indoor air quality of air conditioned offices. Indoor Built Environ. 2006, 15, 373–378. [CrossRef] 15. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). Thermal Environment Conditions for Human Occupancy; ASHRAE Standard 55: Atlanta, GA, USA, 2004. 16. Wyon, D.P. The effects of indoor air quality on performance and productivity. Indoor Air 2004, 14, 92–101. [CrossRef][PubMed] 17. Fanger, P.O. What is IAQ. Indoor Air 2006, 16, 328–334. [CrossRef][PubMed] 18. Lundgren, K.; Kjellstrom, T. Sustainability Challenges from Climate Change and Air Conditioning Use in Urban Areas. Sustainability 2013, 5, 3116–3128. [CrossRef] 19. Maerefat, M.; Omidvar, A. Thermal Comfort; Kelid Amoozesh: Tehran, Iran, 2008; pp. 1–10. Computers 2018, 7, 32 14 of 14

20. Tian, L.; Lin, Z.; Liu, J.; Wang, Q. Experimental investigation of thermal and ventilation performances of stratum ventilation. Build. Environ. 2011, 46, 1309–1320. [CrossRef] 21. Morovat, N. Analysis of Thermal Comfort, Air Quality and Energy Consumption in a Hybrid System of the Hydronic Radiant Cooling and Stratum Ventilation. Master’s Thesis, Department of Mechanical Engineering, Tarbiat Modares University, Tehran, Iran, 2013. 22. Protopapadakis, E.; Schauer, M.; Pierri, E.; Doulamis, A.D.; Stavroulakis, G.E.; Böhrnsen, J.; Langer, S. A genetically optimized neural classifier applied to numerical pile integrity tests considering concrete piles. Comput. Struct. 2016, 162, 68–79. [CrossRef] 23. Oreski, S.; Oreski, G. Genetic algorithm-based heuristic for feature selection in credit risk assessment. Expert Syst. Appl. 2014, 41, 2052–2064. [CrossRef] 24. Ghamisi, P.; Benediktsson, J.A. Feature selection based on hybridization of genetic algorithm and particle swarm optimization. IEEE Geosci. Remote Sens. Lett. 2015, 12, 309–313. [CrossRef] 25. Wang, S.; Jin, X. Model-based optimal control of VAV air-conditioning system using genetic algorithms. Build. Environ. 2000, 35, 471–487. [CrossRef] 26. Maerefat, M.; Morovat, N. Analysis of thermal comfort in space equipped with stratum ventilation and radiant cooling ceiling. Modares Mech. Eng. 2013, 13, 41–54. 27. Yu, B.F.; Hu, Z.B.; Liu, M. Review of research on air conditioning systems and indoor air quality control for human health. Int. J. Refrig. 2009, 32, 3–20. [CrossRef] 28. Yang, S.; Wu, S.; Yan, Y.Y. Control strategies for indoor environment quality and energy efficiency—A review. Int. J. Low-Carbon Technol. 2015, 10, 305–312. 29. Mui, K.W.; Wong, L.T.; Hui, P.S.; Law, K.Y. Epistemic evaluation of policy influence on workplace indoor air quality of Hong Kong in 1996–2005. Build. Serv. Eng. Res. Technol. 2008, 29, 157–164. [CrossRef] 30. Chen, S.; Cowan, C.F.N.; Grant, P.M. Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks. IEEE Trans. Neural Netw. 2010, 2, 302–309. [CrossRef][PubMed]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).