
Article

An Algorithm for Solving Robot Inverse Kinematics Based on FOA Optimized BP Neural Network

Yonghua Bai 1,2, Minzhou Luo 1,2,* and Fenglin Pang 1,2

1 College of Mechanical & Electrical Engineering, Hohai University, Changzhou 213022, China; [email protected] (Y.B.); [email protected] (F.P.)
2 Jiangsu Key Laboratory of Special Robot Technology, Hohai University, Changzhou 213022, China
* Correspondence: [email protected]; Tel.: +86-159-9508-7859

Abstract: The solution of robot inverse kinematics has a direct impact on the control accuracy of the robot. Conventional inverse kinematics solution methods, such as the numerical solution, algebraic solution, and geometric solution, have insufficient solution speed and accuracy, and the solution process is complicated. Due to the mapping ability of neural networks, using neural networks to solve robot inverse kinematics problems has attracted widespread attention; however, this approach suffers from slow convergence and low accuracy. This paper proposes an FOA optimized BP neural network algorithm to solve inverse kinematics. It overcomes the shortcomings of low convergence accuracy, slow convergence speed, and the tendency to fall into local minima when a BP neural network is used to solve inverse kinematics. The experimental results show that, using the trained FOA optimized BP neural network to solve the inverse kinematics, the maximum error range of the output joint angle is [−0.04686, 0.1271]. The output error of the FOA optimized BP neural network algorithm is smaller than that of the ordinary BP neural network algorithm and the PSO optimized BP neural network algorithm. Using the FOA optimized BP neural network algorithm to solve the inverse kinematics can improve the control accuracy of the robot.

Keywords: inverse kinematics; FOA algorithm; PSO algorithm; BP neural network

Citation: Bai, Y.; Luo, M.; Pang, F. An Algorithm for Solving Robot Inverse Kinematics Based on FOA Optimized BP Neural Network. Appl. Sci. 2021, 11, 7129. https://doi.org/10.3390/app11157129

Academic Editor: Alessandro Gasparetto
Received: 7 July 2021; Accepted: 30 July 2021; Published: 2 August 2021

1. Introduction

Solving the problem of robot inverse kinematics is the basis of robot trajectory planning and motion control. Inverse kinematics is the mapping from Cartesian space to joint space [1,2]. Its essence is to obtain the value of each joint variable from the position and posture of the end of the robot. Traditional methods for solving robot inverse kinematics include geometric [3,4], algebraic [5–7], and iterative [8,9] methods. The geometric method has strict requirements on the configuration of the robot, poor versatility, and certain limitations, and its modeling and solving process is relatively complicated. The algebraic method can obtain closed-form solutions of simple mechanisms; however, it is difficult to obtain closed-form solutions of complex mechanisms. This method involves a large number of coordinate transformations in the solution process, and the solution accuracy is relatively low. The iterative method solves the problem through repeated iterations; it requires numerous calculations and has difficulty meeting the requirements of real-time control.

In view of the shortcomings of traditional methods, more and more intelligent algorithms are widely used in robot control, such as neural network algorithms [10,11], genetic algorithms [12–15], and particle swarm algorithms [15–18]. The artificial neural network has a powerful approximation ability when solving nonlinear mapping problems. Taking a large number of forward kinematics samples of the robot, through the training and learning of the BP neural network, the mapping from the Cartesian space to the joint space is realized. As a result, complicated calculation and derivation processes are avoided. The trained neural network has a fast calculation speed and can meet the requirements of real-time control.


In [19], Karlik and Aydin proved that the double hidden layer BP neural network structure has advantages in solving the inverse kinematics of the manipulator. However, the traditional BP neural network has the disadvantages of large output error, the tendency to fall into a local extreme value, and slow convergence speed. In [15], Mustafa et al. designed two scenarios of randomly selected points and following a specific trajectory in the robot workspace, compared four different optimization algorithms, and proved the effectiveness of the QPSO algorithm in solving the inverse kinematics of a 4-DOF serial robot. Ozgoren [20] used the analytical method to solve the inverse kinematics of the redundant manipulator and optimized it accordingly, but the solution process was complicated, and the solution accuracy was not high.

In [21], Dereli et al. proposed a global-local best inertia PSO algorithm and proved the effectiveness of the PSO algorithm in solving the optimal solution of redundant robots. Ren and Ben-Tzvi [22] proposed a series of improved GANs that use limited data to approximate the real model, effectively solving the inverse kinematics problem of high-dimensional nonlinear systems. Dereli and Köker [23] proposed using the firefly algorithm to solve the inverse kinematics of a 7-DOF robot, but the population size has a great influence on the speed and quality of the solution. In [24], Zhou et al. proposed an intelligent algorithm that initializes the inverse kinematics solution with an extreme learning machine and optimizes it with a genetic algorithm based on sequence mutation. This algorithm improves the efficiency of the solution while ensuring accuracy.

The fruit fly optimization algorithm has few applications in the field of robot kinematics [25–28]. The FOA algorithm has the characteristics of simple operation and fast convergence speed. This paper uses the FOA algorithm to optimize the thresholds and weights of the BP neural network to improve the convergence speed and output accuracy of the neural network. The optimized neural network is used to solve the inverse kinematics of the robot shown in Figure 1, and is compared with the ordinary BP neural network algorithm and the PSO optimized BP neural network algorithm. The robot structure shown in Figure 1 is a part of the service robot. It can be installed on a mobile chassis with laser radar [29] and equipped with robotic arms to form a complete service robot.

Figure 1. The robot prototype.

This paper proposes an FOA optimized BP neural network algorithm to improve the accuracy of solving robot inverse kinematics. This paper is organized as follows: In Section 2, the robot kinematics model is established. In Section 3, the FOA optimized BP neural network algorithm is introduced. The distance and direction of the individual in the FOA algorithm are regarded as the threshold and weight of the neural network for iterative optimization. In Section 4, the proposed FOA optimized BP neural network is compared with the ordinary BP neural network and the PSO optimized BP neural network, and the error of the inverse kinematics of the robot is compared. In Section 5, the results show that the FOA optimized BP neural network algorithm can improve the accuracy of solving robot inverse kinematics.

2. Robot Kinematics

The robot torso coordinate system is established according to the D-H method, as shown in Figure 2.

Figure 2. Robot kinematics model.

The system has four linkages and five coordinate systems. Here, αi, ai, di, and θi respectively represent the connecting rod torsion angle, connecting rod length, connecting rod offset, and joint angle. According to the built robot kinematics model, Table 1 shows the link parameters of the robot.

Table 1. Link parameters of the robot.

i    αi−1 (rad)    ai−1 (mm)    di (mm)    θi (rad)
1    −π/2          0             0          θ1
2    π/2           0             l1         θ2
3    −π/2          0             0          θ3
4    0             l2            0          0

After defining the linkage coordinate system and the corresponding linkage parameters, the kinematic equation can be established. The D-H transformation matrix ${}^{i-1}_{i}T$ can be obtained by right multiplying the following four transformation matrices:

$$
{}^{i-1}_{i}T = R_x(\alpha_{i-1})\,D_x(a_{i-1})\,R_z(\theta_i)\,D_z(d_i)
= \begin{bmatrix}
c\theta_i & -s\theta_i & 0 & a_{i-1} \\
s\theta_i c\alpha_{i-1} & c\theta_i c\alpha_{i-1} & -s\alpha_{i-1} & -s\alpha_{i-1} d_i \\
s\theta_i s\alpha_{i-1} & c\theta_i s\alpha_{i-1} & c\alpha_{i-1} & c\alpha_{i-1} d_i \\
0 & 0 & 0 & 1
\end{bmatrix} \quad (1)
$$

where ${}^{i-1}_{i}T$ is the D-H transformation matrix transformed from coordinate system $i$ to $i-1$, $R_x(\alpha_{i-1})$ is the rotation matrix representing a rotation of $\alpha_{i-1}$ around the X axis, $D_x(a_{i-1})$ is the translation matrix representing a translation of $a_{i-1}$ along the X axis, $R_z(\theta_i)$ is the rotation matrix representing a rotation of $\theta_i$ around the Z axis, $D_z(d_i)$ is the translation matrix representing a translation of $d_i$ along the Z axis, $c\theta_i$ is short for $\cos\theta_i$, $s\theta_i$ is short for $\sin\theta_i$, etc. The transformation matrices of each link determined by the link parameters are multiplied to obtain the position and posture of the center point of the robot's chest in the base coordinate system:

$$
{}^{0}_{4}T = {}^{0}_{1}T\,{}^{1}_{2}T\,{}^{2}_{3}T\,{}^{3}_{4}T
= \begin{bmatrix} {}^{0}_{4}R_{3\times3} & {}^{0}_{4}P_{3\times1} \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix}
r_{11} & r_{12} & r_{13} & p_x \\
r_{21} & r_{22} & r_{23} & p_y \\
r_{31} & r_{32} & r_{33} & p_z \\
0 & 0 & 0 & 1
\end{bmatrix} \quad (2)
$$

where ${}^{0}_{4}R_{3\times3}$ is the rotation transformation matrix transformed from coordinate system 4 to 0, ${}^{0}_{4}P_{3\times1}$ is the position vector transformed from coordinate system 4 to 0, $r_{ij}$ is the element of ${}^{0}_{4}R_{3\times3}$, and px, py, pz are the components of the translation vector ${}^{0}_{4}P_{3\times1}$. The rotation matrix ${}^{0}_{4}R_{3\times3}$ can be described by rotating around the axes of a fixed reference coordinate system, as shown in Formula (3):

$$
{}^{0}_{4}R_{XYZ}(\gamma, \beta, \alpha) = \begin{bmatrix}
c\alpha c\beta & c\alpha s\beta s\gamma - s\alpha c\gamma & c\alpha s\beta c\gamma + s\alpha s\gamma \\
s\alpha c\beta & s\alpha s\beta s\gamma + c\alpha c\gamma & s\alpha s\beta c\gamma - c\alpha s\gamma \\
-s\beta & c\beta s\gamma & c\beta c\gamma
\end{bmatrix} \quad (3)
$$

where ${}^{0}_{4}R_{XYZ}(\gamma, \beta, \alpha)$ is the equivalent rotation matrix obtained by rotating γ, β, and α around the X, Y, and Z axes of the fixed reference coordinate system. The X-Y-Z fixed angles are derived equivalently from the rotation matrix:

$$
\begin{aligned}
\beta &= \mathrm{Atan2}\Big({-r_{31}},\ \sqrt{r_{11}^2 + r_{21}^2}\Big) \\
\alpha &= \mathrm{Atan2}(r_{21}/c\beta,\ r_{11}/c\beta) \\
\gamma &= \mathrm{Atan2}(r_{32}/c\beta,\ r_{33}/c\beta)
\end{aligned} \quad (4)
$$

where Atan2(y, x) is the two-argument arctangent function. These equations can provide the pose of the robot relative to the world coordinate system. The forward kinematics equation of the robot can be expressed as:

$$
F_{\mathrm{forward\ kinematics}}(\theta_1, \theta_2, \theta_3) = (p_x, p_y, p_z, \alpha, \beta, \gamma) \quad (5)
$$
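For concreteness, Equations (1), (2), (4), and (5) can be chained in a short script. This is only a sketch: the link lengths `l1` and `l2` are placeholder values, since the paper does not state them numerically.

```python
import math

def dh_transform(alpha_prev, a_prev, d, theta):
    # Eq. (1): Rx(alpha_{i-1}) Dx(a_{i-1}) Rz(theta_i) Dz(d_i)
    ca, sa = math.cos(alpha_prev), math.sin(alpha_prev)
    ct, st = math.cos(theta), math.sin(theta)
    return [
        [ct,      -st,      0.0,  a_prev],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,      0.0,  1.0],
    ]

def mat_mul(a, b):
    # 4x4 homogeneous matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(t1, t2, t3, l1=0.25, l2=0.20):
    # Chain the four link transforms of Table 1 (Eq. (2)).
    # l1 and l2 are placeholder lengths, not values from the paper.
    params = [(-math.pi / 2, 0.0, 0.0, t1),
              ( math.pi / 2, 0.0, l1,  t2),
              (-math.pi / 2, 0.0, 0.0, t3),
              ( 0.0,         l2,  0.0, 0.0)]
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for p in params:
        T = mat_mul(T, dh_transform(*p))
    # X-Y-Z fixed-angle extraction, Eq. (4)
    r11, r21, r31 = T[0][0], T[1][0], T[2][0]
    r32, r33 = T[2][1], T[2][2]
    beta = math.atan2(-r31, math.hypot(r11, r21))
    cb = math.cos(beta)
    alpha = math.atan2(r21 / cb, r11 / cb)
    gamma = math.atan2(r32 / cb, r33 / cb)
    return T[0][3], T[1][3], T[2][3], alpha, beta, gamma
```

With all joint angles at zero, the chest point sits at (l2, 0, l1) in the base frame, which is a quick sanity check on the transform chain.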

As shown in Equation (5), the end pose of the robot can be obtained through the joint variables of the robot. However, in practical applications, it is often necessary to obtain the joint angles of the robot through the end pose of the robot, so the robot inverse kinematics is solved by the formula:

$$
F_{\mathrm{inverse\ kinematics}}(p_x, p_y, p_z, \alpha, \beta, \gamma) = (\theta_1, \theta_2, \theta_3) \quad (6)
$$

In order to avoid the complicated process of solving robot inverse kinematics, this paper proposes an FOA optimized BP algorithm. The BP neural network is used to fit Equation (5). The fruit fly optimization algorithm is used to optimize the initial weights and thresholds of the BP neural network to improve the global optimization ability of the neural network. The data set is used to continuously train the FOA optimized BP neural network to find the optimal neural network. Finally, the optimal network is used to solve the robot inverse kinematics. The model of the algorithm is shown in Figure 3.

Figure 3. The model of the FOA optimized BP neural network.

3. Algorithm Principle

3.1. BP Neural Network Algorithm

In 1985, the scholars Rumelhart and McClelland proposed a multi-layer feedforward artificial neural network using the error back propagation algorithm, namely the BP neural network [30]. The ordinary BP neural network contains two parts: the forward propagation of the signal and the back propagation of the error. The actual output is calculated from the input to the output, and the weights and thresholds are corrected from the output to the input. A single basic artificial neuron model is shown in Figure 4.

Figure 4. Single artificial neuron model.

Among them, xj (j = 1, 2, ..., N) is the input of the j-th node, wij is the weight, θi is the threshold, and yi is the output of neuron i. The output yi can be expressed as:

$$
y_i = f(\mathrm{net}_i) = f\Big(\sum_{j=1}^{N} w_{ij} x_j + \theta_i\Big) \quad (7)
$$

Among them, the function f is the activation function; the most commonly used form is the Sigmoid function, whose expression is Formula (8):

$$
f(x) = \frac{1}{1 + e^{-ax}}, \quad a \in C \quad (8)
$$

If the net activation is positive, the neuron is in an excited state. Otherwise, the neuron is in a suppressed state.

This paper uses the BP algorithm to solve inverse kinematics. A neural network structure is established: the input layer has six input signals, corresponding to px, py, pz, α, β, γ, and the output layer has three output signals, respectively corresponding to the robot's three joints θ1, θ2, θ3.

According to Kolmogorov's theorem, a BP neural network with one hidden layer can complete any n-dimensional to m-dimensional mapping [31]. Increasing the number of layers of the neural network can improve the accuracy; however, it will complicate the neural network and extend the training time. Using two hidden layers can make the mapping effect of the neural network better, and increasing the number of neurons in the hidden layers can improve the accuracy. In the BP neural network, the number of neurons in each layer has a great influence on the network performance. According to Equation (6), after experiments and comparisons, this paper chooses a network structure containing two hidden layers. The first hidden layer has 24 neurons, and the second hidden layer has 43 neurons. The designed BP neural network topology is shown in Figure 5.

Figure 5. BP neural network topology.
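As an illustration of Equations (7) and (8) applied to this 6-24-43-3 structure, a minimal forward pass might look like the following sketch. The random weights and thresholds are placeholders for the trained (and, later, FOA-optimized) parameters:

```python
import math
import random

def sigmoid(x, a=1.0):
    # Eq. (8): f(x) = 1 / (1 + e^(-a x))
    return 1.0 / (1.0 + math.exp(-a * x))

def layer_forward(inputs, weights, thresholds):
    # Eq. (7): y_i = f(sum_j w_ij * x_j + theta_i), one output per neuron
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + th)
            for row, th in zip(weights, thresholds)]

def random_layer(n_in, n_out, rng):
    # Random initial weights/thresholds; in the paper these are optimized by FOA
    return ([[rng.uniform(-1.0, 1.0) for _ in range(n_in)] for _ in range(n_out)],
            [rng.uniform(-1.0, 1.0) for _ in range(n_out)])

rng = random.Random(0)
# 6 inputs (px, py, pz, alpha, beta, gamma) -> 24 -> 43 -> 3 joint outputs
layers = [random_layer(6, 24, rng), random_layer(24, 43, rng), random_layer(43, 3, rng)]

def network_forward(pose):
    signal = pose
    for weights, thresholds in layers:
        signal = layer_forward(signal, weights, thresholds)
    return signal

joints = network_forward([0.1, 0.2, 0.3, 0.0, 0.0, 0.0])
```

Because the output layer also uses the sigmoid here, the three outputs lie in (0, 1); in practice they would be denormalized back to joint-angle ranges, matching the normalization step described in Section 3.3.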

Figure 5 shows the topology of the designed BP neural network. The layers are fully interconnected, and there is no interconnection within the same layer. The core of the BP neural network algorithm is to solve the minimum value of the error function. The BP neural network adjusts the weights and thresholds of each layer according to the error gradient descent method. There are weaknesses, such as slow convergence, low learning efficiency, weak generalization ability, and the tendency to fall into local minima. Therefore, an optimization algorithm is combined for improvement.

3.2. Fruit Fly Optimization Algorithm

The scholar Pan Wenchao proposed the fruit fly optimization algorithm after long-term observation of fruit flies [32]. Compared with other organisms, fruit flies have a particularly sensitive sense of vision and smell. During their activities, fruit flies perceive the taste of food through smell, and use their unique vision to move in the direction where other fruit flies gather.

The fruit fly optimization algorithm finds the global optimum through the foraging evolution of fruit flies. Foraging is divided into two procedures. The first procedure relies on a strong sense of smell to initially locate the food source and quickly approach it; the second procedure discovers where food and companions gather through vision and flies in that direction. The basic optimization process of the fruit fly optimization algorithm is that the fruit fly population starts from a given initial position and performs an olfactory search according to the given flight direction and step distance. Then, the approximate direction of the optimal solution and the distance between the fruit fly and the optimal solution are preliminarily determined by the concentration, and finally the position of the optimal solution is reached through visual search. The fruit fly optimization algorithm has a good global optimization ability and can meet optimization problems in different fields.

According to the foraging process of fruit flies, the fruit fly algorithm performs optimization according to the following necessary steps:

1. Set the size of the fruit fly population and initialize the position of the fruit fly population randomly;
2. Set the direction and distance for individual fruit flies to search randomly through smell:

$$
X_i = X_{axis} + \mathrm{RandomValue}, \qquad Y_i = Y_{axis} + \mathrm{RandomValue} \quad (9)
$$

3. The location of the food is unknown. First, calculate the distance D(i) from the origin, and use the reciprocal of the distance D(i) as the taste concentration judgment value S(i):

$$
D(i) = \sqrt{X_i^2 + Y_i^2}, \qquad S(i) = 1/D(i) \quad (10)
$$

4. Substitute S(i) into the taste concentration judgment function (fitness); the taste concentration Smell(i) at the individual position of the fruit fly can then be obtained:

$$
\mathrm{Smell}(i) = \mathrm{Fitness}(S(i)) \quad (11)
$$

5. Find the fruit fly with the best taste concentration Smell in the population:

$$
[\mathrm{bestSmell}, \mathrm{bestIndex}] = \min(\mathrm{Smell}) \quad (12)
$$

6. Keep the X and Y coordinates of the best individual fruit fly, and the best taste concentration value;
7. Repeat steps (2) to (5) for iterative optimization, and judge whether the taste concentration of the current iteration is better than that of the previous iteration; if so, proceed to the next iteration after step (6).
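The seven steps above can be sketched as a minimal FOA loop. The toy fitness function used here is purely illustrative; in the paper, the fitness is the network error of Equation (13):

```python
import math
import random

def foa_minimize(fitness, pop_size=30, iterations=50, seed=0):
    """Minimal fruit fly optimization sketch following steps (1)-(7).

    `fitness` maps the concentration judgment value S(i) to a cost to
    minimize (Eq. 11); the lowest smell is kept, as in Eq. (12)."""
    rng = random.Random(seed)
    # Step 1: random initial swarm position
    x_axis, y_axis = rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0)
    best_smell = float("inf")
    for _ in range(iterations):
        smells, coords = [], []
        for _ in range(pop_size):
            # Step 2 (Eq. 9): random direction and distance for each fly
            xi = x_axis + rng.uniform(-1.0, 1.0)
            yi = y_axis + rng.uniform(-1.0, 1.0)
            # Step 3 (Eq. 10): distance to origin, then judgment value S = 1/D
            d = math.hypot(xi, yi) or 1e-12
            s = 1.0 / d
            # Step 4 (Eq. 11): evaluate taste concentration
            smells.append(fitness(s))
            coords.append((xi, yi))
        # Step 5 (Eq. 12): best fly of this generation
        gen_best = min(range(pop_size), key=lambda i: smells[i])
        # Steps 6-7: keep the best position if it improves, then iterate
        if smells[gen_best] < best_smell:
            best_smell = smells[gen_best]
            x_axis, y_axis = coords[gen_best]
    return best_smell, (x_axis, y_axis)

# Toy fitness: minimized when S(i) is close to 2, i.e. the swarm settles near D = 0.5
best, pos = foa_minimize(lambda s: (s - 2.0) ** 2)
```

The population size of 30 and 50 iterations match the settings used later in the paper; the search step of ±1 is an illustrative choice.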
3.3. FOA Optimized BP Algorithm

In order to make up for the shortcomings of the BP algorithm, this paper uses the FOA algorithm to optimize the parameters of the BP algorithm and improve the generalization ability and output accuracy of the BP algorithm. The algorithm flow is shown in Figure 6.

Figure 6. The flow chart of the FOA optimized BP algorithm.

α βγ θθθ AccordingAccording toto EquationEquation (6),(6), pxppp,xyz,py,,,,,pz, α, β, γ are input, and θ1123,,θ2,, θ3 are are output.output. AA BPBP neuralneural networknetwork structurestructure isis constructedconstructed withwith sixsix inputsinputs andand threethree outputs.outputs. TheThe inputinput and output samples are normalized. The initial weight and initial threshold of the neural network are randomly generated. In the FOA optimized BP neural network algorithm, the initial weights and initial thresholds are used as the initial position of the fruit fly. The seven steps of the fruit fly optimization algorithm are followed, the value of the corresponding fitness calculated, and then the optimal weight and threshold are found through iteration. Appl. Sci. 2021, 11, 7129 8 of 12

Among them, the fitness function is the variance between the joint angles output by the BP network and the expected joint angles, expressed as:

$$
J(k) = \frac{1}{N} \sum_{i=1}^{N} \big(\theta_i(k) - \theta_i\big)^2 \quad (13)
$$

where N represents the number of training samples, θi(k) represents the output of the i-th sample at the k-th iteration, and θi represents the expected output value of the i-th sample. The optimal weights and thresholds found by the fruit fly optimization algorithm are used as the initial weights and initial thresholds of the BP algorithm to train the neural network. Finally, the trained neural network is used to solve the inverse kinematics. In the FOA optimized BP algorithm, the fruit fly population size is set to 30, and the number of iterations is set to 50. The FOA optimized BP algorithm uses the FOA algorithm to find the global optimum and the BP algorithm to find the local optimum. The algorithm's convergence speed, generalization ability, and convergence accuracy are thereby improved.
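A direct transcription of the fitness of Equation (13) is just a mean squared error over the sample outputs:

```python
def fitness_mse(outputs, targets):
    # Eq. (13): J(k) = (1/N) * sum_i (theta_i(k) - theta_i)^2
    n = len(outputs)
    return sum((o - t) ** 2 for o, t in zip(outputs, targets)) / n
```

Lower values indicate a fly whose weights and thresholds reproduce the expected joint angles more closely, which is why the FOA loop keeps the minimum smell (Eq. (12)).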

4. Simulation and Discussion

In the simulation experiment, based on the Monte Carlo method, 2000 sets of samples are generated in the robot workspace, of which 1950 sets are used as training samples and 50 sets are used as test samples. The robot end position coordinates and fixed angle samples in the training samples are used as the input of the BP neural network, and the joint angle samples of the robot are used as the output of the BP neural network. Then, the FOA algorithm is used to optimize the convergence of the network, and the weights and thresholds of the BP neural network are returned. The training samples are brought into the FOA optimized BP algorithm to train the network. Finally, the test samples are brought into the trained network for testing, and the joint variables of the robot output for the test samples are obtained.

The FOA optimized BP algorithm contains two hidden layers. The first hidden layer has 24 neurons and the second hidden layer has 43 neurons. The number of fruit flies is set to 30, and the number of iterations is set to 50. In order to intuitively reflect the effect of the algorithm proposed in this article, the FOA optimized BP algorithm is compared with the ordinary BP algorithm and the PSO optimized BP algorithm. In the PSO optimized BP algorithm, the number of particles is set to 30, and 50 iterations are set. The learning factors c1 = c2 = 2 and the inertia factor w = 0.95 are used. After the simulation, the robot joint variables output by the three algorithms are compared with the expected values in the test samples. Table 2 shows the maximum error range and the overall mean square error (MSE) range of each joint in the 50 groups of samples.
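The Monte Carlo sampling and train/test split described above might be sketched as follows. The placeholder forward-kinematics function and joint limits are illustrative stand-ins, not the paper's actual workspace:

```python
import math
import random

def sample_dataset(n=2000, seed=1):
    """Draw random joint angles, run them through forward kinematics, and
    pair pose -> joints as (input, target) samples; split 1950/50."""
    rng = random.Random(seed)

    def fk(t1, t2, t3):
        # Placeholder forward kinematics: any pose-from-joints function works here
        return (math.cos(t1), math.sin(t2), t3, t1, t2, t3)

    samples = []
    for _ in range(n):
        # Illustrative joint limits; the paper samples within the robot workspace
        joints = tuple(rng.uniform(-math.pi / 2, math.pi / 2) for _ in range(3))
        samples.append((fk(*joints), joints))  # (six-value pose, three joint targets)
    return samples[:1950], samples[1950:]      # 1950 training / 50 test sets

train_set, test_set = sample_dataset()
```

Because the targets are generated from known joint angles, the test error of the trained network can be measured directly against ground truth, which is how Table 2 is produced.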

Table 2. The range of joint error and the range of MSE.

Joint Error and MSE | BP                    | PSO-BP                | FOA-BP
Δθ1 (°)             | −0.1530–0.1775        | −0.07472–0.07594      | −0.02583–0.03612
Δθ2 (°)             | −0.2723–0.2977        | −0.1570–0.2022        | −0.04686–0.1271
Δθ3 (°)             | −0.11206–0.2087       | −0.07277–0.1599       | −0.02986–0.03678
MSE                 | 6.400 × 10⁻⁶–1.983 × 10⁻³ | 6.390 × 10⁻⁶–1.116 × 10⁻³ | 2.870 × 10⁻⁷–3.258 × 10⁻⁴

From the results in Table 2, it can be seen that the maximum error range of the joint variables output by the ordinary BP algorithm is [−0.2723, 0.2977], and the range of MSE is [6.400 × 10⁻⁶, 1.983 × 10⁻³]. The maximum error range of the joint variables output by the PSO optimized BP algorithm is [−0.1570, 0.2022], and the range of MSE is [6.390 × 10⁻⁶, 1.116 × 10⁻³]. The maximum error range of the joint variables output by the FOA optimized BP algorithm is [−0.04686, 0.1271], and the range of MSE is [2.870 × 10⁻⁷, 3.258 × 10⁻⁴].
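The statistics reported in Table 2 can be reproduced from test-set predictions as sketched below. This is an illustrative helper, not the authors' code; `pred` and `expected` stand in for any algorithm's output and the 50 expected joint-angle samples.

```python
import numpy as np

def error_stats(pred, expected):
    """Per-joint signed error range and per-sample overall MSE range,
    matching the two kinds of ranges reported in Table 2."""
    err = pred - expected                        # shape (n_samples, n_joints)
    # min/max of the signed error for each joint across all test samples
    joint_range = np.stack([err.min(axis=0), err.max(axis=0)], axis=1)
    # one overall MSE per test sample, then its range over the 50 samples
    mse = np.mean(err ** 2, axis=1)
    return joint_range, (float(mse.min()), float(mse.max()))
```

Applying `error_stats` to each algorithm's 50 test outputs yields one row of joint-error ranges and one MSE range per column of Table 2.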

Figures 7–9 show the comparison of the errors of each joint output by the three algorithms for the 50 sets of samples. Figure 10 is a comparison of the MSE output by the three algorithms.

Figure 7. Comparison of the error of θ1.


Figure 8. Comparison of the error of θ2.

Figure 9. Comparison of the error of θ3.

Figure 10. Comparison of the overall MSE.

It can be seen from Figures 7–9 that the output error of the ordinary BP algorithm is the largest, and the output error of the FOA optimized BP algorithm is the smallest. The error distribution of the FOA optimized BP algorithm is also relatively uniform. From Figure 10, it can be seen that the MSE output by the FOA optimized BP algorithm is much smaller than that of the other two algorithms. Thus, when solving the robot inverse kinematics, the error of the FOA optimized BP algorithm is smaller than that of the ordinary BP algorithm and the PSO optimized BP algorithm, and the MSE is also significantly reduced. The FOA optimized BP algorithm can improve the accuracy of solving inverse kinematics, and its effectiveness in solving the inverse kinematics of the robot is verified.

5. Conclusions

This paper proposes an FOA optimized BP algorithm to solve the robot inverse kinematics. The nonlinear mapping ability of the BP neural network was used to avoid the complicated derivation process of solving robot inverse kinematics. Using the characteristics of the fruit fly optimization algorithm, the convergence accuracy, convergence speed, and generalization ability of the BP algorithm were improved. The proposed FOA optimized BP algorithm can significantly alleviate the problems of complex and heavy calculations and difficult-to-guarantee accuracy in other robot inverse kinematics solving methods. The experimental results show that the error of the FOA optimized BP algorithm is clearly smaller than that of the ordinary BP algorithm and the PSO optimized BP algorithm. After obtaining a large amount of experimental data, the positioning accuracy of the robot can be improved, which has important application value.

Author Contributions: Y.B., M.L. and F.P. developed the methodology and drafted the manuscript; Y.B. performed the numerical simulations and experiments; Y.B. revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the Jiangsu Key R&D Program (Grant Nos. BE2018004-1, BE2018004-2, BE2018004).

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: Not applicable.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Bingul, Z.; Ertunc, H.; Oysu, C. Applying neural network to inverse kinematic problem for 6R robot manipulator with offset wrist. In Adaptive and Natural Computing Algorithms; Springer: Vienna, Austria, 2005; pp. 112–115.
2. Zhang, H.; Paul, R.P. A parallel inverse kinematics solution for robot manipulators based on multiprocessing and linear extrapolation. In Proceedings of the IEEE International Conference on Robotics and Automation, Cincinnati, OH, USA, 13–18 May 1990; pp. 468–474.
3. Featherstone, R. Position and velocity transformations between robot end-effector coordinates and joint angles. Int. J. Robot. Res. 1983, 2, 35–45. [CrossRef]
4. Lee, C.G. Robot arm kinematics, dynamics, and control. Computer 1982, 15, 62–80. [CrossRef]
5. Manocha, D.; Canny, J.F. Efficient inverse kinematics for general 6R manipulators. IEEE Trans. Robot. Autom. 1994, 10, 648–657. [CrossRef]
6. Paul, R.P.; Shimano, B. Kinematic control equations for simple manipulators. In Proceedings of the 1978 IEEE Conference on Decision and Control including the 17th Symposium on Adaptive Processes, San Diego, CA, USA, 10–12 January 1979; pp. 1398–1406.
7. Xu, J.; Wang, W.; Sun, Y. Two optimization algorithms for solving robotics inverse kinematics with redundancy. J. Control Theory Appl. 2010, 8, 166–175. [CrossRef]
8. Korein, J.U.; Badler, N.I. Techniques for generating the goal-directed motion of articulated structures. IEEE Comput. Graph. Appl. 1982, 2, 71–81. [CrossRef]
9. Goldenberg, A.; Benhabib, B.; Fenton, R. A complete generalized solution to the inverse kinematics of robots. IEEE J. Robot. Autom. 1985, 1, 14–20. [CrossRef]
10. Köker, R.; Öz, C.; Çakar, T.; Ekiz, H. A study of neural network based inverse kinematics solution for a three-joint robot. Robot. Auton. Syst. 2004, 49, 227–234. [CrossRef]
11. Antsaklis, P.J. Neural networks for control systems. IEEE Trans. Neural Netw. 1990, 1, 242–244. [CrossRef] [PubMed]
12. Kalra, P.; Mahapatra, P.; Aggarwal, D.J. An evolutionary approach for solving the multimodal inverse kinematics problem of industrial robots. Mech. Mach. Theory 2006, 41, 1213–1229. [CrossRef]
13. Secară, C.; Vlădăreanu, L. Iterative genetic algorithm based strategy for obstacles avoidance of a redundant manipulator. In Proceedings of the 2010 American Conference on Applied Mathematics, Cambridge, MA, USA, 27–29 January 2010; pp. 361–366.
14. Köker, R. A genetic algorithm approach to a neural-network-based inverse kinematics solution of robotic manipulators based on error minimization. Inf. Sci. 2013, 222, 528–543. [CrossRef]
15. Ayyıldız, M.; Çetinkaya, K. Comparison of four different heuristic optimization algorithms for the inverse kinematics solution of a real 4-DOF serial robot manipulator. Neural Comput. Appl. 2016, 27, 825–836. [CrossRef]
16. Jiang, G.; Luo, M.; Bai, K.; Chen, S. A precise positioning method for a puncture robot based on a PSO-optimized BP neural network algorithm. Appl. Sci. 2017, 7, 969. [CrossRef]
17. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, MHS'95, Nagoya, Japan, 4–6 October 1995; pp. 39–43.
18. Boudjelaba, K.; Ros, F.; Chikouche, D. Potential of particle swarm optimization and genetic algorithms for FIR filter design. Circuits Syst. Signal Process. 2014, 33, 3195–3222. [CrossRef]
19. Karlik, B.; Aydin, S. An improved approach to the solution of inverse kinematics problems for robot manipulators. Eng. Appl. Artif. Intell. 2000, 13, 159–164. [CrossRef]
20. Ozgoren, M.K. Optimal inverse kinematic solutions for redundant manipulators by using analytical methods to minimize position and velocity measures. J. Mech. Robot. 2013, 5, 031009. [CrossRef]
21. Dereli, S.; Köker, R. IW-PSO approach to the inverse kinematics problem solution of a 7-DOF serial robot manipulator. Sigma J. Eng. Nat. Sci. 2018, 36, 77–85.
22. Ren, H.; Ben-Tzvi, P. Learning inverse kinematics and dynamics of a robotic manipulator using generative adversarial networks. Robot. Auton. Syst. 2020, 124, 103386. [CrossRef]
23. Dereli, S.; Köker, R. Calculation of the inverse kinematics solution of the 7-DOF redundant robot manipulator by the firefly algorithm and statistical analysis of the results in terms of speed and accuracy. Inverse Probl. Sci. Eng. 2020, 28, 601–613. [CrossRef]
24. Zhou, Z.; Guo, H.; Wang, Y.; Zhu, Z.; Wu, J.; Liu, X. Inverse kinematics solution for robotic manipulator based on extreme learning machine and sequential mutation genetic algorithm. Int. J. Adv. Robot. Syst. 2018, 15. [CrossRef]
25. Wu, L.; Liu, Q.; Tian, X.; Zhang, J.; Xiao, W. A new improved fruit fly optimization algorithm IAFOA and its application to solve engineering optimization problems. Knowl. Based Syst. 2018, 144, 153–173. [CrossRef]
26. Darvish, A.; Ebrahimzadeh, A. Improved fruit-fly optimization algorithm and its applications in antenna arrays synthesis. IEEE Trans. Antennas Propag. 2018, 66, 1756–1766. [CrossRef]
27. Han, X.; Liu, Q.; Wang, H.; Wang, L. Novel fruit fly optimization algorithm with trend search and co-evolution. Knowl. Based Syst. 2018, 141, 1–17. [CrossRef]
28. Du, T.S.; Ke, X.T.; Liao, J.G.; Shen, Y.J. DSLC-FOA: Improved fruit fly optimization algorithm for application to structural engineering design optimization problems. Appl. Math. Model. 2018, 55, 314–339. [CrossRef]
29. Xu, X.; Luo, M.; Tan, Z.; Pei, R. Echo signal extraction method of laser radar based on improved singular value decomposition and wavelet threshold denoising. Infrared Phys. Technol. 2018, 92, 327–335. [CrossRef]
30. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [CrossRef]
31. Kůrková, V. Kolmogorov's theorem and multilayer neural networks. Neural Netw. 1992, 5, 501–506. [CrossRef]
32. Pan, W.T. A new fruit fly optimization algorithm: Taking the financial distress model as an example. Knowl. Based Syst. 2012, 26, 69–74. [CrossRef]