
Applied Sciences (article)

Which Local Search Operator Works Best for the Open-Loop TSP?

Lahari Sengupta *, Radu Mariescu-Istodor and Pasi Fränti

Machine Learning Group, School of Computing, University of Eastern Finland, 80101 Joensuu, Finland; [email protected].fi (R.M.-I.); [email protected].fi (P.F.)
* Correspondence: [email protected].fi; Tel.: +358-417232752

 Received: 6 August 2019; Accepted: 17 September 2019; Published: 23 September 2019 

Abstract: The traveling salesman problem (TSP) has been widely studied for the classical closed-loop variant. However, very little attention has been paid to the open-loop variant. Most of the existing studies also focus merely on presenting the overall optimization results (gap) or focus on processing time, but do not reveal much about which operators are more efficient to achieve the result. In this paper, we present two new operators (link swap and 3–permute) and study their efficiency against existing operators, both analytically and experimentally. Results show that while 2-opt and relocate contribute equally in the closed-loop case, the situation changes dramatically in the open-loop case where the new operator, link swap, dominates the search; it contributes by 50% to all improvements, while 2-opt and relocate have a 25% share each. The results are also generalized to and .

Keywords: traveling salesman problem; open-loop TSP; randomized ; local search; NP–hard; O–Mopsi game

1. Introduction

The traveling salesman problem (TSP) aims to find the shortest tour for a salesperson to visit N cities. In graph theory, a cycle including every vertex of a graph is a Hamiltonian cycle. Therefore, in the context of graph theory, the solution to a TSP is defined as the minimum-weight Hamiltonian cycle in a weighted graph. In this paper, we consider the symmetric Euclidean TSP variant, where the distance between two cities is their distance in Euclidean space. Both the original problem [1] and its Euclidean variant are NP–hard [2]. Finding the optimum solution efficiently is therefore not realistic, even for relatively small inputs.

O–Mopsi (http://cs.uef.fi/o-mopsi/) is a mobile-based orienteering game where the player needs to find real-world targets with the help of GPS navigation [3]. Unlike in classical orienteering, where the visiting order is given in advance, O–Mopsi players have to decide the order of visiting the targets by themselves. The game finishes immediately when the last target is reached; the player does not need to return to the start target. This corresponds to a special case of the orienteering problem [4]. Finding the optimum order by minimizing the tour length corresponds to solving the open-loop TSP [5], because the players are not required to return to the start position. The open-loop variant is also NP–hard [2].

The solution to a closed-loop TSP is a tour, whereas the solution to an open-loop TSP is an open path. Hence, a TSP of size N contains N links in its closed-loop solution and N-1 links in its open-loop solution (Figure 1). However, the optimum solution to an open-loop TSP is usually not achievable by removing the largest link from the optimum solution to the closed-loop TSP (Figure 1).

Appl. Sci. 2019, 9, 3985; doi:10.3390/app9193985 www.mdpi.com/journal/applsci

Figure 1. Difference between open- and closed-loop traveling salesman problem (TSP).

In O–Mopsi, players are not required to solve the optimum tour, but finding a good tour is still an essential part of the game. Experiments showed that, among the players who completed a game, only 14% had found the optimum tour, even if the number of targets was about 10, on average [3]. The optimum tour is still needed for reference when analyzing the performance of the players. This is usually made as a post-game analysis but can also happen during real-time play. A comparison can be made with the tour length or with the visiting order. The game creator also needs an estimator for the tour length, as it will be published as a part of the game info. If the game is created automatically on demand, TSP must be solved in real-time. Optimality is highly desired, but can sometimes be compromised for the sake of fast processing.

The number of targets in the current O–Mopsi instances varies from 4 to 27, but larger instances may appear. For the smaller instances, the relatively simple branch-and-bound algorithm is fast enough to produce the optimum tour. However, for the longer instances, it might take from minutes to hours. Along with the exact solver, a faster heuristic algorithm is therefore needed to find the solution more quickly, in spite of the danger of occasionally resulting in a sub-optimal solution. Besides O–Mopsi, TSP arises in route planning [6], orienteering problems [7], and automated map generation [8].

An example of O–Mopsi play is shown in Figure 2, with a comparison to the optimum tour (reference). For analyzing, we calculate several reference tours: optimum, fixed-start, and dynamic. The optimum tour takes into account the fact that the players can freely choose their starting point. Since selecting the best starting point is challenging [9], we also calculate two alternative reference tours from the location where the player started. Fixed-start is the optimum tour using this starting point. The dynamic tour follows the player's choices from the starting point but re-calculates the new optimum every time the player deviates from the optimum path. Three different evaluation measures are obtained using these three tours: gap, mismatches, and mistakes; see [10].

Figure 2. Examples of O–Mopsi playing (left), the final tour (middle) and comparison to optimum tour (right).

Being NP–hard, exact solvers of TSPs are time-consuming. Despite the huge time consumption, exact solvers were the prime attraction of researchers in earlier days. Dantzig, Fulkerson, and Johnson [11], Held and Karp [12], Padberg and Rinaldi [13], Grötschel and Holland [14], Applegate et al. [15], and others developed different algorithms to produce exact solutions. Laporte [16] surveyed exact algorithms for TSP. By the year 2006, the Concorde solver had solved a TSP instance with 85,900 cities [17].

Along with these, different heuristic algorithms have been developed to find near-optimal solutions very fast. The local search is one of the most common ways to produce near-optimum results [18]. It has two parts: tour construction and tour improvement. The tour construction generates an initial solution that can be far from the optimum. A significant amount of research has developed different tour construction algorithms, such as the savings algorithm [19], the polynomial-time 3/2-approximation [20], insertion [21], greedy [21], and the nearest neighbor algorithm [21]. Among these, we use the greedy heuristic and a random arrangement for the tour construction. Being an iterative process, the tour improvement step modifies the initial solution with a small improvement in each iteration.

The key component in the local search is the choice of the operator, which defines how the current solution is modified to generate new candidate solutions. The most used operators are 2-opt [22] and its generalized variant, called k–opt [23], which is empirically the most effective local search for large TSPs [24]. Okano et al. [25] analyzed the combination of 2-opt with different well-known construction heuristics. Several researchers have studied local search for TSP and different optimization problems [16,21,26–32], even for vehicle routing applications [33,34]. Although existing studies have analyzed the local search algorithms extensively, it is still not known which operators are effective for open-loop cases.
Besides this, the existing literature mainly presents the optimization results of the algorithms instead of providing a detailed study of the different operators. As operators are the backbone of local search algorithms, a detailed study of them is needed to understand how to increase the productivity of the algorithm and how to avoid local minima. In this paper, we study several operators and their combinations in the context of local search. We aim at answering which of them works best in terms of quality and efficiency. We consider four operators, of which two are existing and two are new:

• Relocate [35]
• 2-opt [22]
• 3–permutation (new)
• Link swap (new)

Of the two new operators, the former also applies to the more common closed-loop TSP, but the second is specifically tailored for the open-loop problem. We also propose an algorithm to achieve a global minimum in most problem instances of size up to 31, preferably in real-time. We study the performance of these operators when applied separately, and when mixed. We consider random, best improvement, and first improvement search strategies. We first report how successful the operators are in equal conditions, and whether the choice of the best operator changes towards the end of the iterations. We study two initialization techniques, random and heuristic, and the effect of re-starting the search. Results are reported for two open-loop datasets (O–Mopsi, Dots); see Table 1. The O–Mopsi dataset contains real-world TSP instances, and the Dots dataset is a computer-generated random dataset for an experimental computer game. Since O–Mopsi is an outdoor dataset, generating it is relatively difficult; therefore, the available number of instances is much lower than for Dots. However, both datasets need real-time reference solutions.

The rest of the paper is organized as follows. In Section 2, we describe the local search operators we use in this study. We define these operators with illustrations and study their abilities to produce unique improved solutions. In Section 3, we provide results of the tests carried out with these operators on the Table 1 datasets. Based on these results, we introduce a new method to solve open-loop TSPs in this section. In Section 4, we evaluate the quality and efficiency of our proposed algorithm. In Section 5, we test stochastic variants, aiming to improve our algorithm. Lastly, in Section 6, we conclude our findings in this study.

Table 1. Datasets used in this study.

Dataset      Type        Distance    Instances   Sizes
O-Mopsi ¹    Open loop   Haversine   145         4-27
Dots ¹       Open loop   Euclidean   4226        4-31

¹ http://cs.uef.fi/o-mopsi/datasets/.
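Table 1 lists two distance functions: haversine for the GPS-coordinate O-Mopsi targets and Euclidean for Dots. As a sketch of the difference, both can be implemented as below; the mean Earth radius of 6,371 km and all function names are our assumptions, not taken from the paper.

```python
import math

def haversine(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle distance in meters between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def euclidean(p, q):
    """Planar distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])
```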

2. Local Search

The local search is a way to upgrade an existing solution to a new candidate solution by small changes. A local search operator seeks a small modification that improves the solution. The search strategy of a local search operator can be one of several types. In the first improvement strategy, the first improving candidate found is chosen; in the best improvement strategy, the best among all candidates is chosen. For these two cases, we continue the search until saturation, which means that no further improvement is found. In a random search strategy, candidates are chosen randomly for a fixed number of iterations. In the local search, a better solution can almost always be found. We next study the following four operators:

• Relocate [35]
• 2-opt [22]
• 3–permute (new)
• Link swap (new)

2.1. Relocate

Changing the order of nodes can produce different solutions. Relocate is the process where a selected node (target) is moved from its current position in the tour to another position (destination). Hence, the position of the selected node is relocated. Each relocation of a node produces one outcome. Gendreau, Hertz, and Laporte [35] developed the GENI method to insert a new node by breaking a link. We use this idea and create relocate as a tour improvement operator.

Figure 3 illustrates relocate using an O-Mopsi example, where target A is moved between B and C. First, we take A from its current position by removing links PA and AQ, and by adding link PQ to close the gap. Then, A is added to its new position by removing link BC, and by adding BA and AC. As an effect, the tour becomes 30 m shorter.

The size of the search neighborhood is O(N²) because there are, in total, N possible targets and N-2 destinations to choose from.

Figure 3. Working of the relocate operator and an example of how it improves the tour in an O–Mopsi game by 30 meters.
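The relocate move and its quadratic neighborhood can be sketched with a plain Python list as the tour; the function names are ours, chosen for illustration.

```python
def relocate(tour, i, j):
    """Remove the node at position i (closing the gap with link PQ),
    then insert it at position j (breaking link BC, adding BA and AC)."""
    node = tour[i]
    rest = tour[:i] + tour[i + 1:]
    return rest[:j] + [node] + rest[j:]

def relocate_neighborhood(tour):
    """All O(N^2) single-node relocations that change the tour."""
    n = len(tour)
    for i in range(n):
        for j in range(n):
            candidate = relocate(tour, i, j)
            if candidate != tour:
                yield candidate
```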

2.2. Two-Optimization (2-opt)

Croes [22] first introduced the 2-optimization method, which is a simple and very common operator [36]. The idea of 2-opt is to exchange the links between two pairs of subsequent nodes, as shown in Figure 4. It has also been generalized to k–opt (Lin–Kernighan heuristics) [23] and implemented by Helsgaun [37]. Johnson and McGeoch [21] explained the 2-opt and 3–opt processes thoroughly. In the case of 3–opt, the only difference from 2-opt is that the links are exchanged between three pairs of nodes. The method was originally developed for the closed-loop TSP, but it applies to the open-loop TSP as well. In addition, we also consider dummy nodes paired with each of the two terminal nodes. This way, a terminal node can also change into an internal node.

Figure 4. Three examples of how the 2-optimization method can improve the tour.

The 2-opt approach removes crossings in the path. Pan and Xia [38] highlighted this cross-removing effect in their work. However, 2-opt is more than a cross-removal method, as shown in Figure 4. In Figure 5, 2-opt is illustrated with two real examples from O-Mopsi. Here, AB and CD are the two links involved in the operation. In the modified tour, these links are replaced by AC and BD. In the first example, the original links cross; the crossing is removed, and the length of the overall tour is shortened from 1010 m to 710 m. The second example shows that 2-opt can also improve the tour even when the original links do not cross.

The 2-opt operator works with links. A tour consists of N-1 links, so there are (N-1)*(N-2) possible pairs of links. Thus, the size of the neighborhood is O(N²).

Figure 5. Two examples of the 2-opt method in O-Mopsi.
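A 2-opt move can be implemented as a segment reversal: removing links AB and CD and adding AC and BD is the same as reversing the sub-path between B and C. A sketch on a toy four-point instance whose original path crosses itself (the points and names are illustrative, not from the paper):

```python
import math

def two_opt(tour, i, j):
    """2-opt move: remove links (i-1, i) and (j, j+1), reverse tour[i..j]."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

# Toy instance: the path 0-1-2-3 crosses itself; the move removes the crossing.
pts = [(0, 0), (1, 1), (1, 0), (0, 1)]
d = lambda a, b: math.dist(pts[a], pts[b])
length = lambda t: sum(d(t[k], t[k + 1]) for k in range(len(t) - 1))

before = [0, 1, 2, 3]
after = two_opt(before, 1, 2)
assert length(after) < length(before)
```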

2.3. Three-Node Permutation (3-permute)

Permutation of the order of the nodes is another potentially useful operator for the local search. Although the idea is quite straightforward, we are not aware of its existence in the literature, so we will briefly consider it here. In theory, three nodes (A, B, C) can generate six different orders. Therefore, given an existing sequence (triple) of three nodes, ABC, we can create five new solutions: ACB, BAC, BCA, CAB, and CBA.

In Figure 6, an original tour and its five alternatives are shown. The real O-Mopsi game where this example originates from is also shown. Here, the tour length is reduced by 170 m when changing the original order (ABC) to the better one (BAC).

There are N-1 possible triples, including two dummies after the terminal points. The size of the neighborhood is, therefore, 6*(N-1) = O(N), which is significantly smaller than that of the relocate and 2-opt operators.

Figure 6. All six combinations of 3–permute and an O–Mopsi game example that is improved by 170 m.
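Generating the five alternative orders of a triple is a one-liner with `itertools`. The sketch below covers interior triples only and leaves out the dummy-node padding at the terminals; the function name is ours.

```python
from itertools import permutations

def three_permute(tour, i):
    """Yield the five reorderings of the triple at positions i, i+1, i+2."""
    triple = tuple(tour[i:i + 3])
    for perm in permutations(triple):
        if perm != triple:
            yield tour[:i] + list(perm) + tour[i + 3:]
```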

2.4. Link Swap

Solving the open-loop variant of TSP allows us to introduce a new operator that is not possible with the closed-loop problem. Link swap is analogous to relocate, but instead of a node we consider one link, AB, and swap it into a new location in the tour; see Figure 7. However, to keep the new solution a valid TSP path, we are limited to three choices: replace AB by AT2, BT1, or T1T2, where T1 and T2 are the current terminal nodes. As a result, one or both of the nodes A and B will become a new terminal node. In the O–Mopsi example in Figure 7, a single link swap operation improves the solution by reducing the tour length by 137 m.

We have N–1 choices for the link to be removed, and for each of them, we have three new alternatives. Thus, the size of the neighborhood is 3*(N–1) = O(N). Link swap is a special case of the 3–opt and relocate operators, but as the size of its neighborhood is linear, it is a faster operation than both.

Figure 7. Original tour and three alternatives created by link swap (left). In each case, the link x is replaced by a new link connecting to one or both terminal nodes. In the O–Mopsi example (right), the tour is improved by 137 m.
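With the tour stored as a list whose ends are the terminals T1 and T2, removing link AB splits the path into T1..A and B..T2, and the three reconnections are concatenations of these two pieces. A sketch (our own formulation of the move, names illustrative):

```python
def link_swap(tour, i):
    """Remove link (A, B) = (tour[i], tour[i+1]), splitting the path into
    T1..A and B..T2, then reconnect with link AT2, BT1, or T1T2."""
    head, tail = tour[:i + 1], tour[i + 1:]    # paths T1..A and B..T2
    yield tail + head[::-1]                    # new link A-T2 (B becomes terminal)
    yield head[::-1] + tail                    # new link T1-B (A becomes terminal)
    yield head[::-1] + tail[::-1]              # new link T1-T2 (A and B terminals)
```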

2.5. The Uniqueness of the Operators

All operators create a large number of neighbor solutions. The set of solutions generated by one operator can be similar to the set of another operator. Given the same initial solution, if all the candidate solutions generated by one operator were a subset of the solutions generated by another operator, then this operator would be redundant. Next, we study the uniqueness of these methods.

Relocate: The highlighted target in the start tour in Figure 8 can move to six different positions. By analyzing these six results, we can observe that tours 1 to 3 are also possible outcomes of 3–permute. Tours 1 and 2 are also possible outcomes of 2-opt, and tour 1 can be an outcome of link swap. However, the rest of the tours are unique and only possible to produce by the relocate operator. Only the relocate operator can produce the best result in this example (tour 4).

Figure 8. Six neighboring solutions created by relocate, of which three are unique.

2-opt: In this method, two links are removed and two new ones are created. An example is shown in Figure 9, where the first removed link is fixed as the red one, which is highlighted, and the second one is varied. The resulting six new tours are shown. The other operators can generate four of them as well; however, two of them are unique. The best outcome is uniquely generated by the 2-opt operator.

Figure 9. Six neighboring solutions created by 2-opt, of which two are unique.

3-permute: The example in Figure 10 reveals that the operator does not produce even a single unique result. Therefore, if both relocate and 2-opt are used, this operator is redundant. Relocate can find four out of the five solutions, and 2-opt can find three.

A potential benefit of the 3–permute operator is that it needs to explore only a smaller neighborhood, O(N), in comparison to O(N²) of relocate and 2-opt. However, this smaller neighborhood is not extensive enough, as it misses the solutions shown in Figures 8 and 9. Our experiments also show that the local search using 3-permute is consistently outperformed by the other operators. We, therefore, do not consider this operator further.

Figure 10. Six neighboring solutions created by 3-permute, but none of them is unique.

Link swap: This operator creates three alternatives for a selected link. Two of these are redundant with 2-opt, but the one which connects the two previous terminal points is unique (Figure 11). It is also the best one. Link swap is therefore worthwhile in cases when the terminal points are close to each other.

Figure 11. Three neighboring solutions created by link swap, of which the one connecting the terminal nodes is unique.
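The link swap neighborhood can be sketched as follows (our own illustration; the function name and list representation are assumptions). Removing one link splits the open path into two segments, which can be rejoined in three ways; the first two are also reachable by 2-opt, while the third, joining the two old terminal nodes, is unique to link swap:

```python
def link_swap_moves(path, i):
    """Return the three alternative open paths obtained by removing the
    link (path[i], path[i+1]) from an open path (a list of node indices)
    and adding one new link instead."""
    s1, s2 = path[:i + 1], path[i + 1:]   # the two segments after the cut
    return [
        s1 + s2[::-1],   # new link (path[i], old last terminal): 2-opt redundant
        s1[::-1] + s2,   # new link (old first terminal, path[i+1]): 2-opt redundant
        s2 + s1,         # new link joining the two old terminals: unique
    ]

print(link_swap_moves([0, 1, 2, 3, 4], 1))
# [[0, 1, 4, 3, 2], [1, 0, 2, 3, 4], [2, 3, 4, 0, 1]]
```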

3. Performance of a Single Operator

We next test the performance of the different operators within a local search. For this, we need an initial tour to start with and a search strategy. These will be discussed first, followed by the experimental results.

3.1. Initial Solution

If the operator and search strategy are good, the choice of initialization should not matter much for the final result. We, therefore, consider only two choices:

• Random
• Heuristic (greedy)

Random initialization selects the nodes one by one randomly to create the initial tour. A straightforward method for better initialization is the greedy heuristic. It selects a random node as a starting point and always takes the closest unvisited node as the next target. Because we solve the open-loop TSP, the starting point also matters. We therefore systematically try all nodes as the starting point and generate N candidate tours using the greedy heuristic. Among the N candidate tours, we select the shortest as the initial solution. Algorithm 1 presents the pseudocode.

Algorithm 1: Finding the heuristic initial path

HeuristicPath ()
  InitialPath ← infinity
  FOR node ← 1 TO N DO
    NewPath ← GreedyPath ( node )
    IF Length ( NewPath ) < Length ( InitialPath ) THEN
      InitialPath ← NewPath
  RETURN InitialPath
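Algorithm 1 can be written as runnable Python roughly as follows (our own sketch; `points` is assumed to be a list of (x, y) coordinates, and the helper names are hypothetical):

```python
import math

def greedy_path(points, start):
    """Greedy open path: from `start`, repeatedly visit the nearest unvisited node."""
    unvisited = set(range(len(points))) - {start}
    path = [start]
    while unvisited:
        last = points[path[-1]]
        nearest = min(unvisited, key=lambda j: math.dist(last, points[j]))
        path.append(nearest)
        unvisited.remove(nearest)
    return path

def heuristic_path(points):
    """Algorithm 1: try every node as the start; keep the shortest greedy path."""
    def length(p):
        return sum(math.dist(points[p[k]], points[p[k + 1]]) for k in range(len(p) - 1))
    return min((greedy_path(points, s) for s in range(len(points))), key=length)

print(heuristic_path([(0, 0), (1, 0), (2, 0), (10, 0)]))  # [0, 1, 2, 3]
```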


3.2. Search Strategy

For the search strategy, we consider the following three choices:

• Best improvement
• First improvement
• Random search

Best improvement studies the entire neighborhood and selects the best among all solutions. This can be impractical, especially if the neighborhood is large. First improvement studies the neighborhood only until it finds a solution that provides an improvement. It can be more practical, as it moves on with the search quicker. Random search studies the neighbors in random order and, similar to first improvement, accepts any new solution that improves. We also consider repeated (multi-start) local search, which simply re-starts the search from scratch several times. This helps if the search is susceptible to getting stuck in a local minimum [39]. Moreover, every repeat is essentially a random attempt to converge to a local optimum; parallelization can be applied by running different repeats on different execution threads. O'Neil and Burtscher [40] and Al-Adwan et al. [41] preferred repeated local search for TSP, and Xiang et al. [42] for matrix multiplication, to overcome local optima. A pseudocode that covers all the search variants with all the operators is given in Algorithm 2. The only parameter is the number of iterations for which the search continues. First and best improvement searches can also be terminated when the entire neighborhood is searched without further improvement. This corresponds to the hill-climbing strategy.

Algorithm 2: Finding the improved tour by all the operators for different search strategies

LocalSearch ( InitialPath, OperationType )
  Path ← InitialPath
  SWITCH SearchType
    CASE RANDOM
      FOR i ← 1 TO Iterations DO
        NewPath ← NewRandomSolution ( Path, OperationType )
        IF Length ( NewPath ) < Length ( Path ) THEN
          Path ← NewPath
    CASE FIRST
      Path ← NewFirstSolution ( Path )
    CASE BEST
      Path ← NewBestSolution ( Path )
  RETURN Path

NewRandomSolution ( Path, OperationType )
  SWITCH OperationType
    CASE RELOCATE
      Target ← Random ( 1, N )
      Destination ← Random ( 1, N )
      NewPath ← relocate ( Path, Target, Destination )
    CASE 2OPT
      FirstLink ← Random ( 1, N )
      SecondLink ← Random ( 1, N )
      NewPath ← 2Opt ( Path, FirstLink, SecondLink )
    CASE LINKSWAP
      Link ← Random ( 1, N )
      NewPath ← linkSwap ( Path, Link )
  RETURN NewPath

NewFirstSolution ( Path, OperationType )
  FOR i ← 1 TO N DO
    FOR j ← 1 TO N DO
      SWITCH OperationType
        CASE RELOCATE
          NewPath ← relocate ( Path, i, j )
        CASE 2OPT
          NewPath ← 2Opt ( Path, i, j )
      IF Length ( NewPath ) < Length ( Path ) THEN
        Path ← NewPath
        RETURN Path
    IF OperationType = LINKSWAP
      NewPath ← linkSwap ( Path, i )
      IF Length ( NewPath ) < Length ( Path ) THEN
        Path ← NewPath
        RETURN Path
  RETURN Path

NewBestSolution ( Path, OperationType )
  FOR i ← 1 TO N DO
    FOR j ← 1 TO N DO
      SWITCH OperationType
        CASE RELOCATE
          NewPath ← relocate ( Path, i, j )
        CASE 2OPT
          NewPath ← 2Opt ( Path, i, j )
      IF Length ( NewPath ) < Length ( Path ) THEN
        Path ← NewPath
    IF OperationType = LINKSWAP
      NewPath ← linkSwap ( Path, i )
      IF Length ( NewPath ) < Length ( Path ) THEN
        Path ← NewPath
  RETURN Path

3.3. Results with a Single Operator

We first study how close to the optimum solutions the operators can bring the paths, using the O–Mopsi and Dots datasets. Optimum paths were computed by branch-and-bound. Two results are reported:

• Gap (%)
• Instances solved (%)

The gap measures how much the tour found by the local search exceeds the length of the optimum path, relative to the optimum (0% indicates the optimum was found). The second result is the number of instances for which an optimum solution is found by the local search. We express it as a percentage, where 100% means that the method finds the optimum solution for every instance. We execute every operator individually with all search strategies on all O–Mopsi game instances. Repeated search is applied only for the random search with random initialization. We combine the results of the O–Mopsi and Dots datasets and report them in Table 2. In the table, the gap values are given first, and the percentages of solved instances are shown in parentheses. The 2-opt is the best individual operator. It reaches a 2% gap and solves 57% of all instances when starting from a random initialization, and 0.8% (73% of instances) using the heuristic initialization. The repeats are important: with 25 repeats, 2-opt solves 97% of all instances, and the mean gap is almost 0%. The better initialization provides significant improvement in all cases, which indicates that the search gets stuck in local minima. The choice of the search strategy, however, has only a minor influence. We, therefore, consider only the random search in further experiments.
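As a small worked example of the gap measure (our own illustration, assuming the usual relative-excess definition, which is consistent with 0% indicating that the optimum was found):

```python
def gap_percent(found_length, optimum_length):
    """Relative excess of the found tour over the optimum, in percent."""
    return 100.0 * (found_length - optimum_length) / optimum_length

# A found tour of length 10.25 against an optimum of 10.0:
print(gap_percent(10.25, 10.0))  # 2.5
```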

Table 2. Combined results of single and subsequent application of random improvement with different operators, averaged over the O–Mopsi and Dots instances. The percentages of solved instances are shown in parentheses. The number of iterations was fixed to 10,000 for the randomized search, whereas first and best improvement were iterated until convergence.

Operator  | Search | Random    | Heuristic  | Repeated (25) | + Relocate (25) | + 2-opt (25) | + Link swap (25) | Subsequent (25) | Mixed (25)
Relocate  | First  | 7% (30%)  | 1.0% (70%) | -             | -               | -            | -                | -               | -
Relocate  | Best   | 6% (34%)  | 0.9% (72%) | -             | -               | -            | -                | -               | -
Relocate  | Random | 6% (34%)  | 0.9% (71%) | 0.18% (90%)   | -               | 0.007% (98%) | 0.032% (97%)     | 0.010% (98%)    | 0.005% (99.7%)
2-opt     | First  | 2% (54%)  | 1.0% (70%) | -             | -               | -            | -                | -               | -
2-opt     | Best   | 2% (57%)  | 0.8% (73%) | -             | -               | -            | -                | -               | -
2-opt     | Random | 2% (52%)  | 0.8% (73%) | 0.02% (97%)   | 0.009% (98%)    | -            | 0.018% (97%)     | 0.005% (99%)    | 0.005% (99.7%)
Link swap | Best   | 44% (5%)  | 2.9% (59%) | -             | -               | -            | -                | -               | -
Link swap | Random | 11% (31%) | 3.0% (58%) | 1.00% (77%)   | 0.019% (98%)    | 0.010% (97%) | -                | 0.006% (99%)    | 0.005% (99.7%)

3.4. Combining the Operators

Our primary goal was to find the optimum solution for all game instances. The best configuration (2-opt with a random search using 25 repeats) missed it for only 3% of the instances. Nevertheless, these are relatively easy instances; therefore, they should all be solvable by the local search. We, therefore, consider combining different operators. We do this by applying a single run of the local search with one operator, and then continuing the search with another operator. We consider all pairs and the subsequent combination of all three operators. We tested all combinations but report only the results of 25 repeats using the random improvement (random initialization). From the results in Table 2, we can see that all combinations manage to solve a minimum of 97% of all instances and have a gap of 0.032% or less. The subsequent combination in the order (2-opt + relocate + link swap) found the optimum solution in 99% of the cases. In this case, 2-opt already solves 97% of all instances, then the consecutive relocate raises this to 98%, and finally link swap to 99%. This shows that the operators have complementary search spaces; when one operator stops progressing, another one can still improve further. Examples are shown in Figure 12.

Figure 12. Operators have complementary search spaces.

After being stopped, it is also possible that a single operator starts to work again when the other operators make some improvements in the meantime; the improvements made by the other operators revive the first operator. Figure 13 shows three such examples, where the first operator gets trapped in a local optimum. Then, the other operators recover the process from the local optimum and the improvement continues. Later, the first operator starts working again.

Figure 13. A single operator can resume working after being stuck, with the help of the other operators' improvements.


In addition, we also consider mixing the operators randomly, as follows. We perform only a single run of the local search for 10,000 iterations. However, instead of fixing the operator beforehand, we choose it randomly at every iteration. The pseudocode of the mixed variant is given in Algorithm 3. This random mixing solves all instances with a 0% gap if we repeat the process 25 times.

Algorithm 3: Finding a solution by mixing local search operators

RandomMixing ( NumberOfIterations, Path )
  FOR i ← 1 TO NumberOfIterations DO
    operation ← Random ( NodeSwap, 2opt, LinkSwap )
    NewPath ← NewRandomSolution ( Path, operation )
    IF Length ( NewPath ) < Length ( Path ) THEN
      Path ← NewPath
  RETURN Path
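A runnable sketch of Algorithm 3 (our own simplified illustration: the three move generators below are minimal stand-ins for the operators described earlier, and only the unique link swap move, joining the two old terminals, is implemented):

```python
import math
import random

def path_length(path, points):
    """Total length of an open path over 2D points."""
    return sum(math.dist(points[a], points[b]) for a, b in zip(path, path[1:]))

def random_neighbor(path, op):
    """Produce one random neighboring open path using the chosen operator."""
    n = len(path)
    if op == "relocate":                       # move one node to another position
        i, j = random.randrange(n), random.randrange(n)
        rest = path[:i] + path[i + 1:]
        return rest[:j] + [path[i]] + rest[j:]
    if op == "2opt":                           # reverse one segment
        i, j = sorted(random.sample(range(n), 2))
        return path[:i] + path[i:j + 1][::-1] + path[j + 1:]
    # "linkswap": cut one link and join the two old terminals instead
    i = random.randrange(1, n)
    return path[i:] + path[:i]

def random_mixing(points, iterations=10000, seed=1):
    """Pick a random operator at every iteration; keep only improvements."""
    random.seed(seed)
    path = random.sample(range(len(points)), len(points))
    for _ in range(iterations):
        op = random.choice(["relocate", "2opt", "linkswap"])
        candidate = random_neighbor(path, op)
        if path_length(candidate, points) < path_length(path, points):
            path = candidate
    return path
```

On small instances such as a handful of collinear points, this typically converges to the shortest open path within a few thousand iterations.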

Lawler et al. [43] stated that local search algorithms get trapped in sub-optimal points. Here, we find that the operators, even when mixed, could not solve all the instances from every random initial solution. Figure 14 shows three examples where the search was stuck in a local optimum from which none of the operators could progress further. In these cases, the repeated random mixed local search was needed to solve these instances.


Figure 14. Examples of local optima. Repeats overcome these.

4. Analysis of Repeated Random Mixed Local Search

We next analyze the performance of the repeated local search in more detail. We use both the 145 O–Mopsi instances and the larger Dots dataset, which contains 4226 computer-generated TSP instances. Here, the problem size varies from 4 to 31.

4.1. Effect of the Repetitions

Figure 15 reports how many of the instances are solved when we increase the number of repeats. In the case of O–Mopsi, most instances (144) are solved already with 15 repeats, and all with 25 repeats. The results with the Dots instances are similar: 15–25 repeats are reasonably good, after which only 100 instances were not solved optimally. After 150 repeats, only one instance remains unsolved.

Figure 15. Instances solved as a function of the number of repetitions.

Figure 16 demonstrates the effect of repetition for a single (typical) instance as the average path length. We portray all improving steps from a random start in the first run. Similar improving steps occur from the different random starts in the consecutive runs; however, we show only the steps that improve upon the previous runs. Thus, we can see that there are very few better steps in the later runs, and three or four repeated runs are usually enough to achieve the optimum result. The gap values are also shown to demonstrate the closeness of our outcomes to the optimum result. In the first run, we usually achieve a result that is already very close to the optimum. The repetitions push the result to the global optimum, but they act merely as a fine tuner for the results. If we can settle for a sub-optimal result very close to the global optimum, then a local search with only a few repeats would be sufficient for these datasets.

Figure 16. Typical example of the effect of repetition.

Next, we evaluate the total work based on the number of iterations and the number of repeats. We plot four different curves with the random start repeated 1, 5, 15, and 25 times for the O–Mopsi instances. Figure 17 shows how close the path length gets to the optimum during 10,000 iterations. A similar plot is also drawn for the Dots instances with 1, 5, 15, 25, 50, 100, and 150 repeats. All the curves coincide with each other, so they appear as only one line in Figure 17.


Figure 17. Total amount of work.

4.2. Processing Time

The time complexity of the method can be analyzed as follows. At each iteration, the algorithm tries to find an improvement by a random local change. The effect of this change is tested in constant time. Only when an improvement is found is the solution updated accordingly, which takes O(N) time. Let us assume a probability p of finding an improvement. The time complexity is, therefore, O(IR + pNIR), where I and R are the numbers of iterations and repetitions, respectively. The value of p is higher at the beginning, when the configuration is close to random, and gradually decreases to zero when approaching a saturation point (a solution that cannot be improved by any local operation). In our observations, p is small (0.002 on average for O–Mopsi and even less for Dots), so the time complexity reduces to O(IR) in practice. This means that the speed of the algorithm is almost independent of the problem size (see Figure 18).

In contrast, optimal solvers such as Concorde [17] have exponential time complexity. We were unable to find a complexity analysis for Concorde in the literature; however, we estimate it empirically by analyzing the running time required by Concorde to solve TSPLIB [44] instances. From the website (http://www.math.uwaterloo.ca/tsp/concorde/benchmarks/bench99.html), we chose 105 instances with increasing problem sizes (N between 14 and 13,509) and analyzed the trend. According to this, Concorde has a complexity of about O(1.5e^(0.004N)). This implies that for small instances (up to several thousand targets) the algorithm is expected to be fast. However, the analysis also reveals that some points deviate highly from the expected line. This implies that some instances are more difficult to solve (with a higher number of nodes in the search tree) and require more time; for example, the instance d1291, with 1291 targets, requires almost 8 hours of running time, whereas the instance rl1304, with 1304 targets, requires only 22 minutes.
In comparison, Random Mix finishes more predictably.
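The O(IR + pNIR) estimate can be made concrete with a back-of-the-envelope calculation (our own sketch; constant factors are ignored):

```python
def expected_cost(I, R, p, N):
    """Expected basic operations: I*R constant-time tests, plus an O(N)
    solution update for roughly a fraction p of the iterations."""
    return I * R + p * N * I * R

# With p = 0.002 (the average observed for O-Mopsi) and N = 30,
# the update term adds only p*N = 6% on top of the I*R tests.
print(expected_cost(10000, 25, 0.002, 30))
```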


Figure 18. Run time as a function of problem size.

Considering the repeats, we summarize the results in Table 3, which shows that a single run takes 0.8 ms and 25 repeats take 16 ms on average, regardless of the dataset.

Table 3. Running time.

          Single run   25 repeats
O–Mopsi   0.8 ms       16 ms
Dots      0.7 ms       16 ms

4.3. Parameter Values

We investigated different values for the I and R parameters using grid search and recorded how many operations are needed to solve problems of different sizes. The results are summarized in Figure 19. We note that, as there are not a sufficient number of samples with more than 25 targets in the O–Mopsi dataset and 30 targets in the Dots dataset, we ignore those samples. From the chart, we can conclude that proper parameter values for the O–Mopsi dataset are I = 2^13 and R = 2^5. The Dots dataset requires the higher values I = 2^15 and R = 2^6. The relationship between I and R seems to stabilize for larger instances.

Figure 19. Required number of operations increases with problem size.


In Figure 20, we fix the product IR to 2^18 (O–Mopsi) and 2^21 (Dots) and vary their individual values. Typically, different same-product combinations are equally effective at solving the instances. However, with too few repetitions, the algorithm typically gets stuck at a local optimum which is not the global one. On the other hand, when the number of iterations becomes too small, it is likely that some local operations could still improve the result (the search is not saturated).

Figure 20. Gap variation with varied iterations and repetitions.

4.4. The Productivity of the Operators

We next analyze the productivity of the operators as follows: we count how many times each operator finds an improvement in the solution. For this case, we also evaluate our operators on closed-loop instances (TSPLIB) [44]. We should mention here that closed-loop instances do not contain single links to swap; therefore, the link swap operation is not possible for them. We can only use the relocate and 2-opt operators to solve the TSPLIB problems. We chose 12 TSPLIB instances with problem sizes of 52–3795. These instances are significantly larger than the O–Mopsi and Dots instances, and therefore, we used a much higher number of iterations (10^8) and 25 repeats. The total distribution of the improvements achieved by the operators is shown in Figure 21, both for the open-loop (O–Mopsi and Dots games) and closed-loop (TSPLIB) cases.
From the results, we can make the following observations. First, 2-opt and relocate are almost equally productive in all cases. However, the situation changes dramatically from the closed-loop to the open-loop case. While 2-opt and relocate both have about a 50% share of all improvements in the closed-loop case (TSPLIB), they are inferior to the new link swap operator on the open-loop test sets (O–Mopsi and Dots). The link swap operator dominates with a 50% share, while 2-opt and relocate have roughly a 25% share each. The relocate operator also becomes slightly more productive than 2-opt.

Figure 21. Share of the improvements by all three operators both with the open-loop (O–Mopsi and Dots games) and closed-loop (TSPLIB) problem instances. With TSPLIB, we use 10^8 iterations and 25 repeats.
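The tallying behind such a productivity plot is straightforward; a minimal sketch (illustrative only, with simplified 2-opt and relocate moves and without link swap) credits each accepted improvement to the operator that produced it:

```python
import random
from collections import Counter

def two_opt(tour, rng):
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment

def relocate(tour, rng):
    i, j = rng.sample(range(len(tour)), 2)
    t = tour[:]
    t.insert(j, t.pop(i))                                 # move one node elsewhere
    return t

def mixed_search_with_tally(dist, tour, iterations, seed=0):
    rng = random.Random(seed)
    ops = [("2-opt", two_opt), ("relocate", relocate)]
    counts = Counter()
    length = sum(dist[tour[k]][tour[k + 1]] for k in range(len(tour) - 1))
    for _ in range(iterations):
        name, op = rng.choice(ops)        # random mixed strategy
        cand = op(tour, rng)
        cand_len = sum(dist[cand[k]][cand[k + 1]] for k in range(len(cand) - 1))
        if cand_len < length:
            counts[name] += 1             # credit the operator that improved
            tour, length = cand, cand_len
    return tour, length, counts
```

The share of each operator is then counts[name] divided by the total number of accepted improvements.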


One question is whether all operators work equally efficiently at the beginning of the search, when there is more room for improvement, as they do at the end of the search, when improvements are harder to find. We select four typical O–Mopsi game examples and track the productivity of the operators throughout the search (Figure 22); the games themselves are shown in Figure 23. Similarly, Figure 24 shows the productivity of the operators along the searches of four typical Dots games, shown in Figure 25. The diagrams show that there are no visible patterns to be observed. All operators remain productive at all stages of the search. Link swap is the most active, and relocate is slightly more productive at the end of the search with the O–Mopsi instances.

Figure 22. Productivity of the operators during the search for 4 different O–Mopsi instances.

Figure 23. Screenshots of the 4 different O–Mopsi instances (in the same order).




Figure 24. Productivity of the operators during the search for 4 different Dots instances.

Figure 25. Screenshots of 4 different Dots instances (in the same order).

5. Stochastic Variants

We use repetition to overcome local optima. However, there are several other methods to escape local optima, such as tabu search [45], simulated annealing and its combined variants [46–50], and genetic algorithms [51–54]. Besides these, Quintero–Araujo et al. [55] combined iterated local search and Monte Carlo simulation on vehicle routing problems. We compared our results with some of these methods; additionally, we consider two stochastic variants of the local search:
• Tabu search [45]
• Simulated annealing [47]

Tabu search is a metaheuristic first introduced by Glover [45]. The idea is to allow uphill moves that worsen the solution to avoid being stuck in a local optimum. It forces the search in new directions.


A tabu list of previously considered solutions, or moves, is maintained to prevent the search from cycling around the local optima. We incorporate tabu search into our local search as follows: when an improvement is found in the solution, we mark the added links as tabu. These links are not allowed to be changed during the next 0.2·N iterations.
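The bookkeeping described above can be sketched as follows (an assumption of ours, not the authors' exact code): links added by an accepted move are banned for the next 0.2·N iterations.

```python
class TabuList:
    """Tracks links that may not be changed for a fixed tenure of iterations."""

    def __init__(self, n_cities):
        self.tenure = max(1, int(0.2 * n_cities))  # 0.2*N iterations
        self.expires = {}                          # link -> iteration when ban ends

    @staticmethod
    def key(a, b):
        return (a, b) if a < b else (b, a)         # links are undirected

    def forbid(self, links, iteration):
        # Called when an improving move is accepted: its new links become tabu.
        for a, b in links:
            self.expires[self.key(a, b)] = iteration + self.tenure

    def is_tabu(self, a, b, iteration):
        # A candidate move touching a banned link is rejected until it expires.
        return self.expires.get(self.key(a, b), -1) > iteration
```

A candidate move is simply skipped while any of the links it would remove is still tabu.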
We use best improvement as the search strategy. Table 4 shows the results of tabu search for the O–Mopsi and Dots games.

Simulated annealing (SA) is another approach to make the search stochastic. Although there are a large number of adaptive simulated annealing variants, we consider a simpler one, the noising method of Charon and Hudry [47]. The idea is to add random noise to the cost function to allow uphill moves and therefore avoid local optima. The noise is a random number r ∈ ±[0, r_max] that acts as a multiplier for the noisy cost function: f_noise = r·f. The noise starts from r_max = 1 and decreases at every iteration by a constant 2/T, where T is the number of iterations. The noise reaches r = 0 halfway, and the search then reduces back to a normal local search using the normal cost function (f) for the remaining iterations. We apply a random mixed local search with 25 repeats. Table 4 also shows the simulated annealing results for the O–Mopsi and Dots instances.
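The noise schedule can be sketched as follows (our reading of the description above, not the authors' code): the multiplier bound decays linearly from 1 to 0 over the first half of the iterations, after which the plain cost is used.

```python
import random

def noisy_cost(f, iteration, T, rng):
    """Noised cost of a candidate: f_noise = r*f with r drawn from +/-[0, r_max]."""
    r_max = 1.0 - iteration * (2.0 / T)   # decreases by 2/T per iteration
    if r_max <= 0.0:
        return f                          # second half: normal local search on f
    r = rng.uniform(-r_max, r_max)        # r in +/-[0, r_max]
    return r * f
```

During the first half, candidate solutions are compared by their noised costs, which occasionally accepts uphill moves; past the halfway point the comparison reduces to the plain cost f.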

Table 4. Comparison of tabu search and simulated annealing (SA) results with the random mixed local search results, using the same iterations and repetitions, for the O–Mopsi and Dots instances.

                                          Repeated (25 times)
                                          Gap (avg.)   Not solved
O–Mopsi   Random mixed local search       0%           0
          Tabu search                     0%           0
          Simulated annealing (SA)        0%           0
Dots      Random mixed local search       0.001%       1
          Tabu search                     0.0003%      3
          Simulated annealing (SA)        0.007%       10

Having found the productivity of the operators for the random mixed local search (Figure 21), we next examine the same for tabu search and simulated annealing. In the random mixed local search, link swap was the most effective operator. Figure 26 illustrates that in tabu search, link swap is the least effective. However, link swap is again the most effective for simulated annealing, as shown in Figure 27. Because we mark the added links as forbidden in tabu search, link swap becomes too restrictive in this case, which often leaves it unable to contribute.

Figure 26. Share of the improvements by all three operators for tabu search.



Figure 27. Share of the improvements by all three operators for simulated annealing (SA).

We compare the performance of the random mixed local search, tabu search, and simulated annealing for the O–Mopsi and Dots games in Table 5, studying the gap value and execution time of every method. The results show that neither tabu search nor simulated annealing improved on the performance of the random mixed local search algorithm.


Table 5. Summary of overall results (mean).

          Random mix          Tabu                SA
          Gap      Time       Gap       Time      Gap      Time
O–Mopsi   0%       <1 s       0%        1.3 s     0%       <1 s
Dots      0.001%   <1 s       0.0003%   1.3 s     0.007%   <1 s

6. Discussion

We have studied the open-loop variant of TSP to find out which operators work best. Two new operators were proposed: link swap and 3–permute. The link swap operator was shown to be the most productive. Even though 2-opt works best as a single operator, the best results are obtained using a mixture of all three operators (2-opt, relocate, link swap). The new operator, link swap, provides 50% of all the improvements, while 2-opt and relocate have roughly a 25% share each. To sum up, link swap is the most efficient operator in the mix.

For our problem instances up to size 31, the proposed combination of the local search (random mixed local search) finds the optimum solution in all O–Mopsi instances, and in all except one of the Dots instances. Iterations and repetitions are the most significant parameters of the proposed method. A suitable number of iterations and repetitions with respect to the problem size is 2^13 and 2^5 for the O–Mopsi dataset, and 2^15 and 2^6 for the Dots dataset. For the O–Mopsi instances, the processing times are 0.8 ms (single) and 16 ms (repeats); for the Dots instances, 0.7 ms (single) and 16 ms (repeats). The overall complexity of the algorithm depends mainly on the number of iterations and repeats. Furthermore, on a multi-threaded platform, different repeats can run simultaneously, decreasing the processing time by a factor of the number of threads.

Tabu search and simulated annealing were also considered, but they did not provide further improvements in our tests. The productivity of the operators was consistent with that of the local search. We conclude that the local search is sufficient for our application, where the optimum result is needed in real time and an occasional sub-optimal result can be tolerated.

The limitation of the new operator, link swap, is that it is suitable only for the open-loop case and does not apply to TSPLIB instances.
Therefore, only 2-opt and relocate can be used for these instances. Without link swap, relocate also becomes weaker in the mixing algorithm, which might weaken the result.
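The repeat-level parallelism mentioned above can be sketched as follows (illustrative only; one_repeat is a placeholder for a full local-search run, and for CPU-bound work in Python a process pool would replace the thread pool to sidestep the GIL). Since repeats share no state, they can run concurrently and only the best result is kept:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def one_repeat(seed):
    # Placeholder for one complete local-search run; returns (tour length, seed).
    rng = random.Random(seed)
    return rng.uniform(10.0, 20.0), seed

def parallel_repeats(repeats, workers=4):
    # Run all repeats concurrently and reduce to the shortest tour found.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(one_repeat, range(repeats)))
    return min(results)
```

Because each repeat uses its own seeded random generator, the parallel run is deterministic and matches the serial one.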

Author Contributions: Conceptualization, P.F., R.M.-I., and L.S.; methodology, L.S. and R.M.-I.; writing—original draft preparation, L.S.; writing—review and editing, P.F. and R.M.-I.; supervision, P.F.

Funding: This research received no external funding.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; W.H. Freeman & Co.: New York, NY, USA, 1979; ISBN 0716710455.
2. Papadimitriou, C.H. The Euclidean travelling salesman problem is NP-complete. Theor. Comput. Sci. 1977, 4, 237–244. [CrossRef]
3. Fränti, P.; Mariescu-Istodor, R.; Sengupta, L. O-Mopsi: Mobile Orienteering Game for Sightseeing, Exercising, and Education. ACM Trans. Multimed. Comput. Commun. Appl. 2017, 13, 56. [CrossRef]
4. Vansteenwegen, P.; Souffriau, W.; Van Oudheusden, D. The orienteering problem: A survey. Eur. J. Oper. Res. 2011, 209, 1–10. [CrossRef]
5. Chieng, H.H.; Wahid, N. A Performance Comparison of Genetic Algorithm's Mutation Operators in n-Cities Open Loop Travelling Salesman Problem. In Recent Advances on Soft Computing and Data Mining; Advances in Intelligent Systems and Computing; Herawan, T., Ghazali, R., Deris, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; Volume 287.
6. Gavalas, D.; Konstantopoulos, C.; Mastakas, K.; Pantziou, G. A survey on algorithmic approaches for solving tourist trip design problems. J. Heuristics 2014, 20, 291–328. [CrossRef]
7. Golden, B.L.; Levy, L.; Vohra, R. The Orienteering Problem. Nav. Res. Logist. 1987, 34, 307–318. [CrossRef]
8. Perez, D.; Togelius, J.; Samothrakis, S.; Rohlfshagen, P.; Lucas, S.M. Automated Map Generation for the Physical Traveling Salesman Problem. IEEE Trans. Evol. Comput. 2014, 18, 708–720. [CrossRef]
9. Sengupta, L.; Mariescu-Istodor, R.; Fränti, P. Planning your route: Where to start? Comput. Brain Behav. 2018, 1, 252–265. [CrossRef]
10. Sengupta, L.; Fränti, P. Predicting difficulty of TSP instances using MST. In Proceedings of the IEEE International Conference on Industrial Informatics (INDIN), Helsinki, Finland, June 2019; pp. 848–852.
11. Dantzig, G.B.; Fulkerson, D.R.; Johnson, S.M. Solution of a Large Scale Traveling Salesman Problem; Technical Report P-510; RAND Corporation: Santa Monica, CA, USA, 1954.
12. Held, M.; Karp, R.M. The traveling salesman problem and minimum spanning trees: Part II. Math. Program. 1971, 1, 6–25. [CrossRef]
13. Padberg, M.; Rinaldi, G. A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. SIAM Rev. 1991, 33, 60–100. [CrossRef]
14. Grötschel, M.; Holland, O. Solution of large-scale symmetric travelling salesman problems. Math. Program. 1991, 51, 141–202. [CrossRef]
15. Applegate, D.; Bixby, R.; Chvatal, V. On the solution of traveling salesman problems. Documenta Mathematica, Journal der Deutschen Mathematiker-Vereinigung, Int. Congr. Math. 1988, Extra Volume III, 645–656.
16. Laporte, G. The traveling salesman problem: An overview of exact and approximate algorithms. Eur. J. Oper. Res. 1992, 59, 231–247. [CrossRef]
17. Applegate, D.L.; Bixby, R.E.; Chvatal, V.; Cook, W.J. The Traveling Salesman Problem: A Computational Study; Princeton University Press: Princeton, NJ, USA, 2011.
18. Johnson, D.S.; Papadimitriou, C.H.; Yannakakis, M. How easy is local search? J. Comput. Syst. Sci. 1988, 37, 79–100. [CrossRef]
19. Clarke, G.; Wright, J.W. Scheduling of Vehicles from a Central Depot to a Number of Delivery Points. Oper. Res. 1964, 12, 568–581. [CrossRef]
20. Christofides, N. Worst-Case Analysis of a New Heuristic for the Travelling Salesman Problem; Technical Report 388; Graduate School of Industrial Administration, Carnegie Mellon University: Pittsburgh, PA, USA, 1976.
21. Johnson, D.S.; McGeoch, L.A. The traveling salesman problem: A case study in local optimization. Local Search Comb. Optim. 1997, 1, 215–310.
22. Croes, G.A. A Method for Solving Traveling-Salesman Problems. Oper. Res. 1958, 6, 791–812. [CrossRef]
23. Lin, S.; Kernighan, B.W. An effective heuristic algorithm for the traveling-salesman problem. Oper. Res. 1973, 21, 498–516. [CrossRef]
24. Rego, C.; Glover, F. Local search and metaheuristics. In The Traveling Salesman Problem and Its Variations; Springer: Boston, MA, USA, 2007; pp. 309–368.
25. Okano, H.; Misono, S.; Iwano, K. New TSP construction heuristics and their relationships to the 2-opt. J. Heuristics 1999, 5, 71–88. [CrossRef]
26. Johnson, D.S.; McGeoch, L.A. Experimental analysis of heuristics for the STSP. In The Traveling Salesman Problem and Its Variations; Springer: Boston, MA, USA, 2007; pp. 369–443.
27. Aarts, E.; Aarts, E.H.; Lenstra, J.K. (Eds.) Local Search in Combinatorial Optimization; Princeton University Press: Princeton, NJ, USA, 2003.
28. Jünger, M.; Reinelt, G.; Rinaldi, G. The traveling salesman problem. Handb. Oper. Res. Manag. Sci. 1995, 7, 225–330.
29. Laporte, G. A concise guide to the traveling salesman problem. J. Oper. Res. Soc. 2010, 61, 35–40. [CrossRef]
30. Ahuja, R.K.; Ergun, Ö.; Orlin, J.B.; Punnen, A.P. A survey of very large-scale neighborhood search techniques. Discret. Appl. Math. 2002, 123, 75–102. [CrossRef]
31. Rego, C.; Gamboa, D.; Glover, F.; Osterman, C. Traveling salesman problem heuristics: Leading methods, implementations and latest advances. Eur. J. Oper. Res. 2011, 211, 427–441. [CrossRef]
32. Matai, R.; Singh, S.; Mittal, M.L. Traveling salesman problem: An overview of applications, formulations, and solution approaches. In Traveling Salesman Problem, Theory and Applications; IntechOpen: London, UK, 2010.
33. Laporte, G. The vehicle routing problem: An overview of exact and approximate algorithms. Eur. J. Oper. Res. 1992, 59, 345–358. [CrossRef]
34. Vidal, T.; Crainic, T.G.; Gendreau, M.; Prins, C. Heuristics for multi-attribute vehicle routing problems: A survey and synthesis. Eur. J. Oper. Res. 2013, 231, 1–21. [CrossRef]
35. Gendreau, M.; Hertz, A.; Laporte, G. New insertion and postoptimization procedures for the traveling salesman problem. Oper. Res. 1992, 40, 1086–1094. [CrossRef]
36. Mersmann, O.; Bischl, B.; Bossek, J.; Trautmann, H.; Wagner, M.; Neumann, F. Local Search and the Traveling Salesman Problem: A Feature-Based Characterization of Problem Hardness. In Lecture Notes in Computer Science, Proceedings of Learning and Intelligent Optimization, Paris, France, 16–20 January 2012; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7219.
37. Helsgaun, K. An effective implementation of the Lin–Kernighan traveling salesman heuristic. Eur. J. Oper. Res. 2000, 126, 106–130. [CrossRef]
38. Pan, Y.; Xia, Y. Solving TSP by dismantling cross paths. In Proceedings of the IEEE International Conference on Orange Technologies, Xian, China, 20–23 September 2014; pp. 121–124.
39. Martí, R. Multi-start methods. In Handbook of Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2003; pp. 355–368.
40. O'Neil, M.A.; Burtscher, M. Rethinking the parallelization of random-restart hill climbing: A case study in optimizing a 2-opt TSP solver for GPU execution. In Proceedings of the 8th Workshop on General Purpose Processing using GPUs, San Francisco, CA, USA, 7 February 2015; pp. 99–108.
41. Al-Adwan, A.; Sharieh, A.; Mahafzah, B.A. Parallel heuristic local search algorithm on OTIS hyper hexa-cell and OTIS mesh of trees optoelectronic architectures. Appl. Intell. 2019, 49, 661–688. [CrossRef]
42. Xiang, Y.; Zhou, Y.; Chen, Z. A local search based restart evolutionary algorithm for finding triple product property triples. Appl. Intell. 2018, 48, 2894–2911. [CrossRef]
43. Lawler, E.L.; Lenstra, J.K.; Rinnooy Kan, A.H.G.; Shmoys, D.B. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization; Wiley: Chichester, UK, 1985.
44. Reinelt, G. A traveling salesman problem library. INFORMS J. Comput. 1991, 3, 376–384. [CrossRef]
45. Glover, F. Tabu Search—Part I. ORSA J. Comput. 1989, 1, 190–206. [CrossRef]
46. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [CrossRef] [PubMed]
47. Charon, I.; Hudry, O. Application of the noising method to the travelling salesman problem. Eur. J. Oper. Res. 2000, 125, 266–277. [CrossRef]
48. Chen, S.M.; Chien, C.Y. Solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques. Expert Syst. Appl. 2011, 38, 14439–14450. [CrossRef]
49. Ezugwu, A.E.S.; Adewumi, A.O.; Frîncu, M.E. Simulated annealing based symbiotic organisms search optimization algorithm for traveling salesman problem. Expert Syst. Appl. 2017, 77, 189–210. [CrossRef]
50. Geng, X.; Chen, Z.; Yang, W.; Shi, D.; Zhao, K. Solving the traveling salesman problem based on an adaptive simulated annealing algorithm with greedy search. Appl. Soft Comput. 2011, 11, 3680–3689. [CrossRef]
51. Albayrak, M.; Allahverdi, N. Development a new mutation operator to solve the Traveling Salesman Problem by aid of Genetic Algorithms. Expert Syst. Appl. 2011, 38, 1313–1320. [CrossRef]
52. Nagata, Y.; Soler, D. A new genetic algorithm for the asymmetric traveling salesman problem. Expert Syst. Appl. 2012, 39, 8947–8953. [CrossRef]
53. Singh, S.; Lodhi, E.A. Study of variation in TSP using genetic algorithm and its operator comparison. Int. J. Soft Comput. Eng. 2013, 3, 264–267.
54. Vashisht, V.; Choudhury, T. Open loop travelling salesman problem using genetic algorithm. Int. J. Innov. Res. Comput. Commun. Eng. 2013, 1, 112–116.
55. Quintero-Araujo, C.L.; Gruler, A.; Juan, A.A.; Armas, J.; Ramalhinho, H. Using simheuristics to promote horizontal collaboration in stochastic city logistics. Prog. Artif. Intell. 2017, 6, 275. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).