ZHAW/HSR Print date: 04.02.19 TSM_Alg & FTP_Optimiz

COMBINATORIAL PROBLEMS - P-CLASS

Graph Search
Given: G = (V, E), start node
Goal: Search in a graph

DFS (Depth-First Search)
Stack = LIFO "Last In - First Out": insert and access at the top.
1. Start at a, push it onto the stack.
2. Whenever there is an unmarked neighbour, go there and push it onto the stack.
3. If there is no unmarked neighbour, backtrack: pop the current node from the stack (grey => green) and go to step 2.

BFS (Breadth-First Search)
Queue = FIFO "First In - First Out": insert at the top, access from the bottom.
1. Start at a, put it in the queue.
2. Take the first vertex out of the queue (grey => green). Mark all its unmarked neighbors and put them in the queue (white => grey). Do so until the queue is empty.
Example visit order on the lecture graph: a b d e c f h g.

Minimum Spanning Tree (MST)
Given: Graph G = (V, E, W) with undirected edge set E and positive weights W
Goal: Find a set of edges that connects all vertices of G and has minimum total weight.
Application: Network design (water pipes, electricity cables, chip design)
Algorithms: Kruskal's (optimistic), pessimistic approach, Prim's
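The two traversals above can be sketched in a few lines of Python; the adjacency list is an illustrative example graph, not necessarily the lecture's:

```python
from collections import deque

# Hypothetical undirected example graph as an adjacency list.
graph = {
    'a': ['b'], 'b': ['a', 'd'], 'd': ['b', 'e', 'c'],
    'e': ['d', 'f'], 'c': ['d'], 'f': ['e'],
}

def dfs(start):
    """Depth-first search: stack = LIFO."""
    visited, stack, order = set(), [start], []
    while stack:
        v = stack.pop()                # take from the top of the stack
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        for w in reversed(graph[v]):   # push unmarked neighbours
            if w not in visited:
                stack.append(w)
    return order

def bfs(start):
    """Breadth-first search: queue = FIFO."""
    visited, queue, order = {start}, deque([start]), []
    while queue:
        v = queue.popleft()            # take from the front of the queue
        order.append(v)
        for w in graph[v]:             # mark and enqueue unmarked neighbours
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order
```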

Optimistic Approach = Kruskal's algorithm (1956, Greedy)
Successively build the cheapest connection available that is not redundant.
  Sort the edges of G by increasing weight
  Set E_T = ∅
  For k = 1..n_e do:
    If E_T ∪ {e_k} has no cycle (checked with a union-find data structure):
      Set E_T = E_T ∪ {e_k}
  Return T = (V, E_T)
O(|E| log|E|), dominated by the sort
Example: edges considered in increasing order ab dh fh ef df be eh cd eg bd ad (weights 1 2 3 4 5 6 7 8 9 10 11); total MST weight 33.

Pessimistic Approach
Successively rule out the most expensive edge that is not absolutely needed.
Example: edges considered in decreasing order ad bd eg cd eh be df ef fh dh ab (weights 11 10 9 8 7 6 5 4 3 2 1); total MST weight 33.
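Kruskal's algorithm with the union-find cycle check can be sketched as follows (the edge-list input format and example are assumptions):

```python
def kruskal(n, edges):
    """Kruskal's algorithm: scan edges by increasing weight and keep an
    edge unless it would close a cycle (checked with union-find)."""
    parent = list(range(n))            # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    tree, total = [], 0
    for w, u, v in sorted(edges):      # O(|E| log|E|) for the sort
        ru, rv = find(u), find(v)
        if ru != rv:                   # no cycle: add the edge, merge sets
            parent[ru] = rv
            tree.append((u, v))
            total += w
    return tree, total
```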

Prim's Algorithm (1957, Greedy)
Choose an arbitrary start vertex v_0 and set M = {v_0}. Iteratively add to M the vertex in V \ M that can be reached the cheapest from the current set M, and select the corresponding edge. Continue until M = V.
O(|E| + |V| log|V|) if L is managed with a Brodal queue
  For each vertex u ∈ V do:
    λ[u] = ∞; p[u] = ∅
  Choose a vertex s and set λ[s] := 0
  E_T = ∅
  L = V  // list of vertices not yet in T
  While L ≠ ∅ do:
    Remove from L the vertex u with lowest λ[u]
    If u ≠ s then E_T := E_T ∪ {(p[u], u)}
    For each vertex v ∈ adj[u] do:
      If v ∈ L and λ[v] > c_uv then:
        λ[v] = c_uv; p[v] = u
  Return T = (V, E_T)
Example: edges selected a-b b-e e-f f-h h-d d-c e-g (weights 1 6 4 3 2 8 9); total 33.
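Prim's algorithm can be sketched with a binary heap standing in for the Brodal queue (the adjacency-list format is an assumption):

```python
import heapq

def prim(adj, s):
    """Prim's algorithm: grow the tree from s, always taking the cheapest
    edge leaving the current tree. heapq is a practical stand-in for the
    Brodal queue of the theoretical bound."""
    in_tree, tree_weight = {s}, 0
    heap = [(w, s, v) for v, w in adj[s]]
    heapq.heapify(heap)
    edges = []
    while heap and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(heap)  # cheapest edge leaving the tree
        if v in in_tree:
            continue
        in_tree.add(v)
        edges.append((u, v))
        tree_weight += w
        for x, wx in adj[v]:           # offer the new vertex's edges
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return edges, tree_weight
```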

Marcel Meschenmoser Lecturer: Prof. Dr. Mark Cieliebak & Prof. Dr. Lin Himmelmann Page 1 of 13

Shortest paths

Given: Graph G = (V, E, C) with cost c_ij ≥ 0 for each edge e ∈ E and a start vertex s ∈ V
Goal 1: Find the shortest path from start s to a given vertex j.
Goal 2: Find the shortest paths from start s to all other vertices.
Goal 3: Find the shortest paths between all pairs of vertices.

Algorithms: (Goal 1+2) Dijkstra's, (Goal 3) Floyd-Warshall

Dijkstra's algorithm (1959)
We iteratively compute the shortest distance l(v) for the vertex v closest to v_0 that has not been reached yet. Does not work with negative weights.
G = (V, E, C); λ_j: length of the shortest path to j; p_j: predecessor of j on the shortest path; s: start vertex
O(|E| + |V| log|V|) if L is managed with a Brodal queue
  For all j ∈ V do: λ_j = ∞; p_j = ∅
  λ_s = 0; L = V
  While L ≠ ∅:
    Find i such that λ_i = min(λ_k | k ∈ L)
    L = L \ {i}
    For all j ∈ succ[i] do:
      If j ∈ L and λ_j > λ_i + c_ij:
        λ_j = λ_i + c_ij; p_j = i
  Return lengths λ and predecessors p
(Worked example from the lecture: a table of tentative distances per iteration, with final distances b=3, d=8, e=10, h=11, f=13, c=14, g=15.)

Floyd-Warshall algorithm
minPath(i, j, k) is the length of the shortest path from v_i to v_j using only v_1..v_k as intermediate vertices (∞ if none exists). Relies on dynamic programming; iterate through all rows/columns:
  minPath(i, j, k+1) = min{ minPath(i, j, k), minPath(i, k+1, k) + minPath(k+1, j, k) }
Negative weights are allowed as long as there are no negative cycles.
Example distance matrix after iteration i = 1 (entries x → y show updates):
      a      b   c   d       e       f   g   h
  a   −      3   ∞   9→8     ∞       ∞   ∞   ∞
  b   3      −   ∞   5       7       ∞   ∞   ∞
  c   ∞      ∞   −   6       ∞       ∞   ∞   ∞
  d   9→8    5   6   −       ∞→12    6   ∞   8
  e   ∞      7   ∞   ∞→12    −       6   5   1
  f   ∞      ∞   ∞   6       6       −   ∞   2
  g   ∞      ∞   ∞   ∞       5       ∞   −   ∞
  h   ∞      ∞   ∞   8       1       2   ∞   −

Var: Given: start and goal coordinates. Problem: We only see the immediate neighborhood of our position.
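Dijkstra's algorithm can be sketched with a binary heap replacing the linear min-scan of the pseudocode (the successor-list input format is an assumption):

```python
import heapq

def dijkstra(succ, s):
    """Dijkstra's algorithm (non-negative costs only): repeatedly settle
    the unreached vertex with the smallest tentative distance."""
    dist, pred = {s: 0}, {}
    heap = [(0, s)]
    settled = set()
    while heap:
        d, i = heapq.heappop(heap)
        if i in settled:               # stale heap entry
            continue
        settled.add(i)
        for j, c in succ.get(i, []):   # relax all outgoing edges
            if j not in settled and d + c < dist.get(j, float('inf')):
                dist[j] = d + c
                pred[j] = i
                heapq.heappush(heap, (d + c, j))
    return dist, pred
```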

Max-Flow
Input: directed graph G = (V, E) with capacity c(e) > 0. Two special nodes, source s and sink/target t, are given, s ≠ t.
Goal: Maximize the total amount of flow from s to t, where the flow on edge e does not exceed c(e) and for every node v ≠ s and v ≠ t, incoming flow equals outgoing flow.
Applications: Network routing, transportation, bipartite matching
Algorithms: Ford-Fulkerson, Edmonds-Karp

Greedily finding augmenting paths
Example: paths s-b-e-t (flow 1), s-b-d-t (flow 8), s-d-f-t (flow 3); total 12.
Capacity updates: s-b: 9→8, b-e: 5→4, e-t: 1→0; s-b: 8→0, b-d: 9→1, d-t: 8→0; s-d: 9→6, d-f: 3→0, f-t: 4→1.

Ford-Fulkerson
Idea: Insert backward edges that can be used to (partially) undo previous paths.
O((n_v + n_e) · f*), where f* is the optimal flow
  Set f_total = 0
  While there is a path P from s to t in G:
    Determine the smallest capacity f along the path P
    Add f to f_total
    For each edge u → v on the path:
      decrease c(u → v) by f
      increase c(v → u) by f (backward edge)
      delete the edge if its capacity becomes 0
Example: path 1: s-b-e-t (1), s-b-d-t (8), s-d-f-t (3); path 2 (uses a backward edge): s-d-b-e-f-t (1).

Problem: Inefficient behaviour if the augmenting path is chosen arbitrarily.

Solution: Edmonds-Karp algorithm

Edmonds-Karp algorithm
Idea: In each iteration, use a shortest augmenting path, i.e. a path from s to t with the fewest number of edges. It can be found with BFS in time O(|E|).
Example BFS distances starting at s: s=0, b=1, d=1, e=2, f=2, t=2; shortest augmenting path: s-d-t.

Var: Max-Flow with vertex restrictions
Restriction: The maximum flow through certain vertices is restricted.
Solution: Add an additional vertex before the restricted vertex and a new edge with weight = vertex restriction; direct all ingoing edges to the new vertex (e.g. restrict b by 3).
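Edmonds-Karp can be sketched as repeated BFS over residual capacities (the nested-dict capacity format and example network are assumptions):

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Max flow via shortest (fewest-edge) augmenting paths found by BFS.
    cap[u][v] is the residual capacity of edge u -> v."""
    flow = 0
    while True:
        pred = {s: None}               # BFS tree over residual edges
        queue = deque([s])
        while queue and t not in pred:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in pred:
                    pred[v] = u
                    queue.append(v)
        if t not in pred:              # no augmenting path left
            return flow
        f, v = float('inf'), t         # bottleneck along the path
        while pred[v] is not None:
            f = min(f, cap[pred[v]][v])
            v = pred[v]
        v = t                          # forward capacities down, backward up
        while pred[v] is not None:
            u = pred[v]
            cap[u][v] -= f
            cap[v].setdefault(u, 0)
            cap[v][u] += f
            v = u
        flow += f
```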

Distribution problem reduced to Max-Flow
Insert a source s and connect it with capacity-∞ edges to the first layer.
Insert a sink t and connect it with capacity-∞ edges to the last layer.

Bipartite matching
Make the edges directed. Add source and sink vertices s and t with the corresponding edges. Set all edge weights to 1. Find an integer maximum flow with the Ford-Fulkerson algorithm.

Var: Max-Flow Min-Cut
Goal: Find a minimum cut, i.e. a partition of the vertex set into two sets S and T (with s ∈ S and t ∈ T) whose crossing edges have minimum total capacity.
Applications: Redundancy in networks, reliability in telecommunication, military strategy, optimizing traffic systems

The maximum s-t flow equals the minimum value of all (exponentially many) possible s-t cuts.

Var: Min-Cost Max-Flow (harder)
Each edge e has capacity c(e) and cost cost(e).
Goal: Find the maximum flow that has the minimum total cost.

Often there are several possible maximum flows, each with a different cost per flow unit.
Solution: Extend the Ford-Fulkerson algorithm to take the cost criterion into account.
Example: paths s-b-d-t (2 units, cost 2*6=12), s-b-e-t (1 unit, cost 1*5=5), s-c-e-t (1 unit, cost 1*4=4); capacity = 4, cost = 21.

Until now, we just used the Ford-Fulkerson algorithm while additionally annotating the costs. Now, look for cost-reducing cycles, respecting the capacities. Cancelling such a cycle does not change the total flow from s to t, but decreases the total cost.
Example: cycle s-c-e-b-s with capacity 1 and cost 1*(3-4) = -1 reroutes one unit from s-b-e to s-c-e; capacity = 4, new cost = 20.

COMBINATORIAL PROBLEMS - NP-CLASS

TSP (Traveling Salesperson Problem)
Input: n cities, distance matrix d_ij between cities i and j.
Goal: Find the shortest tour passing exactly once through each city:
  minimize Σ_{i=1..n-1} d_{p_i p_{i+1}} + d_{p_n p_1}
Application: Network design, pipes, cables, chip design, ...
Brute force: O((n-1)!) tours. Algorithms: Nearest Neighbour (= Greedy), MST-based; MST < TSP ≤ 2 · MST

CVRP (Capacitated Vehicle Routing Problem)
Input: n customers and 1 depot; q_i: quantity ordered by customer i; distances d_ij; Q: vehicle capacity
Goal: Minimize the total length driven by the vehicles, such that
  each customer gets his delivery,
  each tour starts and ends at the depot,
  the total demand on each tour does not exceed Q.
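The Nearest Neighbour heuristic for TSP can be sketched as follows (the distance-matrix format and example are assumptions):

```python
def nearest_neighbour_tour(d, start=0):
    """Greedy TSP heuristic: always travel to the nearest unvisited city.
    d is a symmetric distance matrix (list of lists)."""
    n = len(d)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: d[last][j])  # nearest unvisited
        tour.append(nxt)
        unvisited.remove(nxt)
    length = sum(d[tour[i]][tour[i + 1]] for i in range(n - 1))
    return tour, length + d[tour[-1]][tour[0]]          # close the tour
```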

Var: Goal: Minimize the number of trucks/paths.

Steiner Tree Problem
Input: G = (V, E, C) with vertices, edges and weights, and a subset D ⊆ V
Goal: Find a tree of minimum weight containing all vertices of D and, possibly, other vertices of V not in D (Steiner nodes).

Scheduling Problems
Given: n jobs and m machines; each job consists of m tasks. A machine has to be free and the previous task done. Each task has a known processing time p_ij on its machine.
Goal: Find a "good" order in which the jobs should be processed.
Objectives:
- Makespan: minimize the completion time of the last job
- Sum of completion times: (weighted) sum of the completion times of all jobs
- Minimum tardiness: minimize lateness with respect to the individual due dates

Var: Permutation Flow Shop Problem: huge objects or a production line; the ordering of tasks on the machines is fixed; n! permutations.
Var: Flow Shop Problem: smaller objects which can be stored between machines.
Var: Job Shop Problem: e.g. introduction for new employees.

Knapsack Packing Problem
Input: n items, each with a weight w_i and a value v_i; maximum weight capacity W
Goal: Maximize the total value with respect to the maximum weight.
Algorithms: Naive Greedy, Smarter Greedy
Simplest ILP (only one inequality) but NP-complete.
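The "smarter greedy" for the knapsack problem (sort by value/weight ratio) can be sketched as follows; it is a heuristic, not an exact algorithm, and the item format is an assumption:

```python
def greedy_knapsack(items, W):
    """Smarter greedy for 0/1 knapsack: take items by decreasing
    value/weight ratio while capacity allows. Not optimal in general."""
    chosen, value, weight = [], 0, 0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if weight + w <= W:            # item still fits
            chosen.append((w, v))
            weight += w
            value += v
    return chosen, value
```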

Santa Claus Problem
Input: n items, each with a location on earth and a weight; maximum carry weight
Goal: Minimize the weighted distance to deliver all items to their given locations.

Vertex Coloring
Input: Graph
Goal: Find an assignment of colors to the vertices such that no edge connects two identically colored vertices.
Variant: Minimize the number of colors.
Algorithms: First Fitting Color, Decreasing Degree

QAP (Quadratic Assignment Problem)
Given: a set of n activities and n locations, along with the flows between activities and the distances between locations.
  d_jl ∈ D: distance matrix; f_ik ∈ F: flow matrix; x_ij = 1 if activity i is assigned to location j, 0 otherwise
Goal: Assign each activity to a location to minimize the total cost:
  min z = Σ_{i,k=1..n} Σ_{j,l=1..n} f_ik · d_jl · x_ij · x_kl
Neighborhoods: swap the locations of 2 facilities, O(n²); swap the locations of k = 3, 4, ... facilities, O(n^k)

Post office problems
Goal: Find the nearest post office (or ATM nowadays) in a city from your location.
Solution: Use a Voronoi diagram.

Exact cover
Given: subsets U_i, i = 1..n, of a base set M
Find: an exact cover (if one exists), i.e. a choice of sets U_i such that their union is M and no element is contained in more than one of these sets.
Optimization: Find a choice of sets U_i such that their union is M and as few elements as possible are contained in more than one U_i.

n queens problem
Goal: Place n queens on an n × n board such that no two queens can capture each other.

HEURISTICS

General

Heuristic: A heuristic method is based on knowledge acquired by experience on a given problem. Heuristics are opposed to exact algorithms: they do not necessarily provide the best solution, but need only reasonable time.

Meta-Heuristic: A meta-heuristic is a limited set of concepts that can be applied to a large set of combinatorial optimization problems and that allows creating new heuristic methods. It provides support for designing heuristic methods, based on knowledge acquired by designing heuristic methods for various problems.

Trajectory-based: Start with a solution and improve it continuously by exploring its "neighborhood". We obtain a trajectory through the solution space.

Population-based: An evolving population of (partial) solutions, whose members evolve and adapt individually to the problem and search for the optimum. Problem-specific information can be exchanged between the members of the population and passed on to descendants.

Neighborhood size: Too small: the search gets stuck quickly without exploring. Too large: the computation time is too high.

Heuristics

For TSP:
Nearest Neighbour (= Greedy): Start from city 1. Repeat: go to the nearest city not yet visited, until all are visited.
Best Global Edge (= Greedy): Initial S: ∅. R: set of edges that can be added to S such that no cycle is created and no vertex with degree > 2 is created. c(S, e): weight of edge e (this is independent of S).
Maximum Regret (= Greedy): S = {1}. R: set of cities not yet visited. c(S, e) = regret of not going to e from i. Choose the largest c(S, e).
Best Insertion (= Greedy): S = tour on 2 cities. R: set of cities not yet visited. c(S, e) = minimum insertion cost of city e between 2 cities. Choose the smallest c(S, e).

For Vertex Coloring:
First Fitting Color (= Greedy): Select an ordering of the vertices. S = ∅ (set of already colored vertices), R = V (set of vertices not colored yet), C(S, e) = set of colors that can be assigned to a vertex (sorted). Choose the first color in C(S, e).
Decreasing Degree (= Greedy): Sort the vertices by decreasing order of degree; the rest is identical to First Fitting Color.
Dsatur (= Greedy): Choose the uncolored vertex with the highest "saturation degree", i.e. with the maximum number of different adjacent colors. Break ties by choosing the vertex with maximum degree.
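First Fitting Color, with the Decreasing Degree ordering as the default, can be sketched as follows (the adjacency-list format is an assumption):

```python
def first_fitting_color(graph, order=None):
    """Greedy vertex coloring: visit vertices in the given order and
    assign the smallest color not used by an already colored neighbour.
    Default order: decreasing degree."""
    order = order or sorted(graph, key=lambda v: -len(graph[v]))
    color = {}
    for v in order:
        used = {color[w] for w in graph[v] if w in color}
        c = 0
        while c in used:               # first color not taken by neighbours
            c += 1
        color[v] = c
    return color
```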

Meta-Heuristics: Constructive

Random Sampling ("or Building")
Idea: Generate a solution randomly, uniformly in the solution space.
Advantages: simple, works reasonably well, easy to implement. Disadvantages: bad solution quality; a uniform distribution is sometimes not trivial; sometimes it is not easy to find a feasible solution at all -> add a penalty function.
  S = random solution
  Repeat:
    Find another solution randomly
    Keep it if it is better than S
  Until satisfied

Greedy Construction
Idea: Build a solution element by element, by systematically adding the most appropriate ("best") element. Needs a cost function c(S, e) that measures the quality of adding element e to a partial solution S. Adding an element generally implies restrictions for the next elements to add.
Disadvantage: too short-sighted. Works optimally for: MST, Shortest Path, ...
  R = E (set of elements to add)
  Repeat:
    Evaluate c(S, e) for each e ∈ R
    Choose e' which optimizes c(S, e)
    Add e' to the partial solution S
    Remove from R all elements that cannot be added to S anymore
  Until S is a complete solution

Exhaustive Search
Idea: Generate all feasible solutions and find the optimum.
Advantages: guarantees to find an optimal solution; often easy to implement. Disadvantage: the set of feasible solutions is often exponential -> takes too much time.

Pilot Method
Idea: Evaluate the quality of adding an element e to S by completing it to a full solution. The heuristic to complete a "pilot" solution must be chosen, e.g. greedy + local search, ... Remark: the time complexity is increased by the "pilot" heuristic; O(n²) for TSP.
  For all e that can be added to S:
    Complete S + e with the pilot heuristic
  Keep the best solution

Beam Search
Idea: Similar to exhaustive search, but define a beam width (e.g. 2) and browse in depth. Beam width = ∞ -> exhaustive search with BFS.
  For each partial solution:
    Expand up to depth k
    Keep the most promising beam-width solutions

Meta-Heuristics: Local Improvement

Local Improvement = Local Search = Hill Climbing
Idea: Start with a given solution (from a constructive method) and find improvements in its neighbourhood.
Termination: fixed number of steps, no improvement after x steps, or when a target value is reached.
  Repeat:
    Try to find an improved solution
    Accept it if found
  While an improvement is found

Selection:
First Improving Move (take the first move that improves the current solution):
  While improving:
    s' = new solution
    if c(s') < c(s): s = s'
Best Improving Move (take the best move that improves the current solution):
  costOfBestNewSol = ∞
  for each move m in M(s):
    s' = s with m applied
    if c(s') < costOfBestNewSol:
      costOfBestNewSol = c(s'); bestMove = m
  apply bestMove to s

Neighbourhoods:
TSP: search for intersections; 2/3/k-opt; or-opt (move a subchain of vertices somewhere else) -> use a doubly chained list.
CVRP: first fit, random, closest (customer, point of gravity, edge), lowest remaining capacity, change beginning.
Knapsack: shift, invert, transpose (swap).
Desirable properties of a neighbourhood:
- Connectivity: a global optimum can be reached from any feasible solution
- Low diameter: the number of moves for linking any pair of solutions is small
- Low ruggedness: limited number of local optima
- Limited size: the number of neighbour solutions is small
- Easy to evaluate: a neighbour solution must be found without prohibitive computation

Var: Always take the best solution from the neighborhood, even if this decreases the target function.
(+) escapes from local optima
(-) might go back and forth (run into a cycle)
(-) not monotonically improving

Tabu Search
Idea: Similar to local search, but with some memory. Try to avoid steps that go back to previously visited solutions, or that undo the effect of previous steps.
Goal: Promote diversity of the solutions explored, in particular to reduce cyclic behaviour and to escape from local optima.
Tabu list: store the last move / the last n moves / ...
Allow invalid solutions but penalize them (fixed or variable penalty).
Tuning: learn the tabu tenure (increase/decrease or randomize).
  Start with a random solution
  Repeat:
    Find a better solution, considering only steps that are not tabu
    Perform it if better and update the tabu list
  Until stopping criterion reached

Example (10 binary variables):
Iter  Flipped  Current solution       Revenue     Volume     Tabu list
0     -        0 0 0 0 0 0 0 0 0 0    0           0          -
1     4        0 0 0 1 0 0 0 0 0 0    12          14         4
2     1        1 0 0 1 0 0 0 0 0 0    12+11=23    14+33=47   1,4
3     2        1 1 0 1 0 0 0 0 0 0    23+10=33    47+27=74   1,2,4
4     3        1 1 1 1 0 0 0 0 0 0    33+9=42     74+16=90   1,2,3
5     4        1 1 0 0 1 0 0 0 0 0    43- (truncated in source)
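The 2-opt neighbourhood mentioned above, combined with a first-improving local search, can be sketched as follows (distance matrix and example assumed):

```python
def tour_length(d, tour):
    """Length of a closed tour under distance matrix d."""
    n = len(tour)
    return sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(d, tour):
    """First-improving 2-opt: reverse a tour segment whenever that
    shortens the tour (removes crossing edges); repeat until no move helps."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                # reverse the segment between positions i+1 and j
                candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(d, candidate) < tour_length(d, tour):
                    tour = candidate   # accept the first improving move
                    improved = True
    return tour
```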

ACO (Ant Colony Optimization)

Given: distance matrix D = (d_ij)
Trace matrix T = (τ_ij): pheromone trails
Parameters:
  α: weight of the pheromone (0 → nearest neighbour)
  β: weight of the distance (0 → push very few tours)
  ρ: evaporation rate
  τ_0: initial value of the pheromone trail
  Q: constant for the pheromone update
  m: batch size (number of ants)
  maxiter: maximal number of iterations
Algorithm (e.g. for TSP):
  Set parameters
  Initialize pheromone trails
  Do:
    Construct ant solutions
    Apply local search (optional)
    Update pheromones
  While stopping criterion not reached

Var: MMAS (MaxMin Ant System) - one of the most successful ACO variants
Changes:
- Only the best ant updates the pheromone trails.
- Maintains lower and upper bounds τ_min, τ_max for the pheromone values τ_e.
- The pheromone is updated for the "best" solution: either the best of the current iteration or the best-so-far.
- The update depends on the cost of the best solution.
e.g. for TSP

Important: The ants do not need to follow a consecutive path - it is as if they could fly.
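The probabilistic construction step of a single ant can be sketched as follows (the function name and parameter defaults are illustrative, not from the lecture):

```python
import random

def ant_next_city(current, unvisited, tau, d, alpha=1.0, beta=2.0):
    """One ACO construction step: choose the next city with probability
    proportional to pheromone^alpha * (1/distance)^beta."""
    weights = [(tau[current][j] ** alpha) * ((1.0 / d[current][j]) ** beta)
               for j in unvisited]
    return random.choices(list(unvisited), weights=weights)[0]
```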

Randomized Local Search

Noising Methods
Add a random noise when evaluating the moves, with decreasing probability over time. Typical noise: standard deviation. Generalisation of the next three methods.
  Repeat:
    Apply a random move
    Accept if it is better
    Accept a worse solution according to the noise

Var: Threshold Accepting
Accept all moves deteriorating the solution by less than a given threshold.

Var: Great Deluge
Only accept solutions with a given minimal quality; increase the quality level over time (like a rising "flood").

Var: Simulated Annealing
Similar to hill climbing, but allow non-improving moves.
Probability: fixed / decreasing over time / decreasing over time and depending on quality:
  accept y_i with probability min{1, e^((f(x_i) - f(y_i)) / T_i)}
  Start with a random solution
  Repeat:
    Apply a random move
    Accept if it is better
    Accept with the probability above if it is worse
If f(x_i) ≥ f(y_i), the term is ≥ 1 and y_i is accepted.
If T → ∞, every solution y_i will be accepted; if T → 0, only better solutions are accepted.
Advantage: spends more time on good solutions.

Restarting
When local search does not improve anymore, start a new local search from:
- the best solution so far (good solutions are close to each other)
- a randomly generated solution (explore with large variety - GRASP)
- a modification of the best solution (keep structure - VNS)

Var: GRASP (Greedy Randomized Adaptive Search Procedure)
Input: s* = best solution found so far
  Repeat:
    s = minimal partial solution
    construct s' from s with a randomized greedy procedure
    find the local optimum s'' in the neighbourhood of s'
    if f(s'') < f(s*): s* = s''

Var: VNS (Variable Neighbourhood Search)
Idea: Work with p different neighbourhoods.
  Repeat:
    Choose a neighbourhood at random
    Find the local optimum in this neighbourhood
    Accept it if better
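Simulated annealing with the acceptance rule above can be sketched as follows (the geometric cooling schedule and parameter defaults are assumptions):

```python
import math
import random

def simulated_annealing(f, neighbour, x, T=10.0, cooling=0.95, iters=1000):
    """Minimize f: accept a worse neighbour y with probability
    min(1, exp((f(x) - f(y)) / T)), lowering T geometrically."""
    best = x
    for _ in range(iters):
        y = neighbour(x)
        # better moves always pass; worse ones pass with the Boltzmann probability
        if f(y) <= f(x) or random.random() < math.exp((f(x) - f(y)) / T):
            x = y
        if f(x) < f(best):
            best = x
        T = max(T * cooling, 1e-9)     # cool down, avoid division by zero
    return best
```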

Decomposition Methods
The neighbourhood is too large to explore fully: split the problem into smaller problems and optimize local solutions.

VLNS (Very Large Neighbourhood Search)
Idea: Partially explore the large neighbourhood to find improvements.
ILP: fix the values of a subset of the variables and solve; repeat with another subset.
Iterated Local Search: randomly perturb (= stören) the best solution so far, and apply an improving method.

POPMUSIC (Partial Optimization Metaheuristic Under Special Intensification Conditions)
Idea: Decompose a solution into parts, optimize several parts of the solution, and repeat until the optimized portions cover the entire solution space.
Difficulty: sub-problems may not be independent from one another.
  Solution S = s_1 ∪ s_2 ∪ ... ∪ s_p
  O = ∅  // already optimized parts
  Parameter r  // number of "nearest" parts
  While O ≠ S:
    Choose a seed part s_i ∉ O
    Create a subproblem R composed of the r parts "closest" to s_i
    Optimize subproblem R
    If R improved: set O = O \ R
    else: O = O ∪ {s_i}

Examples:
VRP: part = tour of one vehicle; distance = between centres of gravity; sub-problem = a smaller VRP; optimization process: e.g. tabu search.
TSP: part = a single city c on a tour; distance = r adjacent cities on the tour, immediately before/after c; sub-problem = solve the TSP for the r cities (e.g. exhaustively); re-combination: re-insert the subtour into the existing tour.
Permutation flow shop: part = a single job x; distance = take x as seed job and select the r jobs scheduled before and after x; sub-problem = re-schedule the selected jobs (use additional time constraints for start/end times -> still valid); re-combination: shift all jobs as far to the left as possible.

Evolutionary Algorithms / Genetic Algorithms
Idea:
- Selection: survival of the fittest (natural selection)
- Finite lifespan of individuals (generations)
- Inheritance of traits: information is passed on to descendants
- Recombination: information is exchanged between parents
- Diversification (by mutations)
- The population acquires and cultivates shared knowledge (culture, collective memory)
- Local development of populations (different cultures in different regions)
  Choose a suitable encoding
  Start with a random population
  Repeat:
    Create the next generation:
    - assign fitness to individuals
    - natural selection
    - choose parents for reproduction
    - recombination and mutation
  Until stopping criterion satisfied

Terminology:
An individual is a possible solution candidate, e.g. a vector.
The genes are the individual entries of an individual.
The alleles are the concrete values which a gene takes, e.g. 0/1.
The population is the set of all individuals at a given time.
A generation is the population at a specific point in time.
The genotype is the encoded form of an individual.
The phenotype is the decoded form of an individual; it does not depend on the encoding. The phenotypes are the solution candidates.
The fitness function is our quality measure for a solution.

1. Encoding
Desirable: a small change in genotype corresponds to a small change in phenotype -> use Gray code.
  standard binary: 00-01-10-11 (bit distance 1-2); Gray: 00-01-11-10 (bit distance 1)
Example: vector of n integers / binary vector / ...

2. Replacement schemes
General replacement: replace all individuals
Strict elitism: keep only the m best individuals
Weak elitism: the m best individuals are mutated
Delete-n: replace n random individuals
Delete-n-last: replace the n worst individuals, keep the others
Retirement home: store some individuals for later procreation

3. Selection
Selection pressure means that better individuals should have a higher chance of reproduction.
Better exploration when the pressure is low; better exploitation when the pressure is high.
Strategies (increasing selection pressure):
- (unbiased) random selection
- random selection with bias towards better individuals (roulette wheel)
- tournament selection: best with probability p, 2nd-best (1-p)p, 3rd-best (1-p)²p, ..., k-th best (1-p)^(k-1)·p; the larger k, the higher the selection pressure
4. Recombination
Crossing of individuals to exchange and pass on advantageous properties:
- one-point crossover
- two-point crossover
- uniform crossover
- PMX (2-point with repair mechanism)
- adjacency method
Example:     1-point     2-point        uniform
  parent 1   1011|110    1011|000|001   10110
  parent 2   0010|011    0010|110|100   00101
  child 1    0010|110    1011|110|001   10111
  child 2    1011|011    0010|000|100   00100
5. Mutation
Bit flip / position mutation / inversion
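One-point crossover and bit-flip mutation can be sketched as follows (string-encoded genotypes are an assumption):

```python
import random

def one_point_crossover(p1, p2):
    """One-point crossover: cut both parents at the same random position
    and swap the tails."""
    cut = random.randint(1, len(p1) - 1)   # cut inside the string
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(bits, rate=0.1):
    """Bit-flip mutation: flip each gene independently with probability rate."""
    return ''.join('10'[int(b)] if random.random() < rate else b for b in bits)
```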
