Application of a Hill-Climbing Algorithm to Exact and Approximate Inference in Credal Networks

Total Pages: 16

File Type: PDF, Size: 1020 KB

Application of a Hill-Climbing Algorithm to Exact and Approximate Inference in Credal Networks

4th International Symposium on Imprecise Probabilities and Their Applications, Pittsburgh, Pennsylvania, 2005

Andrés Cano, Manuel Gómez, Serafín Moral
Department of Computer Science and Artificial Intelligence, E.T.S. Ingeniería Informática, University of Granada
[email protected], [email protected], [email protected]

Abstract

This paper proposes two new algorithms for inference in credal networks. These algorithms enable probability intervals to be obtained for the states of a given query variable. The first algorithm is approximate and uses the hill-climbing technique in the Shenoy-Shafer architecture to propagate in join trees; the second is exact and is a modification of Rocha and Cozman's branch-and-bound algorithm, but applied to general directed acyclic graphs.

Keywords. Credal network, probability intervals, Bayesian networks, strong independence, hill-climbing, branch-and-bound algorithms.

1 Introduction

A credal network is a graphical structure (a directed acyclic graph (DAG) [9]) which is similar to a Bayesian network [18], where imprecise probabilities are used to represent quantitative knowledge. Each node in the DAG represents a random event and is associated with a set of convex sets of conditional probability distributions.

There are several mathematical models for imprecise probabilities [25]. We consider convex sets of probabilities to be the most suitable for calculating and representing imprecise probabilities, as they are powerful enough to represent the results of basic operations within the model without having to make approximations which cause loss of information (as in interval probabilities).

In this paper, we shall consider the problem of computing upper and lower probabilities for each state of a specific query variable, given a set of observations. A very common hypothesis is that of separately specified credal sets [17, 20] (there is no restriction between the sets of probabilities associated to a variable given two different configurations of its parents). Under this hypothesis, the size of the conditional convex sets is a function of the number of parents [20]. This hypothesis will be accepted in this paper. Some authors have considered the propagation of probabilistic intervals directly in graphical structures [1, 12, 23, 24]. In the proposed procedures, however, there is no guarantee that the calculated intervals are always the same as those obtained when a global computation using the associated convex sets of probabilities is used. In general, it can be said that the calculated bounds are wider than the exact ones. The problem is that exact bounds need a computation with the associated convex sets of probabilities. Following this approach, different approximate and exact methods have been proposed [5, 3, 2, 8, 20, 19]. These assume that there is a convex set of conditional probabilities for each configuration of the parent variables in the dependence graph, and provide a model for obtaining exact bounds through the use of local computations. Working with convex sets, however, may be extremely inefficient: if we have n variables and each variable, X_i, has a convex set with l_i extreme points as the conditional information, the propagation algorithm has a complexity of order O(K · ∏_{i=1}^{n} l_i), where K is the complexity of carrying out a simple probabilistic propagation. This is so because the propagation of convex sets is equivalent to the propagation of all the global probabilities that can be obtained by choosing an exact conditional probability in each convex set.
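To make this cost argument concrete, the following sketch (ours, not the paper's; the function and variable names are illustrative) enumerates every global choice of one extreme point per conditional credal set, runs one ordinary Bayesian-network query for each choice, and keeps the running minima and maxima. The number of iterations is exactly ∏_{i=1}^{n} l_i, which is precisely the cost that the hill-climbing and branch-and-bound algorithms proposed in the paper are designed to avoid.

```python
from itertools import product

def credal_bounds_by_enumeration(credal_cpts, exact_query):
    """Brute-force interval inference in a credal network (illustrative sketch).

    credal_cpts: list of lists; credal_cpts[i] holds the l_i extreme-point
                 conditional probability tables of variable X_i.
    exact_query: a function that runs one ordinary Bayesian-network
                 propagation for a fixed choice of tables and returns
                 p(x_q | x_E) for every state x_q (a dict state -> prob).
    Returns per-state lower and upper bounds for the query variable.
    """
    lower, upper = {}, {}
    # One ordinary propagation per global combination of extreme points:
    # prod(l_i) combinations in total, hence the O(K * prod l_i) cost.
    for choice in product(*credal_cpts):
        posterior = exact_query(list(choice))
        for state, p in posterior.items():
            lower[state] = min(lower.get(state, 1.0), p)
            upper[state] = max(upper.get(state, 0.0), p)
    return lower, upper
```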
In this paper, we shall propose a new approximate algorithm for inference in credal networks. This is a hill-climbing procedure which quickly obtains good inner approximations to the correct intervals. The algorithm is based on the Shenoy-Shafer architecture [22] for propagation in join trees. Rocha, Campos and Cozman [13] present another hill-climbing search, inspired by the Lukatskii-Shapot algorithm, for obtaining accurate inner approximations. We shall also propose a branch-and-bound procedure (also used in [19]) for inference in credal networks, where the initial solution is computed with our hill-climbing algorithm, reducing its running time. This approach is similar to the one in [13]. Campos and Cozman [11] also give a multilinear programming method which provides a very fast branch-and-bound procedure.

The rest of the paper is organized as follows: Section 2 describes the basics of probability propagation in Bayesian networks and the Shenoy-Shafer architecture; Section 3 presents basic notions about credal sets and credal networks; Section 4 introduces probability trees and briefly describes two algorithms based on variable elimination and probability trees for inference in credal networks; Section 5 details the proposed hill-climbing algorithm for inference in credal networks; Section 6 explains how to apply the proposed algorithm in Rocha and Cozman's branch-and-bound algorithm; Section 7 shows the experimental work; and finally Section 8 presents the conclusions.

2 Probability propagation in Bayesian networks

Let X = {X_1, ..., X_n} be a set of variables. Let us assume that each variable X_i takes values on a finite set Ω_{X_i} (the frame of X_i). We shall use x_i to denote one of the values of X_i, x_i ∈ Ω_{X_i}. If I is a set of indices, we shall write X_I for the set {X_i | i ∈ I}. Let N = {1, ..., n} be the set of all the indices. The Cartesian product ∏_{i ∈ I} Ω_{X_i} will be denoted by Ω_{X_I}. The elements of Ω_{X_I} are called configurations of X_I and will be written as x or x_I. In Shenoy and Shafer's [22] terminology, a mapping from a set Ω_{X_I} into R_0^+ will be called a valuation h for X_I. Shenoy and Shafer define two operations on valuations: combination h_1 ⊗ h_2 (multiplication) and marginalization h^{↓J} (adding over the removed variables).

A Bayesian network is a directed acyclic graph (see the left side of Figure 1), where each node represents a random event X_i, and the topology of the graph shows the independence relations between variables according to the d-separation criterion [18]. Each node X_i also has a conditional probability distribution p_i(X_i | Y) for that variable given its parents Y. Following Shafer and Shenoy's terminology, these conditional distributions can be considered as valuations. A Bayesian network determines a unique joint probability distribution:

    p(X = x) = \prod_{i \in N} p_i(x_i \mid y_i), \quad \forall x \in \Omega_X    (1)

An observation is the knowledge of the exact value X_i = x_i^j of a variable. The set of observed variables is denoted by X_E; the configuration for these variables is denoted by x_E and is called the evidence set. E will be the set of indices of the observed variables.

Figure 1: A Bayesian network and its join tree

Each observation, X_i = x_i^j, is represented by means of a valuation which is a Dirac function defined on Ω_{X_i} as δ_{X_i}(x_i; x_i^j) = 1 if x_i = x_i^j, x_i ∈ Ω_{X_i}, and δ_{X_i}(x_i; x_i^j) = 0 if x_i ≠ x_i^j.

The aim of probability propagation algorithms is to calculate the a posteriori probability function p(x_q | x_E) (for every x_q ∈ Ω_{X_q}) by making local computations, where X_q is a given query variable. This distribution verifies:

    p(x_q \mid x_E) \propto \left( \prod_{X_i} p(x_i \mid y_i) \prod_{x_i^j \in x_E} \delta_{X_i}(x_i; x_i^j) \right)^{\downarrow X_q}    (2)

In fact, the previous formula is the expression for p(x_q, x_E); p(x_q | x_E) can be obtained from p(x_q, x_E) by normalization.

A known propagation algorithm can be constructed by transforming the DAG into a join tree. For example, the right-hand tree in Figure 1 shows a possible join tree for the DAG on the left. There are several schemes [16, 22, 21, 14] for propagating on join trees.
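As a concrete reading of formula (2), the sketch below (illustrative names and data layout, not code from the paper) computes the unnormalized values p(x_q, x_E) by enumerating all configurations, multiplying the conditional distributions, discarding configurations incompatible with the evidence (the role of the Dirac functions), and finally normalizing. Join-tree propagation obtains the same result using only local computations.

```python
from itertools import product

def query_by_global_enumeration(variables, cpts, parents, evidence, query):
    """Compute p(x_q | x_E) directly from formula (2), without a join tree.

    variables: dict name -> list of states (the frames Omega_Xi)
    cpts:      dict name -> function(child_value, parent_config_tuple) -> prob
    parents:   dict name -> tuple of parent names
    evidence:  dict name -> observed state (the evidence set x_E)
    query:     name of the query variable X_q
    """
    names = list(variables)
    scores = {xq: 0.0 for xq in variables[query]}
    for config in product(*(variables[n] for n in names)):
        assignment = dict(zip(names, config))
        # Dirac functions: configurations inconsistent with x_E contribute 0.
        if any(assignment[n] != v for n, v in evidence.items()):
            continue
        # Product of all conditional distributions p(x_i | y_i).
        prob = 1.0
        for n in names:
            prob *= cpts[n](assignment[n], tuple(assignment[p] for p in parents[n]))
        scores[assignment[query]] += prob          # marginalization onto X_q
    total = sum(scores.values())                   # equals p(x_E)
    return {xq: s / total for xq, s in scores.items()}
```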
We shall follow Shafer and Shenoy's scheme [22]. Every node in the join tree has a valuation Ψ_{C_i} attached, initially set to the identity mapping. There are two messages (valuations), M_{C_i→C_j} and M_{C_j→C_i}, between every two adjacent nodes C_i and C_j: M_{C_i→C_j} is the message that C_i sends to C_j, and M_{C_j→C_i} is the message that C_j sends to C_i. Every conditional distribution p_i and every observation δ_i will be assigned to one node C_i which contains all its variables. Each node obtains a (possibly empty) set of conditional distributions and Dirac functions, which must then be combined into a single valuation.

The propagation algorithm is performed by crossing the join tree from leaves to root and then from root to leaves, updating messages as follows:

    M_{C_i \to C_j} = \left( \Psi_{C_i} \otimes \left( \bigotimes_{C_k \in Ady(C_i, C_j)} M_{C_k \to C_i} \right) \right)^{\downarrow C_i \cap C_j}    (3)

where Ady(C_i, C_j) is the set of nodes adjacent to C_i with the exception of C_j.

Once the propagation has been performed, the a posteriori probability distribution for X_q, p(X_q | x_E), can be calculated by looking for a node C_i containing X_q and normalizing the following expression:

    p(X_q, x_E) = \left( \Psi_{C_i} \otimes \left( \bigotimes_{C_j \in Ady(C_i)} M_{C_j \to C_i} \right) \right)^{\downarrow X_q}    (4)

where Ady(C_i) is the set of nodes adjacent to C_i. The normalization factor is the probability of the given evidence, which can be calculated from the valuation.

The propagation of credal sets is completely analogous to the propagation of probabilities; the procedures are the same. Here, we shall only describe the main differences; further details can be found in [5]. Valuations are convex sets of possible probabilities, with a finite number of vertices. A conditional valuation is a convex set of conditional probability distributions. An observation of a value for a variable will be represented in the same way as in the probabilistic case.
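The following is a minimal sketch of the two-pass Shenoy-Shafer computation of equations (3) and (4) for the purely probabilistic case, with valuations represented as dictionaries from configurations to non-negative numbers (the representation and all names are ours, not the paper's; the credal case would carry finite sets of such tables instead of single tables):

```python
from itertools import product

def combine(h1, vars1, h2, vars2, frames):
    """Combination h1 (x) h2: pointwise multiplication on the union domain.
    Each valuation is keyed by configuration tuples in its own variable order."""
    union = sorted(set(vars1) | set(vars2))
    result = {}
    for config in product(*(frames[v] for v in union)):
        a = dict(zip(union, config))
        result[config] = h1[tuple(a[v] for v in vars1)] * h2[tuple(a[v] for v in vars2)]
    return result, union

def marginalize(h, vars_, keep):
    """Marginalization h^{down keep}: sum over the removed variables."""
    kept = [v for v in vars_ if v in keep]
    result = {}
    for config, value in h.items():
        key = tuple(c for v, c in zip(vars_, config) if v in keep)
        result[key] = result.get(key, 0.0) + value
    return result, kept

def message(src, dst, psi, domains, adjacency, frames, cache):
    """M_{src -> dst} of equation (3), computed recursively (leaves towards dst)."""
    if (src, dst) in cache:
        return cache[(src, dst)]
    h, vars_ = dict(psi[src]), list(domains[src])
    for nb in adjacency[src]:
        if nb != dst:                      # Ady(C_i, C_j): all neighbours but dst
            m, mv = message(nb, src, psi, domains, adjacency, frames, cache)
            h, vars_ = combine(h, vars_, m, mv, frames)
    out = marginalize(h, vars_, set(domains[src]) & set(domains[dst]))
    cache[(src, dst)] = out
    return out

def posterior(xq, node, psi, domains, adjacency, frames):
    """Equation (4): combine Psi_{C_i} with all incoming messages, project on X_q,
    then normalize (the normalization factor is p(x_E))."""
    h, vars_ = dict(psi[node]), list(domains[node])
    for nb in adjacency[node]:
        m, mv = message(nb, node, psi, domains, adjacency, frames, {})
        h, vars_ = combine(h, vars_, m, mv, frames)
    marg, _ = marginalize(h, vars_, {xq})
    total = sum(marg.values())
    return {k[0]: v / total for k, v in marg.items()}
```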
Recommended publications
  • A Branch-and-Price Approach with MILP Formulation to Modularity Density Maximization on Graphs
    A BRANCH-AND-PRICE APPROACH WITH MILP FORMULATION TO MODULARITY DENSITY MAXIMIZATION ON GRAPHS (arXiv:1705.02961v3 [cs.SI], 27 Jun 2017). KEISUKE SATO, Signalling and Transport Information Technology Division, Railway Technical Research Institute, 2-8-38 Hikari-cho, Kokubunji-shi, Tokyo 185-8540, Japan. YOICHI IZUNAGA, Information Systems Research Division, The Institute of Behavioral Sciences, 2-9 Ichigayahonmura-cho, Shinjyuku-ku, Tokyo 162-0845, Japan. Abstract. For clustering of an undirected graph, this paper presents an exact algorithm for the maximization of modularity density, a more complicated criterion to overcome drawbacks of the well-known modularity. The problem can be interpreted as the set-partitioning problem, which reminds us of its integer linear programming (ILP) formulation. We provide a branch-and-price framework for solving this ILP, or column generation combined with branch-and-bound. Above all, we formulate the column generation subproblem to be solved repeatedly as a simpler mixed integer linear programming (MILP) problem. Acceleration techniques called the set-packing relaxation and the multiple-cutting-planes-at-a-time combined with the MILP formulation enable us to optimize the modularity density for famous test instances including ones with over 100 vertices in around four minutes by a PC. Our solution method is deterministic and the computation time is not affected by any stochastic behavior. For one of them, column generation at the root node of the branch-and-bound tree provides a fractional upper bound solution and our algorithm finds an integral optimal solution after branching. E-mail addresses: (Keisuke Sato) [email protected], (Yoichi Izunaga) [email protected].
  • AI-KI-B Heuristic Search
    AI-KI-B Heuristic Search. Ute Schmid & Diedrich Wolter. Practice: Johannes Rabold. Cognitive Systems and Smart Environments, Applied Computer Science, University of Bamberg. Last change: 3 June 2019, 10:05. Outline: Introduction of heuristic search algorithms, based on foundations of problem spaces and search • Uninformed Systematic Search • Depth-First Search (DFS) • Breadth-First Search (BFS) • Complexity of Blocks-World • Cost-based Optimal Search • Uniform Cost Search • Cost Estimation (Heuristic Function) • Heuristic Search Algorithms • Hill Climbing (Depth-First, Greedy) • Branch and Bound Algorithms (BFS-based) • Best First Search • A* • Designing Heuristic Functions • Problem Types. Cost and Cost Estimation: • "Real" cost is known for each operator. • Accumulated cost g(n) for a leaf node n on a partially expanded path can be calculated. • For problems where each operator has the same cost or where no information about costs is available, all operator applications have equal cost values. For cost values of 1, accumulated costs g(n) are equal to path-length d. • Sometimes available: heuristics for estimating the remaining costs to reach the final state. • ĥ(n): estimated costs to reach a goal state from node n. • "Bad" heuristics can misguide search! Cost and Cost Estimation (cont.): Evaluation function: f̂(n) = g(n) + ĥ(n). • "True costs" of an optimal path from an initial state s to a final state: f(s). • For a node n on this path, f can be decomposed into the already performed steps with cost g(n) and the yet to perform steps with true cost h(n). • ĥ(n) can be an estimation which is greater or smaller than the true costs.
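    The evaluation function f̂(n) = g(n) + ĥ(n) above drives best-first search. A minimal sketch (all names are illustrative; successors(state) is assumed to yield (next_state, step_cost) pairs): with an admissible ĥ this is A*, and with ĥ = 0 it reduces to uniform-cost search.

```python
import heapq
from itertools import count

def best_first_search(start, is_goal, successors, h):
    """Best-first search ordered by f(n) = g(n) + h(n) (illustrative sketch)."""
    tie = count()                                   # tie-breaker for the heap
    frontier = [(h(start), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                                # stale queue entry
        if is_goal(state):
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost                           # accumulated cost g(n)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
    return None, float("inf")
```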
  • CAP 4630 Artificial Intelligence
    CAP 4630 Artificial Intelligence. Instructor: Sam Ganzfried, [email protected]. • http://www.ultimateaiclass.com/ • https://moodle.cis.fiu.edu/ • HW1 out 9/5 today, due 9/28 (maybe 10/3) – Remember that you have up to 4 late days to use throughout the semester. – https://www.cs.cmu.edu/~sganzfri/HW1_AI.pdf – http://ai.berkeley.edu/search.html • Office hours • https://www.cs.cmu.edu/~sganzfri/Calendar_AI.docx • Midterm exam: on 10/19 • Final exam pushed back (likely on 12/12 instead of 12/5) • Extra lecture on NLP on 11/16 • Will likely only cover 1-1.5 lectures on logic and on planning • Only 4 homework assignments instead of 5. Heuristic functions. 8-puzzle: • The object of the puzzle is to slide the tiles horizontally or vertically into the empty space until the configuration matches the goal configuration. • The average solution cost for a randomly generated 8-puzzle instance is about 22 steps. The branching factor is about 3 (when the empty tile is in the middle, four moves are possible; when it is in a corner, two; and when it is along an edge, three). This means that an exhaustive tree search to depth 22 would look at about 3^22 ≈ 3.1×10^10 states. A graph search would cut this down by a factor of about 170,000 because only 181,440 distinct states are reachable (homework exercise). This is a manageable number, but for a 15-puzzle, it would be 10^13, so we will need a good heuristic function.
  • Heuristic Search Viewed As Path Finding in a Graph
    Heuristic Search Viewed as Path Finding in a Graph. Ira Pohl, IBM Thomas J. Watson Research Center, Yorktown Heights, New York. Recommended by E. J. Sandewall. ABSTRACT: This paper presents a particular model of heuristic search as a path-finding problem in a directed graph. A class of graph-searching procedures is described which uses a heuristic function to guide search. Heuristic functions are estimates of the number of edges that remain to be traversed in reaching a goal node. A number of theoretical results for this model, and the intuition for these results, are presented. They relate the efficiency of search to the accuracy of the heuristic function. The results also explore efficiency as a consequence of the reliance or weight placed on the heuristics used. I. Introduction. Heuristic search has been one of the important ideas to grow out of artificial intelligence research. It is an ill-defined concept, and has been used as an umbrella for many computational techniques which are hard to classify or analyze. This is beneficial in that it leaves the imagination unfettered to try any technique that works on a complex problem. However, leaving the concept vague has meant that the same ideas are rediscovered, often cloaked in other terminology rather than abstracting their essence and understanding the procedure more deeply. Often, analytical results lead to more efficient procedures. Such has been the case in sorting [1] and matrix multiplication [2], and the same is hoped for this development of heuristic search. This paper attempts to present an overview of recent developments in formally characterizing heuristic search.
  • Cooperative and Adaptive Algorithms Lecture 6 Allaa (Ella) Hilal, Spring 2017 May, 2017 1 Minute Quiz (Ungraded)
    Cooperative and Adaptive Algorithms, Lecture 6. Allaa (Ella) Hilal, Spring 2017. 1 Minute Quiz (Ungraded). Select whether these statements are true (T) or false (F): • Uniform-cost search is a special case of breadth-first search: F — breadth-first search is a special case of uniform-cost search when all step costs are equal. • Breadth-first search, depth-first search and uniform-cost search are special cases of best-first search: T — breadth-first search is best-first search with f(n) = depth(n); depth-first search is best-first search with f(n) = -depth(n); uniform-cost search is best-first search with f(n) = g(n). • A* is a special case of uniform-cost search: F — uniform-cost search is A* search with h(n) = 0. Informed Search Strategies: Hill Climbing Search. • Tries to improve the efficiency of depth-first search. • Informed depth-first algorithm. • An iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by incrementally changing a single element of the solution. • It sorts the successors of a node (according to their heuristic values) before adding them to the list to be expanded.
  • Fast and Efficient Black Box Optimization Using the Parameter-Less Population Pyramid
    Fast and Efficient Black Box Optimization Using the Parameter-less Population Pyramid. B. W. Goldman, [email protected], Department of Computer Science and Engineering, Michigan State University, East Lansing, 48823, USA. W. F. Punch, [email protected], Department of Computer Science and Engineering, Michigan State University, East Lansing, 48823, USA. doi:10.1162/EVCO_a_00148. Abstract: The parameter-less population pyramid (P3) is a recently introduced method for performing evolutionary optimization without requiring any user-specified parameters. P3's primary innovation is to replace the generational model with a pyramid of multiple populations that are iteratively created and expanded. In combination with local search and advanced crossover, P3 scales to problem difficulty, exploiting previously learned information before adding more diversity. Across seven problems, each tested using on average 18 problem sizes, P3 outperformed all five advanced comparison algorithms. This improvement includes requiring fewer evaluations to find the global optimum and better fitness when using the same number of evaluations. Using both algorithm analysis and comparison, we find P3's effectiveness is due to its ability to properly maintain, add, and exploit diversity. Unlike the best comparison algorithms, P3 was able to achieve this quality without any problem-specific tuning. Thus, unlike previous parameter-less methods, P3 does not sacrifice quality for applicability. Therefore we conclude that P3 is an efficient, general, parameter-less approach to black box optimization which is more effective than existing state-of-the-art techniques. Keywords: Genetic algorithms, linkage learning, local search, parameter-less. 1 Introduction. A primary purpose of evolutionary optimization is to efficiently find good solutions to challenging real-world problems with minimal prior knowledge about the problem itself.
  • Backtracking / Branch-And-Bound
    Backtracking / Branch-and-Bound. Optimisation problems are problems that have several valid solutions; the challenge is to find an optimal solution. How optimal is defined depends on the particular problem. Examples of optimisation problems are: Traveling Salesman Problem (TSP). We are given a set of n cities, with the distances between all cities. A traveling salesman, who is currently staying in one of the cities, wants to visit all other cities and then return to his starting point, and he is wondering how to do this. Any tour of all cities would be a valid solution to his problem, but our traveling salesman does not want to waste time: he wants to find a tour that visits all cities and has the smallest possible length of all such tours. So in this case, optimal means: having the smallest possible length. 1-Dimensional Clustering. We are given a sorted list x_1, ..., x_n of n numbers, and an integer k between 1 and n. The problem is to divide the numbers into k subsets of consecutive numbers (clusters) in the best possible way. A valid solution is now a division into k clusters, and an optimal solution is one that has the nicest clusters. We will define this problem more precisely later. Set Partition. We are given a set V of n objects, each having a certain cost, and we want to divide these objects among two people in the fairest possible way. In other words, we are looking for a subdivision of V into two subsets V1 and V2 such that |\sum_{v \in V1} cost(v) - \sum_{v \in V2} cost(v)| is as small as possible.
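    A minimal backtracking sketch with a simple bound for the set-partition variant described above (the names and the pruning bound are ours, not the notes'):

```python
def best_partition(costs):
    """Backtracking with branch-and-bound for the set-partition problem:
    split the objects into V1 and V2 so that |cost(V1) - cost(V2)| is minimal."""
    n = len(costs)
    suffix = [0] * (n + 1)
    for i in range(n - 1, -1, -1):           # suffix[i] = total cost of objects i..n-1
        suffix[i] = suffix[i + 1] + costs[i]
    best = {"diff": float("inf"), "v1": set()}

    def extend(i, sum_v1, sum_v2, v1):
        diff = abs(sum_v1 - sum_v2)
        if i == n:
            if diff < best["diff"]:
                best["diff"], best["v1"] = diff, set(v1)
            return
        # Bound: the unassigned objects can reduce the imbalance by at most suffix[i].
        if diff - suffix[i] >= best["diff"]:
            return                            # prune this branch
        extend(i + 1, sum_v1 + costs[i], sum_v2, v1 | {i})   # object i goes to V1
        extend(i + 1, sum_v1, sum_v2 + costs[i], v1)         # object i goes to V2

    extend(0, 0, 0, set())
    v1 = best["v1"]
    return v1, {j for j in range(n) if j not in v1}, best["diff"]

# Example: best_partition([8, 7, 6, 5, 4]) returns ({0, 1}, {2, 3, 4}, 0).
```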
  • Artificial Intelligence 1: Informed Search
    Informed Search II. 1. When A* fails – hill climbing, simulated annealing. 2. Genetic algorithms. When A* doesn't work: AIMA 4.1. A few slides adapted from CS 471, UMBC and Eric Eaton (in turn, adapted from slides by Charles R. Dyer, University of Wisconsin-Madison). Outline: Local Search: Hill Climbing; Escaping Local Maxima: Simulated Annealing; Genetic Algorithms. Local search and optimization. Local search: • Use a single current state and move to neighboring states. Idea: start with an initial guess at a solution and incrementally improve it until it is one. Advantages: • Uses very little memory. • Often finds reasonable solutions in large or infinite state spaces. Useful for pure optimization problems: • Find or approximate the best state according to some objective function. • Optimal if the space to be searched is convex. Hill Climbing. Hill climbing on a surface of states: h(s) is an estimate of distance from a peak (smaller is better), or height defined by an evaluation function (greater is better). Hill-climbing search: I. While there are uphill points: move in the direction of increasing evaluation function f. II. Let s_next = arg max_s f(s), where s ranges over the successor states of the current state n. • If f(n) < f(s), then move to s. • Otherwise halt at n. Properties: • Terminates when a peak is reached. • Does not look ahead of the immediate neighbors of the current state. • Chooses randomly among the set of best successors, if there is more than one. • Doesn't backtrack, since it doesn't remember where it's been, a.k.a.
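    A minimal sketch of the hill-climbing loop described in these slides (illustrative names; steepest ascent with random tie-breaking among the best successors):

```python
import random

def hill_climbing(initial, successors, f):
    """Steepest-ascent hill climbing: move to the best successor,
    break ties at random, and halt when no successor improves f."""
    current = initial
    while True:
        candidates = successors(current)
        if not candidates:
            return current
        best_value = max(f(s) for s in candidates)
        if best_value <= f(current):        # no uphill move: a (possibly local) peak
            return current
        best = [s for s in candidates if f(s) == best_value]
        current = random.choice(best)       # random tie-breaking among the best
```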
  • Module 5: Backtracking
    Module 5: Backtracking. Contents: 1. Backtracking: 1.1. General method; 1.2. N-Queens problem; 1.3. Sum of subsets problem; 1.4. Graph coloring; 1.5. Hamiltonian cycles. 2. Branch and Bound: 2.1. Assignment Problem; 2.2. Travelling Sales Person problem. 3. 0/1 Knapsack problem: 3.1. LC Branch and Bound solution; 3.2. FIFO Branch and Bound solution. 4. NP-Complete and NP-Hard problems: 4.1. Basic concepts; 4.2. Non-deterministic algorithms; 4.3. P, NP, NP-Complete, and NP-Hard classes. 1. Backtracking. Some problems can be solved by exhaustive search. The exhaustive-search technique suggests generating all candidate solutions and then identifying the one (or the ones) with a desired property. Backtracking is a more intelligent variation of this approach. The principal idea is to construct solutions one component at a time and evaluate such partially constructed candidates as follows. If a partially constructed solution can be developed further without violating the problem's constraints, it is done by taking the first remaining legitimate option for the next component. If there is no legitimate option for the next component, no alternatives for any remaining component need to be considered. In this case, the algorithm backtracks to replace the last component of the partially constructed solution with its next option. It is convenient to implement this kind of processing by constructing a tree of choices being made, called the state-space tree. Its root represents an initial state before the search for a solution begins. The nodes of the first level in the tree represent the choices made for the first component of a solution; the nodes of the second level represent the choices for the second component, and so on.
  • A. Local Beam Search with K=1. A. Local Beam Search with K = 1 Is Hill-Climbing Search
    4. Give the name that results from each of the following special cases: a. Local beam search with k=1. a. Local beam search with k = 1 is hill-climbing search. b. Local beam search with one initial state and no limit on the number of states retained. b. Local beam search with k = ∞: strictly speaking, this doesn’t make sense. The idea is that if every successor is retained (because k is unbounded), then the search resembles breadth-first search in that it adds one complete layer of nodes before adding the next layer. Starting from one state, the algorithm would be essentially identical to breadth-first search except that each layer is generated all at once. c. Simulated annealing with T=0 at all times (and omitting the termination test). c. Simulated annealing with T = 0 at all times: ignoring the fact that the termination step would be triggered immediately, the search would be identical to first-choice hill climbing because every downward successor would be rejected with probability 1. d. Simulated annealing with T=infinity at all times. d. Simulated annealing with T = infinity at all times: ignoring the fact that the termination step would never be triggered, the search would be identical to a random walk because every successor would be accepted with probability 1. Note that, in this case, a random walk is approximately equivalent to depth-first search. e. Genetic algorithm with population size N=1. e. Genetic algorithm with population size N = 1: if the population size is 1, then the two selected parents will be the same individual; crossover yields an exact copy of the individual; then there is a small chance of mutation.
  • On the Design of (Meta)Heuristics
    On the design of (meta)heuristics. Tony Wauters, CODeS Research group, KU Leuven, Belgium. There are two types of people: • Those who use heuristics • And those who don't ☺. Heuristics. Origin: Ancient Greek: εὑρίσκω, "find" or "discover". Oxford Dictionary: Proceeding to a solution by trial and error or by rules that are only loosely defined. Properties of heuristics: + Fast* + Scalable* + High quality solutions* + Solve any type of problem (complex constraints, non-linear, …) - Problem specific (but usually transferable to other problems) - Cannot guarantee optimality. (* if well designed.) Heuristic types: • Constructive heuristics • Metaheuristics • Hybrid heuristics • Matheuristics: hybrid of metaheuristics and mathematical programming • Other hybrids: machine learning, constraint programming, … • Hyperheuristics. Constructive heuristics: • Incrementally construct a solution from scratch. • Easy to understand and implement. • Usually very fast. • Reasonably good solutions. Metaheuristics: A metaheuristic is a high-level problem-independent algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms. Instead of reinventing the wheel when developing a heuristic
  • Job Shop Scheduling with Metaheuristics for Car Workshops
    Job Shop Scheduling with Metaheuristics for Car Workshops. Hein de Haan, s2068125. Master Thesis Artificial Intelligence, University of Groningen, July 2016. First Supervisor: Dr. Marco Wiering (Artificial Intelligence, University of Groningen). Second Supervisor: Prof. Dr. Lambert Schomaker (Artificial Intelligence, University of Groningen). Abstract: For this thesis, different algorithms were created to solve the problem of Job Shop Scheduling with a number of constraints. More specifically, the setting of a (simplified) car workshop was used: the algorithms had to assign all tasks of one day to the mechanics, taking into account minimum and maximum finish times of tasks and mechanics, the use of bridges, qualifications and the delivery and usage of car parts. The algorithms used in this research are Hill Climbing, a Genetic Algorithm with and without a Hill Climbing component, two different Particle Swarm Optimizers with and without Hill Climbing, Max-Min Ant System with and without Hill Climbing and Tabu Search. Two experiments were performed: one focussed on the speed of the algorithms, the other on their performance. The experiments point towards the Genetic Algorithm with Hill Climbing as the best algorithm for this problem. Contents: 1 Introduction (1.1 Scheduling; 1.2 Research Question); 2 Theoretical Background of Job Shop Scheduling (2.1 Formal Description; 2.2 Early Work; 2.3 Current Algorithms: 2.3.1 Simulated Annealing, 2.3.2 Neural Networks); 3 Methods (3.1 Introduction; 3.2 Hill Climbing: 3.2.1 Introduction, 3.2.2 Hill Climbing for Job Shop Scheduling