Constraint Propagation, Graph Traversal, and Backtracking Constraint Satisfaction

Total Pages: 16

File Type: PDF, Size: 1020 KB

Constraint Propagation, Graph Traversal, and Backtracking Constraint Satisfaction: Solving Sudoku Puzzles

Constraint Satisfaction
• Given some constraints, can I find a solution?
• e.g. Given the contents of my fridge, is there a nutritious dinner to be made?
• e.g. Can we find some values such that this equation evaluates to True? A & ~B

Constraint Propagation
• Can we reduce the number of constraints in our problem?
• e.g. We have hour-long classes: 1 at 10, 1 at 10:30, 2 at 11. How many classrooms will we need?

Graph Traversal
• One of the most fundamental graph problems is to traverse every edge and vertex in a graph.
• For correctness, we must do the traversal in a systematic way so that we don't miss anything.
• For efficiency, we must make sure we visit each edge at most twice.
• Since a maze is just a graph, such an algorithm must be powerful enough to enable us to get out of an arbitrary maze.

Marking Vertices
• The key idea is that we must mark each vertex when we first visit it, and keep track of what we have not yet completely explored.
• Each vertex will always be in one of the following three states:
  1. Undiscovered: the vertex in its initial, untouched state.
  2. Discovered: the vertex after we have encountered it, but before we have checked out all its incident edges.
  3. Processed: the vertex after we have visited all its incident edges.
• A vertex cannot be processed before we discover it, so over the course of the traversal the state of each vertex progresses from undiscovered to discovered to processed.

To-Do List
• We must also maintain a structure containing all the vertices we have discovered but not yet completely explored.
• Initially, only a single start vertex is considered to be discovered.
• To completely explore a vertex, we look at each edge going out of it. For each edge which goes to an undiscovered vertex, we mark it discovered and add it to the list of work to do.
• Note that regardless of what order we fetch the next vertex to explore, each edge is considered exactly twice, once when each of its endpoints is explored.

In what order should we process vertices?
• Breadth-first search: First-in, First-out
• Depth-first search: Last-in, First-out

Depth-first Search
http://logicalgenetics.com/solving-sudoku-puzzles-using-depth-first-search/

Backtracking
• A general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems. - Wikipedia
• Term coined in the 1950s by D. H. Lehmer

Eight Queens Problem
• Give all arrangements of eight queens on an 8x8 chessboard so that no queen attacks another.
• No two queens are on the same row, column, or diagonal.
• Problem invented by Max Bezzel in 1848.

Algorithm
Start in the leftmost column.
If all queens are placed, return true.
For every possible choice among the rows in this column:
  if the queen can be placed safely there, make that choice and then recursively try to place the rest of the queens;
  if the recursion is successful, return true;
  if not successful, remove the queen and try another row in this column.
If all rows have been tried and nothing worked, return false to trigger backtracking.
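The Algorithm slide above maps directly onto a recursive backtracking solver. A minimal Python sketch (not part of the original slides; the column-by-column placement and the safe() helper are illustrative):

def solve_n_queens(n=8):
    """Place n queens so that no two share a row, column, or diagonal."""
    placement = []  # placement[col] = row of the queen in that column

    def safe(row, col):
        # Conflict if an earlier queen shares the row or a diagonal.
        return all(r != row and abs(r - row) != abs(c - col)
                   for c, r in enumerate(placement))

    def place(col):
        if col == n:                # all queens are placed
            return True
        for row in range(n):        # every possible choice among the rows
            if safe(row, col):
                placement.append(row)
                if place(col + 1):  # recursively place the rest
                    return True
                placement.pop()     # backtrack: remove queen, try another row
        return False                # nothing worked in this column

    return placement if place(0) else None

print(solve_n_queens(8))  # one solution, e.g. [0, 4, 7, 5, 2, 6, 1, 3]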
Recommended publications
  • Lecture 4 Dynamic Programming
Lecture 4: Dynamic Programming. Last update: Jan 19, 2021. References: Algorithms, Jeff Erickson, Chapter 3; Algorithms, Gopal Pandurangan, Chapter 6.
Dynamic Programming. Backtracking is incredibly powerful in solving all kinds of hard problems, but it can often be very slow; usually exponential. Example: the Fibonacci numbers are defined by the recurrence F(0) = 0, F(1) = 1, and F(n) = F(n-1) + F(n-2) otherwise. A direct translation into a recursive program to compute Fibonacci numbers is:
RecFib(n): if n = 0 return 0; if n = 1 return 1; return RecFib(n-1) + RecFib(n-2)
Fibonacci Number. The recursive program has horrible time complexity. How bad? Let's try to compute. Denote T(n) as the time complexity of computing RecFib(n). Based on the recursion, we have the recurrence T(n) = T(n-1) + T(n-2) + 1, with T(0) = T(1) = 1. Solving this recurrence, we get T(n) = O(φ^n), where φ = (√5 + 1)/2 ≈ 1.618. So the RecFib(n) program runs in exponential time.
RecFib Recursion Tree. Intuitively, why does RecFib() run exponentially slowly? Problem: redundant computation! How about memoizing the intermediate computation results to avoid recomputation?
Fib: Memoization. To optimize the performance of RecFib, we can memoize the intermediate F(n) values in some kind of cache, and look them up when we need them again.
MemFib(n): if n = 0 or n = 1, return n; if F[n] is undefined, F[n] = MemFib(n-1) + MemFib(n-2); return F[n]
How much does it improve upon RecFib()? Assuming accessing F[n] takes constant time, then at most n additions will be performed (we never recompute).
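As a concrete illustration (not from the lecture notes), the MemFib idea in Python, with functools.lru_cache playing the role of the cache F[n]:

from functools import lru_cache

@lru_cache(maxsize=None)              # the cache plays the role of F[n]
def mem_fib(n):
    if n < 2:                         # base cases F(0) = 0, F(1) = 1
        return n
    return mem_fib(n - 1) + mem_fib(n - 2)

print(mem_fib(50))                    # 12586269025, using only O(n) additions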
  • Graph Traversals
Graph Traversals (CS200 - Graphs)
Tree traversal reminder (figure: a sample rooted tree on the vertices A through I)
• Pre order: A B D G H C E F I
• In order: G D H B A E C F I
• Post order: G H D B E I F C A
• Level order: A B C D E F G H I
Connected Components
• The connected component of a node s is the largest set of nodes reachable from s. A generic algorithm for creating connected component(s): R = {s}; while ∃ edge (u, v) : u ∈ R ∧ v ∉ R, add v to R. Upon termination, R is the connected component containing s.
• Breadth First Search (BFS): explore in order of distance from s.
• Depth First Search (DFS): explores edges from the most recently discovered node; backtracks when reaching a dead-end.
Graph Traversals: Depth First Search
• Depth First Search starting at u: DFS(u): mark u as visited and add u to R; for each edge (u, v): if v is not marked visited: DFS(v)
(Figure: DFS on an example 4x4 grid graph on the vertices A through P.)
Question
• What determines the order in which DFS visits nodes?
• The order in which a node picks its outgoing edges.
Depth first search algorithm (DFS)
dfs(in v:Vertex)
  mark v as visited
  for (each unvisited vertex u adjacent to v)
    dfs(u)
• Need to track visited nodes.
• Order of visiting nodes is not completely specified; if nodes have priority, then the order may become deterministic: for (each unvisited vertex u adjacent to v in priority order).
• DFS applies to both directed and undirected graphs.
• Which graph implementation is suitable?
Iterative DFS: explicit stack
dfs(in v:Vertex)
  s – stack for keeping track of active vertices
  s.push(v); mark v as visited
  while (!s.isEmpty()) {
    if (no unvisited vertices adjacent to the vertex on top of the stack) {
      s.pop() // backtrack
    } else {
      select unvisited vertex u adjacent to vertex on top of the stack
      s.push(u); mark u as visited
    }
  }
Breadth First Search (BFS)
• Is like level order in trees. (Figure: which is a BFS traversal starting from A on an example grid graph?)
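A possible Python rendering of the excerpt's iterative, explicit-stack DFS (the adjacency-list graph and vertex names below are illustrative, not from the slides):

def dfs_iterative(graph, start):
    """Iterative DFS; the stack holds the active (not fully explored) vertices."""
    visited = {start}
    stack = [start]
    order = [start]
    while stack:
        top = stack[-1]
        # Unvisited vertices adjacent to the vertex on top of the stack.
        candidates = [u for u in graph[top] if u not in visited]
        if not candidates:
            stack.pop()            # backtrack
        else:
            u = candidates[0]
            visited.add(u)
            stack.append(u)
            order.append(u)
    return order

graph = {"A": ["B", "E"], "B": ["A", "C"], "C": ["B"], "E": ["A", "F"], "F": ["E"]}
print(dfs_iterative(graph, "A"))   # ['A', 'B', 'C', 'E', 'F']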
  • Exhaustive Recursion and Backtracking
CS106B Handout #19, J Zelenski, Feb 1, 2008. Exhaustive recursion and backtracking. In some recursive functions, such as binary search or reversing a file, each recursive call makes just one recursive call. The "tree" of calls forms a linear line from the initial call down to the base case. In such cases, the performance of the overall algorithm is dependent on how deep the function stack gets, which is determined by how quickly we progress to the base case. For reversing a file, the stack depth is equal to the size of the input file, since we move one closer to the empty file base case at each level. For binary search, it more quickly bottoms out by dividing the remaining input in half at each level of the recursion. Both of these can be done relatively efficiently. Now consider a recursive function such as subsets or permutations that makes not just one recursive call, but several. The tree of function calls has multiple branches at each level, which in turn have further branches, and so on down to the base case. Because of the multiplicative factors being carried down the tree, the number of calls can grow dramatically as the recursion goes deeper. Thus, these exhaustive recursion algorithms have the potential to be very expensive. Often the different recursive calls made at each level represent a decision point, where we have choices such as what letter to choose next or what turn to make when reading a map. Might there be situations where we can save some time by focusing on the most promising options, without committing to exploring them all? In some contexts, we have no choice but to exhaustively examine all possibilities, such as when trying to find some globally optimal result. But what if we are interested in finding any solution, whichever one works out first? At each decision point, we can choose one of the available options, and sally forth, hoping it works out.
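One way to make the "find any solution, whichever works out first" idea concrete is a recursive subset search that returns as soon as one acceptable choice sequence is found. A small Python sketch (the coin example and the is_acceptable predicate are illustrative, not from the handout):

def first_subset(items, is_acceptable, chosen=None):
    """Try both branches (include / exclude the next item) and stop at the
    first subset the predicate accepts; return None if none exists."""
    if chosen is None:
        chosen = []
    if not items:
        return list(chosen) if is_acceptable(chosen) else None
    first, rest = items[0], items[1:]
    chosen.append(first)                              # decision: include first
    found = first_subset(rest, is_acceptable, chosen)
    chosen.pop()                                      # un-choose (backtrack)
    if found is not None:
        return found
    return first_subset(rest, is_acceptable, chosen)  # decision: exclude first

# Find any subset of coin values summing to 25.
print(first_subset([10, 10, 5, 1, 1], lambda s: sum(s) == 25))  # [10, 10, 5]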
  • Graph Traversal with DFS/BFS
Graph Traversal with DFS/BFS (Tyler Moore, CS 2123, The University of Tulsa). Some slides created by or adapted from Dr. Kevin Wayne. For more information see http://www.cs.princeton.edu/~wayne/kleinberg-tardos
Graph Traversal
• One of the most fundamental graph problems is to traverse every edge and vertex in a graph.
• For correctness, we must do the traversal in a systematic way so that we don't miss anything.
• For efficiency, we must make sure we visit each edge at most twice.
• Since a maze is just a graph, such an algorithm must be powerful enough to enable us to get out of an arbitrary maze.
Marking Vertices
• The key idea is that we must mark each vertex when we first visit it, and keep track of what we have not yet completely explored.
• Each vertex will always be in one of the following three states:
  1. Undiscovered: the vertex in its initial, virgin state.
  2. Discovered: the vertex after we have encountered it, but before we have checked out all its incident edges.
  3. Processed: the vertex after we have visited all its incident edges.
• A vertex cannot be processed before we discover it, so over the course of the traversal the state of each vertex progresses from undiscovered to discovered to processed.
To Do List
• We must also maintain a structure containing all the vertices we have discovered but not yet completely explored.
• Initially, only a single start vertex is considered to be discovered.
• To completely explore a vertex, we look at each edge going out of it. For each edge which goes to an undiscovered vertex, we mark it discovered and add it to the list of work to do.
• Note that regardless of what order we fetch the next vertex to explore, each edge is considered exactly twice, once when each of its endpoints is explored.
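The to-do-list idea, with a FIFO queue holding the discovered-but-unexplored vertices, is exactly breadth-first search. A short Python sketch (the example graph and labels are illustrative):

from collections import deque

def bfs(graph, start):
    """BFS; every vertex goes undiscovered -> discovered -> processed."""
    state = {v: "undiscovered" for v in graph}
    state[start] = "discovered"
    todo = deque([start])            # discovered but not yet completely explored
    order = []
    while todo:
        u = todo.popleft()
        for v in graph[u]:           # look at each edge going out of u
            if state[v] == "undiscovered":
                state[v] = "discovered"
                todo.append(v)
        state[u] = "processed"       # all incident edges have been checked
        order.append(u)
    return order

graph = {"s": ["a", "b"], "a": ["s", "c"], "b": ["s", "c"], "c": ["a", "b"]}
print(bfs(graph, "s"))               # ['s', 'a', 'b', 'c']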
  • Backtrack Parsing and Context-Free Grammar
Context-free Grammar and Backtrack Parsing (Martin Kay, Stanford University and University of the Saarland)
Problems with Regular Language: Is English a regular language? Bad question! We do not even know what English is!
Two eggs and bacon make(s) a big breakfast. Can you slide me the salt? He didn't ought to do that.
I put the wine you brought in the fridge. I put the wine you brought for Sandy in the fridge. Should we bring the wine you put in the fridge out now?
You said you thought nobody had the right to claim that they were above the law
[You said you thought [nobody had the right [to claim that [they were above the law]]]]
But—No!
Is English morphology a regular language? Bad question! We do not even know what English morphology is!
They sell collectables of all sorts. This concerns unredecontaminatability. This really is an untiable knot.
But—Probably! (Not sure about Swahili, though)
Context-free Grammar: Nonterminal symbols ~ grammatical categories. Terminal symbols ~ words. Productions ~ (unordered) (rewriting) rules. Distinguished symbol.
• Terminals and nonterminals are disjoint • Distinguished symbol: not all that important
  • Graph Traversal and Linear Programs October 6, 2016
CS 125 Section #5: Graph Traversal and Linear Programs, October 6, 2016
1 Depth first search
1.1 The Algorithm
Besides breadth first search, which we saw in class in relation to Dijkstra's algorithm, there is one other fundamental algorithm for searching a graph: depth first search. To better understand the need for these procedures, let us imagine the computer's view of a graph that has been input into it, in the adjacency list representation. The computer's view is fundamentally local to a specific vertex: it can examine each of the edges adjacent to a vertex in turn, by traversing its adjacency list; it can also mark vertices as visited. One way to think of these operations is to imagine exploring a dark maze with a flashlight and a piece of chalk. You are allowed to illuminate any corridor of the maze emanating from your current position, and you are also allowed to use the chalk to mark your current location in the maze as having been visited. The question is how to find your way around the maze. We now show how the depth first search allows the computer to find its way around the input graph using just these primitives. Depth first search uses a stack as the basic data structure. We start by defining a recursive procedure search (the stack is implicit in the recursive calls of search): search is invoked on a vertex v, and explores all previously unexplored vertices reachable from v.
Procedure search(v)
  vertex v
  explored(v) := 1
  previsit(v)
  for (v, w) ∈ E
    if explored(w) = 0 then search(w)
  rof
  postvisit(v)
end search
Procedure DFS(G(V, E))
  graph G(V, E)
  for each v ∈ V do explored(v) := 0 rof
  for each v ∈ V do if explored(v) = 0 then search(v) rof
end DFS
By modifying the procedures previsit and postvisit, we can use DFS to solve a number of important problems, as we shall see.
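A direct Python transcription of the search/DFS procedures with previsit and postvisit hooks (the example graph is illustrative, not from the section notes):

def dfs(graph, previsit=lambda v: None, postvisit=lambda v: None):
    """DFS over every component; graph maps each vertex to its adjacency list."""
    explored = {v: 0 for v in graph}

    def search(v):
        explored[v] = 1
        previsit(v)
        for w in graph[v]:
            if explored[w] == 0:
                search(w)
        postvisit(v)

    for v in graph:                  # restart search from every unexplored vertex
        if explored[v] == 0:
            search(v)

graph = {1: [2, 3], 2: [1], 3: [1], 4: [5], 5: [4]}
dfs(graph, previsit=lambda v: print("previsit", v),
           postvisit=lambda v: print("postvisit", v))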
  • Backtracking / Branch-And-Bound
Backtracking / Branch-and-Bound
Optimisation problems are problems that have several valid solutions; the challenge is to find an optimal solution. How optimal is defined depends on the particular problem. Examples of optimisation problems are:
Traveling Salesman Problem (TSP). We are given a set of n cities, with the distances between all cities. A traveling salesman, who is currently staying in one of the cities, wants to visit all other cities and then return to his starting point, and he is wondering how to do this. Any tour of all cities would be a valid solution to his problem, but our traveling salesman does not want to waste time: he wants to find a tour that visits all cities and has the smallest possible length of all such tours. So in this case, optimal means: having the smallest possible length.
1-Dimensional Clustering. We are given a sorted list x1, ..., xn of n numbers, and an integer k between 1 and n. The problem is to divide the numbers into k subsets of consecutive numbers (clusters) in the best possible way. A valid solution is now a division into k clusters, and an optimal solution is one that has the nicest clusters. We will define this problem more precisely later.
Set Partition. We are given a set V of n objects, each having a certain cost, and we want to divide these objects among two people in the fairest possible way. In other words, we are looking for a subdivision of V into two subsets V1 and V2 such that |Σ_{v∈V1} cost(v) − Σ_{v∈V2} cost(v)| is as small as possible.
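For the Set Partition problem, a small backtracking search with a branch-and-bound style pruning rule gives the flavor of both techniques. A Python sketch (the bound and the example costs are illustrative, not from the text; assumes non-negative costs):

def best_partition(costs):
    """Smallest achievable |cost(V1) - cost(V2)| over all two-way splits."""
    best = sum(costs)                   # putting everything in V1 is always valid

    def assign(i, diff):
        nonlocal best
        if i == len(costs):
            best = min(best, abs(diff))
            return
        remaining = sum(costs[i:])
        # Bound: the remaining objects can reduce the imbalance by at most `remaining`.
        if abs(diff) - remaining >= best:
            return
        assign(i + 1, diff + costs[i])  # put object i in V1
        assign(i + 1, diff - costs[i])  # put object i in V2

    assign(0, 0)
    return best

print(best_partition([8, 7, 6, 5, 4]))  # 0, e.g. {8, 7} versus {6, 5, 4}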
  • Graphs, Connectivity, and Traversals
Graphs, Connectivity, and Traversals
Definitions
Like trees, graphs represent a fundamental data structure used in computer science. We often hear about cyber space as being a new frontier for mankind, and if we look at the structure of cyberspace, we see that it is structured as a graph; in other words, it consists of places (nodes), and connections between those places. Some applications of graphs include
• representing electronic circuits
• modeling object interactions (e.g. used in the Unified Modeling Language)
• showing ordering relationships between computer programs
• modeling networks and network traffic
• the fact that trees are a special case of graphs, in that they are acyclic and connected graphs, and that trees are used in many fundamental data structures
An undirected graph G = (V, E) is a pair of sets V, E, where
• V is a set of vertices, also called nodes.
• E is a set of unordered pairs of vertices called edges, and are of the form (u, v), such that u, v ∈ V.
• If e = (u, v) is an edge, then we say that u is adjacent to v, and that e is incident with u and v.
• We assume |V| = n is finite, where n is called the order of G.
• |E| = m is called the size of G.
• A path P of length k in a graph is a sequence of vertices v0, v1, ..., vk, such that (vi, vi+1) ∈ E for every 0 ≤ i ≤ k − 1.
  - A path is called simple iff the vertices v0, v1, ..., vk are all distinct.
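A tiny Python sketch of these definitions: an undirected graph stored as an adjacency structure, its order and size, and a path check (the edge list below is illustrative, not from the text):

edges = [("u", "v"), ("v", "w"), ("w", "x")]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)    # each edge is incident with both endpoints
    adj.setdefault(b, set()).add(a)

order = len(adj)                       # |V| = n, the order of G
size = sum(len(nbrs) for nbrs in adj.values()) // 2   # |E| = m, the size of G

def is_path(vertices):
    """v0, v1, ..., vk is a path iff consecutive vertices are joined by an edge."""
    return all(b in adj[a] for a, b in zip(vertices, vertices[1:]))

def is_simple_path(vertices):
    return is_path(vertices) and len(set(vertices)) == len(vertices)

print(order, size)                          # 4 3
print(is_path(["u", "v", "w"]))             # True
print(is_simple_path(["u", "v", "u"]))      # False (a path, but not simple)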
  • Module 5: Backtracking
Module-5: Backtracking
Contents
1. Backtracking: 1.1. General method; 1.2. N-Queens problem; 1.3. Sum of subsets problem; 1.4. Graph coloring; 1.5. Hamiltonian cycles.
2. Branch and Bound: 2.1. Assignment Problem; 2.2. Travelling Sales Person problem.
3. 0/1 Knapsack problem: 3.1. LC Branch and Bound solution; 3.2. FIFO Branch and Bound solution.
4. NP-Complete and NP-Hard problems: 4.1. Basic concepts; 4.2. Non-deterministic algorithms; 4.3. P, NP, NP-Complete, and NP-Hard classes.
1. Backtracking
Some problems can be solved by exhaustive search. The exhaustive-search technique suggests generating all candidate solutions and then identifying the one (or the ones) with a desired property. Backtracking is a more intelligent variation of this approach. The principal idea is to construct solutions one component at a time and evaluate such partially constructed candidates as follows. If a partially constructed solution can be developed further without violating the problem's constraints, it is done by taking the first remaining legitimate option for the next component. If there is no legitimate option for the next component, no alternatives for any remaining component need to be considered. In this case, the algorithm backtracks to replace the last component of the partially constructed solution with its next option. It is convenient to implement this kind of processing by constructing a tree of choices being made, called the state-space tree. Its root represents an initial state before the search for a solution begins. The nodes of the first level in the tree represent the choices made for the first component of a solution; the nodes of the second level represent the choices for the second component, and so on.
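As a worked instance of the state-space-tree idea, here is a small sum-of-subsets backtracking search in Python (the weights, target, and pruning rule are illustrative, not from the module; assumes positive weights):

def sum_of_subsets(weights, target):
    """All subsets of weights that sum to target, found by backtracking."""
    weights = sorted(weights)
    solutions = []

    def explore(i, chosen, partial):
        remaining = sum(weights[i:])
        if partial > target or partial + remaining < target:
            return                      # prune: this node cannot lead to a solution
        if partial == target:
            solutions.append(list(chosen))
            return
        if i == len(weights):
            return
        chosen.append(weights[i])       # next component: include weights[i]
        explore(i + 1, chosen, partial + weights[i])
        chosen.pop()                    # backtrack, then try the other option
        explore(i + 1, chosen, partial)

    explore(0, [], 0)
    return solutions

print(sum_of_subsets([3, 5, 6, 7], 15))  # [[3, 5, 7]]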
  • CS/ECE 374: Algorithms & Models of Computation
CS/ECE 374: Algorithms & Models of Computation, Fall 2018. Backtracking and Memoization. Lecture 12, October 9, 2018. Chandra Chekuri (UIUC).
Recursion
Reduction: reduce one problem to another. Recursion is a special case of reduction: reduce the problem to a smaller instance of itself (self-reduction).
1. A problem instance of size n is reduced to one or more instances of size n − 1 or less.
2. For termination, problem instances of small size are solved by some other method as base cases.
Recursion in Algorithm Design
1. Tail Recursion: problem reduced to a single recursive call after some work. Easy to convert algorithm into iterative or greedy algorithms. Examples: interval scheduling, MST algorithms, etc.
2. Divide and Conquer: problem reduced to multiple independent sub-problems that are solved separately. Conquer step puts together solution for bigger problem. Examples: closest pair, deterministic median selection, quick sort.
3. Backtracking: refinement of brute force search. Build solution incrementally by invoking recursion to try all possibilities for the decision in each step.
4. Dynamic Programming: problem reduced to multiple (typically) dependent or overlapping sub-problems. Use memoization to avoid recomputation of common solutions, leading to an iterative bottom-up algorithm.
Subproblems in Recursion
Suppose foo() is a recursive program/algorithm for a problem. Given an instance I, foo(I) generates potentially many "smaller" problems. If foo(I′) is one of the calls during the execution of foo(I), we say I′ is a subproblem of I.
  • Lecture 16: Dijkstra's Algorithm (Graphs)
CSE 373: Data Structures and Algorithms. Lecture 16: Dijkstra's Algorithm (Graphs). Instructor: Lilian de Greef. Quarter: Summer 2017.
Today
• Announcements
• Graph Traversals Continued
• Remarks on DFS & BFS
• Shortest paths for weighted graphs: Dijkstra's Algorithm!
Announcements: Homework 4 is out!
• Due next Friday (August 4th) at 5:00pm
• May choose to pair-program if you like! Same cautions as last time apply: choose partners and when to start working wisely!
• Can almost entirely complete using material by end of this lecture
• Will discuss some software-design concepts next week to help you prevent some (potentially non-obvious) bugs
Another midterm correction… Bring your midterm to *any* office hours to get your point back. I will have the final exam quadruple-checked to avoid these situations! (I am so sorry)
Graphs: Traversals Continued. And introducing Dijkstra's Algorithm for shortest paths!
Graph Traversals: Recap & Running Time
• Traversals, general idea: starting from one vertex, repeatedly explore adjacent vertices; mark each vertex we visit, so we don't process each more than once (cycles!)
• Important graph traversal algorithms: Depth First Search (DFS) explores as far as possible before backtracking, choosing the next vertex using recursion or a stack; Breadth First Search (BFS) explores all neighbors before the next level of neighbors, choosing the next vertex using a queue.
• Assuming "choose next vertex" is O(1), entire traversal is …
• Use graph represented with adjacency …
Comparison (useful for Design Decisions!)
• Which one finds shortest paths? (i.e. which is better for "what is the shortest path from x to y" when there's more than one possible path?)
• Which one can use less space in finding a path?
• A third approach: Iterative deepening (IDFS): try DFS but disallow recursion more than K levels deep; if that fails, increment K and start the entire search over. Like BFS, finds shortest paths.
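For the shortest-path part, a compact Dijkstra sketch in Python using a binary heap as the priority queue (the example graph and weights are illustrative, not from the lecture):

import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; graph maps vertex -> [(neighbor, weight)]."""
    dist = {source: 0}
    pq = [(0, source)]                        # priority queue of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):     # stale queue entry, skip it
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                  # found a shorter path to v
                heapq.heappush(pq, (nd, v))
    return dist

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1), ("d", 4)], "c": [("d", 1)], "d": []}
print(dijkstra(graph, "a"))                   # {'a': 0, 'b': 2, 'c': 3, 'd': 4}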
  • Toward a Model for Backtracking and Dynamic Programming
TOWARD A MODEL FOR BACKTRACKING AND DYNAMIC PROGRAMMING
Michael Alekhnovich, Allan Borodin, Joshua Buresh-Oppenheim, Russell Impagliazzo, Avner Magen, and Toniann Pitassi
Abstract. We propose a model called priority branching trees (pBT) for backtracking and dynamic programming algorithms. Our model generalizes both the priority model of Borodin, Nielsen and Rackoff, as well as a simple dynamic programming model due to Woeginger, and hence spans a wide spectrum of algorithms. After witnessing the strength of the model, we then show its limitations by providing lower bounds for algorithms in this model for several classical problems such as Interval Scheduling, Knapsack and Satisfiability.
Keywords. Greedy Algorithms, Dynamic Programming, Models of Computation, Lower Bounds.
Subject classification. 68Q10
1. Introduction
The "Design and Analysis of Algorithms" is a basic component of the Computer Science Curriculum. Courses and texts for this topic are often organized around a toolkit of algorithmic paradigms or meta-algorithms such as greedy algorithms, divide and conquer, dynamic programming, local search, etc. Surprisingly (as this is often the main "theory course"), these algorithmic paradigms are rarely, if ever, precisely defined. Instead, we provide informal definitional statements followed by (hopefully) well chosen illustrative examples. Our informality in algorithm design should be compared to computability theory where we have a well accepted formalization for the concept of an algorithm, namely that provided by Turing machines and its many equivalent computational models (i.e. consider the almost universal acceptance of the Church-Turing thesis). While quantum computation may challenge the concept of "efficient algorithm", the benefit of having a well defined concept of an algorithm and a computational step is well appreciated.