Computer Science 332 Compiler Construction

4.4: Top-Down Parsing
(Skip the sections on transition diagrams and error recovery.)

Goal:
- Find a leftmost derivation for an input string, or equivalently,
- Construct a parse tree for the input, starting from the root and creating the nodes of the parse tree in preorder (parent, then children).

The deterministic special case, predictive parsing, was discussed in Section 2.4. The general case is nondeterministic (it requires backtracking) and is of more theoretical than practical interest.

Recursive-Descent Parsing

General recursive-descent parsing requires backtracking. Consider the grammar

    S → c A d
    A → a b | a

and parse the input string w = cad:

1. Create a root labeled S and expand it with the only S-production, S → c A d. The leftmost leaf c matches the first input symbol.
2. Advance the input pointer to a and expand A with its first alternative, A → a b. The leaf a matches the input, but the next leaf b fails to match the remaining input d: FAIL.
3. Backtrack: reset the input pointer to a, undo the expansion of A, and try the second alternative, A → a. Now a matches a and d matches d, and the parse SUCCEEDs.
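The backtracking in steps 2 and 3 is easy to make concrete. The following is a minimal recognizer sketch in Python (the slides give no code, so the language choice and the function names are assumptions of this sketch): each parse function returns the input position after a successful match, or None on failure, and the caller retries the next alternative from the saved position.

    # Backtracking recursive-descent recognizer for
    #   S -> c A d
    #   A -> a b | a
    # Each function returns the position after a match, or None.

    def parse_S(s, i):
        # S -> c A d
        if i < len(s) and s[i] == 'c':
            j = parse_A(s, i + 1)
            if j is not None and j < len(s) and s[j] == 'd':
                return j + 1
        return None

    def parse_A(s, i):
        # Try A -> a b first; on failure fall back to A -> a,
        # restarting from the same position i (the backtrack).
        if i + 1 < len(s) and s[i] == 'a' and s[i + 1] == 'b':
            return i + 2
        if i < len(s) and s[i] == 'a':
            return i + 1
        return None

    def accepts(s):
        return parse_S(s, 0) == len(s)

    print(accepts("cad"))   # True, after failing on A -> a b
    print(accepts("cabd"))  # True, A -> a b succeeds directly
    print(accepts("cbd"))   # False

Note that the failed attempt at A → a b consumes no input permanently; parse_A simply retries from the same index, which is exactly the input-pointer reset of step 3.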
Nonrecursive Predictive Parsing

We can maintain the stack explicitly, instead of relying on run-time support for recursion. The components of such a parser are:

- Input buffer: holds w$
- Stack: holds terminals and nonterminals
- Parsing table M: nonterminal × input symbol → production
- Output stream: the derivation

The table M determines the action based on the top stack symbol X and the current input symbol a. The initial stack is the start symbol on top of $. The possibilities are:

1. X = a = $: halt successfully.
2. X = a ≠ $: pop X and advance the input pointer.
3. X is a nonterminal: consult table entry M[X, a]. If it is empty, report an error; otherwise pop X and push the right-hand side of the table entry.

Predictive Parsing Algorithm

    set input pointer ip to the first symbol of w$
    repeat
        let X be the top stack symbol and a the symbol pointed to by ip
        if X is a terminal or $ then
            if X = a then
                pop X from the stack and advance ip
            else error()
        else   /* X is a nonterminal */
            if M[X, a] = X → Y1 Y2 ... Yk then begin
                pop X from the stack
                push Yk, Yk-1, ..., Y1 onto the stack, with Y1 on top   /* note the order */
                output the production X → Y1 Y2 ... Yk
            end
            else error()
    until X = $   /* stack is empty */

A runnable sketch of this loop appears at the end of this section.

Predictive Parsing Example

Grammar (note the elimination of left recursion):

    E  → T E'
    E' → + T E' | ε
    T  → F T'
    T' → * F T' | ε
    F  → ( E ) | id

Input: id + id * id

Parsing table:

    Nonterminal | id        | +            | *            | (          | )       | $
    E           | E → T E'  |              |              | E → T E'   |         |
    E'          |           | E' → + T E'  |              |            | E' → ε  | E' → ε
    T           | T → F T'  |              |              | T → F T'   |         |
    T'          |           | T' → ε       | T' → * F T'  |            | T' → ε  | T' → ε
    F           | F → id    |              |              | F → ( E )  |         |

FIRST and FOLLOW

- Recall FIRST from Chapter 2: FIRST(α) is the set of terminals that begin strings derived from α.
- FOLLOW(A) is the set of terminals a that can appear immediately to the right of A in some sentential form; i.e., the terminals a such that S ⇒* αAaβ for some α and β.
- Together, FIRST and FOLLOW help us build the parsing table from the grammar.

Computing FIRST

1. If X is a terminal, then FIRST(X) is {X}.
2. If X → ε is a production, add ε to FIRST(X).
3. If X → Y1 Y2 ... Yk is a production, place a in FIRST(X) if, for some i, a is in FIRST(Yi) and ε is in all of FIRST(Y1), ..., FIRST(Yi-1); that is, Y1 ... Yi-1 ⇒* ε. If ε is in FIRST(Yj) for all j = 1, 2, ..., k, then add ε to FIRST(X). For example, everything in FIRST(Y1) is surely in FIRST(X); if Y1 does not derive ε we add nothing more to FIRST(X), but if Y1 ⇒* ε we also add FIRST(Y2), and so on.

Computing FOLLOW

1. Place $ in FOLLOW(S), where S is the start symbol.
2. If there is a production A → αBβ, then everything in FIRST(β) except ε is placed in FOLLOW(B).
3. If there is a production A → αB, or a production A → αBβ where FIRST(β) contains ε (i.e., β ⇒* ε), then everything in FOLLOW(A) is in FOLLOW(B).

Exercise: compute FIRST and FOLLOW for the nonterminals of the expression grammar above. (A code sketch that computes them appears at the end of this section.)

Construction of Predictive Parsing Tables

Input: grammar G. Output: parsing table M.

1. For each production A → α of the grammar, do steps 2 and 3.
2. For each terminal a in FIRST(α), add A → α to M[A, a].
3. If ε is in FIRST(α), add A → α to M[A, b] for each terminal b in FOLLOW(A). If ε is in FIRST(α) and $ is in FOLLOW(A), add A → α to M[A, $].
4. Make each undefined entry of M an error.

LL(1) Grammars

An ambiguous grammar will have more than one entry in M[A, a] for some nonterminal A and terminal a. Consider the ambiguous if/then/else grammar:

    S  → i E t S S' | a
    S' → e S | ε
    E  → b

This grammar produces a table M containing the entry M[S', e] = {S' → ε, S' → e S}, because FOLLOW(S') = {e, $}.

A grammar without such multiply-defined entries is called LL(1):
- The first L means "read input Left to right".
- The second L means "build a Leftmost derivation".
- The 1 means one symbol of lookahead in the input is used to make decisions.

No ambiguous or left-recursive grammar can be LL(1). More technically: grammar G is LL(1) if and only if, whenever A → α | β are distinct productions of G,

1. for no terminal a do both α and β derive strings beginning with a;
2. at most one of α and β can derive the empty string;
3. if β ⇒* ε, then α does not derive any string beginning with a terminal in FOLLOW(A).

So what can we do when M has multiply-defined entries? We can try to make G LL(1) by eliminating left recursion and then left factoring the result; this may produce an LL(1) grammar, but it won't work for some grammars, such as our if/then/else example. For such grammars we may still be able to eliminate all but one of the multiple entries, e.g., by changing M[S', e] = {S' → ε, S' → e S} to M[S', e] = S' → e S. This must be done on a case-by-case basis, however; there are no universal rules.
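For the exercise above, FIRST and FOLLOW can be computed mechanically by applying the rules until no set grows. The following Python sketch does this for the expression grammar (the representation and the names GRAMMAR, EPS, and compute_first_follow are assumptions of this sketch, not anything the slides prescribe):

    # Fixed-point computation of FIRST and FOLLOW for
    #   E -> T E'    E' -> + T E' | eps
    #   T -> F T'    T' -> * F T' | eps
    #   F -> ( E ) | id
    # Productions are (head, body) pairs; an empty body is epsilon.

    EPS = "eps"
    GRAMMAR = [
        ("E",  ("T", "E'")),
        ("E'", ("+", "T", "E'")), ("E'", ()),
        ("T",  ("F", "T'")),
        ("T'", ("*", "F", "T'")), ("T'", ()),
        ("F",  ("(", "E", ")")), ("F", ("id",)),
    ]
    NONTERMINALS = {head for head, _ in GRAMMAR}
    START = "E"

    def first_of(body, FIRST):
        """FIRST of a string of symbols (rule 3 of Computing FIRST)."""
        result = set()
        for sym in body:
            f = FIRST[sym] if sym in NONTERMINALS else {sym}
            result |= f - {EPS}
            if EPS not in f:
                return result
        return result | {EPS}   # every symbol (or none) derives epsilon

    def compute_first_follow():
        FIRST = {A: set() for A in NONTERMINALS}
        FOLLOW = {A: set() for A in NONTERMINALS}
        FOLLOW[START].add("$")                     # FOLLOW rule 1
        changed = True
        while changed:                             # iterate to a fixed point
            changed = False
            for A, body in GRAMMAR:
                n = len(FIRST[A])
                FIRST[A] |= first_of(body, FIRST)  # FIRST rules 2 and 3
                changed |= len(FIRST[A]) != n
                for i, B in enumerate(body):
                    if B not in NONTERMINALS:
                        continue
                    beta = first_of(body[i + 1:], FIRST)
                    n = len(FOLLOW[B])
                    FOLLOW[B] |= beta - {EPS}      # FOLLOW rule 2
                    if EPS in beta:                # FOLLOW rule 3
                        FOLLOW[B] |= FOLLOW[A]
                    changed |= len(FOLLOW[B]) != n
        return FIRST, FOLLOW

    FIRST, FOLLOW = compute_first_follow()
    print(FIRST)   # E, T, F: {(, id}   E': {+, eps}   T': {*, eps}
    print(FOLLOW)  # E, E': {), $}   T, T': {+, ), $}   F: {+, *, ), $}

The printed sets are the answer to the exercise; note how they yield exactly the parsing table shown earlier.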
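Finally, here is a sketch of the table construction together with the nonrecursive parsing loop, run on the slides' input id + id * id. To keep the block self-contained, FIRST and FOLLOW are written out rather than recomputed (they match what the previous sketch prints); the names are again assumptions of this sketch.

    EPS = "eps"
    GRAMMAR = [
        ("E",  ("T", "E'")), ("E'", ("+", "T", "E'")), ("E'", ()),
        ("T",  ("F", "T'")), ("T'", ("*", "F", "T'")), ("T'", ()),
        ("F",  ("(", "E", ")")), ("F", ("id",)),
    ]
    NONTERMINALS = {head for head, _ in GRAMMAR}
    FIRST = {"E": {"(", "id"}, "E'": {"+", EPS}, "T": {"(", "id"},
             "T'": {"*", EPS}, "F": {"(", "id"}}
    FOLLOW = {"E": {")", "$"}, "E'": {")", "$"}, "T": {"+", ")", "$"},
              "T'": {"+", ")", "$"}, "F": {"+", "*", ")", "$"}}

    def first_of(body):
        result = set()
        for sym in body:
            f = FIRST[sym] if sym in NONTERMINALS else {sym}
            result |= f - {EPS}
            if EPS not in f:
                return result
        return result | {EPS}

    def build_table():
        """Steps 1-4 of Construction of Predictive Parsing Tables."""
        M = {}
        for A, body in GRAMMAR:
            f = first_of(body)
            for a in f - {EPS}:                 # step 2
                M.setdefault((A, a), []).append(body)
            if EPS in f:                        # step 3 ($ is already in FOLLOW)
                for b in FOLLOW[A]:
                    M.setdefault((A, b), []).append(body)
        return M                                # step 4: missing entry = error

    def parse(tokens, start="E"):
        """Table-driven predictive parse; prints the leftmost derivation."""
        M = build_table()
        stack = ["$", start]                    # start symbol on top of $
        tokens = tokens + ["$"]
        ip = 0
        while stack:
            X, a = stack.pop(), tokens[ip]
            if X == a:                          # matched terminal or final $
                ip += 1
            elif X in NONTERMINALS and (X, a) in M:
                body = M[(X, a)][0]             # LL(1): the entry is a singleton
                print(X, "→", " ".join(body) or EPS)
                stack.extend(reversed(body))    # push so that Y1 is on top
            else:
                raise SyntaxError(f"unexpected {a!r} with {X!r} on the stack")
        print("accepted")

    parse(["id", "+", "id", "*", "id"])

Storing the table entries as lists makes conflicts visible: applied to the if/then/else grammar above, the same construction leaves two productions in M[(S', e)], exactly the multiply-defined entry that disqualifies that grammar from being LL(1).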