
Lexical and Syntax Analysis

String of characters      (easy for humans to write and understand)
        |
        |  Lexical Analysis
        v
String of tokens          (lexemes identified)
        |
        |  Syntax Analysis
        v
Data structure            (easy for programs to transform)

A syntax is a set of rules defining the valid strings of a language, often specified by a context-free grammar.

For example, a grammar E for arithmetic expressions:

e → x | y | e + e | e – e | e * e | ( e )

Derivations

A derivation is a proof that some string conforms to a grammar.

A leftmost derivation:

e ⇒ e + e ⇒ x + e ⇒ x + ( e ) ⇒ x + ( e * e ) ⇒ x + ( y * e ) ⇒ x + ( y * x )

Derivations

A rightmost derivation:

e ⇒ e + e ⇒ e + ( e ) ⇒ e + ( e * e ) ⇒ e + ( e * x ) ⇒ e + ( y * x ) ⇒ x + ( y * x )

Many ways to derive the same string: many ways to write the same proof.

Parse tree: motivation

Also a proof that a given input is valid according to the grammar. But a parse tree:

. is more concise: we don’t write out the entire string every time a non-terminal is expanded.
. abstracts over the order in which rules are applied.

Parse tree: intuition

If non-terminal n has a production

n → X Y Z

where X, Y, and Z are terminals or non-terminals, then a parse tree may have an interior node labelled n with three children labelled X, Y, and Z.

      n
    / | \
   X  Y  Z

Parse tree: definition

A parse tree is a tree in which:

. the root is labelled by the start symbol;
. each leaf is labelled by a terminal symbol, or ε;
. each interior node is labelled by a non-terminal;
. if n is a non-terminal labelling an interior node whose children are X1, X2, ⋯, Xn then there must exist a production n → X1 X2 ⋯ Xn.

Example 1

Example input string: x + y * x

A resulting parse tree according to grammar E:

        e
      / | \
     e  +  e
     |   / | \
     x  e  *  e
        |     |
        y     x

Example 2

The following is not a parse tree according to grammar E.

        e
      / | \
     x  +  e
         / | \
        e  *  e
        |     |
        y     x

Why? Because e → x + e is not a production in grammar E.

Grammar notation

Non-terminals are underlined.

Rather than writing

e → x
e → e + e

we may write:

e → x | e + e

(Also, symbols → and ::= will be used interchangeably.)

Syntax Analysis

String of symbols  →  [ Syntax Analysis ]  →  Parse tree

A parse tree is:
1. A proof that a given input is valid according to the grammar;
2. A data structure that is convenient for programs to process.

(Syntax analysis may also report that the input string is invalid.)

Ambiguity

If there exists more than one parse tree for some string then the grammar is ambiguous. For example, the string x+y*x has two parse trees:

        e                       e
      / | \                   / | \
     e  +  e                 e  *  e
     |   / | \             / | \   |
     x  e  *  e           e  +  e  x
        |     |           |     |
        y     x           x     y

Operator precedence

Different parse trees often have different meanings, so we usually want unambiguous grammars.

Conventionally, * has a higher precedence (binds tighter) than +, so there is only one interpretation of x+y*x, namely x+(y*x).

Operator associativity

Even with precedence rules, ambiguity remains, e.g. x-x-x-x.

Binary operators are either:
. left-associative;
. right-associative;
. non-associative.

Conventionally, - is left-associative, so there is only one interpretation of x-x-x-x, namely ((x-x)-x)-x.
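A small C illustration of why associativity matters: the two groupings of x-x-x-x compute different values. (The functions and the sample value x = 1 are illustrative additions, not part of the slides.)

```c
/* Evaluate x-x-x-x under the two possible groupings. */
int left_assoc(int x)  { return ((x - x) - x) - x; }  /* left-associative reading  */
int right_assoc(int x) { return x - (x - (x - x)); }  /* right-associative reading */
```

With x = 1 the left-associative reading gives -2 while the right-associative reading gives 0, so the two parse trees denote different results.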

Ambiguity removal

Example input:

e → x | y | e + e | e – e | e * e | ( e )

All operators are left-associative, and * binds tighter than + and –.

Ambiguity removal

Example output:

e → e + e1 | e – e1 | e1

e1 → e1 * e2 | e2

e2 → ( e ) | x | y

Note (ignoring bracketed expressions):
. e1 disallows + and –
. e2 disallows +, -, and *

Disallowed parse trees

After disambiguation, there are no parse trees corresponding to the following originals:

        e                       e
      / | \                   / | \
     e  *  e                 e  +  e
   / | \   |                 |   / | \
  e  +  e  x                 x  e  -  e
  |     |                       |     |
  x     y                       y     x

LHS of * cannot contain a +.    RHS of + cannot contain a -.

Ambiguity removal: step-by-step

Given a non-terminal e which involves operators at n levels of precedence:

Step 1: introduce n+1 new non-terminals, e0 ⋯ en. Let opi denote an operator with precedence i.

Step 2a: replace each production e → e opi e with

ei → ei opi ei+1 | ei+1

if opi is left-associative, or with

ei → ei+1 opi ei | ei+1

if opi is right-associative.

Step 2b: replace each production e → opi e with

ei → opi ei | ei+1

Step 2c: replace each production e → e opi with

ei → ei opi | ei+1

Construct the precedence table:

Operator    Precedence
+, -        0
*           1

Grammar E after step 2 becomes:

e0 → e0 + e1 | e0 – e1 | e1
e1 → e1 * e2 | e2
e → ( e ) | x | y

Step 3: replace each production e → ⋯ with

en → ⋯

After step 3:

e0 → e0 + e1 | e0 – e1 | e1
e1 → e1 * e2 | e2
e2 → ( e ) | x | y

Step 4: replace all occurrences of e0 with e.

After step 4:

e → e + e1 | e – e1 | e1

e1 → e1 * e2 | e2

e2 → ( e ) | x | y

Exercise 1

Consider the following grammar for logical propositions.

p → 0        (Zero)
  | 1        (One)
  | ~ p      (Negation)
  | p + p    (Disjunction)
  | p * p    (Conjunction)

Now let + and * be right-associative, and let the operators in increasing order of binding strength be: +, *, ~.

Give an unambiguous grammar for logical propositions.

Exercise 2

Which of the following grammars are ambiguous?

b → 0 b 1 | 0 1
e → + e e | – e e | x
s → if b then s | if b then s else s | skip

Homework exercise

Consider the following ambiguous grammar G.

s → if b then s | if b then s else s | skip

Give an unambiguous grammar that accepts the same language as G.

Summary so far

. Syntax of a language is often specified by a context-free grammar

. Derivations and parse trees are proofs.

. Parse trees lead to a concise definition of ambiguity.

. Construction of unambiguous grammars using rules of precedence and associativity.

PART 2: TOP-DOWN PARSING

• Recursive-Descent
• Left-Factoring
• Predictive Parsing
• Left-Recursion Removal
• First and Follow Sets
• Parsing tables and LL(1)

Top-down parsing

Top-down: begin with the start symbol and expand non-terminals, succeeding when the input string is matched.

A good strategy for writing parsers:
1. Implement a syntax checker to accept or refute input strings.
2. Modify the checker to construct a parse tree – straightforward.

RECURSIVE DESCENT

A popular top-down parsing technique.

Recursive descent

A recursive descent parser consists of a set of functions, one for each non-terminal.

The function for non-terminal n returns true if some prefix of the input string can be derived from n, and false otherwise.

Consuming the input

We assume a global variable next points to the input string.

char* next;

eat(c) consumes c from the input if possible:

int eat(char c) {
  if (*next == c) {
    next++;
    return 1;
  }
  return 0;
}

Recursive descent

Let parse(X) denote
. X() if X is a non-terminal
. eat(X) if X is a terminal

For each non-terminal N, introduce:

int N() {
  char* save = next;
  for each N → X1 X2 ⋯ Xn
    if (parse(X1) && parse(X2) && ⋯ && parse(Xn))
      return 1;
    else
      next = save;   /* backtrack */
  return 0;
}

Exercise 4

Consider the following grammar G with start symbol e.

e → ( e + e ) | ( e * e ) | v

v → x | y

Using recursive descent, write a syntax checker for grammar G.

Answer (part 1)

int e() {
  char* save = next;
  if (eat('(') && e() && eat('+') && e() && eat(')')) return 1; else next = save;
  if (eat('(') && e() && eat('*') && e() && eat(')')) return 1; else next = save;
  if (v()) return 1; else next = save;
  return 0;
}

Answer (part 2)

int v() {
  char* save = next;
  if (eat('x')) return 1; else next = save;
  if (eat('y')) return 1; else next = save;
  return 0;
}

Exercise 5

How many function calls are made by the recursive descent parser to parse the following strings?

(x*x)

((x*x)*x)

(((x*x)*x)*x)

Answer

Number of calls is quadratic in the length of the input string.

Input string       Length   Calls
(x*x)              5        21
((x*x)*x)          9        53
(((x*x)*x)*x)      13       117

Lesson: backtracking is expensive!
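To make the blow-up observable, here is a self-contained sketch of the Exercise 4 checker instrumented with a call counter. The counter and the check() wrapper (which also requires the whole input to be consumed) are additions for illustration; the counting convention here treats every entry to e, v, and eat as one call.

```c
static const char *next;   /* cursor into the input string  */
static long calls;         /* counts every parser function entry */

static int eat(char c) {
    calls++;
    if (*next == c) { next++; return 1; }
    return 0;
}

static int v(void);

/* e → ( e + e ) | ( e * e ) | v   -- with backtracking */
static int e(void) {
    const char *save = next;
    calls++;
    if (eat('(') && e() && eat('+') && e() && eat(')')) return 1; else next = save;
    if (eat('(') && e() && eat('*') && e() && eat(')')) return 1; else next = save;
    if (v()) return 1; else next = save;
    return 0;
}

/* v → x | y */
static int v(void) {
    const char *save = next;
    calls++;
    if (eat('x')) return 1; else next = save;
    if (eat('y')) return 1; else next = save;
    return 0;
}

/* Returns 1 if the whole string matches; leaves the call count in *n. */
static int check(const char *s, long *n) {
    next = s; calls = 0;
    int ok = e() && *next == '\0';
    *n = calls;
    return ok;
}
```

Running check on (x*x), ((x*x)*x) and (((x*x)*x)*x) shows the gap between successive call counts widening, i.e. quadratic growth in the input length.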

LEFT FACTORING

Reducing backtracking!

Left factoring

When two productions for a non-terminal share a common prefix, expensive backtracking can be avoided by left-factoring the grammar.

Idea: introduce a new non-terminal that accepts each of the different suffixes.

Example 3

Left-factoring grammar G by introducing non-terminal r:

e → ( e r | v
r → + e ) | * e )
v → x | y

The two productions of e shared the common prefix "( e"; the new non-terminal r accepts the different suffixes.

Effect of left-factoring

Number of calls is now linear in the length of input string.

Input string       Length   Calls
(x*x)              5        13
((x*x)*x)          9        22
(((x*x)*x)*x)      13       31

Lesson: left-factoring a grammar reduces backtracking.

PREDICTIVE PARSING

Eliminating backtracking!

Predictive parsing

Idea: know which production of a non-terminal to choose based solely on the next input symbol.

Advantage: very efficient since it eliminates all backtracking.

Disadvantage: not all grammars can be parsed in this way. (But many useful ones can.)

Running example

The following grammar H will be used as a running example to demonstrate predictive parsing.

e → e + e | e * e | ( e ) | x | y

Example: x+y*(y+x)

Removing ambiguity

Since + and * are left-associative and * binds tighter than +, we can derive an unambiguous variant of H.

e → e + t | t

t → t * f | f

f → ( e ) | x | y

Problem: left-recursive grammars cause recursive descent parsers to loop forever.

int e() {
  char* save = next;
  if (e() && eat('+') && t()) return 1;   /* call to self without consuming any input */
  next = save;
  if (t()) return 1;
  next = save;
  return 0;
}

Eliminating left recursion

Let α denote any sequence of grammar symbols.

Rule 1: n → n α   ⟹   n' → α n'
Rule 2: n → α     ⟹   n → α n'   (where α does not begin with n)
Rule 3: introduce the new production n' → ε

Eliminating left recursion

Example before:

e → e + v | v
v → x | y

and after:

e → v e'
v → x | y
e' → ε | + v e'

Example 4

Running example, after eliminating left-recursion.

e → t e'
e' → + t e' | ε
t → f t'
t' → * f t' | ε
f → ( e ) | x | y

FIRST AND FOLLOW SETS

Predictive parsers are built using the first and follow sets of each non-terminal in a grammar.

Definition of first sets

Let α denote any sequence of grammar symbols.

If α can derive a string beginning with terminal a then a ∊ first(α).

If α can derive ε then ε ∊ first(α).

Computing first sets

If a is a terminal then a ∊ first(a α).

The empty string ε ∊ first(ε).

If X1X2⋯Xn is a sequence of grammar symbols and ∃i · a ∊ first(Xi) and ∀j < i · ε ∊ first(Xj) then a ∊ first(X1X2⋯Xn).

If n → α is a production then everything in first(α) is in first(n).

Exercise 6

Give all members of the sets:
. first( v )
. first( e )
. first( v e )

e → ( e + e ) | ( e * e ) | v
v → x | ε

Exercise 7

What are the first sets for each non-terminal in the following grammar?

e → t e'
e' → + t e' | ε
t → f t'
t' → * f t' | ε
f → ( e ) | x | y

Answer

first( f ) = { '(', 'x', 'y' }
first( t' ) = { '*', ε }
first( t ) = { '(', 'x', 'y' }
first( e' ) = { '+', ε }
first( e ) = { '(', 'x', 'y' }

Definition of follow sets

Let α and β denote any sequences of grammar symbols.

Terminal a ∊ follow(n) if the start symbol of the grammar can derive a string of grammar symbols in which a immediately follows n.

The set follow(n) never contains ε.

End markers

In predictive parsing, it is useful to mark the end of the input string with a $ symbol.

((x*x)*x)$

$ is equivalent to '\0' in C.

Computing follow sets

If s is the start symbol of the grammar then $ ∊ follow(s).

If n → α x β then everything in first(β) except ε is in follow(x).

If n → α x, or n → α x β and ε ∊ first(β), then everything in follow(n) is in follow(x).

Exercise

Give all members of the sets:
. follow( e )
. follow( v )

e → ( e + e ) | ( e * e ) | v
v → x | ε

Exercise 8

What are the follow sets for each non-terminal in the following grammar?

e → t e'
e' → + t e' | ε
t → f t'
t' → * f t' | ε
f → ( e ) | x | y

Answer

follow( e' ) = { $, ')' }
follow( e ) = { $, ')' }
follow( t' ) = { '+', $, ')' }
follow( t ) = { '+', $, ')' }
follow( f ) = { '*', '+', ')', $ }

Predictive parsing table

For each non-terminal n, a parse table T defines which production of n should be chosen, based on the next input symbol a.

                     Terminals
                     (             +            ⋯
  Non-       e       e → ( e r
  terminals  r                     r → + e )

(Rows are non-terminals; each cell holds the production to choose.)

Predictive parsing table

for each production n → α
  for each a ∊ first(α)
    add n → α to T[n, a]
  if ε ∊ first(α) then
    for each b ∊ follow(n)
      add n → α to T[n, b]

Exercise 9

Construct a predictive parsing table for the following grammar.

e → t e'
e' → + t e' | ε
t → f t'
t' → * f t' | ε
f → ( e ) | x | y

LL(1) grammars

If each cell in the parse table contains at most one entry then a non-backtracking parser can be constructed and the grammar is said to be LL(1).

. First L: left-to-right scanning of the input.
. Second L: a leftmost derivation is constructed.
. The (1): one input symbol of look-ahead is used to decide which grammar production to choose.

Exercise 10

Write a syntax checker for the grammar of Exercise 9, utilising the predictive parsing table.

int e() { ... }

It should return a non-zero value if some prefix of the string pointed to by next conforms to the grammar, otherwise it should return zero.

Answer (part 1)

int e() {
  if (*next == 'x') return t() && e1();
  if (*next == 'y') return t() && e1();
  if (*next == '(') return t() && e1();
  return 0;
}

int e1() {
  if (*next == '+') return eat('+') && t() && e1();
  if (*next == ')') return 1;
  if (*next == '\0') return 1;
  return 0;
}

Answer (part 2)

int t() {
  if (*next == 'x') return f() && t1();
  if (*next == 'y') return f() && t1();
  if (*next == '(') return f() && t1();
  return 0;
}

int t1() {
  if (*next == '+') return 1;
  if (*next == '*') return eat('*') && f() && t1();
  if (*next == ')') return 1;
  if (*next == '\0') return 1;
  return 0;
}

Answer (part 3)

int f() {
  if (*next == 'x') return eat('x');
  if (*next == 'y') return eat('y');
  if (*next == '(') return eat('(') && e() && eat(')');
  return 0;
}
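For completeness, the three answer parts combine with eat and next into a self-contained program sketch. The check() wrapper, which also demands that the whole input is consumed, is an addition for illustration, and the per-symbol branches are merged with || for brevity:

```c
static const char *next;   /* cursor into the input string */

static int eat(char c) {
    if (*next == c) { next++; return 1; }
    return 0;
}

static int e(void), e1(void), t(void), t1(void), f(void);

/* e → t e' */
static int e(void) {
    if (*next == 'x' || *next == 'y' || *next == '(') return t() && e1();
    return 0;
}

/* e' → + t e' | epsilon   (epsilon chosen on ')' or end of input) */
static int e1(void) {
    if (*next == '+') return eat('+') && t() && e1();
    if (*next == ')' || *next == '\0') return 1;
    return 0;
}

/* t → f t' */
static int t(void) {
    if (*next == 'x' || *next == 'y' || *next == '(') return f() && t1();
    return 0;
}

/* t' → * f t' | epsilon   (epsilon chosen on '+', ')' or end of input) */
static int t1(void) {
    if (*next == '*') return eat('*') && f() && t1();
    if (*next == '+' || *next == ')' || *next == '\0') return 1;
    return 0;
}

/* f → ( e ) | x | y */
static int f(void) {
    if (*next == 'x') return eat('x');
    if (*next == 'y') return eat('y');
    if (*next == '(') return eat('(') && e() && eat(')');
    return 0;
}

/* Accept exactly the expressions of the Exercise 9 grammar. */
static int check(const char *s) {
    next = s;
    return e() && *next == '\0';
}
```

Each function inspects only *next before committing to a production, so no position is ever saved or restored.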

(Notice how backtracking is not required.)

Predictive parsing algorithm

Let s be a stack, initially containing the start symbol of the grammar, and let next point to the input string.

while (top(s) != $)
  if (top(s) is a terminal) {
    if (top(s) == *next) { pop(s); next++; }
    else error();
  }
  else if (T[top(s), *next] == X → Y1 ⋯ Yn) {
    pop(s);
    push(s, Yn ⋯ Y1);   /* Y1 on top */
  }

Exercise 11

Give the steps that a predictive parser takes to parse the following input.

x + x * y

For each step (loop iteration), show the input stream, the stack, and the parser action.

Acknowledgements

Plus Stanford University lecture notes by Maggie Johnson and Julie Zelenski.

APPENDIX

Context-free grammars

Have four components:

1. A set of terminal symbols.
2. A set of non-terminal symbols.
3. A set of productions (or rules) of the form:

   n → X1 ⋯ Xn

   where n is a non-terminal and X1 ⋯ Xn is any sequence of terminals, non-terminals, and ε.
4. The start symbol (one of the non-terminals).

Notation

Non-terminals are underlined.

Rather than writing

e → x
e → e + e

we may write:

e → x | e + e

(Also, symbols → and ::= will be used interchangeably.)

Why context-free?

Each class strictly contains the next:

Unrestricted ⊃ Context-Sensitive ⊃ Context-Free ⊃ Regular

Context-free grammars strike a nice balance between expressive power and efficiency of parsing.

Chomsky hierarchy

Let t range over terminals, x and z over non-terminals, and α, β and γ over sequences of terminals, non-terminals, and ε.

Grammar             Valid productions
Unrestricted        α → β
Context-Sensitive   α x γ → α β γ
Context-Free        x → β
Regular             x → t | x → t z | x → ε

Backus-Naur Form

BNF is a standard ASCII notation for specification of context-free grammars whose terminals are ASCII characters. For example:

<op> ::= "+" | "-"
<var> ::= "x" | "y"

The BNF notation can itself be specified in BNF.
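As a sketch of that last claim, a grammar of BNF written in BNF might look as follows. (The non-terminal names here are illustrative choices, not a standard definition.)

```bnf
<grammar>      ::= <rule> | <rule> <grammar>
<rule>         ::= <name> "::=" <alternatives>
<alternatives> ::= <sequence> | <sequence> "|" <alternatives>
<sequence>     ::= <item> | <item> <sequence>
<item>         ::= <name> | <quoted-string>
```

Each rule of the sketch is itself a string derivable from <rule>, which is the sense in which the notation describes itself.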