
Understanding Parsers and Error Handling in Compiler Design

This transcript covers the role of the parser, the main parsing methods, error handling and error recovery strategies, context-free grammars and derivations, grammar and language classifications, ambiguity, left-recursion elimination, left factoring, FIRST and FOLLOW sets, and LL(1) predictive parsing, with worked examples.


Presentation Transcript


  1. Chapter 4 Chang Chi-Chung 2007.5.17

  2. The Role of the Parser • [Diagram: the source program feeds the lexical analyzer, which supplies tokens to the parser on getNextToken requests; the parser builds a parse tree and passes an intermediate representation to the rest of the front end; both phases consult the symbol table]

  3. The Types of Parsers for Grammars • Universal (any CFG) • Cocke-Younger-Kasami • Earley • Top-down (CFG with restrictions) • Build parse trees from the root to the leaves. • Recursive descent (predictive parsing) • LL (Left-to-right scan, Leftmost derivation) methods • Bottom-up (CFG with restrictions) • Build parse trees from the leaves to the root. • Operator precedence parsing • LR (Left-to-right scan, Rightmost derivation) methods • SLR, canonical LR, LALR

  4. Representative Grammars • Non-left-recursive grammar: E → TE’  E’ → + TE’ | ε  T → FT’  T’ → * F T’ | ε  F → ( E ) | id • Left-recursive grammar: E → E + T | T  T → T * F | F  F → ( E ) | id • Ambiguous grammar: E → E + E | E * E | ( E ) | id

  5. Error Handling • A good compiler should assist in identifying and locating errors • Lexical errors • important, compiler can easily recover and continue • Syntax errors • most important for compiler, can almost always recover • Static semantic errors • important, can sometimes recover • Dynamic semantic errors • hard or impossible to detect at compile time, runtime checks are required • Logical errors • hard or impossible to detect

  6. Error Recovery Strategies • Panic mode • Discard input until a token in a set of designated synchronizing tokens is found • Phrase-level recovery • Perform local correction on the input to repair the error • Error productions • Augment grammar with productions for erroneous constructs • Global correction • Choose a minimal sequence of changes to obtain a global least-cost correction
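
To make panic mode concrete, here is a minimal Python sketch; the token list, the synchronizing set {';', '}'}, and the function name are illustrative assumptions for this note, not something the slides prescribe.

    def panic_mode_recover(tokens, pos, sync=(";", "}")):
        """Skip input tokens until one of the designated synchronizing
        tokens is found (or the input is exhausted), then resume parsing.

        tokens: list of token strings; pos: index where the error was detected.
        """
        while pos < len(tokens) and tokens[pos] not in sync:
            pos += 1
        return pos            # index of the synchronizing token, or len(tokens)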

  7. Context-Free Grammar • A context-free grammar is a 4-tuple G = <T, N, P, S> where • T is a finite set of tokens (terminal symbols) • N is a finite set of nonterminals • P is a finite set of productions of the form A → α where A ∈ N and α ∈ (N ∪ T)* • S ∈ N is a designated start symbol
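
The later sketches in this transcript need a concrete grammar object. One possible Python encoding of the 4-tuple, instantiated here with the non-left-recursive grammar of slide 4, is shown below; the names GRAMMAR, START, terminals() and the empty-tuple encoding of ε are choices made for these sketches, not part of the slides.

    # Productions are stored head -> list of bodies; a body is a tuple of
    # grammar symbols, and the empty tuple () stands for ε.
    GRAMMAR = {
        "E":  [("T", "E'")],
        "E'": [("+", "T", "E'"), ()],        # E' → + T E' | ε
        "T":  [("F", "T'")],
        "T'": [("*", "F", "T'"), ()],        # T' → * F T' | ε
        "F":  [("(", "E", ")"), ("id",)],    # F  → ( E ) | id
    }
    START = "E"                              # the designated start symbol S

    def terminals(grammar):
        """T: every symbol that appears in some body but never as a head."""
        symbols = {x for bodies in grammar.values() for body in bodies for x in body}
        return symbols - set(grammar)        # N is simply set(grammar)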

  8. Notational Conventions • Terminals • a, b, c, … ∈ T • example: 0, 1, +, *, id, if • Nonterminals • A, B, C, … ∈ N • example: expr, term, stmt • Grammar symbols • X, Y, Z ∈ (N ∪ T) • Strings of terminals • u, v, w, x, y, z ∈ T* • Strings of grammar symbols (sentential forms) • α, β, γ ∈ (N ∪ T)* • The head of the first production is the start symbol, unless stated otherwise.

  9. Derivations • The one-step derivation ⇒ is defined by αAβ ⇒ αγβ where A → γ is a production in the grammar • In addition, we define • the derivation is leftmost (⇒lm) if α does not contain a nonterminal • the derivation is rightmost (⇒rm) if β does not contain a nonterminal • Transitive closure ⇒* (zero or more steps) • Positive closure ⇒+ (one or more steps)

  10. Sentence and Language • Sentential form • If S ⇒* α in the grammar G, then α is a sentential form of G • Sentence • A sentence is a sentential form of G with no nonterminals. • Language • The language generated by G is its set of sentences. • The language generated by G is defined by L(G) = { w ∈ T* | S ⇒* w } • A language that can be generated by a grammar is said to be a context-free language. • If two grammars generate the same language, the grammars are said to be equivalent.

  11. Example • G = <T, N, P, S> • T = { +, *, (, ), -, id } • N = { E } • P = { E → E+E, E → E*E, E → (E), E → -E, E → id } • S = E • E ⇒lm -E ⇒lm -(E) ⇒lm -(E+E) ⇒lm -(id+E) ⇒lm -(id+id)

  12. Ambiguity • A grammar that produces more than one parse tree for some sentence is said to be ambiguous. • Example • id + id * id with E → E + E | E * E | ( E ) | id • E ⇒ E+E ⇒ id+E ⇒ id+E*E ⇒ id+id*E ⇒ id+id*id • E ⇒ E*E ⇒ E+E*E ⇒ id+E*E ⇒ id+id*E ⇒ id+id*id

  13. Grammar Classification • A grammar G is said to be • Regular (Type 3) • right linear: A → aB or A → a • left linear: A → Ba or A → a • Context free (Type 2) • A → α where A ∈ N and α ∈ (N ∪ T)* • Context sensitive (Type 1) • αAβ → αγβ where A ∈ N, α, β, γ ∈ (N ∪ T)*, |γ| > 0 • Unrestricted (Type 0) • α → β where α, β ∈ (N ∪ T)*, α ≠ ε

  14. Language Classification • The set of all languages generated by grammars G of type T • L(T) = { L(G) | G is of type T } • L(regular) ⊂ L(context free) ⊂ L(context sensitive) ⊂ L(unrestricted)

  15. Example • A regular grammar for ( a | b )* a b b • [Figure: the corresponding automaton with states A0, A1, A2, A3] • A0 → aA0 | bA0 | aA1 • A1 → bA2 • A2 → bA3 • A3 → ε

  16. Example • L = { aⁿbⁿ | n ≥ 1 } is context free but not regular. • [Figure: a finite automaton D with a path labeled aⁱ from S0 to Si, a path labeled aʲ⁻ⁱ returning to Si, and a path labeled bⁱ to acceptance] • Since aⁱ and aʲ reach the same state Si, aʲbⁱ can be accepted by D, but aʲbⁱ is not in L. • Finite automata cannot count. A CFG can count two items but not three.

  17. Writing a Predictive Parsing Grammar • Eliminating Ambiguity • Elimination of Left Recursion • Left Factoring ( Elimination of Left Factor ) • Compute FIRST and FOLLOW • Two variants: • Recursive (recursive calls) • Non-recursive (table-driven)

  18. Eliminating Ambiguity • Dangling-else Grammar stmt → if expr then stmt | if expr then stmt else stmt | other • if E1 then S1 else if E2 then S2 else S3

  19. Eliminating Ambiguity (2) • if E1 then if E2 then S1 else S2

  20. Eliminating Ambiguity (3) • Rewrite the dangling-else grammar stmt → matched_stmt | open_stmt  matched_stmt → if expr then matched_stmt else matched_stmt | other  open_stmt → if expr then stmt | if expr then matched_stmt else open_stmt

  21. Elimination of Left Recursion • Productions of the form A → Aα | β are left recursive • Non-left-recursive equivalent: A → βA’  A’ → αA’ | ε • When one of the productions in a grammar is left recursive, a predictive parser loops forever on certain inputs

  22. Immediate Left-Recursion Elimination • Group the productions as A → Aα1 | Aα2 | … | Aαm | β1 | β2 | … | βn where no βi begins with an A • Replace the A-productions by A → β1A’ | β2A’ | … | βnA’  A’ → α1A’ | α2A’ | … | αmA’ | ε

  23. Example • Left-recursive grammar A → Aα | β | γ | Aδ • Into a right-recursive production A → βAR | γAR  AR → αAR | δAR | ε
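
A small Python sketch of the transformation on slides 21–23, using the tuple-based grammar encoding introduced after slide 7; the helper name and the convention of priming the new nonterminal are this sketch's own assumptions.

    def eliminate_immediate_left_recursion(head, bodies):
        """Rewrite A → Aα1 | … | Aαm | β1 | … | βn as
        A → β1A' | … | βnA' and A' → α1A' | … | αmA' | ε
        (ε encoded as the empty tuple). Assumes at least one βi exists."""
        recursive    = [body[1:] for body in bodies if body and body[0] == head]
        nonrecursive = [body for body in bodies if not body or body[0] != head]
        if not recursive:
            return {head: list(bodies)}          # nothing to eliminate
        new = head + "'"
        return {
            head: [beta + (new,) for beta in nonrecursive],
            new:  [alpha + (new,) for alpha in recursive] + [()],
        }

    # eliminate_immediate_left_recursion("E", [("E", "+", "T"), ("T",)])
    # gives E → TE' and E' → +TE' | ε, matching slides 4 and 23.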

  24. Non-Immediate Left-Recursion • The Grammar S → Aa | b  A → Ac | Sd | ε • The nonterminal S is left recursive, because S ⇒ Aa ⇒ Sda, but S is not immediately left recursive.

  25. Elimination of Left Recursion • Eliminating left recursion algorithm
    Arrange the nonterminals in some order A1, A2, …, An
    for (each i from 1 to n) {
      for (each j from 1 to i-1) {
        replace each production Ai → Ajγ with Ai → δ1γ | δ2γ | … | δkγ,
        where Aj → δ1 | δ2 | … | δk are all the current Aj-productions
      }
      eliminate the immediate left recursion among the Ai-productions
    }
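
A Python sketch of this algorithm, reusing eliminate_immediate_left_recursion from the previous sketch; it assumes the usual preconditions (no cycles A ⇒+ A and no ε-productions other than the ones the transformation itself introduces).

    def eliminate_left_recursion(grammar, order):
        """order: the chosen ordering A1, …, An of the nonterminals."""
        g = {a: [tuple(body) for body in bodies] for a, bodies in grammar.items()}
        for i, ai in enumerate(order):
            for aj in order[:i]:
                expanded = []
                for body in g[ai]:
                    if body and body[0] == aj:
                        # replace Ai → Aj γ by Ai → δ γ for every current Aj → δ
                        expanded.extend(tuple(delta) + body[1:] for delta in g[aj])
                    else:
                        expanded.append(body)
                g[ai] = expanded
            g.update(eliminate_immediate_left_recursion(ai, g[ai]))
        return g

    # On the grammar of slides 24/27, S → Aa | b and A → Ac | Sd | ε, with
    # order ["S", "A"], this yields S → Aa | b, A → bdA' | A', A' → cA' | adA' | ε.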

  26. Example A → BC | a  B → CA | Ab  C → AB | CC | a

  27. Exercise • The grammar S → Aa | b  A → Ac | Sd | ε • Answer • Substituting S in A → Sd gives A → Ac | Aad | bd | ε; eliminating the immediate left recursion then gives S → Aa | b  A → bdA’ | A’  A’ → cA’ | adA’ | ε

  28. Left Factoring • Left Factoring is a grammar transformation. • Predictive Parsing • Top-down Parsing • Replace productions A → αβ1 | αβ2 | … | αβn | γ with A → αAR | γ  AR → β1 | β2 | … | βn

  29. Example • The Grammar stmt → if expr then stmt | if expr then stmt else stmt • Replace with stmt → if expr then stmt stmts  stmts → else stmt | ε

  30. Exercise • The following grammar S → iEtS | iEtSeS | a  E → b • Answer S → iEtSS’ | a  S’ → eS | ε  E → b
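
A rough Python sketch of one left-factoring step, in the same grammar encoding as before; grouping alternatives by their first symbol and priming the new nonterminal are assumptions of this sketch, not something the slides specify.

    from collections import defaultdict

    def left_factor_once(head, bodies):
        """Rewrite A → αβ1 | … | αβn | γ as A → αA' | γ and A' → β1 | … | βn,
        where α is the longest common prefix of a group of alternatives that
        share their first symbol (ε encoded as the empty tuple)."""
        groups = defaultdict(list)
        for body in bodies:
            groups[body[0] if body else None].append(tuple(body))
        for first_sym, group in groups.items():
            if first_sym is None or len(group) < 2:
                continue
            prefix = group[0]
            for body in group[1:]:                       # longest common prefix
                k = 0
                while k < min(len(prefix), len(body)) and prefix[k] == body[k]:
                    k += 1
                prefix = prefix[:k]
            new = head + "'"
            kept = [tuple(b) for b in bodies if tuple(b) not in group]
            return {
                head: kept + [prefix + (new,)],
                new:  [b[len(prefix):] for b in group],
            }
        return {head: [tuple(b) for b in bodies]}        # nothing left to factor

    # left_factor_once("S", [("i","E","t","S"), ("i","E","t","S","e","S"), ("a",)])
    # reproduces the slide 30 answer: S → iEtSS' | a and S' → ε | eS.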

  31. Non-Context-Free Grammar Constructs • A few syntactic constructs found in typical programming languages cannot be specified using grammars alone. • Example: checking that identifiers are declared before they are used in a program. • The abstract language is L1 = { wcw | w is in (a|b)* } • aabcaab is in L1, and L1 is not context free. • C/C++ and Java do not distinguish among identifiers that are different character strings; all identifiers are represented by a token such as id in a grammar. • The semantic-analysis phase checks that identifiers are declared before they are used.

  32. Top-Down Parsing • LL methods and recursive-descent parsing • Left-to-right scan, Leftmost derivation • Creating the nodes of the parse tree in preorder (depth-first) • Grammar E → T+T  T → (E)  T → -E  T → id • Leftmost derivation E ⇒lm T+T ⇒lm id+T ⇒lm id+id • [Figure: the parse tree for id+id grown step by step]

  33. Top-Down Parsing: Recursive-Descent, Predictive, and LL Parsing • Given a grammar G E → TE’  E’ → + TE’ | ε  T → FT’  T’ → * F T’ | ε  F → ( E ) | id

  34. FIRST and FOLLOW • [Figure: a parse tree for S in which A derives a string beginning with terminal c, and terminal a appears immediately to the right of A] • c is in FIRST(A) • a is in FOLLOW(A)

  35. FIRST and FOLLOW • The construction of both top-down and bottom-up parsers is aided by two functions, FIRST and FOLLOW, associated with a grammar G. • During top-down parsing, FIRST and FOLLOW allow us to choose which production to apply. • During panic-mode error recovery, sets of tokens produced by FOLLOW can be used as synchronizing tokens.

  36. FIRST • FIRST(α) • The set of terminals that begin strings derived from α • FIRST(a) = { a } if a ∈ T • FIRST(ε) = { ε } • FIRST(A) = ∪ FIRST(α) over all productions A → α in P • FIRST(X1X2…Xk): if ε ∈ FIRST(Xj) for all j = 1, …, i-1, then add the non-ε symbols of FIRST(Xi) to FIRST(X1X2…Xk); if ε ∈ FIRST(Xj) for all j = 1, …, k, then add ε to FIRST(X1X2…Xk)

  37. FIRST (1) • By the definition of FIRST, we can compute FIRST(X) • If X ∈ T, then FIRST(X) = {X}. • If X ∈ N and X → ε, then add ε to FIRST(X). • If X ∈ N and X → Y1 Y2 … Yn, then add all non-ε elements of FIRST(Y1) to FIRST(X); if ε ∈ FIRST(Y1), then add all non-ε elements of FIRST(Y2) to FIRST(X), …; if ε ∈ FIRST(Yi) for all i = 1, …, n, then add ε to FIRST(X).
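
These rules translate naturally into a fixed-point computation. One possible Python sketch, building on GRAMMAR and terminals() from the sketch after slide 7 and marking ε with the string "ε", is shown below; the helper names are this sketch's own.

    EPS = "ε"    # marker for ε inside FIRST sets

    def first_of_string(symbols, first):
        """FIRST(X1 X2 … Xk): take the non-ε symbols of FIRST(Xi) as long as
        every earlier Xj can derive ε; add ε only if all of them can."""
        result = set()
        for x in symbols:
            result |= first[x] - {EPS}
            if EPS not in first[x]:
                return result
        result.add(EPS)                      # every Xi (or the empty body) derives ε
        return result

    def compute_first(grammar):
        """Iterate the rules of slide 37 until no FIRST set changes."""
        first = {t: {t} for t in terminals(grammar)}
        first.update({a: set() for a in grammar})
        changed = True
        while changed:
            changed = False
            for a, bodies in grammar.items():
                for body in bodies:
                    before = len(first[a])
                    first[a] |= first_of_string(body, first)
                    changed |= len(first[a]) != before
        return first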

  38. FOLLOW • FOLLOW(A) • the set of terminals that can immediately follow nonterminal A • FOLLOW(A): for all (B → αAβ) ∈ P, add FIRST(β) − {ε} to FOLLOW(A); for all (B → αAβ) ∈ P with ε ∈ FIRST(β), add FOLLOW(B) to FOLLOW(A); for all (B → αA) ∈ P, add FOLLOW(B) to FOLLOW(A); if A is the start symbol S, then add $ to FOLLOW(A)

  39. FOLLOW (1) • By the definition of FOLLOW, we can compute FOLLOW(X) • Put $ into FOLLOW(S). • For each A → αBβ, add all non-ε elements of FIRST(β) to FOLLOW(B). • For each A → αB, or A → αBβ where ε ∈ FIRST(β), add all of FOLLOW(A) to FOLLOW(B).
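
A matching Python sketch for FOLLOW; it reuses EPS and the FIRST sets from the previous sketch and walks each production body right to left, which is just one convenient way of applying the three rules above.

    def compute_follow(grammar, start, first):
        """Iterate until no FOLLOW set changes."""
        follow = {a: set() for a in grammar}
        follow[start].add("$")                        # rule 1: $ ∈ FOLLOW(S)
        changed = True
        while changed:
            changed = False
            for a, bodies in grammar.items():
                for body in bodies:
                    trailer = follow[a]               # what may follow the end of this body
                    for x in reversed(body):
                        if x in grammar:              # x is a nonterminal
                            before = len(follow[x])
                            follow[x] |= trailer
                            changed |= len(follow[x]) != before
                            if EPS in first[x]:       # x can vanish: keep the old trailer too
                                trailer = trailer | (first[x] - {EPS})
                            else:
                                trailer = first[x] - {EPS}
                        else:
                            trailer = {x}             # a terminal starts a fresh trailer
        return follow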

  40. Example • Given a grammar G E → TE’ E’ → + TE’ | ε T → FT’ T’ → * F T’ | ε F → ( E ) | id
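
As a check on the sketches above, running them on the GRAMMAR object for this grammar gives the standard FIRST and FOLLOW sets (shown as comments):

    first  = compute_first(GRAMMAR)
    follow = compute_follow(GRAMMAR, START, first)
    # FIRST(E)  = FIRST(T) = FIRST(F) = { (, id }
    # FIRST(E') = { +, ε }        FIRST(T') = { *, ε }
    # FOLLOW(E) = FOLLOW(E') = { ), $ }
    # FOLLOW(T) = FOLLOW(T') = { +, ), $ }
    # FOLLOW(F) = { +, *, ), $ }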

  41. Recursive Descent Parsing • Every nonterminal has one (recursive) procedure responsible for parsing the nonterminal’s syntactic category of input tokens • When a nonterminal has multiple productions, each production is implemented in a branch of a selection statement based on input look-ahead information

  42. Procedure in Recursive-Descent Parsing
    void A() {
      Choose an A-production, A → X1 X2 … Xk;
      for (i = 1 to k) {
        if (Xi is a nonterminal)
          call procedure Xi();
        else if (Xi equals the current input symbol a)
          advance the input to the next symbol;
        else
          /* an error has occurred */;
      }
    }

  43. Using FIRST and FOLLOW to Write a Recursive Descent Parser • expr → term rest  rest → + term rest | - term rest | ε  term → id • FIRST(+ term rest) = { + }  FIRST(- term rest) = { - }  FOLLOW(rest) = { $ }
    rest() {
      if (lookahead in FIRST(+ term rest)) { match('+'); term(); rest(); }
      else if (lookahead in FIRST(- term rest)) { match('-'); term(); rest(); }
      else if (lookahead in FOLLOW(rest)) return;
      else error();
    }
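
Along the same lines, a runnable recursive-descent parser for the grammar of slides 33/40 might look as follows in Python; it assumes the lexer has already turned every identifier into the single token "id" and that "$" marks the end of input (class and method names are this sketch's own).

    class ParseError(Exception):
        pass

    class Parser:
        """Recursive descent for E → TE', E' → +TE' | ε,
        T → FT', T' → *FT' | ε, F → (E) | id."""
        def __init__(self, tokens):
            self.tokens = list(tokens) + ["$"]
            self.pos = 0

        def lookahead(self):
            return self.tokens[self.pos]

        def match(self, terminal):
            if self.lookahead() != terminal:
                raise ParseError(f"expected {terminal!r}, got {self.lookahead()!r}")
            self.pos += 1

        def parse(self):
            self.E()
            self.match("$")                      # the whole input must be consumed

        def E(self):
            self.T(); self.E_prime()

        def E_prime(self):
            if self.lookahead() == "+":          # FIRST(+TE') = { + }
                self.match("+"); self.T(); self.E_prime()
            # else take ε; lookahead should be in FOLLOW(E') = { ), $ }

        def T(self):
            self.F(); self.T_prime()

        def T_prime(self):
            if self.lookahead() == "*":          # FIRST(*FT') = { * }
                self.match("*"); self.F(); self.T_prime()
            # else take ε; lookahead should be in FOLLOW(T') = { +, ), $ }

        def F(self):
            if self.lookahead() == "(":
                self.match("("); self.E(); self.match(")")
            else:
                self.match("id")

    # Parser(["id", "+", "id", "*", "id"]).parse() accepts; a stray ')' raises ParseError.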

  44. LL(1)

  45. LL(1) Grammar • Predictive parsers, that is, recursive-descent parsers needing no backtracking, can be constructed for a class of grammars called LL(1) • First “L” means scanning the input from left to right. • Second “L” means producing a leftmost derivation. • “1” means using one input symbol of lookahead at each step to make parsing action decisions. • Not left-recursive. • Not ambiguous.

  46. LL(1) • A grammar G is LL(1) if it is not left recursive and for each collection of productions A → α1 | α2 | … | αn for nonterminal A the following holds: • FIRST(αi) ∩ FIRST(αj) = ∅ for all i ≠ j (what happens if the intersection is not empty?) • if αi ⇒* ε then • αj does not derive ε, for all j ≠ i • FIRST(αj) ∩ FOLLOW(A) = ∅ for all j ≠ i
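
These conditions can be checked mechanically. A small Python sketch, reusing first_of_string, EPS and the FOLLOW sets from the earlier sketches (the function name is this sketch's own):

    def is_ll1(grammar, first, follow):
        """For every pair of distinct alternatives A → αi | αj: their FIRST sets
        must be disjoint (so at most one of them derives ε), and if αi ⇒* ε
        then FIRST(αj) ∩ FOLLOW(A) must be empty."""
        for a, bodies in grammar.items():
            firsts = [first_of_string(body, first) for body in bodies]
            for i, fi in enumerate(firsts):
                for j, fj in enumerate(firsts):
                    if i == j:
                        continue
                    if fi & fj:                         # overlapping alternatives
                        return False
                    if EPS in fi and fj & follow[a]:    # ε-alternative clashes with FOLLOW(A)
                        return False
        return True

    # is_ll1(GRAMMAR, first, follow) is True for the grammar of slide 40.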

  47. Example

  48. Non-Recursive Predictive Parsing • Table-Driven Parsing • Given an LL(1) grammar G = <N, T, P, S>, construct a table M[A, a] for A ∈ N, a ∈ T, and use a driver program with a stack • [Figure: the input buffer and stack feed the predictive parsing program (driver), which consults parsing table M and produces the output]
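
A compact Python sketch of such a driver; it assumes the parsing table maps a (nonterminal, terminal) pair to a single production body, with ε again encoded as the empty tuple (how that table is built is the subject of the next slide).

    def predictive_parse(table, start, tokens):
        """Run the table-driven LL(1) driver over a list of tokens."""
        stack = ["$", start]
        tokens = list(tokens) + ["$"]
        pos = 0
        while stack:
            top = stack.pop()
            a = tokens[pos]
            if top == a:                        # terminal (or $) matches the input
                pos += 1
            elif (top, a) in table:             # expand nonterminal by M[top, a]
                stack.extend(reversed(table[(top, a)]))
            else:                               # empty table entry: syntax error
                raise SyntaxError(f"no entry M[{top!r}, {a!r}]")
        return pos == len(tokens)               # True when all input was consumed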

  49. Predictive Parsing Table Algorithm
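
A Python sketch of the standard table construction, built on the FIRST/FOLLOW sketches above: for each production A → α, enter it under every terminal in FIRST(α), and, if ε ∈ FIRST(α), also under every symbol in FOLLOW(A); a clashing entry means the grammar is not LL(1).

    def build_ll1_table(grammar, first, follow):
        """Construct the predictive parsing table M[A, a]."""
        table = {}
        for head, bodies in grammar.items():
            for body in bodies:
                fb = first_of_string(body, first)
                targets = (fb - {EPS}) | (follow[head] if EPS in fb else set())
                for t in targets:
                    key = (head, t)
                    if key in table and table[key] != body:
                        raise ValueError(f"not LL(1): conflict at M[{head}, {t}]")
                    table[key] = body
        return table

    # table = build_ll1_table(GRAMMAR, first, follow)
    # predictive_parse(table, START, ["id", "+", "id", "*", "id"])  ->  True
    # predictive_parse(table, START, ["id", "+", ")"])              ->  SyntaxError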
