Pushdown automata (PDAs).

  1. Pushdown automata (PDAs). • The automata that accept context-free languages are called pushdown automata (PDAs). • They are essentially just NFAs equipped with a stack-based memory. • Compared with NFAs, they have two extra components. • That is, a PDA has 7 components, conventionally called Q, Σ, Γ, δ, q0, z, and F.

  2. Components of a PDA • Q, Σ, δ, q0, and F play the same role as they do in NFAs. • Γ is the stack alphabet. • The special symbol z marks the bottom of the stack. • The transition function δ needs to say how each move of the PDA affects the stack.

  3. Transition function of a PDA • The (nondeterministic) transition function δ maps Q × (Σ ∪ {λ}) × Γ to the set of finite subsets of Q × Γ*. • That is, its input is a state, an input symbol (or λ), and the top symbol of the stack. • Its output is a finite set of (state, stack string) pairs, representing the available choices. • For any choice (q, γ), the PDA may move to state q and replace the top stack symbol by the string γ. • Since a move needs a top stack symbol, the stack can't be emptied until the computation ends.
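
One way to keep the seven components straight is to store them in a small data structure. The sketch below is purely illustrative (Python, with field names of my choosing, not Linz's notation): δ is a dictionary mapping (state, input symbol or None standing for λ, top stack symbol) to a set of (state, replacement string) pairs.

  from dataclasses import dataclass

  # A PDA M = (Q, Σ, Γ, δ, q0, z, F) as plain Python data.
  @dataclass
  class PDA:
      states: set            # Q
      input_alphabet: set    # Σ
      stack_alphabet: set    # Γ
      delta: dict            # δ: (q, a or None, X) -> set of (p, γ); γ replaces X on the stack
      start_state: str       # q0
      stack_start: str       # z, the bottom-of-stack marker
      final_states: set      # F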

  4. A PDA example • A PDA accepting the nonempty even-length palindromes {ww^R | w ∈ {0,1}+} could work by • pushing input symbols onto the stack, until it guesses that it's at the middle of the string, • then moving to a popping state without consuming an input symbol, • then reading input symbols and popping matching stack symbols until only z remains on the stack • If anything goes wrong, the computation halts without reaching an accepting state.

  5. Nondeterminism and PDAs • Note that nondeterminism is essential to the example above • the PDA can't deterministically find the middle of the input string by reading it from left to right • A PDA working as described above is treated on pp. 181-2 of Linz (there over the alphabet {a, b}) • In that treatment q0 is the pushing state, q1 is the popping state, and q2 is an accepting state

  6. Transitions for our sample PDA
  • Pushing cases:
  • δ(q0, e, z) = {(q0, ez)} for e ∈ {0,1}
  • δ(q0, e, f) = {(q0, ef)} for e ∈ {0,1}, f ∈ {0,1}
  • Transition to popping:
  • δ(q0, λ, e) = {(q1, e)} for e ∈ {0,1}
  • Popping and accepting:
  • δ(q1, e, e) = {(q1, λ)} for e ∈ {0,1}
  • δ(q1, λ, z) = {(q2, z)}
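
For illustration (this table is my transcription, not Linz's notation), the same transitions can be written in the dictionary form sketched earlier, with None marking a λ-move and "" marking a pop:

  # Transition table of the sample PDA over {0, 1}.
  delta = {}
  for e in "01":
      delta[("q0", e, "z")] = {("q0", e + "z")}   # push the first symbol above z
      for f in "01":
          delta[("q0", e, f)] = {("q0", e + f)}   # keep pushing input symbols
      delta[("q0", None, e)] = {("q1", e)}        # guess the middle, switch to popping
      delta[("q1", e, e)] = {("q1", "")}          # pop a matching symbol
  delta[("q1", None, "z")] = {("q2", "z")}        # only z left: accept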

  7. PDA computations • The progress of a computation may be described by listing the current state, the remaining input, and the stack contents. • So an accepting computation of the PDA above on the string 011110 may be written
  (q0, 011110, z) ⊢ (q0, 11110, 0z) ⊢ (q0, 1110, 10z) ⊢ (q0, 110, 110z) ⊢ (q1, 110, 110z) ⊢ (q1, 10, 10z) ⊢ (q1, 0, 0z) ⊢ (q1, λ, z) ⊢ (q2, λ, z)

  8. Instantaneous descriptions (IDs) • Note that in the above example, the stack contents are written with the top to the left. • Also in the example, the symbol ⊢ plays the role that ⇒ plays in CFG derivations. • Once ⊢ is defined in terms of δ, we may define acceptance in terms of the transitive closure of ⊢. • A useful name for the ordered triple (state, remaining input, stack contents) is instantaneous description, or ID.

  9. PDA computations • Note that the initial ID in a computation is always (q0, x, z), where x is the input string. • The relation ⊢ on IDs is defined as follows: • (q, aw, Xβ) ⊢ (p, w, αβ) iff (p, α) ∈ δ(q, a, X), • where a = λ or a ∈ Σ

  10. PDA computations and acceptance • The (reflexive) transitive closure ⊢* of the ⊢ relation then represents a sequence of 0 or more legal moves. • The PDA accepts x iff there is a sequence of legal moves from the initial configuration to an accepting configuration, or equivalently • (q0, x, z) ⊢* (qf, λ, α), for some qf ∈ F and α ∈ Γ*. • Finally, for a PDA P, L(P) is defined as {x | P accepts x}.
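
The ⊢* search can be run mechanically. The sketch below is an illustrative breadth-first search over IDs (it uses the delta dictionary above; the function name is mine). It terminates for this particular PDA because its λ-moves never grow the stack, so only finitely many IDs are reachable from the initial ID.

  from collections import deque

  def accepts(delta, x, start="q0", stack_start="z", finals=frozenset({"q2"})):
      """Explore all IDs reachable from (q0, x, z) via the ⊢ relation."""
      start_id = (start, x, stack_start)
      seen, frontier = {start_id}, deque([start_id])
      while frontier:
          q, w, stack = frontier.popleft()
          if q in finals and w == "":                 # accepting configuration reached
              return True
          if not stack:                               # an empty stack allows no further moves
              continue
          top, rest = stack[0], stack[1:]
          moves = []
          if w:                                       # moves that consume an input symbol
              moves += [(p, g, w[1:]) for (p, g) in delta.get((q, w[0], top), ())]
          moves += [(p, g, w) for (p, g) in delta.get((q, None, top), ())]   # λ-moves
          for p, g, w2 in moves:
              new_id = (p, w2, g + rest)              # g replaces the old top symbol
              if new_id not in seen:
                  seen.add(new_id)
                  frontier.append(new_id)
      return False

  # accepts(delta, "011110") and accepts(delta, "0110") are True; accepts(delta, "010") is False.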

  11. From CFGs to PDAs • For any CFG, there is an equivalent PDA • Theorem 7.1 of Linz shows how to construct the PDA for CFGs in Greibach Normal Form • The GNF assumption isn’t needed • In either case, only one state q1 is needed in addition to the start state q0 and a final state q2

  12. From CFGs to PDAs • The state q0 simply pushes S without reading an input symbol, and moves to q1. • If z appears at the top of the stack in state q1, q2 is entered without reading input • For every terminal symbol b, there's a transition (q1, λ) ∈ δ(q1, b, b) • For every grammar rule A → γ, there's a transition (q1, γ) ∈ δ(q1, λ, A)
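
These four bullets translate almost line for line into code. The sketch below uses the same dictionary convention as before (the function name and rule format are mine; a rule is a (variable, right-hand side) pair with the right-hand side written as a string):

  def cfg_to_pda(rules, terminals, start_symbol="S"):
      """Build the three-state PDA's transition table from a CFG."""
      delta = {("q0", None, "z"): {("q1", start_symbol + "z")}}    # push S and move to q1
      delta[("q1", None, "z")] = {("q2", "z")}                     # bare z on top: accept
      for b in terminals:                                          # match an input symbol against the stack
          delta.setdefault(("q1", b, b), set()).add(("q1", ""))
      for A, gamma in rules:                                       # expand a variable on the stack
          delta.setdefault(("q1", None, A), set()).add(("q1", gamma))
      return delta

  # e.g. cfg_to_pda([("S", "0S0"), ("S", "1S1"), ("S", "")], "01") for S → 0S0 | 1S1 | λ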

  13. From PDAs to CFGs • Could constructing a CFG equivalent to a given PDA just reverse the above steps? • That is, should a move that replaces A by BC on the stack correspond to a rule A → BC ? • The answer is yes, and the same proof steps (in reverse) still work • except for several complications

  14. From PDAs to CFGs -- issues • We have to start the RHS with whatever terminal symbol the PDA move consumes • The PDA’s goal is to get to a final state, not empty the stack (cf. Linz, point #1, p. 190) • The CFG, like the PDA, needs to ensure that the state where the popping of B ends is the state where the popping of C begins • and similarly for a longer RHS • cf. Linz, point #2, p. 190

  15. From PDAs to CFGs – handling the issues • We’ll deal with the stack vs. state issue (Linz’s #1) after describing our CFG’s form • In any case, we also need to ensure • that B starts popping in the destination state of the PDA move • that A and C stop popping in the same state • and similarly for longer RHS’s (Linz’s #2) • So we add two state components to each nonterminal

  16. From PDAs to CFGs – detail
  • Our new CFGs thus have nonterminals (in addition to S) of the form [q,A,p]
  • where A is a stack symbol and p and q are states
  • For a PDA with n states, the move (p, γ) ∈ δ(q, a, A) corresponds to n^k rules if |γ| = k
  • If γ = γ1γ2γ3…γk, each rule has the form
  • [q,A,rk] → a [p,γ1,r1] [r1,γ2,r2] [r2,γ3,r3] … [rk-1,γk,rk]
  • with one rule for each choice of the states r1, …, rk
  • But for popping moves, p appears on the LHS
  • That is, the move (p, λ) ∈ δ(q, a, A) gives only the rule
  • [q,A,p] → a
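
The rule generation is mechanical enough to write out. The following sketch is my own transcription of the construction (nonterminals are stored as (q, A, p) triples and a rule as a (head, body) pair); it enumerates the n^k choices of intermediate states:

  from itertools import product

  def pda_to_cfg(delta, states):
      """Triple construction: one rule per PDA move and per choice of r1, ..., rk."""
      rules = []
      for (q, a, A), moves in delta.items():
          for (p, gamma) in moves:
              a_part = [] if a is None else [a]            # a λ-move contributes no leading terminal
              if gamma == "":                              # popping move: only [q,A,p] -> a
                  rules.append(((q, A, p), a_part))
                  continue
              for rs in product(states, repeat=len(gamma)):    # choose r1, ..., rk
                  lefts = (p,) + rs[:-1]
                  body = a_part + [(lefts[i], gamma[i], rs[i]) for i in range(len(gamma))]
                  rules.append(((q, A, rs[-1]), body))
      return rules

Applied to the modified sample PDA introduced on the next slide (with states q0, q1, q2), this should yield 63 rules; adding the S-rule S → [q0,z,q2] by hand gives the total of 64 counted a couple of slides below.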

  17. PDAs to CFGs – an example • Suppose we modify the sample PDA above (from Linz, pp. 180-1) so that its last move is • δ(q1, λ, z) = {(q2, λ)} • Then the PDA reaches a final state iff it empties its stack • And we don't need the extra states and stack symbol when converting to a CFG.

  18. PDAs to CFGs – an example
  • For the modified PDA, our CFG has a single S-rule S → [q0,z,q2]
  • since we can use the old start & final states
  • The two families of pushing moves give the rules
  • [q0,z,r2] → e [q0,e,r1] [r1,z,r2] for e ∈ {0,1}; r1, r2 ∈ Q
  • [q0,f,r2] → e [q0,e,r1] [r1,f,r2] for e, f ∈ {0,1}; r1, r2 ∈ Q
  • The family of transition moves gives the rules
  • [q0,e,r1] → [q1,e,r1] for e ∈ {0,1}; r1 ∈ Q

  19. PDAs to CFGs – end of sample construction • The family of popping moves gives the rules [q1,e,q1] → e for e ∈ {0,1} • The new move gives the rule [q1,z,q2] → λ • So we get 1+18+36+6+2+1 = 64 rules in all • but most will disappear after simplification • this large reduction after simplification is very common

  20. Simplifying the new CFG
  • The symbols that generate terminal strings, found level by level, are
  • at level 1: [q1,0,q1], [q1,1,q1], [q1,z,q2]
  • at level 2: [q0,0,q1], [q0,1,q1]
  • at level 3: [q0,z,q2]
  • at level 4: S
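
This level-by-level listing is the usual marking computation for generating symbols. An illustrative sketch, written to consume the (head, body) rule pairs produced by the pda_to_cfg sketch above:

  def generating_symbols(rules, is_terminal):
      """Repeatedly mark heads whose bodies consist only of terminals and marked symbols."""
      marked, changed = set(), True
      while changed:
          changed = False
          for head, body in rules:
              if head not in marked and all(is_terminal(s) or s in marked for s in body):
                  marked.add(head)
                  changed = True
      return marked

  # With is_terminal = lambda s: not isinstance(s, tuple), the first pass marks
  # [q1,0,q1], [q1,1,q1], and [q1,z,q2]; the second marks [q0,0,q1] and [q0,1,q1];
  # the third marks [q0,z,q2]; with the S-rule added, S is marked last.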

  21. Further simplifying the new CFG
  • We are left with the rules
  • S → [q0,z,q2]
  • [q1,0,q1] → 0, [q1,1,q1] → 1, [q1,z,q2] → λ
  • [q0,0,q1] → [q1,0,q1] | 0 [q0,0,q1] [q1,0,q1] | 1 [q0,1,q1] [q1,0,q1]
  • [q0,1,q1] → [q1,1,q1] | 0 [q0,0,q1] [q1,1,q1] | 1 [q0,1,q1] [q1,1,q1]
  • [q0,z,q2] → 0 [q0,0,q1] [q1,z,q2] | 1 [q0,1,q1] [q1,z,q2]

  22. Final simplification of the new CFG
  • A little substitution gives the rules
  • S → 0 [q0,0,q1] | 1 [q0,1,q1]
  • [q0,0,q1] → 0 | 0 [q0,0,q1] 0 | 1 [q0,1,q1] 0
  • [q0,1,q1] → 1 | 0 [q0,0,q1] 1 | 1 [q0,1,q1] 1
  • Using Z for [q0,0,q1] and W for [q0,1,q1] gives
  • S → 0Z | 1W
  • Z → 0 | 0Z0 | 1W0
  • W → 1 | 0Z1 | 1W1
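
As a quick sanity check of my own (not part of the slides), a brute-force enumeration of short derivations from the final grammar should produce exactly the nonempty even-length palindromes over {0,1}:

  from itertools import product

  def terminal_strings(rules, start, max_len):
      """Expand the leftmost variable in every sentential form; collect short terminal strings.
      Pruning by length is safe here because no production shortens a sentential form."""
      found, todo = set(), [start]
      while todo:
          form = todo.pop()
          if len(form) > max_len:
              continue
          i = next((j for j, s in enumerate(form) if s in rules), None)
          if i is None:
              found.add(form)
              continue
          for rhs in rules[form[i]]:
              todo.append(form[:i] + rhs + form[i + 1:])
      return found

  grammar = {"S": ["0Z", "1W"], "Z": ["0", "0Z0", "1W0"], "W": ["1", "0Z1", "1W1"]}
  palindromes = {"".join(w) + "".join(reversed(w))
                 for n in (1, 2, 3) for w in product("01", repeat=n)}
  assert terminal_strings(grammar, "S", 6) == palindromes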

  23. Deterministic PDAs • A deterministic PDA (DPDA) is a PDA where • each value of δ is a set of size at most 1, and • for no state q, input symbol c, and stack symbol b are δ(q, λ, b) and δ(q, c, b) both nonempty • Some CFLs cannot be recognized by DPDAs • (e.g., {ww^R | w ∈ {0,1}*}). • A deterministic CFL (DCFL) is a language accepted by some DPDA, • so the DCFLs are a proper subset of the CFLs.
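
The two conditions translate directly into a check over a transition dictionary. A sketch, under the same conventions as the earlier code (the function name is mine):

  def is_deterministic(delta, input_alphabet):
      """Check the two DPDA conditions."""
      # 1. No choice: every defined value of δ has at most one element.
      if any(len(moves) > 1 for moves in delta.values()):
          return False
      # 2. No λ-move competes with an input-consuming move from the same (state, stack top).
      for (q, a, top) in delta:
          if a is None and any(delta.get((q, c, top)) for c in input_alphabet):
              return False
      return True

  # The sample PDA fails condition 2: δ(q0, λ, 0) and δ(q0, 0, 0) are both nonempty.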

  24. Regular languages as DCFLs • All regular languages can be recognized by DPDAs (whose stacks always contain only z, and whose moves mimic moves of a DFA). • So every regular language is a DCFL.

  25. DCFLs and ambiguity • All DCFLs have unambiguous grammars • our PDA to CFG construction gives one when given a DPDA. • CFLs with unambiguous grammars needn't be DCFLs – e.g., {ww^R | w ∈ {0,1}*} • with rules S → 0S0 | 1S1 | λ

  26. Closure properties for CFLs. • CFLs are closed under union, concatenation, and Kleene closure. • We've seen the proof when observing that all regular languages are CFLs • Recall that it involves creating new rules of the form • S → S1 | S2, S → S1S2, or S → λ | S1S
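
Written out concretely (an illustrative sketch of my own; it assumes the two grammars use disjoint single-character variable names, that their start symbols are A and B, and that neither already uses S):

  def union_cfg(rules1, rules2):
      """L1 ∪ L2: fresh start symbol S with S → A | B."""
      return {**rules1, **rules2, "S": ["A", "B"]}

  def concat_cfg(rules1, rules2):
      """L1 · L2: S → A B."""
      return {**rules1, **rules2, "S": ["AB"]}

  def star_cfg(rules1):
      """L1*: S → λ | A S."""
      return {**rules1, "S": ["", "AS"]}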

  27. Using CFL closure properties • For example, the languages below are CFLs • {0^m 1^n 2^n | m,n > 0} and {0^m 1^m 2^n | m,n > 0} • The first is {0^m | m > 0} ∙ {1^n 2^n | n > 0} • The second is {0^m 1^m | m > 0} ∙ {2^n | n > 0}. • But the intersection of these two CFLs is {0^n 1^n 2^n | n > 0}, which isn't a CFL. • So the class of CFLs is not closed under intersection

  28. Nonclosure under complementation • Suppose that the class of CFLs were closed under complementation. • Since it is closed under union, De Morgan's laws would then make it closed under intersection, just as for regular languages. • But we've just seen that the class of CFLs isn't closed under intersection, • so it can't be closed under complementation. • Still, the intersection of a CFL and a regular language is always a CFL. • This is shown as Theorem 8.5 of Linz

  29. Intersecting CFLs with regular languages • The intuition is that a DFA and a PDA may be run in parallel • just as we did for two DFAs, by using the Cartesian product of the sets of states • This won't work for two PDAs • since we can't sensibly define the Cartesian product of two stacks. • But if there is only one PDA, then we can simply use its stack with no problem.
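
An illustrative sketch of the product idea (my own transcription, not Linz's proof; it assumes the DFA's transition function is total on the PDA's input alphabet):

  def intersect_pda_dfa(pda_delta, dfa_delta, dfa_states):
      """Product PDA: states are (PDA state, DFA state) pairs, sharing the PDA's stack.
      An input move advances both machines; a λ-move advances only the PDA."""
      new_delta = {}
      for (q, a, top), moves in pda_delta.items():
          for s in dfa_states:
              if a is None:
                  dests = {((p, s), gamma) for (p, gamma) in moves}               # DFA stands still
              else:
                  dests = {((p, dfa_delta[(s, a)]), gamma) for (p, gamma) in moves}
              new_delta[((q, s), a, top)] = dests
      return new_delta

  # A pair (qf, sf) is accepting iff qf is accepting for the PDA and sf for the DFA.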

  30. Decision algorithms for CFGs • To check whether L(G) is empty for a CFG G, • we merely check whether S generates some string of terminals (i.e., whether S is a generating symbol). • To check whether L(G) is infinite for a CFG G, • eliminate useless symbols, λ-productions, and unit productions from G to get a new CFG G', • then determine whether any variable is nontrivially reachable from itself in G'. • This is equivalent to determining whether there is a cycle in a dependency graph whose edges run from each variable to the variables appearing on the right-hand sides of its rules.

  31. A sample CFG generating an infinite language • Consider the CFG G with the rules below, where we ignore the rules for Det, N, V, and P • S → NP VP • NP → Det N | Det N PP • VP → V NP • PP → P NP • Since there's a cycle involving NP and PP, L(G) is infinite (see the sketch below) • That is, NP ⇒* Det N P NP • where Det, N, and P generate nontrivial strings
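
Here is an illustrative sketch of that cycle check (assuming the grammar has already been cleaned up as described on the previous slide). Applied to this slide's grammar, with the lexical rules for Det, N, V, and P omitted as above, it finds the NP/PP cycle:

  def has_variable_cycle(rules):
      """rules: dict variable -> list of right-hand sides (each a list of symbols).
      Build the dependency graph A -> B for each variable B on a RHS of A,
      then ask whether any variable can reach itself."""
      edges = {A: {s for rhs in bodies for s in rhs if s in rules}
               for A, bodies in rules.items()}

      def reachable_from(A):
          seen, stack = set(), [A]
          while stack:
              for nxt in edges[stack.pop()]:
                  if nxt not in seen:
                      seen.add(nxt)
                      stack.append(nxt)
          return seen

      return any(A in reachable_from(A) for A in rules)

  g = {"S":  [["NP", "VP"]],
       "NP": [["Det", "N"], ["Det", "N", "PP"]],
       "VP": [["V", "NP"]],
       "PP": [["P", "NP"]]}
  assert has_variable_cycle(g)   # NP reaches PP and PP reaches NP, so L(G) is infinite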

  32. Nonexistence of decision algorithms • For some questions about CFGs, there is no possible solution algorithm! • for example, there’s no algorithm for determining whether two CFGs generate the same language • Showing how to justify such a claim is a major goal of the rest of the course

  33. The pumping lemma for CFLs • CFLs have an analog of the pumping lemma. • Its statement is slightly more complicated than for regular languages. • The basic idea is again simple • every parse tree for a sufficiently long string must have a long path, and thus a repeated variable.

  34. Deriving the pumping lemma for CFLs • Suppose a CFG G has m variables. • We may assume that G is in CNF (we won't care whether λ ∈ L(G)). • If a parse tree has height greater than m, then it must have a path with more than m nonleaves. • Among the lowest m+1 nonleaves on such a path, two must (by the pigeonhole principle) correspond to the same variable A.

  35. Repeated variables on a path • If the higher node is Ahi and the lower one Alo • then the yield of Alo is a substring of the yield of Ahi. • Let w be the yield of Alo and vwx that of Ahi. • Let uvwxy be the yield of the entire tree. • Since G is context-free, we may replace the tree rooted at Alo by a copy of the tree rooted at Ahi.

  36. The ability to pump • The yield of the new tree will be uv^2wx^2y. • We may repeat the process to get trees that yield uv^iwx^iy for any i > 0. • We may also replace the tree rooted at Ahi by the tree rooted at Alo to get a tree yielding uwy. • So for any i >= 0, uv^iwx^iy is in L(G). • That is, z = uvwxy may be pumped • but slightly differently than for DFAs.

  37. Making the pumped part short • Here, it is vwx whose length we will bound. • Since G is in CNF, a parse tree of height at most m has a yield of length at most 2^(m-1) • So for n = 2^m, any string z in L(G) of length at least n has a parse tree with a path containing more than m nonleaves • Taking Ahi and Alo among the lowest m+1 nonleaves on such a path, • z can be pumped, • |vwx| <= n, • and vwx is a proper superstring of w. • We get a new pumping lemma (Linz, Th. 8.1)
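
As a small worked instance (my own numbers, not from Linz): if G has m = 3 variables, then a CNF parse tree of height 3 yields at most 2^2 = 4 symbols, so any string of length at least n = 2^3 = 8 forces a tree of height greater than 3, hence a path with more than 3 nonleaves; among the lowest 4 nonleaves on that path two must share a variable, and the subtree rooted at the higher of the two yields at most 2^3 = 8 = n symbols.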

  38. Using the pumping lemma for CFLs. • Suppose that the language L = {0^j 1^j 2^j | j >= 0} is a CFL. • If we choose z = 0^n 1^n 2^n in the pumping lemma for CFLs, then vwx must contain at most two distinct symbols, since |vwx| <= n. • Therefore uwy contains more of the third symbol than of one of the other two symbols (pumping down removes at least one symbol), and is thus not in L. • So L can't be a CFL.
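
The argument can also be checked by brute force for a small fixed n (purely as an illustration of my own; the real proof is the counting argument above). The script below tries every split z = uvwxy with |vwx| <= n and |vx| >= 1 and confirms that pumping with i = 0 or i = 2 already falls outside L:

  def in_L(s):
      """Membership in {0^j 1^j 2^j : j >= 0}."""
      j = len(s) // 3
      return s == "0" * j + "1" * j + "2" * j

  def survives_pumping(z, n):
      """True iff some split z = u v w x y with |vwx| <= n and |vx| >= 1
      keeps u v^i w x^i y in L for every i in {0, 1, 2}."""
      for start in range(len(z)):
          for vwx_len in range(1, n + 1):
              u, vwx, y = z[:start], z[start:start + vwx_len], z[start + vwx_len:]
              for v_len in range(len(vwx) + 1):
                  for x_len in range(len(vwx) - v_len + 1):
                      if v_len + x_len == 0:
                          continue          # the pumping lemma requires vx to be nonempty
                      v = vwx[:v_len]
                      w = vwx[v_len:len(vwx) - x_len]
                      x = vwx[len(vwx) - x_len:]
                      if all(in_L(u + v * i + w + x * i + y) for i in (0, 1, 2)):
                          return True
      return False

  n = 4
  assert not survives_pumping("0" * n + "1" * n + "2" * n, n)   # every split fails, as argued above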
