THEORY OF COMPUTATION LECTURE NOTES

UNIT 4 (10 Lectures)
Pushdown Automata: Definition, Formal Definition of Pushdown Automata, A Graphical Notation for PDA's, Instantaneous Descriptions of a PDA, Languages of PDA: Acceptance by Final State, Acceptance by Empty Stack, From Empty Stack to Final State, From Final State to Empty Stack. Equivalence of PDA's and CFG's: From Grammars to Pushdown Automata, From PDA's to Grammars. Deterministic Pushdown Automata: Definition of a Deterministic PDA, Regular Languages and Deterministic PDA's, DPDA's and Context-Free Languages, DPDA's and Ambiguous Grammars. Properties of Context-Free Languages: Normal Forms for Context-Free Grammars, The Pumping Lemma for Context-Free Languages, Closure Properties of Context-Free Languages, Decision Properties of CFL's.

Pushdown automata:
The regular languages can be characterized as the languages accepted by finite automata. Similarly, the context-free languages can be characterized as the languages accepted by a class of machines called pushdown automata (PDA). A pushdown automaton is an extension of the NFA. Finite automata have limited capability, in the sense that the class of languages they accept is small. This is due to their "finite memory" (the number of states) and the absence of any external memory. A PDA is simply an NFA augmented with an external stack memory. The addition of a stack provides the PDA with a last-in, first-out memory-management capability. This stack, or "pushdown store", can be used to record a potentially unbounded amount of information. It is this memory-management capability that lets a PDA overcome the memory limitations that prevent a finite automaton from accepting many interesting languages, such as { a^n b^n }. Although a PDA can store an unbounded amount of information on the stack, its access to that information is limited: it can push an element onto the top of the stack and pop an element off the top of the stack.
To read down into the stack, the top elements must be popped off, and they are lost. Because of this limited access to the information on the stack, a PDA still has some limitations and cannot accept some other interesting languages.

As shown in the figure, a PDA has three components: an input tape with a read-only head, a finite control, and a pushdown store. The input head is read-only and may only move from left to right, one symbol (or cell) at a time. In each step, the PDA pops the top symbol off the stack; based on this symbol, the input symbol it is currently reading, and its present state, it can push a sequence of symbols onto the stack, move its read-only head one cell (or symbol) to the right, and enter a new state, as defined by the transition rules of the PDA. PDAs are nondeterministic by default. That is, ε-transitions are also allowed, in which the PDA can pop, push, and change state without reading the next input symbol or moving its read-only head. Besides this, there may be multiple options for possible next moves.

Formal Definitions:
Formally, a PDA M is a 7-tuple M = (Q, Σ, Γ, δ, q0, z0, F), where
Q is a finite set of states,
Σ is a finite set of input symbols (the input alphabet),
Γ is a finite set of stack symbols (the stack alphabet),
δ is a transition function from Q × (Σ ∪ {ε}) × Γ to the finite subsets of Q × Γ*,
q0 ∈ Q is the start state,
z0 ∈ Γ is the initial stack symbol, and
F ⊆ Q is the set of final (accept) states.

Explanation of the transition function δ: if, for an input symbol a ∈ Σ, δ(q, a, z) = {(p1, γ1), (p2, γ2), ..., (pk, γk)}, this means intuitively that whenever the PDA is in state q reading input symbol a with z on top of the stack, it can nondeterministically, for any i (1 ≤ i ≤ k), go to state pi, pop z off the stack, push γi onto the stack (where γi ∈ Γ*; the usual convention is that if γi = z1 z2 ... zm, then z1 will be at the top and zm at the bottom), and move its read head right one cell, past the current symbol a.
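One concrete way to represent such a transition function in code is as a dictionary from (state, symbol, stack top) to a set of (next state, push string) pairs. The following sketch is not from the notes; the state names and transitions are illustrative (they resemble a PDA recognizing strings of the form a^n b^n):

```python
# A sketch of delta as a dict:
# (state, input symbol or "" for an epsilon-move, stack top)
#   -> set of possible (next state, string to push) pairs.
# The first character of the pushed string becomes the new stack top.
delta = {
    ("q0", "a", "z"): {("q1", "az")},   # on a with z on top: push a, go to q1
    ("q1", "a", "a"): {("q1", "aa")},   # on a with a on top: push another a
    ("q1", "b", "a"): {("q3", "")},     # on b with a on top: pop the a
    ("q3", "b", "a"): {("q3", "")},
    ("q3", "",  "z"): {("qf", "z")},    # epsilon-move: leave z, go to qf
}

def moves(state, symbol, stack_top, delta=delta):
    """All nondeterministic options for one step (empty set if undefined)."""
    return delta.get((state, symbol, stack_top), set())
```

Keying epsilon-moves under the empty string "" keeps them in the same table as the input-consuming moves, which mirrors the Q × (Σ ∪ {ε}) × Γ domain of δ.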
If a = ε, then δ(q, ε, z) = {(p1, γ1), ..., (pk, γk)} means intuitively that whenever the PDA is in state q with z on top of the stack, regardless of the current input symbol, it can nondeterministically, for any i (1 ≤ i ≤ k), go to state pi, pop z off the stack, push γi onto the stack, and leave its read-only head where it is.

State transition diagram:
A PDA can also be depicted by a state transition diagram. The labels on the arcs indicate both the input and the stack operation. The transition (p, γ) ∈ δ(q, a, z) is depicted by an arc from state q to state p labelled a, z/γ. Final states are indicated by double circles, and the start state is indicated by an arrow to it from nowhere.

Configuration or Instantaneous Description (ID):
A configuration or instantaneous description (ID) of a PDA at any moment during its computation is an element of Q × Σ* × Γ* describing the current state, the portion of the input remaining to be read (i.e. under and to the right of the read head), and the current stack contents. Only these three elements can affect the computation from that point on and, hence, are the parts of the ID. The start or initial configuration (or ID) on input w is (q0, w, z0). That is, the PDA always starts in its start state, with its read head pointing to the leftmost input symbol and the stack containing only the start/initial stack symbol z0.

The "next move relation", written ⊢, describes how the PDA can move from one configuration to another in one step. Formally, (q, aw, zα) ⊢ (p, w, γα) iff (p, γ) ∈ δ(q, a, z), where a may be ε or an input symbol. Let I, J, K be IDs of a PDA. We write I ⊢^i K if ID I can become K after exactly i moves. The relations ⊢^+ and ⊢* are defined as follows:
I ⊢^+ K if there is a J such that I ⊢ J and J ⊢* K;
I ⊢* J if I = J or I ⊢^+ J.
That is, ⊢* is the reflexive, transitive closure of ⊢. We say that I ⊢* J if the ID J follows from the ID I in zero or more moves. (Note: the subscript M in ⊢_M can be dropped when the particular PDA M is understood.)

Language accepted by a PDA M:
There are two alternative definitions of acceptance, as given below.

1. Acceptance by final state: Consider the PDA M = (Q, Σ, Γ, δ, q0, z0, F).
Informally, the PDA M is said to accept its input w by final state if it enters any final state in zero or more moves after reading its entire input, starting in the start configuration on input w. Formally, we define L(M), the language accepted by final state, to be
L(M) = { w ∈ Σ* | (q0, w, z0) ⊢* (p, ε, γ) for some p ∈ F and γ ∈ Γ* }.

2. Acceptance by empty stack (or null stack): The PDA M accepts its input w by empty stack if, starting in the start configuration on input w, it ever empties the stack without pushing anything back on, after reading the entire input. Formally, we define N(M), the language accepted by empty stack, to be
N(M) = { w ∈ Σ* | (q0, w, z0) ⊢* (p, ε, ε) for some p ∈ Q }.
Note that the set of final states F is irrelevant in this case, and we usually let F be the empty set, i.e. F = ∅.

Example 1: Here is a PDA that accepts the language { a^n b^n : n ≥ 1 }. Let M = ({q0, q1, q3, qf}, {a, b}, {a, z}, δ, q0, z, {qf}), where δ consists of the following transitions:
(1) δ(q0, a, z) = {(q1, az)}
(2) δ(q1, a, a) = {(q1, aa)}
(3) δ(q1, b, a) = {(q3, ε)}
(4) δ(q3, b, a) = {(q3, ε)}
(5) δ(q3, ε, z) = {(qf, z)}
The PDA can also be described by the adjacent transition diagram.

Informally, whenever the PDA M sees an input a in the start state q0 with the start symbol z on top of the stack, it pushes a onto the stack and changes state to q1 (to remember that it has seen the first a). In state q1, if it sees any more a's, it simply pushes them onto the stack. Note that when M is in state q1, the symbol on top of the stack can only be a. In state q1, if it sees the first b with a on top of the stack, it needs to start comparing the numbers of a's and b's, since all the a's at the beginning of the input have already been pushed onto the stack. It starts this process by popping the a off the top of the stack and enters state q3 (to remember that the comparison process has begun). In state q3, it expects only b's in the input (if it sees any more a's, the input is not of the proper form a^n b^n); hence there is no move on input a in state q3. In state q3 it pops an a off the top of the stack for every b in the input. When it sees the last b in state q3 (i.e.
when the input is exhausted), the last a is popped off the stack and the start symbol z is exposed. This is the only situation in which the PDA M (on ε-input) can move to state qf, which is an accept state.

We can show the computation of the PDA on a given input using IDs and the next move relation. For example, the following are the computations on two input strings.

i) Let the input be aabb. We start with the start configuration and proceed to the subsequent IDs using the transition function defined above:
(q0, aabb, z) ⊢ (q1, abb, az) (using transition 1)
⊢ (q1, bb, aaz) (using transition 2)
⊢ (q3, b, az) (using transition 3)
⊢ (q3, ε, z) (using transition 4)
⊢ (qf, ε, z) (using transition 5)
qf is a final state. Hence, accept. So the string aabb is rightly accepted by M.

ii) Let the input be aabab.
(q0, aabab, z) ⊢ (q1, abab, az) (using transition 1)
⊢ (q1, bab, aaz) (using transition 2)
⊢ (q3, ab, az) (using transition 3)
No further move is defined at this point. Hence the PDA gets stuck and the string aabab is not accepted.

Example 2: We give an example of a PDA M that accepts the set of balanced strings of brackets [ ] by empty stack. The PDA M is given by M = ({q}, {[, ]}, {z, [}, δ, q, z, ∅), where δ is defined as follows:
(1) δ(q, [, z) = {(q, [z)}
(2) δ(q, [, [) = {(q, [[)}
(3) δ(q, ], [) = {(q, ε)}
(4) δ(q, ε, z) = {(q, ε)}
Informally, whenever it sees a [, it pushes [ onto the stack (the first two transitions), and whenever it sees a ] with [ on top of the stack, it pops the [ off the stack (the third transition). The fourth transition is used when the input is exhausted, in order to pop z off the stack (to empty the stack) and accept. Note that there is only one state and no final state.

The following is a sequence of configurations leading to the acceptance of the string [ [ ] [ ] ] [ ]:
(q, [[][]][], z) ⊢ (q, [][]][], [z) ⊢ (q, ][]][], [[z) ⊢ (q, []][], [z) ⊢ (q, ]][], [[z) ⊢ (q, ][], [z) ⊢ (q, [], z) ⊢ (q, ], [z) ⊢ (q, ε, z) ⊢ (q, ε, ε). The stack is empty; hence, accept.

Equivalence of acceptance by final state and empty stack.
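Before proving this equivalence formally, it can be checked experimentally with a small nondeterministic PDA simulator. The following is a sketch, not part of the original notes (function names are illustrative); it runs the bracket PDA of Example 2 and reports acceptance by empty stack:

```python
# Transitions of the bracket PDA of Example 2, keyed as
# (state, input symbol or "" for an epsilon-move, stack top) -> {(next state, push string)}.
DELTA = {
    ("q", "[", "z"): {("q", "[z")},
    ("q", "[", "["): {("q", "[[")},
    ("q", "]", "["): {("q", "")},
    ("q", "",  "z"): {("q", "")},
}

def accepts_by_empty_stack(w, delta=DELTA, start_state="q", start_stack="z"):
    """Explore all nondeterministic computations from the start ID (q, w, z)."""
    frontier = [(start_state, w, start_stack)]
    seen = set()
    while frontier:
        state, rest, stack = frontier.pop()
        if (state, rest, stack) in seen:
            continue
        seen.add((state, rest, stack))
        if rest == "" and stack == "":
            return True                        # reached ID (q, eps, eps): accept
        if stack == "" or len(stack) > len(w) + 1:
            continue                           # stuck, or a runaway branch
        top, below = stack[0], stack[1:]
        for nxt, push in delta.get((state, "", top), ()):          # epsilon-moves
            frontier.append((nxt, rest, push + below))
        if rest:
            for nxt, push in delta.get((state, rest[0], top), ()):  # input moves
                frontier.append((nxt, rest[1:], push + below))
    return False
```

For this particular PDA the stack never grows beyond |w| + 1 symbols, so the bound used to cut off runaway branches never discards a valid computation; a general-purpose simulator would need a more careful treatment of epsilon-loops.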
It turns out that the two definitions of acceptance of a language by a PDA, acceptance by final state and acceptance by empty stack, are equivalent, in the sense that if a language can be accepted by empty stack by some PDA, it can also be accepted by final state by some other PDA, and vice versa. Hence it doesn't matter which one we use, since each kind of machine can simulate the other. Given any arbitrary PDA M that accepts a language L by final state or empty stack, we can always construct an equivalent PDA M' with a single final state that accepts exactly the same language L. The construction of M' from M and the proof of the equivalence of M and M' are given below. There are two cases to be considered.

CASE I: PDA M accepts by final state. Let M = (Q, Σ, Γ, δ, q0, z0, F), and let qf be a new state not in Q. Consider the PDA M' = (Q ∪ {qf}, Σ, Γ, δ', q0, z0, {qf}), where δ' contains all the transitions of δ, as well as the following transitions:
(qf, z) ∈ δ'(q, ε, z) for every q ∈ F and z ∈ Γ.

It is easy to show that M and M' are equivalent, i.e. L(M) = L(M'). Let w ∈ L(M). Then (q0, w, z0) ⊢*_M (q, ε, γ) for some q ∈ F and γ ∈ Γ*. Then (q0, w, z0) ⊢*_M' (q, ε, γ) ⊢_M' (qf, ε, γ). Thus M' accepts w. Conversely, let M' accept w, i.e. w ∈ L(M'). Then (q0, w, z0) ⊢*_M' (qf, ε, γ) for some γ ∈ Γ*. M' inherits all moves except the last one from M; hence (q0, w, z0) ⊢*_M (q, ε, γ) for some q ∈ F. Thus M accepts w. Informally, on any input M' simulates all the moves of M and enters its own final state qf whenever M enters any one of its final states in F. Thus M' accepts a string w iff M accepts it.

CASE II: PDA M accepts by empty stack. We will construct M' from M in such a way that M' simulates M and detects when M empties its stack; M' enters its final state when and only when M empties its stack. Thus M' will accept a string w iff M accepts it. Let M' = (Q ∪ {q0', qf}, Σ, Γ ∪ {X}, δ', q0', X, {qf}), where q0' and qf are new states and X is a new stack symbol, and δ' contains all the transitions of δ, as well as the following two transitions:
(1) δ'(q0', ε, X) = {(q0, z0X)} and
(2) δ'(q, ε, X) = {(qf, ε)} for every q ∈ Q.

Transition 1 causes M' to enter the initial configuration of M, except that M' will have its own bottom-of-stack marker X, which is below the symbols of M's stack. From this point onward M' will simulate every move of M, since all the transitions of M are also in δ'. If M ever empties its stack, then M', when simulating M, will empty its stack except for the symbol X at the bottom.
At this point, M' will enter its final state qf by using transition rule 2, thereby (correctly) accepting the input. We will prove that M and M' are equivalent. Let M accept w. Then (q0, w, z0) ⊢*_M (q, ε, ε) for some q ∈ Q. But then
(q0', w, X) ⊢_M' (q0, w, z0X) (by transition rule 1)
⊢*_M' (q, ε, X) (since M' includes all the moves of M)
⊢_M' (qf, ε, ε) (by transition rule 2).
Hence M' also accepts w. Conversely, let M' accept w. Then (q0', w, X) ⊢_M' (q0, w, z0X) ⊢*_M' (q, ε, X) ⊢_M' (qf, ε, ε) for some q ∈ Q. Every move in the sequence (q0, w, z0X) ⊢*_M' (q, ε, X) was taken from M. Hence M, starting in its initial configuration, will eventually empty its stack and accept the input, i.e. (q0, w, z0) ⊢*_M (q, ε, ε).

Equivalence of PDA's and CFG's:
We will now show that pushdown automata and context-free grammars are equivalent in expressive power; that is, the languages accepted by PDAs are exactly the context-free languages. To show this, we have to prove each of the following:
i) Given any arbitrary CFG G, there exists some PDA M that accepts exactly the language generated by G.
ii) Given any arbitrary PDA M, there exists a CFG G that generates exactly the language accepted by M.

(i) CFG to PDA
We first prove part (i), i.e. we show how to convert a given CFG to an equivalent PDA. Let the given CFG be G = (N, Σ, P, S). Without loss of generality we can assume that G is in Greibach Normal Form, i.e. all productions of G are of the form A → aα, where a ∈ Σ and α ∈ N*. From the given CFG G we now construct an equivalent PDA M that accepts by empty stack. Note that there is only one state in M. Let M = ({q}, Σ, N, δ, q, S, ∅), where q is the only state, Σ is the input alphabet, N is the stack alphabet, q is the start state, S is the start/initial stack symbol, and δ, the transition relation, is defined as follows: for each production A → aα in P, (q, α) ∈ δ(q, a, A).

We now want to show that M and G are equivalent, i.e. L(G) = N(M): for any w ∈ Σ*, w ∈ L(G) iff w ∈ N(M). If w ∈ L(G), then by definition of L(G) there must be a leftmost derivation starting with S and deriving w, i.e. S ⇒* w. Again, since G is in GNF, each step of a derivation produces exactly one terminal symbol. Therefore we need to show that, for any w ∈ Σ*, S ⇒* w iff (q, w, S) ⊢* (q, ε, ε). But we will prove a more general result, as given in the following lemma; replacing A by S (the start symbol) and taking y = ε and α = ε gives the required proof.
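The grammar-to-PDA construction itself is easy to try out in code. The sketch below is not from the notes (function names are illustrative); it uses the GNF grammar S → aAB, A → a | aA, B → a | bB that also appears in the worked example below, builds the one-state PDA, and simulates it:

```python
# Each GNF production A -> a alpha is stored as (head, terminal, tail).
GNF_PRODUCTIONS = [
    ("S", "a", "AB"),
    ("A", "a", ""),
    ("A", "a", "A"),
    ("B", "a", ""),
    ("B", "b", "B"),
]

def build_delta(productions):
    """For each production A -> a alpha, add (q, alpha) to delta(q, a, A)."""
    delta = {}
    for head, terminal, tail in productions:
        delta.setdefault(("q", terminal, head), set()).add(("q", tail))
    return delta

def accepted_by_empty_stack(w, delta, start_symbol="S"):
    """Nondeterministically simulate the one-state PDA on input w."""
    frontier = [(w, start_symbol)]
    seen = set()
    while frontier:
        rest, stack = frontier.pop()
        # In GNF every move consumes one input symbol and pops one stack
        # symbol, so a stack taller than the remaining input cannot empty.
        if (rest, stack) in seen or len(stack) > len(rest):
            continue
        seen.add((rest, stack))
        if rest == "" and stack == "":
            return True
        if rest and stack:
            for _, push in delta.get(("q", rest[0], stack[0]), ()):
                frontier.append((rest[1:], push + stack[1:]))
    return False
```

Because the grammar is in GNF there are no epsilon-moves to worry about, which keeps the simulation loop short.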
Lemma: For any x, y ∈ Σ* and α ∈ N*, A ⇒^n xα via a leftmost derivation iff (q, xy, A) ⊢^n (q, y, α).

Proof: The proof is by induction on n.
Basis: n = 0. A ⇒^0 xα holds iff x = ε and α = A, which holds iff (q, xy, A) = (q, y, α), i.e. iff (q, xy, A) ⊢^0 (q, y, α).
Induction step: First, assume that A ⇒^n xα via a leftmost derivation, and let the last production applied in the derivation be B → aβ for some a ∈ Σ and β ∈ N*. Then, for some x' ∈ Σ* and γ ∈ N*,
A ⇒^(n−1) x'Bγ ⇒ x'aβγ = xα,
where x = x'a and α = βγ. Now, by the induction hypothesis, we get
(q, x'ay, A) ⊢^(n−1) (q, ay, Bγ). .............................(1)
Again, by the construction of M, we get (q, β) ∈ δ(q, a, B), so from (1) we get
(q, x'ay, A) ⊢^(n−1) (q, ay, Bγ) ⊢ (q, y, βγ).
Since x = x'a and α = βγ, we get (q, xy, A) ⊢^n (q, y, α). That is, if A ⇒^n xα, then (q, xy, A) ⊢^n (q, y, α).
Conversely, assume that (q, xy, A) ⊢^n (q, y, α), and let (q, β) ∈ δ(q, a, B) be the transition used in the last move. Then, for some x' ∈ Σ* and γ ∈ N*,
(q, x'ay, A) ⊢^(n−1) (q, ay, Bγ) ⊢ (q, y, βγ),
where x = x'a and α = βγ. Now, by the induction hypothesis, A ⇒^(n−1) x'Bγ via a leftmost derivation. Again, by the construction of M, B → aβ must be a production of G [since (q, β) ∈ δ(q, a, B)]. Applying this production to the sentential form x'Bγ, we get
x'Bγ ⇒ x'aβγ = xα,
i.e. A ⇒^n xα via a leftmost derivation. Hence the proof.

Example: Consider the CFG G in GNF with productions
(1) S → aAB
(2) A → a
(3) A → aA
(4) B → a
(5) B → bB
The one-state PDA M equivalent to G is shown below. For convenience, a production of G and the corresponding transition of M are marked by the same number. We have used the same construction discussed earlier:
(1) (q, AB) ∈ δ(q, a, S)
(2) (q, ε) ∈ δ(q, a, A)
(3) (q, A) ∈ δ(q, a, A)
(4) (q, ε) ∈ δ(q, a, B)
(5) (q, B) ∈ δ(q, b, B)

Some Useful Explanations:
Consider the moves of M on input aaaba leading to acceptance of the string:
Step 1. (q, aaaba, S) ⊢ (q, aaba, AB) (transition 1)
Step 2. ⊢ (q, aba, AB) (transition 3)
Step 3. ⊢ (q, ba, B) (transition 2)
Step 4. ⊢ (q, a, B) (transition 5)
Step 5. ⊢ (q, ε, ε) (transition 4). Accept by empty stack.
Note: the numbers here show the transition rule applied at each step. Now consider the derivation of the same string under grammar G. Once again, the production used at each step is shown by its number.
S ⇒ aAB ⇒ aaAB ⇒ aaaB ⇒ aaabB ⇒ aaaba
Steps: 1 2 3 4 5 (productions 1, 3, 2, 5, 4 respectively)

Observations: There is a one-to-one correspondence between the sequence of moves of the PDA M and the derivation sequence under the CFG G for the same input string, in the sense that the number of steps is the same in both cases and the transition corresponding to the same production is used at every step (as shown by the matching numbers). Considering the moves of the PDA and the derivation under G together, it is also observed that at every step the input read so far and the stack contents together are exactly identical to the corresponding sentential form, i.e.
(input read so far)(stack contents) = sentential form.
Say, at step 2: read so far = a, stack = AB, sentential form = aAB. From this property we claim that (q, xy, S) ⊢* (q, y, α) iff S ⇒* xα. If the claim is true, then applying it with x = w, y = ε and α = ε we get (q, w, S) ⊢* (q, ε, ε) iff S ⇒* w, i.e. w ∈ N(M) iff w ∈ L(G) (by definition). Thus N(M) = L(G) as desired. Note that we have already proved a more general version of the claim.

PDA and CFG:
We now want to show that for every PDA M that accepts by empty stack, there is a CFG G such that L(G) = N(M). We first examine whether the "reverse of the construction" used in part (i) can be used here to construct an equivalent CFG from any PDA M. It can be shown that this reverse construction works only for single-state PDAs. That is, for every one-state PDA M there is a CFG G such that L(G) = N(M): for every move (q, α) ∈ δ(q, a, A) of the PDA M we introduce a production A → aα in the grammar, where N = Γ and S = z0. We can then apply the proof of part (i) in the reverse direction to show that L(G) = N(M).

But the reverse construction does not work for PDAs with more than one state. For example, taking a PDA M with more than one state and constructing a CFG from it by the "reverse" construction, with one production for each transition of M, we can derive strings like aabaa which are in the language, but under this grammar we can also derive some strings which are not in the language. Therefore, to complete the proof of part (ii), we also need to prove the following claim.
Claim: For every PDA M there is some one-state PDA M' such that N(M') = N(M).

It is quite possible to prove the above claim, but here we will adopt a different approach: we start with an arbitrary PDA M that accepts by empty stack and directly construct an equivalent CFG G.

PDA to CFG
We want to construct a CFG G to simulate any arbitrary PDA M with one or more states. Without loss of generality we can assume that the PDA M accepts by empty stack. The idea is to use nonterminals of the form [p A q], whose intended meaning is that the PDA M, started in state p with A on top of the stack, can reach state q having consumed some input and removed A from the stack. That is, for a given transition (q1, B1 B2 ... Bn) ∈ δ(p, a, A) of the PDA, we introduce corresponding productions in the grammar as shown below. And we would like to show, in general, that [p A q] ⇒* w iff the PDA M, when started in state p with A on top of the stack, can finish processing w, arrive at state q, and remove A from the stack.

We are now ready to give the construction of an equivalent CFG G from a given PDA M. We need to introduce two kinds of productions in the grammar, as given below. The reason for the introduction of the first kind of production will be justified at a later point; the introduction of the second kind has been justified in the above discussion.

Let M = (Q, Σ, Γ, δ, q0, z0, ∅) be a PDA. We construct from M an equivalent CFG G = (N, Σ, P, S), where N is the set of nonterminals of the form [p A q] for p, q ∈ Q and A ∈ Γ, together with the new start symbol S, and P contains the following two kinds of productions:
1. S → [q0 z0 p] for every p ∈ Q.
2. If (q1, B1 B2 ... Bn) ∈ δ(q, a, A), then for every choice of the sequence of states q2, q3, ..., q(n+1) ∈ Q, include the production
[q A q(n+1)] → a [q1 B1 q2] [q2 B2 q3] ... [qn Bn q(n+1)].
If n = 0, then the production is [q A q1] → a.

For the whole exercise to be meaningful, we want [q A p] ⇒* w to mean that there is a sequence of transitions of the PDA M, starting in state q and ending in p, during which the PDA M consumes the input string w and removes A from the stack (and, of course, all other symbols pushed onto the stack in A's place, and so on). That is, we claim that
[q A p] ⇒* w iff (q, w, A) ⊢* (p, ε, ε).
If this claim is true, then take q = q0 and A = z0 to get [q0 z0 p] ⇒* w iff (q0, w, z0) ⊢* (p, ε, ε) for some p ∈ Q. But for all p ∈ Q we have S → [q0 z0 p] as a production in G. Therefore, w ∈ L(G)
iff the PDA M accepts w by empty stack, i.e. L(G) = N(M).

Now, to show that the above construction of the CFG G from any PDA M works, we need to prove the proposed claim. Note: at this point, the justification for introducing the first kind of production (of the form S → [q0 z0 p]) in the CFG G is clear: it is what lets us derive a string from the start symbol of the grammar.

Proof of the claim: [q A p] ⇒^n w iff (q, w, A) ⊢^n (p, ε, ε), for q, p ∈ Q, A ∈ Γ and w ∈ Σ*.
The proof is by induction on n, the number of steps in a derivation of G (which, of course, equals the number of moves taken by M). The proof consists of two parts: the 'if' part and the 'only if' part.

First, consider the 'if' part: if (q, w, A) ⊢^n (p, ε, ε), then [q A p] ⇒^n w.
Basis: n = 1. Then (q, w, A) ⊢ (p, ε, ε). In this case, it is clear that w ∈ Σ ∪ {ε} and (p, ε) ∈ δ(q, w, A). Hence, by construction, [q A p] → w is a production of G, and therefore [q A p] ⇒ w.
Inductive hypothesis: the statement holds whenever the number of moves is less than n.
Inductive step: For n > 1, let w = ax for some a ∈ Σ ∪ {ε} and x ∈ Σ*, and consider the first move of the PDA M, which uses a transition (q1, B1 B2 ... Bm) ∈ δ(q, a, A). Now M must remove B1 B2 ... Bm from the stack while consuming x in the remaining n − 1 moves. Let x = x1 x2 ... xm, where xi is the portion of x that M has consumed when B(i+1) first appears at the top of the stack. Then there must exist a sequence of states q2, q3, ..., q(m+1) (with q(m+1) = p) such that
(q1, x1 x2 ... xm, B1 B2 ... Bm) ⊢* (q2, x2 ... xm, B2 ... Bm) [this implies (q1, x1, B1) ⊢* (q2, ε, ε)]
⊢* (q3, x3 ... xm, B3 ... Bm) [this implies (q2, x2, B2) ⊢* (q3, ε, ε)]
...
⊢* (q(m+1), ε, ε) = (p, ε, ε).
[Note: each of these segments takes at most n − 1 moves, because the total number of moves available is n − 1.] That is, in general, (qi, xi, Bi) ⊢^(ni) (q(i+1), ε, ε) with ni ≤ n − 1. So, applying the inductive hypothesis, we get [qi Bi q(i+1)] ⇒* xi for 1 ≤ i ≤ m. But corresponding to the first move of M we have the production [q A q(m+1)] → a [q1 B1 q2] [q2 B2 q3] ... [qm Bm q(m+1)] in G. Therefore
[q A p] ⇒ a [q1 B1 q2] ... [qm Bm q(m+1)] ⇒* a x1 x2 ... xm = w.
The 'only if' part is proved by a similar induction on the number of steps of a leftmost derivation in G.
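The triple construction can also be sketched in code. The following is an illustrative sketch (names such as `pda_to_cfg` are not from the notes); it enumerates all choices of intermediate states with `itertools.product` and returns the productions of G:

```python
from itertools import product

def pda_to_cfg(states, delta, start_state, start_stack):
    """Triple construction: build a CFG from a PDA accepting by empty stack.

    delta maps (state, symbol or "" for epsilon, stack top) to a set of
    (next state, push string) pairs. Nonterminals are triples (p, A, q);
    "S" is the new start symbol. Each production is (head, body), where the
    body is a tuple of terminals and triples.
    """
    productions = []
    for p in states:                       # kind 1: S -> [q0 z0 p]
        productions.append(("S", ((start_state, start_stack, p),)))
    for (q, a, A), moves in delta.items():  # kind 2, one per transition
        for q1, push in moves:
            n = len(push)
            if n == 0:                      # [q A q1] -> a  (a may be epsilon)
                productions.append(((q, A, q1), (a,) if a else ()))
                continue
            # choose the intermediate states q2 .. q_{n+1} in all possible ways
            for mids in product(states, repeat=n):
                chain = (q1,) + mids        # q1, q2, ..., q_{n+1}
                body = tuple((chain[i], push[i], chain[i + 1]) for i in range(n))
                productions.append(((q, A, chain[n]), ((a,) if a else ()) + body))
    return productions
```

Applied to the one-state bracket PDA of Example 2 this yields exactly five productions, one for S plus one per transition, since all intermediate states must equal q.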
Deterministic PDA:
Regular Languages and DPDA's: the DPDA's accept a class of languages lying between the regular languages and the CFL's.

Deterministic Pushdown Automata (DPDA) and Deterministic Context-free Languages (DCFLs)
The pushdown automata we have defined and discussed so far are nondeterministic by default: there may be two or more moves involving the same combination of state, input symbol, and top of the stack, and, for some state and stack top, the machine may either read an input symbol or make an ε-transition (without consuming any input). In a deterministic PDA there is never a choice of move in any situation. This is ensured by preventing the two cases just mentioned, as in the definition below.

Definition: Let M = (Q, Σ, Γ, δ, q0, z0, F) be a PDA. Then M is deterministic if and only if both of the following conditions are satisfied:
1. δ(q, a, z) has at most one element for any q ∈ Q, a ∈ Σ ∪ {ε} and z ∈ Γ (this condition prevents multiple choices for any combination of (q, a, z));
2.
If δ(q, ε, z) ≠ ∅, then δ(q, a, z) = ∅ for every a ∈ Σ (this condition prevents the possibility of a choice between a move with or without an input symbol).

Empty Production Removal
The productions of context-free grammars can be coerced into a variety of forms without affecting the expressive power of the grammars. If the empty string does not belong to a language, then there is a way to eliminate the productions of the form A → λ from the grammar. If the empty string belongs to the language, then we can eliminate λ from all productions except for the single production S → λ. In this case we can also eliminate any occurrences of S from the right-hand sides of productions.

Procedure to find a CFG without empty productions
Unit Production Removal
Left Recursion Removal

NORMAL FORMS
Two kinds of normal forms, viz. Chomsky Normal Form (CNF) and Greibach Normal Form (GNF), are considered here.

Chomsky Normal Form (CNF)
Any context-free language L without any λ-production is generated by a grammar in which all productions are of the form A → BC or A → a, where A, B, C ∈ V_N and a ∈ V_T.

Procedure to find an equivalent grammar in CNF:
(i) Eliminate the unit productions, and λ-productions if any.
(ii) Eliminate terminals from right-hand sides of length two or more.
(iii) Restrict the number of variables on the right-hand side of productions to two.

Proof: For step (i), apply the following theorem: "Every context-free language can be generated by a grammar with no useless symbols and no unit productions." At the end of this step, the RHS of any production consists of a single terminal or of two or more symbols. Let us denote the resulting equivalent grammar by G' = (V_N', V_T, P', S). For step (ii), consider any production of the form A → x1 x2 ... xn with n ≥ 2; for each terminal a occurring on the right-hand side, introduce a new variable C_a with the production C_a → a and replace a by C_a.

Example: Obtain a grammar in Chomsky Normal Form (CNF) equivalent to the grammar G with the given productions P.
Solution

Pumping Lemma for CFLs
A "pumping lemma" is a theorem used to show that, if certain strings belong to a language, then certain other strings must also belong to the language. Let us discuss a pumping lemma for CFLs.
We will show that, if L is a context-free language, then strings of L that are at least m symbols long can be "pumped" to produce additional strings in L. The value of m depends on the particular language.

Let L be an infinite context-free language. Then there is some positive integer m such that, if S is a string of L of length at least m, then
(i) S = uvwxy (for some u, v, w, x, y);
(ii) |vwx| ≤ m;
(iii) |vx| ≥ 1;
(iv) u v^i w x^i y ∈ L for all non-negative values of i.

It should be understood that:
(i) If S is a sufficiently long string, then there are two substrings, v and x, somewhere in S. There is stuff (u) before v, stuff (w) between v and x, and stuff (y) after x.
(ii) The stuff between v and x won't be too long, because |vwx| can't be larger than m.
(iii) Substrings v and x won't both be empty, though either one could be.
(iv) If we duplicate substring v some number i of times, and duplicate x the same number of times, the resulting string will also be in L.

Definitions
A variable is useful if it occurs in the derivation of some string. This requires that (a) the variable occurs in some sentential form (you can get to the variable if you start from S), and (b) a string of terminals can be derived from that sentential form (the variable is not a "dead end"). A variable is recursive if it can generate a string containing itself; for example, variable A is recursive if A ⇒+ xAy for some strings x and y.

Proof of the Pumping Lemma
(a) Suppose we have a CFL given by L. Then there is some context-free grammar G that generates L. Suppose
(i) L is infinite; hence there is no proper upper bound on the length of strings belonging to L;
(ii) L does not contain λ;
(iii) G has no unit productions or λ-productions.
There are only a finite number of variables in a grammar, and the productions for each variable have finite lengths. The only way that a grammar can generate arbitrarily long strings is if one or more variables is both useful and recursive. Suppose no variable is recursive.
Since the start symbol is nonrecursive, it must be defined only in terms of terminals and other variables. Then, since those variables are nonrecursive, they have to be defined in terms of terminals and still other variables, and so on. After a while we run out of "other variables" while the generated string is still finite. Therefore there is an upper bound on the length of the strings that can be generated from the start symbol. This contradicts our assumption that the language is infinite. Hence our assumption that no variable is recursive must be incorrect.

(b) Let us consider a string X belonging to L. If X is sufficiently long, then the derivation of X must have involved recursive use of some variable A. Since A was used in the derivation, the derivation must have started as S ⇒* uAy.

Usage of the Pumping Lemma
To show that a language L is not context-free, we assume that it is, apply the lemma to a suitably chosen string, and derive a contradiction: hence our original assumption, that L is context-free, must be false, and the language L is not context-free.

Example: Check whether the language given by L = { a^m b^m c^n : m ≤ n ≤ 2m } is a CFL or not.
Solution

Closure properties of CFLs:
Substitution
Applications of the substitution theorem
Reversal
Inverse Homomorphism
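Condition (iv) of the pumping lemma can be illustrated concretely for a language that is context-free, such as { a^n b^n : n ≥ 1 } from Example 1. The sketch below is not from the notes; the pumping length m = 4 is an assumption chosen for illustration, and the decomposition shown is one valid choice satisfying |vwx| ≤ m and |vx| ≥ 1. (A proof that a language is not context-free must instead show that every admissible decomposition fails to pump.)

```python
def in_language(s):
    """Membership test for { a^n b^n : n >= 1 }."""
    n = len(s) // 2
    return len(s) % 2 == 0 and n >= 1 and s == "a" * n + "b" * n

m = 4                              # assumed pumping length, for illustration
s = "a" * m + "b" * m              # a string of L with |s| >= m

# One valid decomposition s = u v w x y straddling the a/b boundary:
u, v, w, x, y = "a" * (m - 1), "a", "", "b", "b" * (m - 1)
assert s == u + v + w + x + y
assert len(v + w + x) <= m and len(v + x) >= 1

# Pumping v and x in lockstep keeps the string in L for every i >= 0.
pumped = [u + v * i + w + x * i + y for i in range(5)]
```

Pumping with i = 0 shrinks the string to a^(m-1) b^(m-1), and each increment of i adds one a and one b, so every pumped string stays in the language, exactly as condition (iv) requires.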