# Unsolvable Problems and Computational Functions

## Syllabus

- Unsolvable problems and computable functions
- Primitive recursive functions
- Recursive and recursively enumerable languages
- Universal Turing machine
- Measuring and classifying complexity: Tractable and intractable problems
- Tractable and possibly intractable problems
- P and NP completeness
- Polynomial time reductions

## Contents

- 6.1 Unsolvability
- 6.2 Primitive Recursive Functions
- 6.3 Recursive and Recursively Enumerable Languages
- 6.4 Universal Turing Machine
- 6.5 Tractable and Intractable Problems
- 6.6 Tractable and Possibly Intractable Problems: P and NP
- 6.7 NP Completeness
- 6.8 Polynomial Time Reductions
- 6.9 Proving NP Complete Problems
- 6.10 Post's Correspondence Problem
- 6.11 Two Marks Questions with Answers
- 6.12 University Questions with Answers (Long Answered Questions)

## 6.1 Unsolvability

In the theory of computation we often come across problems whose answer is either yes or no. The class of problems whose answer can be determined by an algorithm is called solvable or decidable; otherwise the class of problems is said to be unsolvable or undecidable.

## 6.2 Primitive Recursive Functions

**AU: Dec.-16, Marks 10**

The recursive functions are a class of functions that are Turing computable. Recursive function theory approaches Church's hypothesis from the converse direction: it begins with some very elementary functions that are intuitively effective, and then provides a few methods for building more complicated functions from simpler ones. That means the computability of a given function can be proved using the initial functions and a small number of building operations.

**Initial functions:** The initial functions are elementary functions whose values are given outright, without reference to simpler functions. The following functions form the basis of the class of recursive functions.
- The zero function: *z(x) = 0*
- The successor function: *s(x)* = successor of *x* (roughly, "*x + 1*")
- The identity function: *id(x) = x*

The zero function returns zero regardless of its argument. The successor function returns the successor of its argument, successorship being a more primitive notion than addition. The zero and successor functions take only one argument each, but the identity function is designed to take any number of arguments. When it takes one argument (as above) it returns that argument as its value; when it takes more than one argument, it returns one of them. That means,

- *id₁(x, y) = x*
- *id₂(x, y) = y*

**Building operations:** We build more complex functions from the initial set by using only three methods:

- Composition
- Primitive recursion
- Minimization

**1) Composition**

We start with the successor function, *s(x) = x + 1*. We may then replace its argument *x* with a function. If we replace the argument with the zero function *z(x)*, the result is the successor of zero:

*s(z(x)) = 1*
*s(s(z(x))) = 2*

and so on. In this way, with the help of the initial functions, we can describe all the natural numbers. This building operation is called "composition". It should be clear that when composition is applied to computable functions, only computable functions result.

**2) Primitive recursion**

The second building operation is called primitive recursion. Primitive recursion is a method of defining new functions from old ones. The function *h* is defined through functions *f* and *g* by primitive recursion when

*h(x, 0) = f(x)*
*h(x, s(y)) = g(x, h(x, y))*

where *f* and *g* are known computable functions. There are two equations: when *h*'s second argument is zero, the first equation applies; when it is not zero, we use the second. The use of the successor function in the second equation enforces the condition that the second argument be greater than zero.
Hence, the first equation applies in the minimal case and the second applies in every other case. To evaluate a function defined by primitive recursion:

1. When the second argument of *h* is zero, the function is equivalent to the known function *f*, and we compute it;
2. Otherwise it is equivalent to the known function *g*, and we compute it.

Thus the function obtained is computable.

For example, we can calculate the factorial function using recursion. Initially *1! = 1*, and to calculate *n!* we multiply *n* not by *(n − 1)*, which would give a non-recursive series, but by *(n − 1)!*. We can express the calculation of the factorial function by the following two equations:

*1! = 1*
*n! = n · (n − 1)!*

A strict definition of the factorial function *f(n)*, then, consists of these two equations:

*f(n) = 1* when *n = 1* … (6.2.1)
*f(n) = n · f(n − 1)* when *n > 1* … (6.2.2)

Consider *n = 5*. Using equation (6.2.2) we get,

*f(5) = 5 · f(4)*
*f(4) = 4 · f(3)*
*f(3) = 3 · f(2)*
*f(2) = 2 · f(1)*

By putting in the value from equation (6.2.1) for *f(1)* we get *f(2) = 2 · 1 = 2*, then *f(3) = 3 · f(2) = 3 · 2 = 6*, then *f(4) = 4 · f(3) = 4 · 6 = 24*, and finally *f(5) = 5 · f(4) = 5 · 24 = 120*.

Primitive recursion is like mathematical induction: the first equation defines the basis, and the second defines the induction step.

**3) Minimization**

If *g* is the function that computes the least *x* such that *f(x) = 0*, where *f* is already known to be computable, then *g* is computable, and we say that *g* is produced from *f* by minimization. For example, to obtain the least *x* which makes *f(x) = 0* we try the natural numbers *0, 1, 2, …* until we reach the first value that gives *f(x) = 0*. If such a search for *x* may never terminate, it is called unbounded minimization; if we give up after a fixed number of tests, it is called bounded minimization.
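The pieces introduced so far (initial functions, primitive recursion, and minimization) can be sketched in Python. This is a minimal illustration of our own, not part of the text, and all helper names are ours.

```python
def zero(x):
    """Initial function: z(x) = 0."""
    return 0

def succ(x):
    """Initial function: s(x) = x + 1."""
    return x + 1

def primitive_recursion(f, g):
    """Build h with h(x, 0) = f(x) and h(x, s(y)) = g(x, h(x, y))."""
    def h(x, y):
        acc = f(x)            # minimal case: second argument is zero
        for _ in range(y):    # unwind the recursion y times
            acc = g(x, acc)
        return acc
    return h

# Addition defined by primitive recursion:
#   add(x, 0) = x,  add(x, y + 1) = succ(add(x, y))
add = primitive_recursion(lambda x: x, lambda x, acc: succ(acc))

def minimize(f, bound=None):
    """Least x with f(x) == 0; unbounded if bound is None (may not halt)."""
    x = 0
    while bound is None or x <= bound:
        if f(x) == 0:
            return x
        x += 1
    return None               # bounded minimization can fail
```

Here `add(3, 4)` applies `succ` four times starting from 3, and `minimize(lambda x: x - 5)` returns 5; passing a `bound` turns the search into bounded minimization.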
While unbounded minimization has the disadvantage of producing a partial function whose computation may never terminate, bounded minimization has the disadvantage of sometimes failing to find a minimum.

## 6.2.1 Class of Recursive Functions

The classification of recursive functions is as shown in Fig. 6.2.1.

**1. Partial recursive function:** A function is called partial recursive if it can be obtained by applying composition, primitive recursion and minimization as building operations. Because unbounded minimization may fail to terminate, such a function may be undefined for some arguments.

**2. General recursive function:** A function is called general recursive if it is obtained by applying composition, primitive recursion and unbounded minimization that happens to terminate for every argument. The partial recursive functions form a larger class than the general recursive functions.

**3. Primitive recursive function:** A function is called primitive recursive if it is obtained by applying composition and primitive recursion only, without unbounded minimization.

The set of general recursive functions is the same as the set of total Turing computable functions. An example of a general recursive function (one that is not primitive recursive) is Ackermann's function, defined as follows:

*A(0, y) = y + 1*
*A(x + 1, 0) = A(x, 1)*
*A(x + 1, y + 1) = A(x, A(x + 1, y))*

Then we can calculate *A(x, y)* for every pair *(x, y)*.

**Example 6.2.1** Compute *A(1, 1), A(1, 2), A(2, 1), A(2, 2)* using Ackermann's function.
**Solution:** Let

*A(0, y) = y + 1*
*A(x + 1, 0) = A(x, 1)*
*A(x + 1, y + 1) = A(x, A(x + 1, y))*

be Ackermann's function.

To compute *A(1, 1)* put *x = 0, y = 0*:

*A(1, 1) = A(0 + 1, 0 + 1) = A(0, A(1, 0)) = A(0, A(0, 1)) = A(0, 2) = 3*

Hence *A(1, 1) = 3*.

To compute *A(1, 2)* put *x = 0* and *y = 1*:

*A(1, 2) = A(0 + 1, 1 + 1) = A(0, A(1, 1)) = A(0, 3) = 4*

To compute *A(2, 1)* put *x = 1* and *y = 0*:

*A(2, 1) = A(1 + 1, 0 + 1) = A(1, A(2, 0)) = A(1, A(1, 1)) = A(1, 3)*
*A(1, 3) = A(0 + 1, 2 + 1) = A(0, A(1, 2)) = A(0, 4) = 5*

Hence *A(2, 1) = 5*.

To compute *A(2, 2)* put *x = 1* and *y = 1*:

*A(2, 2) = A(1 + 1, 1 + 1) = A(1, A(2, 1)) = A(1, 5)*

Now we compute *A(1, 5)*:

*A(1, 5) = A(0, A(1, 4)) = A(0, A(0, A(1, 3))) = A(0, A(0, 5)) = A(0, 6) = 7*

Hence *A(2, 2) = A(1, 5) = 7*.

## 6.3 Recursive and Recursively Enumerable Languages

**AU: Dec.-16, Marks 10**
**AU: Dec.-03, 04, 11, 15, 16, May-05, 06, 07, 09, 11, 12, Marks 16**

The recursively enumerable languages fall into two categories. The first category consists of all languages for which some algorithm exists, and an algorithm for a language *L* can be modeled by a Turing machine. Such a Turing machine always halts: on valid input it halts in the accept state, while on invalid input (input not belonging to the language *L*) it halts without entering the accept state. Languages of this category possess the property of definiteness; they are called recursive languages.

In the second category, the languages can be modeled by Turing machines but there is no guarantee that the TM will eventually halt, in the sense that we cannot predict whether the Turing machine will halt or enter an infinite loop on a given input. Such a computation can be denoted by the pair (*M, w*) where *M* is a Turing machine and *w* is an input string.
The languages of this second category are called recursively enumerable (RE) languages. There are thus three categories of languages:

1. Recursive languages, for which an algorithm exists.
2. Recursively enumerable languages, for which it is not certain on which inputs the TM will ever halt. Such languages need not be recursive.
3. Non-recursively-enumerable languages, for which there is no Turing machine at all.

Fig. 6.3.1 shows the relationship between these languages.

## 6.4 Universal Turing Machine

**AU: May-05, 06, 07, 09, 12, 14, 16, Dec.-03, 04, 11, 14, 15, ... Marks 16**

Thus the concept of the universal Turing machine came up. The universal language can be represented by *Lu = L(U)* where *U* is a universal Turing machine. The input to *U* is a binary string; such a binary string can represent the code of any Turing machine. Thus the universal Turing machine is a Turing machine which can simulate any other Turing machine. The TM *U* is a multitape Turing machine, as shown in Fig. 6.4.1.

- The universal Turing machine *U* accepts the code of the TM *M*.
- The transitions of *M* are stored initially on the first tape along with the string *w*.
- On the second tape the simulated tape of *M* is placed. The tape symbol *Xᵢ* of *M* is represented by *0ⁱ*, and tape symbols are separated by single *1*'s.
- On the third tape the current state of machine *M* is stored. The state *qᵢ* is represented by *i* *0*'s.

**Operation of the universal Turing machine**

1. The first tape is examined to check whether the code for *M* is valid. If the code is not valid then *U* halts without accepting: invalid codes are assumed to represent a TM with no moves, and such a TM accepts no input.
2. The second tape is initialized with the tape contents of *M* in encoded form: for each *0* of the input *w* place *10* on the second tape, and for each *1* place *100* there. The blank of machine *M* is represented by *1000*.
Although blanks actually appear only at the end of input *w* for the machine *U*, the blanks of *M* are simulated by *1000*.

3. As the third tape stores the state of machine *M*, place *0* there as the start state of *M*. The head of the third tape is positioned at the start state (i.e. at *0*), and the head of tape 2 is positioned at the first simulated cell of the second tape.
4. Now a move of *M* can be simulated. Since the transitions are stored on the first tape, *U* searches tape 1 for a transition matching the current state and tape symbol. As we have already seen, each transition in the Turing machine code has the form *0^i 1 0^j 1 0^k 1 0^l 1 0^m*, where

- *0^i* is the current state,
- *0^j* is the current input tape symbol,
- *0^k* is the changed state,
- *0^l* is the changed input tape symbol,
- *0^m* is the direction in which the head is to be moved.

To simulate the move of *M*, change the content of tape 3 to *0^k* (the state change of *M*), replace *0^j* on tape 2 by *0^l* (the change of tape symbol), and then move the head of tape 2 to the position of the next *1*. The head moves left or right depending on the value of *m*: if *m = 1* move left, and if *m = 2* move right. Thus *U* simulates the move of *M* to the left or to the right.

5. If *M* has no applicable transition then no move is performed; *M* halts in the simulated configuration and hence *U* halts.
6. If *M* enters its accept state, then *U* accepts.

Thus the TM *U* simulates *M* on the binary input string *w*, where the pair (*M, w*) is coded in binary form.

## 6.4.1 Undecidability of the Universal Language

**AU: May-14, Marks 14**

The universal language *Lu* is a recursively enumerable language; we have to prove that it is undecidable (that is, not recursive).
**Theorem:** *Lu* is RE but not recursive.

**Proof:** The language *Lu* is recursively enumerable. Assume that *Lu* is recursive. Then the complement of *Lu*, that is *L'u*, is also recursive. However, if we had a TM *M* accepting *L'u* then we could construct a TM accepting the diagonalization language *Ld*. But *Ld* is not RE, so no such TM exists. Thus our assumption that *Lu* is recursive is wrong. Hence *Lu* is RE but not recursive. The construction of *M* for *Ld* is as shown in Fig. 6.4.2.

## 6.4.2 Rice's Theorem

**Theorem:** Every non-trivial property of RE languages is undecidable.

**Proof:** In terms of Turing machines, a property of languages corresponds to the set of codes of machines *M* such that *L(M)* has the property. For example, the property "is a context-free language" corresponds to the set of all codes of Turing machines *M* such that *L(M)* is a context-free language.

Let *P* be any non-trivial property of RE languages. That means at least one language has the property and at least one language does not. Initially we assume that the empty language ∅ does not have the property *P*. As *P* is non-trivial, there is some language *L* having property *P*, and therefore there exists a TM accepting the language *L*; let us call this TM *M₁*.

Let us reduce the universal language *Lu* to *Lp*; as *Lu* is undecidable, *Lp* is then also undecidable. The reduction algorithm takes the pair (*M, w*) as input and produces a Turing machine *M'*. Basically *M'* is a two-tape Turing machine such that

- one tape is used to simulate the TM *M* on the input *w*; the pair (*M, w*) is used for designing the transitions of *M'*;
- the second tape is used to simulate *M₁* on *M'*'s own input *x*.

The TM *M'* does the following:

1. The Turing machine *M* is simulated on the input *w*.
Here *M'* acts as a universal Turing machine processing the pair (*M, w*).

2. There are two cases: either *M* accepts *w* or it does not. If *M* does not accept *w* then *M'* does nothing further with its own input *x*; in this case *L(M') = ∅*. But we have assumed that ∅ does not have the property *P*, which means the code for *M'* is not in *Lp*.
3. If *M* accepts *w* then *M'* simulates *M₁* on its own input *x*, and hence *M'* accepts the language of *M₁*. In other words, *M'* accepts the language *L*. Since *L* has the property *P*, the code for *M'* is in *Lp*.
4. From the steps above, the code for *M'* is in *Lp* if and only if *M* accepts *w*. Hence deciding *Lp* would decide *Lu*, which is impossible. This proves that the property *P* is undecidable.

There remains the possibility that ∅ is in *P*. In that case let *P'* be the set of RE languages not having property *P*; then ∅ is not in *P'*, and by the argument above *Lp'* is undecidable. The set of TM codes accepting languages with property *P* is the complement of the set of TM codes accepting languages with property *P'*, so if *Lp* were decidable then *Lp'* would also be decidable, a contradiction. Hence *Lp* is undecidable in this case as well.

## 6.5 Tractable and Intractable Problems

**AU: Dec.-15, 16, Marks 6**

- The class of solvable problems is known as the decidable problems. That means decidable problems can be solved in a measurable amount of time or space.
- The tractable problems are the class of problems that can be solved within reasonable time and space. For example: sorting a list, multiplication of integers.
- The intractable problems are the class of problems that cannot be solved within polynomial time. For example: listing all permutations of *n* numbers, Tower of Hanoi.

This has led to two classes of problems: the *P* and *NP* class problems.
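The contrast can be made concrete with a small sketch of our own (not from the book): sorting a list costs *O(n log n)* steps, while merely listing all permutations already produces *n!* outputs.

```python
import itertools
import math

def sort_items(items):
    """Tractable: O(n log n) comparisons."""
    return sorted(items)

def all_permutations(items):
    """Intractable to enumerate: the output alone contains n! entries."""
    return list(itertools.permutations(items))

n = 7
assert sort_items(range(n, 0, -1)) == list(range(1, n + 1))
assert len(all_permutations(range(n))) == math.factorial(n)  # already 5040
```

Doubling *n* barely changes the sorting cost, but it multiplies the number of permutations astronomically, which is why the second problem is intractable regardless of hardware.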
## 6.6 Tractable and Possibly Intractable Problems: P and NP

**AU: Dec.-12, 13, Marks 6**

There are two groups into which a problem can be classified. The first group consists of the problems that can be solved in polynomial time: for example, searching for an element in a sorted list, *O(log n)*, or sorting a list of elements, *O(n log n)*. The second group consists of problems for which only non-deterministic polynomial time algorithms are known: for example, the Knapsack problem, *O(2^(n/2))*, and the Travelling Salesperson Problem, *O(n·2ⁿ)*.

- Any problem whose answer is either yes or no is called a decision problem. An algorithm for a decision problem is called a decision algorithm.
- Any problem that involves identifying an optimal cost (minimum or maximum) is called an optimization problem. An algorithm for an optimization problem is called an optimization algorithm.
- **Definition of P:** the class of problems that can be solved in polynomial time ("P" stands for polynomial). Examples: searching for a key element, sorting of elements, all-pairs shortest paths.
- **Definition of NP:** NP stands for "non-deterministic polynomial time"; note that NP does not stand for "non-polynomial". Examples: Travelling Salesperson problem, graph coloring problem, Knapsack problem, Hamiltonian circuit problem.

The NP class problems can be further categorized into NP-complete and NP-hard problems.

**A problem D is called NP-complete if:**

- it belongs to class NP, and
- every problem in NP is polynomial-time reducible to *D*.

Note also that:

- if an NP-hard problem can be solved in polynomial time then all NP-complete problems can also be solved in polynomial time;
- all NP-complete problems are NP-hard, but not all NP-hard problems are NP-complete.

**The NP class problems** are the decision problems that can be solved by non-deterministic polynomial algorithms.

## 6.6.1 Example of P Class Problem

**Kruskal's Algorithm:** Kruskal's algorithm constructs a minimum spanning tree: the total weight of the selected edges is minimum, and the selected edges must not form a circuit.
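A minimal Python sketch of Kruskal's algorithm may make the selection rule concrete. This is our own illustration, not the book's; the union-find helper and the edge-list format are assumptions of the sketch.

```python
def kruskal(n, edges):
    """Minimum spanning tree of a graph with vertices 0..n-1.

    edges: list of (weight, u, v) tuples.
    Returns (total_weight, chosen_edges).
    """
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):     # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # skip edges that would form a circuit
            parent[ru] = rv
            total += w
            chosen.append((u, v, w))
    return total, chosen
```

For the edge list `[(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]` on four vertices, the edges of weights 1, 2 and 3 are chosen and the circuit-forming edges of weights 4 and 5 are skipped, giving total weight 6. Since the dominant cost is sorting the edges, the algorithm runs in polynomial time, which is why it serves as a P-class example.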
Each time, the edge of minimum weight is selected from the remaining edges of the graph. It is not necessary in this algorithm for the selected edges to be adjacent. Let us solve an example by Kruskal's algorithm.

**Example 1:**

**Example 2:** Find the minimum spanning tree for the following figure using Kruskal's algorithm.

In Kruskal's algorithm we cover all the vertices with edges of minimum total weight; the chosen edges need not be adjacent.

## 6.7 NP Completeness

**AU: Dec.-11, Marks 8**

- As we know, *P* denotes the class of all deterministic polynomial language problems and *NP* denotes the class of all non-deterministic polynomial language problems. Hence *P ⊆ NP*.
- The question of whether or not *P = NP* holds is the most famous outstanding problem in computer science.
- Problems which are known to lie in *P* are often called tractable.
- Problems which lie outside of *P* are often termed intractable.

Thus, the question of whether *P = NP* or *P ≠ NP* is the same as asking whether there exist problems in *NP* which are intractable. The relationship between *P* and *NP* is depicted in Fig. 6.7.1.

We don't know whether *P = NP*. However, in 1971 S. A. Cook proved that a particular NP problem known as *SAT* (satisfiability of sets of Boolean clauses) has the property that, if it is solvable in polynomial time, then so are all NP problems. Such a problem is called "NP-complete".

Let *A* and *B* be two problems. Problem *A* reduces to *B* if and only if there is a way to solve *A* by a deterministic polynomial time algorithm that uses as a subroutine a deterministic polynomial time algorithm for *B*. "A reduces to B" is denoted *A < B*. In other words, if there exists any polynomial time algorithm which solves *B*, then we can solve *A* in polynomial time. Reduction is transitive: if *A < B* and *B < C* then *A < C*.

An NP-complete problem is thus an NP problem such that, if it is in *P*, then *NP = P*.
If a (not necessarily NP) problem has this same property then it is called "NP-hard". Thus the class of NP-complete problems is the intersection of the NP and NP-hard classes.

- Normally the decision problems are NP-complete while optimization problems are NP-hard. However, if problem *A* is a decision problem and *B* is the corresponding optimization problem, then it is possible that *A < B*. For instance, the Knapsack decision problem reduces to the Knapsack optimization problem.
- There are some NP-hard problems that are not NP-complete, for example the halting problem: "Is it possible to determine whether an algorithm will ever halt or loop forever on a given input?"
- Two problems *P* and *Q* are said to be polynomially equivalent if and only if *P < Q* and *Q < P*.

## 6.8 Polynomial Time Reductions

**AU: Dec.-10, 11, 16, May-12, Marks 8**

To prove whether a particular problem is NP-complete or not we use polynomial time reducibility, which is transitive: if *A* **Poly**→ *B* and *B* **Poly**→ *C* then *A* **Poly**→ *C*. Reduction is the central task in NP-completeness proofs. This is illustrated by Fig. 6.8.1.

## 6.9 Proving NP Complete Problems

**AU: May-12, Marks 8**

…

## 6.10 Post's Correspondence Problem

**AU: Dec.-07, 10, 11, 12, 14, Marks 8, May-13, 16, Marks 16, Dec.-15, Marks 10**

In this section we discuss the undecidability of problems about strings rather than about Turing machines. Such undecidability results are obtained with the help of Post's Correspondence Problem (PCP). Let us define the PCP.

"Post's Correspondence Problem consists of two lists of strings, of equal length, over an input alphabet Σ. Given the two lists *A = w₁, w₂, w₃, …, wₙ* and *B = x₁, x₂, x₃, …, xₙ*, the problem is to determine whether there exists a non-empty sequence of integers *i₁, i₂, i₃, …, iₘ* such that *wᵢ₁wᵢ₂wᵢ₃…wᵢₘ = xᵢ₁xᵢ₂xᵢ₃…xᵢₘ*."

To solve a Post correspondence problem we try combinations of *i₁, i₂, i₃, …* to find a sequence for which the two concatenated strings are equal; if such a sequence exists, we say that the PCP instance has a solution.
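The trial-of-combinations idea can be sketched as a bounded brute-force search in Python. This is our own illustration, not the book's; since PCP is undecidable, the depth bound is essential, and exhausting the bound without success proves nothing.

```python
from itertools import product

def pcp_solution(A, B, max_len=6):
    """Brute-force search for a PCP solution of up to max_len indices.

    A, B: lists of strings of equal length (the two PCP lists).
    Returns a 1-based index sequence if one is found, else None.
    Finding a sequence proves the instance solvable; finding none
    within the bound is inconclusive.
    """
    n = len(A)
    for length in range(1, max_len + 1):
        for seq in product(range(n), repeat=length):
            w = "".join(A[i] for i in seq)
            x = "".join(B[i] for i in seq)
            if w == x:
                return [i + 1 for i in seq]   # 1-based, as in the text
    return None
```

On the lists of Example 6.10.1 below the search succeeds, while for an unsolvable instance it simply returns `None` after trying every sequence up to the bound.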
**Example 6.10.1** Consider the correspondence system given by *A = (1, 0, 010, 11)* and *B = (10, 10, 01, 1)* over the input alphabet *Σ = {0, 1}*. Find a solution.

**Solution:** A solution is the sequence *1, 2, 1, 3, 3, 4*. That means *w₁w₂w₁w₃w₃w₄ = x₁x₂x₁x₃x₃x₄*. The string constructed from both lists is *10101001011*.

**Example 6.10.2** Obtain a solution for the following correspondence system: *A = {ba, ab, a, baa, b}*, *B = {bab, baa, ba, a, aba}* over the input alphabet *{a, b}*.

**Solution:** Choosing the sequence *1, 5, 2, 3, 4, 4, 3, 4* we get *w₁w₅w₂w₃w₄w₄w₃w₄ = x₁x₅x₂x₃x₄x₄x₃x₄*. Both sides give the single string *babababaabaaabaa*. Hence the solution is *15234434*.

**Example 6.10.3** Obtain a solution for the following Post correspondence system: *A = {100, 0, 1}*, *B = {1, 100, 00}*.

**Solution:** The solution is the sequence *1, 3, 1, 1, 3, 2, 2*:

*w₁w₃w₁w₁w₃w₂w₂ = 100 + 1 + 100 + 100 + 1 + 0 + 0 = 1001100100100*
*x₁x₃x₁x₁x₃x₂x₂ = 1 + 00 + 1 + 1 + 00 + 100 + 100 = 1001100100100*

**Example 6.10.4** Obtain a solution for the following Post correspondence system: *A = {ba, abb, bab}*, *B = {bab, bb, abb}*.

**Solution:** Trying the sequence *1, 3, 2* gives the string *babababb* from list *A* and *bababbbb* from list *B*; the two strings are not equal. Trying the other combinations from both lists, no sequence makes the two strings equal. Hence there is no solution for this system.

**Example 6.10.5** Does the PCP with the two lists *x = (b, bab³, ba)* and *y = (b³, ba, a)* have a solution? **AU: May-07, Marks 5**

**Solution:** We have to find a sequence such that the strings formed from *x* and *y* are identical. Such a sequence is *2, 1, 1, 3*. From the two lists,

*x₂x₁x₁x₃ = bab³ · b · b · ba = bab⁶a*
*y₂y₁y₁y₃ = ba · b³ · b³ · a = bab⁶a*

Both form the string *bab⁶a*. Thus the PCP has a solution.

**Example 6.10.6** Explain how the Post correspondence problem can be treated as a game of dominoes.
**Solution:** As in a game of dominoes, the upper half of each domino carries a string and the lower half carries a string. To win the game, the same string must appear when reading across the upper halves and across the lower halves of the chosen dominoes. Winning the game is thus equivalent to finding a solution of the Post correspondence problem.

**Example 6.10.7** Define the Post correspondence problem. Let *Σ = {0, 1}* and let *A* and *B* be lists of three strings each, defined as below. Does this PCP have a solution? **AU: May-14, Marks 6**

| *i* | **List A** (*wᵢ*) | **List B** (*xᵢ*) |
|---|---|---|
| 1 | 1 | 111 |
| 2 | 1 | 1 |

**Solution:** Refer to the similar Example 6.10.5, treating *a* as *0* and *b* as *1*.

## 6.10.1 Modified PCP

Given a set of pairs of strings (*w₁, x₁*), (*w₂, x₂*), …, (*wₖ, xₖ*), the modified PCP (MPCP) asks whether there is a sequence of integers *i₁, i₂, …, iₘ* such that

*w₁ wᵢ₁ wᵢ₂ … wᵢₘ = x₁ xᵢ₁ xᵢ₂ … xᵢₘ*

That means the pair (*w₁, x₁*) is forced to be at the beginning of the two strings. For example, consider the two lists *A* and *B*:

| **i** | **List A** | **List B** |
|---|---|---|
| 1 | 11 | 111 |
| 2 | 100 | 001 |
| 3 | 111 | 11 |

Then a solution is *w₁w₂w₃ = x₁x₂x₃*, i.e. *11100111 = 11100111*. Note that it is essential to have *w₁* and *x₁* at the beginning.

**Proof:** To show that the undecidability of MPCP carries over to PCP, we reduce MPCP to PCP. Consider an instance of MPCP over Σ with lists *A = w₁, w₂, …, wₖ* and *B = x₁, x₂, …, xₖ*. We construct an instance of PCP by introducing two new symbols * and $ not in Σ, with lists *C = y₀, y₁, …, yₖ₊₁* and *D = z₀, z₁, …, zₖ₊₁*, as follows: *yᵢ = wᵢ* with * after each symbol of *wᵢ*, and *zᵢ = xᵢ* with * before each symbol of *xᵢ*.
For example, consider:

| **i** | **List A** | **List B** | **List C** (*yᵢ*) | **List D** (*zᵢ*) |
|---|---|---|---|---|
| 1 | 11 | 111 | 1*1* | *1*1*1 |
| 2 | 100 | 001 | 1*0*0* | *0*0*1 |
| 3 | 111 | 11 | 1*1*1* | *1*1 |

Now assign *y₀ = *y₁* and *z₀ = z₁*. That is, the zeroth pair is the same as the first pair except that *y₀* has an extra * at the beginning of the string from the first list, whereas in the second list the zeroth and first entries are identical. For the example:

| **i** | **List C** (*yᵢ*) | **List D** (*zᵢ*) |
|---|---|---|
| 0 | *1*1* | *1*1*1 |
| 1 | 1*1* | *1*1*1 |

Finally, the pair (*yₖ₊₁, zₖ₊₁*) = ($, *$) is added so that the two strings of a PCP solution can end together.
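The *-padding construction can be sketched in Python. This is our own illustration of the scheme described above; the final ($, *$) pair follows the standard construction, which the text sets up by introducing $ alongside *.

```python
def mpcp_to_pcp(A, B):
    """Encode an MPCP instance (A, B) as a PCP instance (C, D).

    y_i places '*' after each symbol of w_i; z_i places '*' before
    each symbol of x_i.  Pair 0 forces a solution to start with
    pair 1, and the ('$', '*$') pair lets both strings end together.
    """
    ys = ["".join(c + "*" for c in w) for w in A]
    zs = ["".join("*" + c for c in x) for x in B]
    C = ["*" + ys[0]] + ys + ["$"]
    D = [zs[0]] + zs + ["*$"]
    return C, D
```

Running it on the example lists *A = (11, 100, 111)* and *B = (111, 001, 11)* reproduces the table above: `C` begins with `*1*1*` and `D` with `*1*1*1`.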