Solutions for Exercise Sheet 1
Computational Complexity, 2012-13

Rather than simply stating the solutions, I will try to describe intuitions and problem-solving strategies that are natural for each problem. Hopefully, this will be helpful when solving future exercises, and in the final exam as well. Some solutions also have a Notes section explaining the motivation behind the problem, and/or a Related Problems section to help you test your understanding of the solution method.

1. Question: An infinite-state Turing machine is a Turing machine defined in the usual manner, except that the state set Q is infinite. The input and tape alphabets, though, remain finite. Show that for any language L ⊆ {0,1}*, there is an infinite-state Turing machine deciding L in linear time.

Solution: Perhaps the most natural way to decide a language or compute a function is to use a "lookup table", which tells you the answer for each possible input. This is not typically useful unless you're dealing with finite languages or functions, because Turing machines as they're usually defined have a finite description. Allowing the Turing machine to have an infinite number of states opens up the possibility of using the simple lookup-table strategy again. We can create a state for each string, essentially recording whether that string is a YES instance or not. Since there are countably infinitely many strings to consider, there will also be countably infinitely many states.

More formally, let L be any language of binary strings. We define a Turing machine M = (Q, Σ, Γ, δ, q_i, q_f) as follows. Σ = {0,1} and Γ = {0, 1, B} (we won't be using any tape apart from the read-only input tape). Q contains a state q_w for each string w ∈ {0,1}*, as well as the initial state q_i and final state q_f. The transition function is as follows. If the machine is in state q_i and the input symbol being read is 0, the machine goes to state q_0; otherwise it goes to state q_1.
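The prefix-walking behaviour of this machine can be illustrated with a short simulation. This is a sketch, not part of the formal construction: the state q_w is represented simply by the prefix w read so far, and L is supplied as a membership oracle (the example set below is purely hypothetical).

```python
# Sketch: simulating the infinite-state "lookup table" machine.
# The machine's state after reading a prefix w is q_w; here we carry w
# itself, since each state just records the prefix read so far.

def infinite_state_decide(tape, in_L):
    """Walk the prefix states q_w; accept iff the full input is in L."""
    w = ""                       # current state q_w (w = prefix read so far)
    for symbol in tape:          # one step per input symbol: linear time
        w += symbol              # transition q_w -> q_w0 or q_w1
    # on reading the blank symbol, accept iff w is a YES instance of L
    return in_L(w)

example_L = {"0", "11", "101"}   # hypothetical finite stand-in for L
print(infinite_state_decide("101", example_L.__contains__))  # True
print(infinite_state_decide("100", example_L.__contains__))  # False
```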
In general, if the machine is in state q_w and reads 0, it goes to state q_w0; if it reads 1, it goes to state q_w1. If it reads the blank symbol (which means the entire input has been read), it goes to the accept state q_f in case w ∈ L; otherwise the transition function is undefined (which implies rejection). Clearly, M accepts those w which are in L and no others. Also, it operates in linear time, since it reads each input symbol exactly once and then makes a decision.

Notes: This question is meant to illustrate how the concept of computation becomes trivial when the computational model is not finitistic. It is a remarkable fact - the Church-Turing thesis - that all strong enough models of computation that are finitistic are equivalent to each other in terms of deciding power, and in particular equivalent to the multi-tape Turing machine.

2. Question: Let L = {xy : |x| = |y| and Σ_{i=1}^{|x|} x_i y_i ≡ 1 (mod 2)}. Prove:
(a) L ∈ DTIME(n)
(b) L ∈ DSPACE(log(n))
Note: You do not need to specify the Turing machines accepting L in full detail, but you need to give a clear high-level description and argue that the resource bounds are as claimed.

Solution: This problem is similar to the examples discussed in class of the Parity and Duplication languages, for which we analyzed the time and space complexity. We solve the first part first. We are asked to construct a deterministic Turing machine M which decides L in linear time. M should accept iff its input is of the form xy, where |x| = |y| and the inner product of x and y is odd. For the first condition to hold, the input length must be even. So we count the input length first - this also allows us to split the input into x and y, which facilitates the computation of the inner product. The counting can be done by implementing a counter on a read/write tape and incrementing the counter for each input bit read.
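The counter-on-a-tape idea, and why repeated increments are cheap overall, can be sketched as follows. This is an illustration of the amortized-cost claim, not of the machine itself: the tape cell holding the low-order bit is modelled by position 0 of a Python list.

```python
# Sketch: a binary counter kept as a list of bits (lowest-order first),
# as it might be laid out on a read/write tape.  Each increment flips a
# run of trailing 1s and one 0; over n increments the total work is O(n).

def increment(bits):
    """Increment the counter in place; return the number of bit writes."""
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0              # carry propagates past the trailing 1s
        i += 1
    if i == len(bits):
        bits.append(1)           # counter grows by one bit
    else:
        bits[i] = 1
    return i + 1

bits, total_writes = [], 0
n = 1024
for _ in range(n):
    total_writes += increment(bits)
print(total_writes)              # 2047, i.e. < 2n: amortized O(1) per increment
```

An increment costing i writes requires i - 1 trailing 1s, which happens only about a 1/2^i fraction of the time, which is where the linear total comes from.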
This might seem to take Ω(n log(n)) time, since the counter is of size O(log(n)) and every increment can take time up to log(n). However, the amortized complexity of repeated incrementation is linear - an increment costs i steps only about a 1/2^i fraction of the time. For instance, when the counter is even (its lowest bit is 0), incrementing it just requires changing one bit.

Once we've computed the count, we first check whether it's even; if not, we reject. If the input length is even, we compute n, which is half the input length, simply by removing the least significant bit from the counter. Once we know n, we can determine the boundary between x and y on the input tape, simply by incrementing a new counter each time an input bit is read and stopping when the counter reaches n.

Since the computation of the inner product involves multiplying x_i and y_i for various bit positions i, it's convenient to have x and y on different tapes, which can then be read from left to right while the computation is performed. So, once we know where y begins on the input tape, we copy it bit by bit onto a new read/write tape. This only takes linear time. We then initialize the input tape head to the first bit of x and the tape head of the new read/write tape to the first bit on that tape, which again takes linear time. Now we simply scan the tapes from left to right, recording in our state whether the inner product so far is odd or even. This can be updated in constant time per input bit read. When we come to the end of x, and hence of y, we either accept or reject depending on whether the inner product is odd or even. It is easy to see that this procedure correctly decides L, and the time taken by the machine M we've defined is linear in n.

For the second part, we need to construct an M' deciding L which operates in a more space-efficient way. Thus we can no longer afford to copy all of y onto a new tape.
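The linear-time procedure for part (a) can be sketched in Python as follows. Ordinary Python indexing stands in for the tape heads; this illustrates the algorithm, not the machine model itself.

```python
def in_L_linear(w):
    """Accept iff w = xy with |x| = |y| and the inner product of x, y odd."""
    length = len(w)              # count the input length
    if length % 2 != 0:          # |x| = |y| forces an even length
        return False
    n = length // 2              # drop the least significant bit: n = length/2
    x, y = w[:n], w[n:]          # split at the boundary (the "copy y" step)
    parity = 0
    for xi, yi in zip(x, y):     # one left-to-right scan of both halves
        parity ^= int(xi) & int(yi)   # update the inner product mod 2
    return parity == 1

print(in_L_linear("1101"))   # True: x = "11", y = "01", 1*0 + 1*1 = 1
print(in_L_linear("1111"))   # False: 1*1 + 1*1 = 2, which is even
print(in_L_linear("110"))    # False: odd length
```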
So instead, we will just maintain pointers on a read/write tape to the current position in y and the current position in x, so that we can update the inner product modulo 2 in the same way as before. Each such pointer can be represented in O(log(n)) bits, which will give us a space-efficient computation. The first part of the operation of M' is exactly as before - we count the input length and check that it's even. This just involves maintaining a counter, which only takes logarithmic space. Then we need to maintain an updated partial inner product modulo 2. This essentially involves maintaining the sum of w_j w_{n+j} for j going from 1 to i, where w is the input. This is because w_j = x_j and w_{n+j} = y_j. To update this sum mod 2, we access x_{j+1} using a counter, store it in our state, then access y_{j+1} = w_{n+j+1}, again using a counter, compute the product of these two bits, and record the updated parity of the inner product in our current state. We then increment j. When j reaches n, we stop and either accept or reject depending on whether the current parity is 1 or 0.

Notes: We construct two different machines, the first of which is time-efficient and the second of which is space-efficient. In fact, there is no machine for this problem which runs simultaneously in linear time and logarithmic space. This can be proved formally - you should try it if you welcome a challenge! The point of this exercise is that there is a tradeoff between time and space complexity for some natural problems.

3. Question: Show that P ≠ NSPACE(n).
HINT: Consider the closures of these classes under polynomial-time m-reductions.

Solution: Here we are asked to separate two complexity classes. Whenever we are asked for a complexity class separation, we should try to use diagonalization in some way, since this is the only technique we've discussed so far for unconditionally separating classes.
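The pointer-based procedure for part (b) of Question 2 can be sketched in the same style. The only working storage is the two O(log n)-bit indices and one parity bit; again this illustrates the algorithm rather than the machine model.

```python
def in_L_logspace(w):
    """Same language as part (a), using only counters and one parity bit."""
    length = len(w)
    if length % 2 != 0:
        return False
    n = length // 2
    parity = 0                   # current parity, kept "in the state"
    for j in range(n):           # j is an O(log n)-bit counter
        xj = int(w[j])           # access x_{j+1} via a pointer into w
        yj = int(w[n + j])       # access y_{j+1} = w_{n+j+1} likewise
        parity ^= xj & yj        # record the updated parity
    return parity == 1

print(in_L_logspace("1101"))   # True: agrees with the linear-time machine
```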
Sometimes diagonalization can be used directly, but here that's not an obvious possibility, since one of the classes is a time class and the other a space class. Let's try to use diagonalization indirectly instead, by assuming the opposite of what we're trying to prove, and then deriving a contradiction to a hierarchy theorem. So now assume P = NSPACE(n). The hint suggests that it might be useful to consider the closures of these classes under polynomial-time m-reductions.

The closure of P under polynomial-time reductions is clearly P itself. How about the closure of NSPACE(n)? Well, we saw in class that there are NP-complete languages in NTIME(n). Basically the same proof (using a translation technique) gives us that for any language L in NPSPACE, there is a language L' in NSPACE(n) such that L m-reduces to L'. Thus NPSPACE is contained in the m-closure of NSPACE(n). We haven't used our assumed equality P = NSPACE(n) thus far, and we'll do so now.
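The translation step mentioned above can be made explicit with the standard padding construction; the following is a sketch, with the exponent k and the padding symbol chosen for illustration.

```latex
Suppose $L \in \mathrm{NSPACE}(n^k)$ for some constant $k$.  Define the
padded language
\[
  \mathrm{pad}(L) = \{\, x \,\#\, 1^{\,|x|^k - |x|} \;:\; x \in L \,\},
\]
where $\#$ is a fresh symbol.  On an input of this form, a machine can
verify the padding and then run the $\mathrm{NSPACE}(n^k)$ machine for $L$
on $x$, using space $O(|x|^k)$, which is linear in the padded input's
length; hence $\mathrm{pad}(L) \in \mathrm{NSPACE}(n)$.  The map
$x \mapsto x \,\#\, 1^{\,|x|^k - |x|}$ is computable in polynomial time, so
$L \le_m^p \mathrm{pad}(L)$.
```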