
CSCI 1590 Intro to Computational Complexity
Parallel Computation and Complexity Classes

John Savage

Brown University

April 13, 2009

Summary

1 Turing Machines and Complexity

2 Parallel Models of Computation

3 PRAM and Complexity Classes

4 Circuits and Complexity Classes

Turing Machines

At the beginning of this semester we defined the Turing machine. The Church-Turing thesis asserts that any function that can be physically realized (i.e., “computed”) can be computed by a Turing machine. The somewhat controversial Strong Church-Turing thesis states that “any ’reasonable’ model of computation can be efficiently simulated on a probabilistic Turing machine” (Bernstein and Vazirani, 1997). Here “efficient” means “with polynomial resources”. If BPP = P, the Turing machine can be deterministic.

Languages and Complexity Classes

A Turing machine recognizes a language if it accepts exactly the strings in that language. In the first portion of this class we introduced a number of complexity classes: sets of languages defined by the resources required by a Turing machine to recognize them. We also related Turing machines to the more practical RAM model. In the second portion of this class we considered a wider range of computational models: circuits, formulas, VLSI, networks of processors, and the PRAM. These models, which allow for parallelism, force us to consider a wider range of computational resources. In this lecture we relate these models back to Turing machines.

Time Complexity

Using Turing machines, we defined a number of time complexity classes: P, EXPTIME, NP, NEXPTIME, Σ_i^p, Π_i^p, and PH. We used reducibility and completeness to characterize these languages. Both deterministic and nondeterministic time hierarchy theorems can be proven through diagonalization.
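For reference, these classes can be written in the usual notation (a standard recap, not a quotation from the slides):

    P = ⋃_k TIME(n^k)             NP = ⋃_k NTIME(n^k)
    EXPTIME = ⋃_k TIME(2^(n^k))   NEXPTIME = ⋃_k NTIME(2^(n^k))

The deterministic time hierarchy theorem then states that if f is time-constructible and g(n) log g(n) = o(f(n)), then TIME(g(n)) is properly contained in TIME(f(n)); the nondeterministic version is analogous.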

Space Complexity

We also used Turing machines to define a number of space complexity classes: L, PSPACE, NL, and NPSPACE. Savitch’s Theorem shows that PSPACE = NPSPACE and that NL ⊆ L². Diagonalization can be used to establish a space hierarchy theorem. Recall that TQBF is PSPACE-complete.
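In symbols, Savitch’s Theorem states (a standard formulation, not quoted from the slides) that for any space-constructible s(n) ≥ log n,

    NSPACE(s(n)) ⊆ DSPACE(s(n)²).

Taking s(n) to be a polynomial gives NPSPACE = PSPACE; taking s(n) = log n gives NL ⊆ DSPACE(log² n) = L².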

Relationships between classes

L ⊆ NL ⊆ L²; also NL ⊆ P ⊆ NP ⊆ PH ⊆ PSPACE ⊆ EXPTIME ⊆ NEXPTIME. Few lower bounds are known: the hierarchy theorems give L ≠ PSPACE and P ≠ EXPTIME, so some of these inclusions must be proper, but we do not know which. Where do the functions computed by circuits, networks, and the PRAM fit in?

Circuits

A circuit is a directed acyclic network with a function computed at each non-leaf node. Unlike Turing machines, circuits compute functions on a finite number of inputs. A Turing machine computation of fixed length can be efficiently implemented with a circuit. We used this fact to prove Cook’s Theorem. For circuits, instead of considering space and time, we considered size, depth, fanout, and the functions computed at each node. Circuit lower bounds imply lower bounds for Turing machines. (But what about upper bounds?)
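To make the size and depth measures concrete, here is a small illustrative sketch (my own example, not from the slides; the gate set, the list-of-tuples representation, and the name eval_circuit are assumptions) that evaluates a Boolean circuit given in topological order and reports its size and depth:

    # Illustrative sketch: evaluate a Boolean circuit given as a DAG listed in
    # topological order, and report its size (gate count) and depth.
    # The gate set (INPUT/AND/OR/NOT) and the representation are assumptions
    # made for this example, not definitions from the lecture.

    def eval_circuit(gates, inputs):
        """gates: list of (op, predecessor indices); inputs: dict node -> bit."""
        value, depth = {}, {}
        for node, (op, preds) in enumerate(gates):
            if op == "INPUT":
                value[node], depth[node] = inputs[node], 0
            else:
                vals = [value[p] for p in preds]
                if op == "AND":
                    value[node] = int(all(vals))
                elif op == "OR":
                    value[node] = int(any(vals))
                elif op == "NOT":
                    value[node] = 1 - vals[0]
                depth[node] = 1 + max(depth[p] for p in preds)
        size = sum(1 for op, _ in gates if op != "INPUT")   # gates, excluding inputs
        return value[len(gates) - 1], size, max(depth.values())

    # Example: the circuit for (x0 AND x1) OR (NOT x2); output is the last node.
    gates = [("INPUT", []), ("INPUT", []), ("INPUT", []),
             ("AND", [0, 1]), ("NOT", [2]), ("OR", [3, 4])]
    print(eval_circuit(gates, {0: 1, 1: 0, 2: 0}))          # -> (1, 3, 2)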

Embedded Circuits

The circuit model of computation allows for arbitrary acyclic graphs. To model VLSI, we embed these graphs in a plane. We allow only a fixed number of edge crossings and a bounded fanout. Circuit size is replaced with area. Space-time tradeoffs become area-time tradeoffs.

Networks of Processors

In a circuit, gates compute only a few simple functions. If each gate is replaced with a processor, say a RAM, we have a model of parallel computation. Since nodes now have memory, we can reasonably consider cyclic graphs. For different network topologies, we ask how efficiently different problems can be parallelized. For practicality, we consider networks with a limited number of edges, as well as networks that can be efficiently embedded on a chip. Moving data between processors is a crucial challenge.

PRAM

Many parallel computers (e.g., a hypercube) can simulate arbitrary communication with only logarithmic overhead. As a result, theorists often focus on whether computations can be efficiently parallelized in a fully connected network of processors. This is known as the PRAM model of computation. The PRAM model can be thought of as a multiheaded Turing machine where the number of heads grows with input length. Each head is given a unique index, but otherwise the heads are identical. As with a Turing machine (and a RAM), a bounded-length PRAM computation can be efficiently simulated by a circuit.
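As an illustration of what parallel steps buy us, here is a toy sequential simulation (my own sketch, not from the lecture; the name pram_or is illustrative) of an EREW PRAM computing the OR of n bits with n processors in O(log n) synchronous rounds:

    # Illustrative sketch: a sequential simulation of a synchronous EREW PRAM
    # computing the OR of n bits with n processors in O(log n) rounds by
    # pairwise (tree) combination.

    import math

    def pram_or(bits):
        mem = list(bits)                          # shared memory, one cell per bit
        n = len(mem)
        rounds = math.ceil(math.log2(n)) if n > 1 else 0
        stride = 1
        for _ in range(rounds):                   # one iteration = one parallel step
            # In this round, processor i (for each eligible index i) reads
            # mem[i + stride] and ORs it into mem[i]; different processors touch
            # disjoint cells, so reads and writes stay exclusive.
            for i in range(0, n - stride, 2 * stride):
                mem[i] = mem[i] | mem[i + stride]
            stride *= 2
        return mem[0]

    print(pram_or([0, 0, 1, 0, 0, 0, 0, 0]))      # -> 1, after 3 parallel steps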

PRAM and Complexity

As we did for Turing machines, we can ask what problems can be recognized by a PRAM in polynomial time. If the number of processors is exponential in input length, we can easily solve problems in NP or coNP in polynomial time (or even constant time on a CRCW PRAM). What if we limit the number of processors to be polynomial in input length?

Definition NC is the set of languages that can be recognized by a PRAM with a polynomial number of processors in a polylogarithmic number of steps.

NC and Circuit Families

Since a PRAM can be simulated by circuits, NC can be defined in terms of (uniform) circuits. To relate circuits (which are finite) to Turing machines (which operate on unbounded inputs), we consider families of circuits. A circuit family is a sequence of circuits in which the ith circuit takes inputs of length i. Notice that there exists a family of circuits that computes the halting problem: every Boolean function on a fixed number of inputs is computed by some circuit, so every language has a circuit family; the catch is that no algorithm can construct the circuits in such a family.

Uniform Circuit Families

Definition A time r(n) (or space r(n)) uniform family of circuits is a circuit family for which there exists a deterministic Turing machine that constructs the ith circuit in time (or space) r(i), given as input the integer i in unary.

With the uniformity condition, the halting problem remains unsolvable. The notion of uniformity directly connects Turing machines to circuits. For example, P = NP iff there exists a polynomial-time uniform family of circuits that decides some NP-complete language. (In one direction, a polynomial-time algorithm yields such a family via the standard Turing-machine-to-circuit simulation; in the other, the uniform family can be constructed and evaluated in polynomial time.)

The Class NC

Let NC^i be the set of languages recognized by a log-space uniform family of circuits of polynomial size and depth O(log^i n). Then NC = ⋃_i NC^i.

NC¹ ⊆ L ⊆ NL ⊆ NC².
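For example, the inclusion NL ⊆ NC² can be seen by a standard argument (not from the slides): an NL computation reduces to reachability in its polynomial-size configuration graph, reachability is computed by O(log n) rounds of Boolean matrix squaring, and each squaring is a circuit of polynomial size and depth O(log n), for total depth O(log n) · O(log n) = O(log² n).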

It is not known whether the classes NC^i are distinct. As with PH, the NC hierarchy collapses if NC^i = NC^(i+1) for some i. It is open whether or not P = NC.

P/Poly

What class of languages is obtained if we drop the uniformity condition but require that the circuits have polynomial size? Definition P/poly is the set of languages recognized by families of polynomial-size circuits.

Alternatively, P/poly is the set of languages recognized by deterministic Turing machines in polynomial time, where the Turing machine is allowed to receive a polynomial-sized “piece of advice”, a(n), that is only a function of input length. (See pg. 383 of Models of Computation.) If NP ⊆ P/poly, it is not hard to show that PH = Σ_2^p (the Karp-Lipton Theorem; see Arora and Barak, Section 6.2).
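Spelled out (a standard formulation, not quoted from the slides): a language A is in P/poly if and only if there exist a polynomial-time Turing machine M, a polynomial p, and advice strings a(1), a(2), ... with |a(n)| ≤ p(n), such that for every input x, x ∈ A ⟺ M accepts (x, a(|x|)). The advice depends only on the input length |x|, never on x itself.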
