
Descriptive Complexity

Neil Immerman

College of Computer and Information Sciences
University of Massachusetts, Amherst
Amherst, MA, USA
people.cs.umass.edu/~immerman

P = ⋃_{k=1}^{∞} DTIME[n^k]

P is a good mathematical wrapper for “truly feasible”.

[Diagram: the world of complexity classes, from FO, FO(REGULAR), and FO(CFL) at the bottom, up through P (with its complete problems), NP ∩ co-NP, NP and co-NP (with their complete problems), Recursive, r.e., and co-r.e.; "truly feasible" is marked as an informal region inside P.]

"Truly feasible" is the informal set of problems we can solve exactly on all reasonably sized instances.


NTIME[t(n)]: a mathematical fiction

[Diagram: a depth-t(n) binary tree of nondeterministic choice bits b1 b2 b3 ··· b_t(n). On input w with |w| = n, N accepts w iff at least one of the 2^t(n) computation paths accepts.]

NP = ⋃_{k=1}^{∞} NTIME[n^k]

Many optimization problems we want to solve are NP complete: SAT, TSP, 3-COLOR, CLIQUE, ...

As decision problems, all NP complete problems are isomorphic.

[Diagram: the class world again, now showing NP and co-NP with SAT and its complement as complete problems, above P and below PSPACE, EXPTIME, and Recursive.]


Descriptive Complexity

[Diagram: a computation maps a query q1 q2 ··· qn to an answer a1 a2 ··· ai ··· am; the property S is the set of inputs on which a given answer bit ai is 1.]

Restrict attention to the complexity of computing individual bits of the output, i.e., decision problems.

How hard is it to check whether the input has property S?

How rich a language do we need to express property S?

There is a constructive isomorphism between these two approaches.


Think of the Input as a Finite Logical Structure

Graph: G = ({v1, ..., vn}, ≤, E, s, t), vocabulary Σg = (E², s, t).
[Diagram: a small directed graph with source s and target t.]

Binary string: w = ({p1, ..., p8}, ≤, S), vocabulary Σs = (S¹).
Here S = {p2, p5, p7, p8}, i.e., w = 01001011.

First-Order Logic

input symbols: from Σ
variables: x, y, z, ...
boolean connectives: ∧, ∨, ¬
quantifiers: ∀, ∃
numeric symbols: =, ≤, +, ×, min, max

α ≡ ∀x∃y(E(x, y))          α ∈ L(Σg)

β ≡ ∃x∀y(x ≤ y ∧ S(x))     β ∈ L(Σs)

β ≡ S(min)

In this setting, with the structure of interest being the finite input, FO is a weak, low-level complexity class.
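To make the semantics concrete, here is a minimal sketch (mine, not from the talk) that evaluates α and β on the two example structures; the Python encodings of the graph and the string are my own choices.

    # The string structure w = 01001011: universe {0,...,7} with the usual <=;
    # S holds the positions carrying a 1 (0-indexed; the talk writes p1..p8).
    universe = range(8)
    S = {1, 4, 6, 7}                     # p2, p5, p7, p8

    # beta = ∃x ∀y (x <= y ∧ S(x)): "the minimum position is in S"
    beta = any(all(x <= y and x in S for y in universe) for x in universe)

    # The graph structure: universe {0,...,5} with edge relation E (a 6-cycle).
    V = range(6)
    E = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)}

    # alpha = ∀x ∃y E(x, y): "every vertex has an outgoing edge"
    alpha = all(any((x, y) in E for y in V) for x in V)

    print(alpha, beta)       # True False: w starts with 0, so S(min) fails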

Fagin's Theorem: NP = SO∃

Second-Order Logic: FO plus Relation Variables

Φ3color ≡ ∃R¹∃G¹∃B¹ ∀x∀y((R(x) ∨ G(x) ∨ B(x)) ∧ (E(x, y) → (¬(R(x) ∧ R(y)) ∧ ¬(G(x) ∧ G(y)) ∧ ¬(B(x) ∧ B(y)))))

[Diagram: a graph on vertices s, a, b, c, d, e, f, g, t, shown with a valid 3-coloring.]
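Φ3color just asserts the existence of three unary relations; a brute-force sketch (mine) checks it by enumerating all candidate colorings, which is exactly the exponential search that the SO∃ quantifiers hide. The edge list below is hypothetical; the talk's figure only names the vertices.

    from itertools import product

    def three_colorable(V, E):
        # Enumerate all ways to place each vertex in R, G, or B (3^|V| tries);
        # a partition into color classes suffices for the disjunction R∨G∨B.
        for colors in product(range(3), repeat=len(V)):
            c = dict(zip(V, colors))
            # E(x, y) -> x and y lie in different relations
            if all(c[x] != c[y] for (x, y) in E):
                return True
        return False

    V = ['s', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 't']
    E = [('s', 'a'), ('s', 'b'), ('a', 'c'), ('b', 'c'),
         ('c', 'd'), ('d', 'e'), ('e', 'f'), ('f', 'g'), ('g', 't')]
    print(three_colorable(V, E))         # True for this example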

[Diagram: the class world with Fagin's theorem placed on it: NP = SO∃ and co-NP = SO∀ (SAT and its complement complete), the PTIME Hierarchy = SO, below PSPACE and EXPTIME.]

Addition is First-Order

Q+ : STRUC[ΣAB] → STRUC[Σs]

    A    a1 a2 ... an−1 an
  + B    b1 b2 ... bn−1 bn
    S    s1 s2 ... sn−1 sn

C(i) ≡ (∃j > i)(A(j) ∧ B(j) ∧ (∀k . j > k > i)(A(k) ∨ B(k)))

Q+(i) ≡ A(i) ⊕ B(i) ⊕ C(i)
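A quick sanity check (my sketch): C(i) says a carry reaches position i iff some lower-order position j generates a carry (both bits 1) and every position strictly between j and i propagates one (at least one bit 1). With index 0 as the high-order end, matching the talk's a1 ... an:

    def add_fo(A, B):
        # First-order addition on bit lists; index 0 = a1 (high order),
        # index n-1 = an (low order). Overflow past position 1 is dropped.
        n = len(A)
        def C(i):                                 # carry into position i
            return any(A[j] and B[j] and
                       all(A[k] or B[k] for k in range(i + 1, j))
                       for j in range(i + 1, n))
        return [A[i] ^ B[i] ^ C(i) for i in range(n)]

    print(add_fo([0, 1, 0, 1], [0, 0, 1, 1]))     # 0101 + 0011 = [1, 0, 0, 0]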

Quantifiers are Parallel

Parallel Machines:   CRAM[t(n)] = CRCW-PRAM-TIME[t(n)]-HARD[n^{O(1)}]

Assume array A[x], x = 1, ..., r, is in memory.

∀x(A(x))  ≡  write(1);  proc p_i: if (A[i] = 0) then write(0)

[Diagram: processors P1, ..., Pr attached to a global memory; the answer cell starts at 1 and is concurrently overwritten with 0 by every processor whose A[i] = 0.]
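A minimal sketch of the same protocol (mine; Python threads only approximate a CRCW PRAM, but all concurrent writers here agree on the value 0, just as the common-write model requires):

    from concurrent.futures import ThreadPoolExecutor

    def forall_A(A):
        # Evaluate ∀x A(x) CRAM-style: initialize the shared answer to 1,
        # then one "processor" per index writes 0 on a counterexample.
        answer = [1]                      # one cell of global memory
        def proc(i):
            if A[i] == 0:
                answer[0] = 0
        with ThreadPoolExecutor() as pool:
            list(pool.map(proc, range(len(A))))
        return answer[0]

    print(forall_A([1, 1, 1]), forall_A([1, 0, 1]))   # 1 0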

Neil Immerman Descriptive Complexity FO( ) co-r.e. complete Arithmetic Hierarchy N r.e. complete Halt Halt co-r.e. FO (N) r.e. FO (N) ∀ ∃ Recursive

FO EXPTIME

= PSPACE SO co-NP complete PTIME Hierarchy NP complete SAT SAT co-NP SO NP SO CRAM[1] ∀ ∃ NP co-NP ∩ P complete = P

AC0 “truly feasible” = FO(CFL)

Logarithmic-Time Hierarchy FO(REGULAR)

FO LOGTIME Hierarchy CRAM[1] AC0

Inductive Definitions and Least Fixed Point

REACH = {(G, s, t) | s →* t}          REACH ∉ FO

E*(x, y)  def=  x = y ∨ E(x, y) ∨ ∃z(E*(x, z) ∧ E*(z, y))

φtc(R, x, y)  ≡  x = y ∨ E(x, y) ∨ ∃z(R(x, z) ∧ R(z, y))

φtc^G : binRel(G) → binRel(G) is a monotone operator.

G ∈ REACH  ⇔  G ⊨ (LFP φtc)(s, t)            E* = LFP(φtc)

[Diagram: a small directed graph with a path from s to t.]

¨¨* HH ¨¨ HHj H r ¨* r ¨* t s H ¨¨ ¨¨ r HHj¨ r ¨ r HH r HHj r r Neil Immerman Descriptive Complexity proof: Monotone means, for all R S, ϕ(R) ϕ(S). ⊆ ⊆ Let I0 def= ; Ir+1 def= ϕ(Ir ) Thus, = I0 I1 It . ∅ ∅ ⊆ ⊆ · · · ⊆ Let t be min such that It = It+1. Note that t nk where ≤ n = V G . ϕ(It ) = It , It is a fixed point of ϕ. | | Suppose ϕ(F) = F. By induction on r, for all r, Ir F. ⊆ base case: I0 = F. ∅ ⊆ inductive case: Assume Ij F ⊆ By monotonicity, ϕ(Ij ) ϕ(F), i.e., Ij+1 F. ⊆ ⊆ Thus It F and It = LFP(ϕ). ⊆ 

Tarski-Knaster Theorem

Thm. If ϕ : Relk (G) Relk (G) is monotone, then LFP(ϕ) → exists and can be computed in P.

Neil Immerman Descriptive Complexity Let I0 def= ; Ir+1 def= ϕ(Ir ) Thus, = I0 I1 It . ∅ ∅ ⊆ ⊆ · · · ⊆ Let t be min such that It = It+1. Note that t nk where ≤ n = V G . ϕ(It ) = It , so It is a fixed point of ϕ. | | Suppose ϕ(F) = F. By induction on r, for all r, Ir F. ⊆ base case: I0 = F. ∅ ⊆ inductive case: Assume Ij F ⊆ By monotonicity, ϕ(Ij ) ϕ(F), i.e., Ij+1 F. ⊆ ⊆ Thus It F and It = LFP(ϕ). ⊆ 

Tarski-Knaster Theorem

Thm. If ϕ : Relk (G) Relk (G) is monotone, then LFP(ϕ) → exists and can be computed in P. proof: Monotone means, for all R S, ϕ(R) ϕ(S). ⊆ ⊆

Neil Immerman Descriptive Complexity Thus, = I0 I1 It . ∅ ⊆ ⊆ · · · ⊆ Let t be min such that It = It+1. Note that t nk where ≤ n = V G . ϕ(It ) = It , so It is a fixed point of ϕ. | | Suppose ϕ(F) = F. By induction on r, for all r, Ir F. ⊆ base case: I0 = F. ∅ ⊆ inductive case: Assume Ij F ⊆ By monotonicity, ϕ(Ij ) ϕ(F), i.e., Ij+1 F. ⊆ ⊆ Thus It F and It = LFP(ϕ). ⊆ 

Tarski-Knaster Theorem

Thm. If ϕ : Relk (G) Relk (G) is monotone, then LFP(ϕ) → exists and can be computed in P. proof: Monotone means, for all R S, ϕ(R) ϕ(S). ⊆ ⊆ Let I0 def= ; Ir+1 def= ϕ(Ir ) ∅

Neil Immerman Descriptive Complexity Let t be min such that It = It+1. Note that t nk where ≤ n = V G . ϕ(It ) = It , so It is a fixed point of ϕ. | | Suppose ϕ(F) = F. By induction on r, for all r, Ir F. ⊆ base case: I0 = F. ∅ ⊆ inductive case: Assume Ij F ⊆ By monotonicity, ϕ(Ij ) ϕ(F), i.e., Ij+1 F. ⊆ ⊆ Thus It F and It = LFP(ϕ). ⊆ 

Tarski-Knaster Theorem

Thm. If ϕ : Relk (G) Relk (G) is monotone, then LFP(ϕ) → exists and can be computed in P. proof: Monotone means, for all R S, ϕ(R) ϕ(S). ⊆ ⊆ Let I0 def= ; Ir+1 def= ϕ(Ir ) Thus, = I0 I1 It . ∅ ∅ ⊆ ⊆ · · · ⊆

Neil Immerman Descriptive Complexity ϕ(It ) = It , so It is a fixed point of ϕ. Suppose ϕ(F) = F. By induction on r, for all r, Ir F. ⊆ base case: I0 = F. ∅ ⊆ inductive case: Assume Ij F ⊆ By monotonicity, ϕ(Ij ) ϕ(F), i.e., Ij+1 F. ⊆ ⊆ Thus It F and It = LFP(ϕ). ⊆ 

Tarski-Knaster Theorem

Thm. If ϕ : Relk (G) Relk (G) is monotone, then LFP(ϕ) → exists and can be computed in P. proof: Monotone means, for all R S, ϕ(R) ϕ(S). ⊆ ⊆ Let I0 def= ; Ir+1 def= ϕ(Ir ) Thus, = I0 I1 It . ∅ ∅ ⊆ ⊆ · · · ⊆ Let t be min such that It = It+1. Note that t nk where n = V G . ≤ | |

Neil Immerman Descriptive Complexity Suppose ϕ(F) = F. By induction on r, for all r, Ir F. ⊆ base case: I0 = F. ∅ ⊆ inductive case: Assume Ij F ⊆ By monotonicity, ϕ(Ij ) ϕ(F), i.e., Ij+1 F. ⊆ ⊆ Thus It F and It = LFP(ϕ). ⊆ 

Tarski-Knaster Theorem

Thm. If ϕ : Relk (G) Relk (G) is monotone, then LFP(ϕ) → exists and can be computed in P. proof: Monotone means, for all R S, ϕ(R) ϕ(S). ⊆ ⊆ Let I0 def= ; Ir+1 def= ϕ(Ir ) Thus, = I0 I1 It . ∅ ∅ ⊆ ⊆ · · · ⊆ Let t be min such that It = It+1. Note that t nk where ≤ n = V G . ϕ(It ) = It , so It is a fixed point of ϕ. | |

Neil Immerman Descriptive Complexity By induction on r, for all r, Ir F. ⊆ base case: I0 = F. ∅ ⊆ inductive case: Assume Ij F ⊆ By monotonicity, ϕ(Ij ) ϕ(F), i.e., Ij+1 F. ⊆ ⊆ Thus It F and It = LFP(ϕ). ⊆ 

Tarski-Knaster Theorem

Thm. If ϕ : Relk (G) Relk (G) is monotone, then LFP(ϕ) → exists and can be computed in P. proof: Monotone means, for all R S, ϕ(R) ϕ(S). ⊆ ⊆ Let I0 def= ; Ir+1 def= ϕ(Ir ) Thus, = I0 I1 It . ∅ ∅ ⊆ ⊆ · · · ⊆ Let t be min such that It = It+1. Note that t nk where ≤ n = V G . ϕ(It ) = It , so It is a fixed point of ϕ. | | Suppose ϕ(F) = F.

Neil Immerman Descriptive Complexity base case: I0 = F. ∅ ⊆ inductive case: Assume Ij F ⊆ By monotonicity, ϕ(Ij ) ϕ(F), i.e., Ij+1 F. ⊆ ⊆ Thus It F and It = LFP(ϕ). ⊆ 

Tarski-Knaster Theorem

Thm. If ϕ : Relk (G) Relk (G) is monotone, then LFP(ϕ) → exists and can be computed in P. proof: Monotone means, for all R S, ϕ(R) ϕ(S). ⊆ ⊆ Let I0 def= ; Ir+1 def= ϕ(Ir ) Thus, = I0 I1 It . ∅ ∅ ⊆ ⊆ · · · ⊆ Let t be min such that It = It+1. Note that t nk where ≤ n = V G . ϕ(It ) = It , so It is a fixed point of ϕ. | | Suppose ϕ(F) = F. By induction on r, for all r, Ir F. ⊆

Neil Immerman Descriptive Complexity inductive case: Assume Ij F ⊆ By monotonicity, ϕ(Ij ) ϕ(F), i.e., Ij+1 F. ⊆ ⊆ Thus It F and It = LFP(ϕ). ⊆ 

Tarski-Knaster Theorem

Thm. If ϕ : Relk (G) Relk (G) is monotone, then LFP(ϕ) → exists and can be computed in P. proof: Monotone means, for all R S, ϕ(R) ϕ(S). ⊆ ⊆ Let I0 def= ; Ir+1 def= ϕ(Ir ) Thus, = I0 I1 It . ∅ ∅ ⊆ ⊆ · · · ⊆ Let t be min such that It = It+1. Note that t nk where ≤ n = V G . ϕ(It ) = It , so It is a fixed point of ϕ. | | Suppose ϕ(F) = F. By induction on r, for all r, Ir F. ⊆ base case: I0 = F. ∅ ⊆

Neil Immerman Descriptive Complexity By monotonicity, ϕ(Ij ) ϕ(F), i.e., Ij+1 F. ⊆ ⊆ Thus It F and It = LFP(ϕ). ⊆ 

Tarski-Knaster Theorem

Thm. If ϕ : Relk (G) Relk (G) is monotone, then LFP(ϕ) → exists and can be computed in P. proof: Monotone means, for all R S, ϕ(R) ϕ(S). ⊆ ⊆ Let I0 def= ; Ir+1 def= ϕ(Ir ) Thus, = I0 I1 It . ∅ ∅ ⊆ ⊆ · · · ⊆ Let t be min such that It = It+1. Note that t nk where ≤ n = V G . ϕ(It ) = It , so It is a fixed point of ϕ. | | Suppose ϕ(F) = F. By induction on r, for all r, Ir F. ⊆ base case: I0 = F. ∅ ⊆ inductive case: Assume Ij F ⊆

Neil Immerman Descriptive Complexity Thus It F and It = LFP(ϕ). ⊆ 

Tarski-Knaster Theorem

Thm. If ϕ : Relk (G) Relk (G) is monotone, then LFP(ϕ) → exists and can be computed in P. proof: Monotone means, for all R S, ϕ(R) ϕ(S). ⊆ ⊆ Let I0 def= ; Ir+1 def= ϕ(Ir ) Thus, = I0 I1 It . ∅ ∅ ⊆ ⊆ · · · ⊆ Let t be min such that It = It+1. Note that t nk where ≤ n = V G . ϕ(It ) = It , so It is a fixed point of ϕ. | | Suppose ϕ(F) = F. By induction on r, for all r, Ir F. ⊆ base case: I0 = F. ∅ ⊆ inductive case: Assume Ij F ⊆ By monotonicity, ϕ(Ij ) ϕ(F), i.e., Ij+1 F. ⊆ ⊆

Neil Immerman Descriptive Complexity Tarski-Knaster Theorem

Thm. If ϕ : Relk (G) Relk (G) is monotone, then LFP(ϕ) → exists and can be computed in P. proof: Monotone means, for all R S, ϕ(R) ϕ(S). ⊆ ⊆ Let I0 def= ; Ir+1 def= ϕ(Ir ) Thus, = I0 I1 It . ∅ ∅ ⊆ ⊆ · · · ⊆ Let t be min such that It = It+1. Note that t nk where ≤ n = V G . ϕ(It ) = It , so It is a fixed point of ϕ. | | Suppose ϕ(F) = F. By induction on r, for all r, Ir F. ⊆ base case: I0 = F. ∅ ⊆ inductive case: Assume Ij F ⊆ By monotonicity, ϕ(Ij ) ϕ(F), i.e., Ij+1 F. ⊆ ⊆ Thus It F and It = LFP(ϕ). ⊆ 
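The proof is already an algorithm: iterate φ from ∅ until nothing changes. A generic sketch (mine), instantiated with the φtc operator from the previous slide:

    def lfp(phi, bottom=frozenset()):
        # Tarski-Knaster iteration: I^0 = ∅, I^{r+1} = φ(I^r), stop at a
        # fixed point. For monotone φ on k-ary relations over n points
        # this terminates within n^k rounds.
        I = bottom
        while True:
            J = phi(I)
            if J == I:
                return I
            I = J

    def phi_tc(V, E):
        # φtc(R, x, y) ≡ x = y ∨ E(x, y) ∨ ∃z(R(x, z) ∧ R(z, y))
        def phi(R):
            return frozenset((x, y) for x in V for y in V
                             if x == y or (x, y) in E
                             or any((x, z) in R and (z, y) in R for z in V))
        return phi

    V = range(5)
    E = {(0, 1), (1, 2), (2, 3), (3, 4)}
    Estar = lfp(phi_tc(V, E))
    print((0, 4) in Estar)        # True: t = 4 is reachable from s = 0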

Neil Immerman Descriptive Complexity  I1 = ϕG( ) = (a, b) V G V G dist(a, b) 1 tc ∅ ∈ × ≤  I2 = (ϕG)2( ) = (a, b) V G V G dist(a, b) 2 tc ∅ ∈ × ≤  I3 = (ϕG)3( ) = (a, b) V G V G dist(a, b) 4 tc ∅ ∈ × ≤ . . . . = . .  − Ir = (ϕG)r ( ) = (a, b) V G V G dist(a, b) 2r 1 tc ∅ ∈ × ≤ . . . . = . . d + e  (ϕG) 1 log n ( ) = (a, b) V G V G dist(a, b) n tc ∅ ∈ × ≤ d1+log ne LFP(ϕtc) = ϕ ( ); REACH IND[log n] tc ∅ ∈ Next we will show that IND[t(n)] = FO[t(n)].

Inductive Definition of

ϕtc(R, x, y) x = y E(x, y) z(R(x, z) R(z, y)) ≡ ∨ ∨ ∃ ∧

Neil Immerman Descriptive Complexity  I2 = (ϕG)2( ) = (a, b) V G V G dist(a, b) 2 tc ∅ ∈ × ≤  I3 = (ϕG)3( ) = (a, b) V G V G dist(a, b) 4 tc ∅ ∈ × ≤ . . . . = . .  − Ir = (ϕG)r ( ) = (a, b) V G V G dist(a, b) 2r 1 tc ∅ ∈ × ≤ . . . . = . . d + e  (ϕG) 1 log n ( ) = (a, b) V G V G dist(a, b) n tc ∅ ∈ × ≤ d1+log ne LFP(ϕtc) = ϕ ( ); REACH IND[log n] tc ∅ ∈ Next we will show that IND[t(n)] = FO[t(n)].

Inductive Definition of Transitive Closure

ϕtc(R, x, y) x = y E(x, y) z(R(x, z) R(z, y)) ≡ ∨ ∨ ∃ ∧  I1 = ϕG( ) = (a, b) V G V G dist(a, b) 1 tc ∅ ∈ × ≤

Neil Immerman Descriptive Complexity  I3 = (ϕG)3( ) = (a, b) V G V G dist(a, b) 4 tc ∅ ∈ × ≤ . . . . = . .  − Ir = (ϕG)r ( ) = (a, b) V G V G dist(a, b) 2r 1 tc ∅ ∈ × ≤ . . . . = . . d + e  (ϕG) 1 log n ( ) = (a, b) V G V G dist(a, b) n tc ∅ ∈ × ≤ d1+log ne LFP(ϕtc) = ϕ ( ); REACH IND[log n] tc ∅ ∈ Next we will show that IND[t(n)] = FO[t(n)].

Inductive Definition of Transitive Closure

ϕtc(R, x, y) x = y E(x, y) z(R(x, z) R(z, y)) ≡ ∨ ∨ ∃ ∧  I1 = ϕG( ) = (a, b) V G V G dist(a, b) 1 tc ∅ ∈ × ≤  I2 = (ϕG)2( ) = (a, b) V G V G dist(a, b) 2 tc ∅ ∈ × ≤

Neil Immerman Descriptive Complexity . . . . = . .  − Ir = (ϕG)r ( ) = (a, b) V G V G dist(a, b) 2r 1 tc ∅ ∈ × ≤ . . . . = . . d + e  (ϕG) 1 log n ( ) = (a, b) V G V G dist(a, b) n tc ∅ ∈ × ≤ d1+log ne LFP(ϕtc) = ϕ ( ); REACH IND[log n] tc ∅ ∈ Next we will show that IND[t(n)] = FO[t(n)].

Inductive Definition of Transitive Closure

ϕtc(R, x, y) x = y E(x, y) z(R(x, z) R(z, y)) ≡ ∨ ∨ ∃ ∧  I1 = ϕG( ) = (a, b) V G V G dist(a, b) 1 tc ∅ ∈ × ≤  I2 = (ϕG)2( ) = (a, b) V G V G dist(a, b) 2 tc ∅ ∈ × ≤  I3 = (ϕG)3( ) = (a, b) V G V G dist(a, b) 4 tc ∅ ∈ × ≤

Neil Immerman Descriptive Complexity d + e  (ϕG) 1 log n ( ) = (a, b) V G V G dist(a, b) n tc ∅ ∈ × ≤ d1+log ne LFP(ϕtc) = ϕ ( ); REACH IND[log n] tc ∅ ∈ Next we will show that IND[t(n)] = FO[t(n)].

Inductive Definition of Transitive Closure

ϕtc(R, x, y) x = y E(x, y) z(R(x, z) R(z, y)) ≡ ∨ ∨ ∃ ∧  I1 = ϕG( ) = (a, b) V G V G dist(a, b) 1 tc ∅ ∈ × ≤  I2 = (ϕG)2( ) = (a, b) V G V G dist(a, b) 2 tc ∅ ∈ × ≤  I3 = (ϕG)3( ) = (a, b) V G V G dist(a, b) 4 tc ∅ ∈ × ≤ . . . . = . .  − Ir = (ϕG)r ( ) = (a, b) V G V G dist(a, b) 2r 1 tc ∅ ∈ × ≤ . . . . = . .

Neil Immerman Descriptive Complexity d1+log ne LFP(ϕtc) = ϕ ( ); REACH IND[log n] tc ∅ ∈ Next we will show that IND[t(n)] = FO[t(n)].

Inductive Definition of Transitive Closure

ϕtc(R, x, y) x = y E(x, y) z(R(x, z) R(z, y)) ≡ ∨ ∨ ∃ ∧  I1 = ϕG( ) = (a, b) V G V G dist(a, b) 1 tc ∅ ∈ × ≤  I2 = (ϕG)2( ) = (a, b) V G V G dist(a, b) 2 tc ∅ ∈ × ≤  I3 = (ϕG)3( ) = (a, b) V G V G dist(a, b) 4 tc ∅ ∈ × ≤ . . . . = . .  − Ir = (ϕG)r ( ) = (a, b) V G V G dist(a, b) 2r 1 tc ∅ ∈ × ≤ . . . . = . . d + e  (ϕG) 1 log n ( ) = (a, b) V G V G dist(a, b) n tc ∅ ∈ × ≤

Neil Immerman Descriptive Complexity Next we will show that IND[t(n)] = FO[t(n)].

Inductive Definition of Transitive Closure

ϕtc(R, x, y) x = y E(x, y) z(R(x, z) R(z, y)) ≡ ∨ ∨ ∃ ∧  I1 = ϕG( ) = (a, b) V G V G dist(a, b) 1 tc ∅ ∈ × ≤  I2 = (ϕG)2( ) = (a, b) V G V G dist(a, b) 2 tc ∅ ∈ × ≤  I3 = (ϕG)3( ) = (a, b) V G V G dist(a, b) 4 tc ∅ ∈ × ≤ . . . . = . .  − Ir = (ϕG)r ( ) = (a, b) V G V G dist(a, b) 2r 1 tc ∅ ∈ × ≤ . . . . = . . d + e  (ϕG) 1 log n ( ) = (a, b) V G V G dist(a, b) n tc ∅ ∈ × ≤ d1+log ne LFP(ϕtc) = ϕ ( ); REACH IND[log n] tc ∅ ∈

Neil Immerman Descriptive Complexity Inductive Definition of Transitive Closure

ϕtc(R, x, y) x = y E(x, y) z(R(x, z) R(z, y)) ≡ ∨ ∨ ∃ ∧  I1 = ϕG( ) = (a, b) V G V G dist(a, b) 1 tc ∅ ∈ × ≤  I2 = (ϕG)2( ) = (a, b) V G V G dist(a, b) 2 tc ∅ ∈ × ≤  I3 = (ϕG)3( ) = (a, b) V G V G dist(a, b) 4 tc ∅ ∈ × ≤ . . . . = . .  − Ir = (ϕG)r ( ) = (a, b) V G V G dist(a, b) 2r 1 tc ∅ ∈ × ≤ . . . . = . . d + e  (ϕG) 1 log n ( ) = (a, b) V G V G dist(a, b) n tc ∅ ∈ × ≤ d1+log ne LFP(ϕtc) = ϕ ( ); REACH IND[log n] tc ∅ ∈ Next we will show that IND[t(n)] = FO[t(n)].
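Because each application of φtc doubles the distance covered, the Tarski-Knaster iteration stabilizes after about 1 + log n rounds. A quick check (mine, reusing phi_tc from the sketch above) on a path, the worst case:

    def rounds_to_fixpoint(V, E):
        # Count applications of φtc until I^{r+1} = I^r.
        phi, I, r = phi_tc(V, E), frozenset(), 0
        while True:
            J, r = phi(I), r + 1
            if J == I:
                return r - 1         # the last application changed nothing
            I = J

    n = 16
    path = {(i, i + 1) for i in range(n - 1)}
    print(rounds_to_fixpoint(range(n), path))   # 5 = 1 + log2(16)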

Neil Immerman Descriptive Complexity 2. Using , replace two occurrences of R with one: ∀

ϕtc(R, x, y) ( z.M )( z)( uv.M )R(u, v) ≡ ∀ 1 ∃ ∀ 2 M (u = x v = z) (u = z v = y) 2 ≡ ∧ ∨ ∧ 3. Requantify x and y.

M (x = u y = v) 3 ≡ ∧

ϕtc(R, x, y) [( z.M )( z)( uv.M )( xy.M )] R(x, y) ≡ ∀ 1 ∃ ∀ 2 ∃ 3 Every FO inductive definition is equivalent to a quantifier block.

ϕtc(R, x, y) x = y E(x, y) z (R(x, z) R(z, y)) ≡ ∨ ∨ ∃ ∧ 1. Dummy universal quantification for base case:

ϕtc(R, x, y) ( z.M )( z)(R(x, z) R(z, y)) ≡ ∀ 1 ∃ ∧ M (x = y E(x, y)) 1 ≡ ¬ ∨

Neil Immerman Descriptive Complexity 3. Requantify x and y.

M (x = u y = v) 3 ≡ ∧

ϕtc(R, x, y) [( z.M )( z)( uv.M )( xy.M )] R(x, y) ≡ ∀ 1 ∃ ∀ 2 ∃ 3 Every FO inductive definition is equivalent to a quantifier block.

ϕtc(R, x, y) x = y E(x, y) z (R(x, z) R(z, y)) ≡ ∨ ∨ ∃ ∧ 1. Dummy universal quantification for base case:

ϕtc(R, x, y) ( z.M )( z)(R(x, z) R(z, y)) ≡ ∀ 1 ∃ ∧ M (x = y E(x, y)) 1 ≡ ¬ ∨ 2. Using , replace two occurrences of R with one: ∀

ϕtc(R, x, y) ( z.M )( z)( uv.M )R(u, v) ≡ ∀ 1 ∃ ∀ 2 M (u = x v = z) (u = z v = y) 2 ≡ ∧ ∨ ∧

Neil Immerman Descriptive Complexity ϕtc(R, x, y) x = y E(x, y) z (R(x, z) R(z, y)) ≡ ∨ ∨ ∃ ∧ 1. Dummy universal quantification for base case:

ϕtc(R, x, y) ( z.M )( z)(R(x, z) R(z, y)) ≡ ∀ 1 ∃ ∧ M (x = y E(x, y)) 1 ≡ ¬ ∨ 2. Using , replace two occurrences of R with one: ∀

ϕtc(R, x, y) ( z.M )( z)( uv.M )R(u, v) ≡ ∀ 1 ∃ ∀ 2 M (u = x v = z) (u = z v = y) 2 ≡ ∧ ∨ ∧ 3. Requantify x and y.

M (x = u y = v) 3 ≡ ∧

ϕtc(R, x, y) [( z.M )( z)( uv.M )( xy.M )] R(x, y) ≡ ∀ 1 ∃ ∀ 2 ∃ 3 Every FO inductive definition is equivalent to a quantifier block.

Neil Immerman Descriptive Complexity ϕtc(R, x, y) [QB ]R(x, y) ≡ tc ϕr ( ) [QB ]r (false) tc ∅ ≡ tc

Thus, for any structure STRUC[Σg], A ∈

REACH = (LFPϕtc)(s, t) A ∈ ⇔ A | = ([QB ]d1+log ||A||e false)(s, t) ⇔ A | tc

QB [( z.M1)( z)( uv.M2)( xy.M3)] tc ≡ ∀ ∃ ∀ ∀

ϕtc(R, x, y) [( z.M )( z)( uv.M )( xy.M )]R(x, y) ≡ ∀ 1 ∃ ∀ 2 ∃ 3

Neil Immerman Descriptive Complexity ϕr ( ) [QB ]r (false) tc ∅ ≡ tc

Thus, for any structure STRUC[Σg], A ∈

REACH = (LFPϕtc)(s, t) A ∈ ⇔ A | = ([QB ]d1+log ||A||e false)(s, t) ⇔ A | tc

QB [( z.M1)( z)( uv.M2)( xy.M3)] tc ≡ ∀ ∃ ∀ ∀

ϕtc(R, x, y) [( z.M )( z)( uv.M )( xy.M )]R(x, y) ≡ ∀ 1 ∃ ∀ 2 ∃ 3

ϕtc(R, x, y) [QB ]R(x, y) ≡ tc

Neil Immerman Descriptive Complexity Thus, for any structure STRUC[Σg], A ∈

REACH = (LFPϕtc)(s, t) A ∈ ⇔ A | = ([QB ]d1+log ||A||e false)(s, t) ⇔ A | tc

QB [( z.M1)( z)( uv.M2)( xy.M3)] tc ≡ ∀ ∃ ∀ ∀

ϕtc(R, x, y) [( z.M )( z)( uv.M )( xy.M )]R(x, y) ≡ ∀ 1 ∃ ∀ 2 ∃ 3

ϕtc(R, x, y) [QB ]R(x, y) ≡ tc ϕr ( ) [QB ]r (false) tc ∅ ≡ tc

Neil Immerman Descriptive Complexity QB [( z.M1)( z)( uv.M2)( xy.M3)] tc ≡ ∀ ∃ ∀ ∀

ϕtc(R, x, y) [( z.M )( z)( uv.M )( xy.M )]R(x, y) ≡ ∀ 1 ∃ ∀ 2 ∃ 3

ϕtc(R, x, y) [QB ]R(x, y) ≡ tc ϕr ( ) [QB ]r (false) tc ∅ ≡ tc

Thus, for any structure STRUC[Σg], A ∈

REACH = (LFPϕtc)(s, t) A ∈ ⇔ A | = ([QB ]d1+log ||A||e false)(s, t) ⇔ A | tc

Neil Immerman Descriptive Complexity CRAM[t(n)] = concurrent parallel random access machine; polynomial hardware, parallel time O(t(n))

IND[t(n)] = first-order, depth t(n) inductive definitions

FO[t(n)] = t(n) repetitions of a block of restricted quantifiers:

QB = [(Q1 x1.M1) ··· (Qk xk.Mk)];   each Mi quantifier-free

φn = [QB][QB] ··· [QB] M0          (t(n) copies of QB)

Neil Immerman Descriptive Complexity proof idea: CRAM[t(n)] FO[t(n)]: For QB with k variables, keep in memory current value⊇ of formula on all possible assignments, using nk bits of global memory. Simulate each next quantifier in constant parallel time. CRAM[t(n)] FO[t(n)]: Inductively define new state of every bit of every register⊆ of every processor in terms of this global state at the previous time step. 

Thm. For all t(n), even beyond polynomial,

CRAM[t(n)] = FO[t(n)]

parallel time = inductive depth = QB iteration Thm. For all constructible, polynomially bounded t(n),

CRAM[t(n)] = IND[t(n)] = FO[t(n)]

Neil Immerman Descriptive Complexity Simulate each next quantifier in constant parallel time. CRAM[t(n)] FO[t(n)]: Inductively define new state of every bit of every register⊆ of every processor in terms of this global state at the previous time step. 

Thm. For all t(n), even beyond polynomial,

CRAM[t(n)] = FO[t(n)]

parallel time = inductive depth = QB iteration Thm. For all constructible, polynomially bounded t(n),

CRAM[t(n)] = IND[t(n)] = FO[t(n)]

proof idea: CRAM[t(n)] FO[t(n)]: For QB with k variables, keep in memory current value⊇ of formula on all possible assignments, using nk bits of global memory.

Neil Immerman Descriptive Complexity CRAM[t(n)] FO[t(n)]: Inductively define new state of every bit of every register⊆ of every processor in terms of this global state at the previous time step. 

Thm. For all t(n), even beyond polynomial,

CRAM[t(n)] = FO[t(n)]

parallel time = inductive depth = QB iteration Thm. For all constructible, polynomially bounded t(n),

CRAM[t(n)] = IND[t(n)] = FO[t(n)]

proof idea: CRAM[t(n)] FO[t(n)]: For QB with k variables, keep in memory current value⊇ of formula on all possible assignments, using nk bits of global memory. Simulate each next quantifier in constant parallel time.

Neil Immerman Descriptive Complexity Thm. For all t(n), even beyond polynomial,

CRAM[t(n)] = FO[t(n)]

parallel time = inductive depth = QB iteration Thm. For all constructible, polynomially bounded t(n),

CRAM[t(n)] = IND[t(n)] = FO[t(n)]

proof idea: CRAM[t(n)] FO[t(n)]: For QB with k variables, keep in memory current value⊇ of formula on all possible assignments, using nk bits of global memory. Simulate each next quantifier in constant parallel time. CRAM[t(n)] FO[t(n)]: Inductively define new state of every bit of every register⊆ of every processor in terms of this global state at the previous time step. 

Neil Immerman Descriptive Complexity parallel time = inductive depth = QB iteration Thm. For all constructible, polynomially bounded t(n),

CRAM[t(n)] = IND[t(n)] = FO[t(n)]

proof idea: CRAM[t(n)] FO[t(n)]: For QB with k variables, keep in memory current value⊇ of formula on all possible assignments, using nk bits of global memory. Simulate each next quantifier in constant parallel time. CRAM[t(n)] FO[t(n)]: Inductively define new state of every bit of every register⊆ of every processor in terms of this global state at the previous time step. 

Thm. For all t(n), even beyond polynomial,

CRAM[t(n)] = FO[t(n)]

Neil Immerman Descriptive Complexity FO( ) co-r.e. complete Arithmetic Hierarchy N r.e. complete Halt Halt FO (N) FO (N) co-r.e. ∀ ∃ r.e. Recursive

For t(n) poly bdd, EXPTIME

[2nO(1) ] O(1) CRAM PSPACE FO[2n ] SO co-NP complete PTIME Hierarchy NP complete SAT SAT CRAM[t(n)] co-NP NP NP co-NP ∩ = FO[nO(1)] P complete CRAM[nO(1)] P FO(LFP)

IND[t(n)] FO[logO(1) n] “truly CRAM[logO(1) n] NC

FO[log n] feasible” CRAM[log n] AC1 = FO(CFL) FO[t(n)]

FO(REGULAR)

FO LOGTIME Hierarchy CRAM[1] AC0

Neil Immerman Descriptive Complexity FO( ) co-r.e. complete Arithmetic Hierarchy N r.e. complete Halt Halt FO (N) FO (N) co-r.e. ∀ ∃ r.e. Recursive

Remember that EXPTIME

nO(1) for all t(n), FO[2 ] PSPACE SO co-NP complete PTIME Hierarchy NP complete SAT SAT SO SO co-NP ∀ ∃ NP NP co-NP CRAM[t(n)] ∩ FO[nO(1)] P complete Horn- SAT P = FO(LFP) FO[logO(1) n] “truly NC

FO[t(n)] FO[log n] feasible” AC1 FO(CFL)

FO(REGULAR)

FO LOGTIME Hierarchy AC0

Neil Immerman Descriptive Complexity Since variables range over a universe of size n, a constant number of variables can specify a polynomial number of gates.

The proof is just a more detailed look at CRAM[t(n)] = FO[t(n)].

A bounded number, k, of variables, is k log n bits and corresponds to nk gates, i.e., polynomially much hardware.

A second-order variable of arity r is nr bits, corresponding to 2nr gates.

Number of Variables Determines Amount of Hardware

Thm. For k = 1, 2,..., DSPACE[nk ] = VAR[k + 1]

Neil Immerman Descriptive Complexity The proof is just a more detailed look at CRAM[t(n)] = FO[t(n)].

A bounded number, k, of variables, is k log n bits and corresponds to nk gates, i.e., polynomially much hardware.

A second-order variable of arity r is nr bits, corresponding to 2nr gates.

Number of Variables Determines Amount of Hardware

Thm. For k = 1, 2,..., DSPACE[nk ] = VAR[k + 1]

Since variables range over a universe of size n, a constant number of variables can specify a polynomial number of gates.

Neil Immerman Descriptive Complexity A bounded number, k, of variables, is k log n bits and corresponds to nk gates, i.e., polynomially much hardware.

A second-order variable of arity r is nr bits, corresponding to 2nr gates.

Number of Variables Determines Amount of Hardware

Thm. For k = 1, 2,..., DSPACE[nk ] = VAR[k + 1]

Since variables range over a universe of size n, a constant number of variables can specify a polynomial number of gates.

The proof is just a more detailed look at CRAM[t(n)] = FO[t(n)].

Neil Immerman Descriptive Complexity A second-order variable of arity r is nr bits, corresponding to 2nr gates.

Number of Variables Determines Amount of Hardware

Thm. For k = 1, 2,..., DSPACE[nk ] = VAR[k + 1]

Since variables range over a universe of size n, a constant number of variables can specify a polynomial number of gates.

The proof is just a more detailed look at CRAM[t(n)] = FO[t(n)].

A bounded number, k, of variables, is k log n bits and corresponds to nk gates, i.e., polynomially much hardware.

Neil Immerman Descriptive Complexity Number of Variables Determines Amount of Hardware

Thm. For k = 1, 2,..., DSPACE[nk ] = VAR[k + 1]

Since variables range over a universe of size n, a constant number of variables can specify a polynomial number of gates.

The proof is just a more detailed look at CRAM[t(n)] = FO[t(n)].

A bounded number, k, of variables, is k log n bits and corresponds to nk gates, i.e., polynomially much hardware.

A second-order variable of arity r is nr bits, corresponding to 2nr gates.

Neil Immerman Descriptive Complexity With r = m2n processors, recognize 3-SAT in constant time! Let S be the first n bits of our processor number. If processors S1,... Sm notice that truth assignment S makes all m clauses of ϕ true, then ϕ 3-SAT, so S1 writes a 1. ∈

1

SO: Parallel Machines with Exponential Hardware Given ϕ with n variables and m clauses, is ϕ 3-SAT? ∈

P P r 1

P 0 2 Global

Memory P3

P 4

P 5

Neil Immerman Descriptive Complexity Let S be the first n bits of our processor number. If processors S1,... Sm notice that truth assignment S makes all m clauses of ϕ true, then ϕ 3-SAT, so S1 writes a 1. ∈

1

SO: Parallel Machines with Exponential Hardware Given ϕ with n variables and m clauses, is ϕ 3-SAT? ∈ With r = m2n processors, recognize 3-SAT in constant time!

P P r 1

P 0 2 Global

Memory P3

P 4

P 5

Neil Immerman Descriptive Complexity If processors S1,... Sm notice that truth assignment S makes all m clauses of ϕ true, then ϕ 3-SAT, so S1 writes a 1. ∈

1

SO: Parallel Machines with Exponential Hardware Given ϕ with n variables and m clauses, is ϕ 3-SAT? ∈ With r = m2n processors, recognize 3-SAT in constant time! Let S be the first n bits of our processor number.

P P r 1

P 0 2 Global

Memory P3

P 4

P 5

Neil Immerman Descriptive Complexity so S1 writes a 1.

1

SO: Parallel Machines with Exponential Hardware Given ϕ with n variables and m clauses, is ϕ 3-SAT? ∈ With r = m2n processors, recognize 3-SAT in constant time! Let S be the first n bits of our processor number. If processors S1,... Sm notice that truth assignment S makes all m clauses of ϕ true, then ϕ 3-SAT, ∈ P P r 1

P 0 2 Global

Memory P3

P 4

P 5

Neil Immerman Descriptive Complexity 0

SO: Parallel Machines with Exponential Hardware Given ϕ with n variables and m clauses, is ϕ 3-SAT? ∈ With r = m2n processors, recognize 3-SAT in constant time! Let S be the first n bits of our processor number. If processors S1,... Sm notice that truth assignment S makes all m clauses of ϕ true, then ϕ 3-SAT, so S1 writes a 1. ∈ P P r 1

P 1 2 Global

Memory P3

P 4

P 5
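The same search written sequentially (my sketch; the PRAM runs the body on m2^n processors at once, one clause-checker per pair of assignment and clause):

    from itertools import product

    def three_sat(n, clauses):
        # clauses: tuples of 3 literals; literal +i / -i means variable i
        # is true / false (1-indexed, as in DIMACS format).
        for S in product((False, True), repeat=n):    # the 2^n assignments
            # "processor (S, j)" checks clause j under assignment S
            if all(any((lit > 0) == S[abs(lit) - 1] for lit in clause)
                   for clause in clauses):
                return True                           # S1 writes a 1
        return False

    # (x1 ∨ x2 ∨ ¬x3) ∧ (¬x1 ∨ x3 ∨ x2)
    print(three_sat(3, [(1, 2, -3), (-1, 3, 2)]))     # True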

Neil Immerman Descriptive Complexity proof: SO[t(n)] is like FO[t(n)] but using a quantifier block containing both first-order and second-order quantifiers. The proof is similar to FO[t(n)] = CRAM[t(n)]. 

Cor.

O(1) SO = PTIME Hierarchy = CRAM[1]-HARD[2n ]

O(1) SO[nO(1)] = PSPACE = CRAM[nO(1)]-HARD[2n ]

O(1) O(1) O(1) SO[2n ] = EXPTIME = CRAM[2n ]-HARD[2n ]

SO: Parallel Machines with Exponential Hardware

O(1) Thm. SO[t(n)] = CRAM[t(n)]-HARD[2n ] .

Neil Immerman Descriptive Complexity Cor.

O(1) SO = PTIME Hierarchy = CRAM[1]-HARD[2n ]

O(1) SO[nO(1)] = PSPACE = CRAM[nO(1)]-HARD[2n ]

O(1) O(1) O(1) SO[2n ] = EXPTIME = CRAM[2n ]-HARD[2n ]

SO: Parallel Machines with Exponential Hardware

O(1) Thm. SO[t(n)] = CRAM[t(n)]-HARD[2n ] . proof: SO[t(n)] is like FO[t(n)] but using a quantifier block containing both first-order and second-order quantifiers. The proof is similar to FO[t(n)] = CRAM[t(n)]. 

Neil Immerman Descriptive Complexity O(1) SO[nO(1)] = PSPACE = CRAM[nO(1)]-HARD[2n ]

O(1) O(1) O(1) SO[2n ] = EXPTIME = CRAM[2n ]-HARD[2n ]

SO: Parallel Machines with Exponential Hardware

O(1) Thm. SO[t(n)] = CRAM[t(n)]-HARD[2n ] . proof: SO[t(n)] is like FO[t(n)] but using a quantifier block containing both first-order and second-order quantifiers. The proof is similar to FO[t(n)] = CRAM[t(n)]. 

Cor.

O(1) SO = PTIME Hierarchy = CRAM[1]-HARD[2n ]

Neil Immerman Descriptive Complexity O(1) O(1) O(1) SO[2n ] = EXPTIME = CRAM[2n ]-HARD[2n ]

SO: Parallel Machines with Exponential Hardware

O(1) Thm. SO[t(n)] = CRAM[t(n)]-HARD[2n ] . proof: SO[t(n)] is like FO[t(n)] but using a quantifier block containing both first-order and second-order quantifiers. The proof is similar to FO[t(n)] = CRAM[t(n)]. 

Cor.

O(1) SO = PTIME Hierarchy = CRAM[1]-HARD[2n ]

O(1) SO[nO(1)] = PSPACE = CRAM[nO(1)]-HARD[2n ]

Neil Immerman Descriptive Complexity SO: Parallel Machines with Exponential Hardware

O(1) Thm. SO[t(n)] = CRAM[t(n)]-HARD[2n ] . proof: SO[t(n)] is like FO[t(n)] but using a quantifier block containing both first-order and second-order quantifiers. The proof is similar to FO[t(n)] = CRAM[t(n)]. 

Cor.

O(1) SO = PTIME Hierarchy = CRAM[1]-HARD[2n ]

O(1) SO[nO(1)] = PSPACE = CRAM[nO(1)]-HARD[2n ]

O(1) O(1) O(1) SO[2n ] = EXPTIME = CRAM[2n ]-HARD[2n ]

Neil Immerman Descriptive Complexity I We would love to understand this tradeoff.

I Is there such a thing as an inherently sequential problem?, i.e., is NC = P? 6

I Same tradeoff as number of variables vs. number of iterations of a quantifier block.

Parallel Time versus Amount of Hardware

O(1) O(1) PSPACE = FO[2n ] = CRAM[2n ]-HARD[nO(1)]

O(1) = SO[nO(1)] = CRAM[nO(1)]-HARD[2n ]

Neil Immerman Descriptive Complexity I Is there such a thing as an inherently sequential problem?, i.e., is NC = P? 6

I Same tradeoff as number of variables vs. number of iterations of a quantifier block.

Parallel Time versus Amount of Hardware

O(1) O(1) PSPACE = FO[2n ] = CRAM[2n ]-HARD[nO(1)]

O(1) = SO[nO(1)] = CRAM[nO(1)]-HARD[2n ]

I We would love to understand this tradeoff.

Neil Immerman Descriptive Complexity I Same tradeoff as number of variables vs. number of iterations of a quantifier block.

Parallel Time versus Amount of Hardware

O(1) O(1) PSPACE = FO[2n ] = CRAM[2n ]-HARD[nO(1)]

O(1) = SO[nO(1)] = CRAM[nO(1)]-HARD[2n ]

I We would love to understand this tradeoff.

I Is there such a thing as an inherently sequential problem?, i.e., is NC = P? 6

Neil Immerman Descriptive Complexity Parallel Time versus Amount of Hardware

O(1) O(1) PSPACE = FO[2n ] = CRAM[2n ]-HARD[nO(1)]

O(1) = SO[nO(1)] = CRAM[nO(1)]-HARD[2n ]

I We would love to understand this tradeoff.

I Is there such a thing as an inherently sequential problem?, i.e., is NC = P? 6

I Same tradeoff as number of variables vs. number of iterations of a quantifier block.

Neil Immerman Descriptive Complexity FO( ) co-r.e. complete Arithmetic Hierarchy N r.e. complete Halt Halt FO (N) FO (N) co-r.e. ∀ ∃ r.e. Recursive

O(1) SO[2n ] EXPTIME

O(1) FO[2n ] SO[nO(1)] PSPACE

SO co-NP complete PTIME Hierarchy NP complete SAT SAT SO[t(n)] SO SO co-NP ∀ ∃ NP NP co-NP ∩ = FO[nO(1)] P complete Horn- P FO(LFP) SAT

CRAM[t(n)]- FO[logO(1) n] “truly NC

O(1) 1 HARD-[2n ] FO[log n] feasible” AC FO(CFL)

FO(REGULAR)

FO LOGTIME Hierarchy AC0

Recent Breakthroughs in Descriptive Complexity

Theorem [Ben Rossman]. Any first-order formula, with any numeric relations (≤, +, ×, ...), that means "I have a clique of size k" must have at least k/4 variables. A creative new proof idea using Håstad's Switching Lemma gives this essentially optimal bound. The lower bound is for a fixed formula; if it held for a sequence of polynomially-sized formulas, i.e., a fixed-point formula, it would follow that CLIQUE ∉ P and thus P ≠ NP. Best previous bounds:

▸ k variables are necessary and sufficient without ordering or other numeric relations [I 1980].

▸ Nothing was known with ordering, except for the trivial fact that 2 variables are not enough.

Recent Breakthroughs in Descriptive Complexity

Theorem [Martin Grohe]. Fixed-Point Logic with Counting captures Polynomial Time on all classes of graphs with excluded minors. Grohe proves that for every class of graphs with excluded minors, there is a constant k such that two graphs of the class are isomorphic iff they agree on all k-variable formulas in fixed-point logic with counting. Using Ehrenfeucht-Fraïssé games, this can be checked in polynomial time, O(n^k log n). In the same time we can give a canonical description of the isomorphism type of any graph in the class. Thus every class of graphs with excluded minors admits the same general polynomial-time canonization: we're isomorphic iff we agree on all formulas in C^k, and in particular, you are isomorphic to me iff your C^k canonical description is equal to mine.

Thm. REACH is complete for NL = NSPACE[log n].

Proof: Let A ∈ NL, A = L(N), where N uses c log n bits of worktape. On input w, let n = |w|.

w ↦ CompGraph(N, w) = (V, E, s, t)

V = {ID = ⟨q, h, p⟩ | q ∈ States(N), h ≤ n, |p| ≤ ⌈c log n⌉}
E = {(ID1, ID2) | ID1(w) →_N ID2(w)}
s = the initial ID
t = the accepting ID

Neil Immerman Descriptive Complexity NSPACE[log n]

n read-only input w h

1 2

q

worktape p

c(log n)

Neil Immerman Descriptive Complexity Claim. w (N) CompGraph(N, w) REACH ∈ L ⇔ ∈

t s

v



Neil Immerman Descriptive Complexity Cor: NL P ⊆

Proof: REACH P ∈

P is closed under (logspace) reductions. i.e., (B P A B) A P ∈ ∧ ≤ ⇒ ∈ 

Neil Immerman Descriptive Complexity NSPACE vs. DSPACE

Prop. NSPACE[s(n)] NTIME[2O(s(n))] DSPACE[2O(s(n))] ⊆ ⊆ We can do much better!

Neil Immerman Descriptive Complexity Savitch’s Theorem REACH DSPACE(log n)2 ∈

s t

p

proof:

G REACH G = PATHn(s, t) ∈ ⇔ | PATH (x, y) x = y E(x, y) 1 ≡ ∨ PATH (x, y) z (PATH (x, z) PATH (z, y)) 2d ≡ ∃ d ∧ d

Sn(d) = space to check paths of dist. d in n-nodegraphs

Sn(n) = log n + Sn(n/2) = O((log n)2)  Neil Immerman Descriptive Complexity Savitch’s Theorem
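A direct rendering of the recursion (my sketch): the space bound comes from the O(log n) recursion depth, with each frame holding only a constant number of O(log n)-bit vertex names.

    def path(V, E, x, y, d):
        # Is there a path of length <= d from x to y?  Recursion depth is
        # about log2(d); each frame stores x, y, z, d: Savitch's bound.
        if d <= 1:
            return x == y or (x, y) in E
        half = d // 2
        return any(path(V, E, x, z, half) and path(V, E, z, y, d - half)
                   for z in V)

    V = range(6)
    E = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)}
    print(path(V, E, 0, 5, len(V)))      # True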

Savitch's Theorem

DSPACE[s(n)] ⊆ NSPACE[s(n)] ⊆ DSPACE[(s(n))²]

proof: Let A ∈ NSPACE[s(n)]; A = L(N).

w ∈ A  ⇔  CompGraph(N, w) ∈ REACH

|w| = n;  |CompGraph(N, w)| = 2^{O(s(n))}

Testing whether CompGraph(N, w) ∈ REACH takes space

(log |CompGraph(N, w)|)² = (log 2^{O(s(n))})² = O((s(n))²)

From w, build CompGraph(N, w) in DSPACE[s(n)].  □

Neil Immerman Descriptive Complexity REACH NL ∈  proof: Fix G, let Nd = v distance(s, v) d ≤ Claim: The following problems are in NL: 1. dist(x, d): distance(s, x) d ≤ 2.NDIST (x, d; m): if m = N then dist(x, d) d ¬ proof: 1. Guess the path of length d from s to x. ≤ 2. Guess m vertices, v = x, with dist(v, d). 6 c := 0; for v := 1 to n do // nondeterministically { ( dist(v, d) && v = x; c + + ) ( no-op) 6 || } if (c == m) then ACCEPT 

Claim. We can compute N_d in NL.

proof: By induction on d.

Base case: N_0 = 1.

Inductive step: Suppose we have N_d.

   c := 0;
   for v := 1 to n do {            // nondeterministically
       ( dist(v, d + 1); c++ )  ||
       ( ∀z (NDIST(z, d; N_d) ∨ (z ≠ v ∧ ¬E(z, v))) )
   }
   N_{d+1} := c  □

G ∉ REACH  ⇔  NDIST(t, n; N_n)  □
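The nondeterministic counting is easy to mis-read, so here is a deterministic sketch of the same recurrence (mine): the helper dist replaces the machine's guessed paths, and only the quantifier structure of the NL algorithm survives, not its space bound.

    def inductive_count(V, E, s):
        # Compute N_d = |{v : distance(s, v) <= d}| for d = 0, ..., n-1.
        def dist(v, d):                   # the NL machine guesses this path
            cur, seen = {s}, {s}
            for _ in range(d):
                cur = {y for x in cur for y in V if (x, y) in E} - seen
                seen |= cur
            return v in seen
        N = 1                             # N_0 = 1: just s itself
        for d in range(len(V) - 1):
            # v counts toward N_{d+1} iff some z with dist(z, d) has
            # z == v or E(z, v); otherwise the ∀z NDIST branch succeeds.
            N = sum(1 for v in V
                    if any(dist(z, d) and (z == v or (z, v) in E)
                           for z in V))
        return N                          # N_{n-1}

    V, E = range(5), {(0, 1), (1, 2), (3, 4)}
    print(inductive_count(V, E, 0))       # 3: vertices 0, 1, 2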

Thm. NSPACE[s(n)] = co-NSPACE[s(n)].

proof: Let A ∈ NSPACE[s(n)]; A = L(N).

w ∈ A  ⇔  CompGraph(N, w) ∈ REACH

|w| = n;  |CompGraph(N, w)| = 2^{O(s(n))}

Testing whether CompGraph(N, w) ∈ REACH (or, by the previous slides, its complement) takes nondeterministic space

log |CompGraph(N, w)| = log 2^{O(s(n))} = O(s(n))  □

Neil Immerman Descriptive Complexity I Natural Complexity Classes have Natural Complete Problems SAT for NP, CVAL for P, QSAT for PSPACE, . . .

I Major Missing Idea: concept of work or conservation of energy in computation, i.e, in order to solve SAT or other hard problem we must do a certain amount of computational work.

What We Know

I Diagonalization: more of the same resource gives us more: ⊂ 2 DTIME[n] 6= DTIME[n ], same for DSPACE, NTIME, NSPACE, . . .

Neil Immerman Descriptive Complexity I Major Missing Idea: concept of work or conservation of energy in computation, i.e, in order to solve SAT or other hard problem we must do a certain amount of computational work.

What We Know

I Diagonalization: more of the same resource gives us more: ⊂ 2 DTIME[n] 6= DTIME[n ], same for DSPACE, NTIME, NSPACE, . . .

I Natural Complexity Classes have Natural Complete Problems SAT for NP, CVAL for P, QSAT for PSPACE, . . .

Neil Immerman Descriptive Complexity What We Know

I Diagonalization: more of the same resource gives us more: ⊂ 2 DTIME[n] 6= DTIME[n ], same for DSPACE, NTIME, NSPACE, . . .

I Natural Complexity Classes have Natural Complete Problems SAT for NP, CVAL for P, QSAT for PSPACE, . . .

I Major Missing Idea: concept of work or conservation of energy in computation, i.e, in order to solve SAT or other hard problem we must do a certain amount of computational work.

Neil Immerman Descriptive Complexity I [Beame-Hastad]:˚ hierarchy remains strict to FO[log n/ log log n].

1 I NC FO[log n/ log log n] and this is tight. ⊆

I Does REACH require FO[log n]? This would imply NC1 = NL. 6

Strong Lower Bounds on FO[t(n)] for small t(n)

I [Sipser]: strict first-order alternation hierarchy: FO.

Neil Immerman Descriptive Complexity 1 I NC FO[log n/ log log n] and this is tight. ⊆

I Does REACH require FO[log n]? This would imply NC1 = NL. 6

Strong Lower Bounds on FO[t(n)] for small t(n)

I [Sipser]: strict first-order alternation hierarchy: FO.

I [Beame-Hastad]:˚ hierarchy remains strict up to FO[log n/ log log n].

Neil Immerman Descriptive Complexity I Does REACH require FO[log n]? This would imply NC1 = NL. 6

Strong Lower Bounds on FO[t(n)] for small t(n)

I [Sipser]: strict first-order alternation hierarchy: FO.

I [Beame-Hastad]:˚ hierarchy remains strict up to FO[log n/ log log n].

1 I NC FO[log n/ log log n] and this is tight. ⊆

Neil Immerman Descriptive Complexity Strong Lower Bounds on FO[t(n)] for small t(n)

I [Sipser]: strict first-order alternation hierarchy: FO.

I [Beame-Hastad]:˚ hierarchy remains strict up to FO[log n/ log log n].

1 I NC FO[log n/ log log n] and this is tight. ⊆

I Does REACH require FO[log n]? This would imply NC1 = NL. 6

Neil Immerman Descriptive Complexity n I We conjecture that SAT requires DTIME[Ω(2 )] for some  > 0, but no one has yet proved that it requires more than DTIME[n].

I Basic trade-offs are not understood, e.g., trade-off between time and number of processors. Are any problems inherently sequential? How can we best use mulitcores?

I SAT solvers are impressive new general purpose problem solvers, e.g., used in model checking, AI planning, code synthesis. How good are current SAT solvers? How much can they be improved?

Does It Matter? How important is P = NP? 6 I Much is known about approximation, e.g., some NP complete problems, e.g., Knapsack, Euclidean TSP, can be approximated as closely as we want, others, e.g., Clique, can’t be.

Neil Immerman Descriptive Complexity I Basic trade-offs are not understood, e.g., trade-off between time and number of processors. Are any problems inherently sequential? How can we best use mulitcores?

I SAT solvers are impressive new general purpose problem solvers, e.g., used in model checking, AI planning, code synthesis. How good are current SAT solvers? How much can they be improved?

Does It Matter? How important is P = NP? 6 I Much is known about approximation, e.g., some NP complete problems, e.g., Knapsack, Euclidean TSP, can be approximated as closely as we want, others, e.g., Clique, can’t be. n I We conjecture that SAT requires DTIME[Ω(2 )] for some  > 0, but no one has yet proved that it requires more than DTIME[n].

Neil Immerman Descriptive Complexity I SAT solvers are impressive new general purpose problem solvers, e.g., used in model checking, AI planning, code synthesis. How good are current SAT solvers? How much can they be improved?

Does It Matter? How important is P = NP? 6 I Much is known about approximation, e.g., some NP complete problems, e.g., Knapsack, Euclidean TSP, can be approximated as closely as we want, others, e.g., Clique, can’t be. n I We conjecture that SAT requires DTIME[Ω(2 )] for some  > 0, but no one has yet proved that it requires more than DTIME[n].

I Basic trade-offs are not understood, e.g., trade-off between time and number of processors. Are any problems inherently sequential? How can we best use mulitcores?

Neil Immerman Descriptive Complexity Does It Matter? How important is P = NP? 6 I Much is known about approximation, e.g., some NP complete problems, e.g., Knapsack, Euclidean TSP, can be approximated as closely as we want, others, e.g., Clique, can’t be. n I We conjecture that SAT requires DTIME[Ω(2 )] for some  > 0, but no one has yet proved that it requires more than DTIME[n].

I Basic trade-offs are not understood, e.g., trade-off between time and number of processors. Are any problems inherently sequential? How can we best use mulitcores?

I SAT solvers are impressive new general purpose problem solvers, e.g., used in model checking, AI planning, code synthesis. How good are current SAT solvers? How much can they be improved?

Descriptive Complexity

Fact: For constructible t(n), FO[t(n)] = CRAM[t(n)]

Fact: For k = 1, 2,..., VAR[k + 1] = DSPACE[nk ]

The complexity of computing a query is closely tied to the complexity of describing the query.

P = NP       ⇔  FO(LFP) = SO

ThC0 = NP    ⇔  FO(MAJ) = SO

P = PSPACE   ⇔  FO(LFP) = SO(TC)

Neil Immerman Descriptive Complexity FO( ) co-r.e. complete Arithmetic Hierarchy N r.e. complete Halt Halt FO (N) FO (N) co-r.e. ∀ ∃ r.e. Recursive

Primitive Recursive

O(1) SO[2n ] EXPTIME

QSAT PSPACE complete O(1) FO[2n ] SO[nO(1)] PSPACE

SO co-NP complete PTIME Hierarchy NP complete SAT SAT SO SO co-NP ∀ ∃ NP NP co-NP ∩ FO[nO(1)] P complete Horn- P FO(LFP) SAT

FO[logO(1) n] “truly NC

FO[log n] feasible” AC1 FO(CFL) sAC1 FO(TC) 2SAT NL comp. NL FO(DTC) 2COLOR L comp. L

FO(REGULAR) NC1 FO(COUNT) ThC0 FO LOGTIME Hierarchy AC0

Neil Immerman Descriptive Complexity