VIII. Approximation Algorithms: MAX-CUT Problem (Outlook)

Thomas Sauerwald

Easter 2017

Outline

Simple Algorithms for MAX-CUT

A Solution based on Semidefinite Programming

Summary

Max-Cut

MAX-CUT Problem
Given: Undirected graph G = (V, E)
Goal: Find a subset S ⊆ V such that |E(S, V \ S)| is maximized.

Weighted MAX-CUT: Every edge e ∈ E has a non-negative weight w(e).
Maximize the weights of the edges crossing the cut, i.e., maximize
w(S) := Σ_{{u,v} ∈ E(S, V \ S)} w({u, v}).

[Figure: example graph on vertices a, b, c, d, e, g, h with edge weights; the cut S = {a, b, g} has w(S) = 18]

Applications: cluster analysis, VLSI design

VIII. MAX-CUT Problem · Simple Algorithms for MAX-CUT · 3

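The definition of w(S) translates directly into code. A minimal sketch, on a small hypothetical edge list rather than the graph from the figure:

```python
def cut_weight(S, weighted_edges):
    """Sum the weights of edges with exactly one endpoint in S."""
    return sum(w for (u, v, w) in weighted_edges if (u in S) != (v in S))

# Hypothetical weighted graph (not the one shown on the slide):
edges = [("a", "b", 2), ("a", "c", 3), ("b", "c", 1), ("c", "d", 4)]
print(cut_weight({"a", "b"}, edges))  # edges a-c and b-c cross: 3 + 1 = 4
```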

Random Sampling

Ex 35.4-3: Suppose that for each vertex v, we randomly and independently place v in S with probability 1/2 and in V \ S with probability 1/2. Then this algorithm is a randomized 2-approximation algorithm.

Proof: We express the expected weight of the random cut (S, V \ S) as:

E[w(S, V \ S)] = E[ Σ_{{u,v} ∈ E(S, V \ S)} w({u, v}) ]
              = Σ_{{u,v} ∈ E} Pr[ {u ∈ S and v ∈ V \ S} ∪ {u ∈ V \ S and v ∈ S} ] · w({u, v})
              = Σ_{{u,v} ∈ E} (1/4 + 1/4) · w({u, v})
              = (1/2) Σ_{{u,v} ∈ E} w({u, v}) ≥ (1/2) w∗.

We could employ the same derandomisation as used for MAX-3-CNF.

VIII. MAX-CUT Problem · Simple Algorithms for MAX-CUT · 4

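The random-sampling algorithm from Ex 35.4-3 is only a few lines of code. A sketch (the graph data is hypothetical); averaging many runs illustrates that the expected cut weight is half the total edge weight, hence at least w∗/2:

```python
import random

def random_cut(vertices, weighted_edges, seed=None):
    """Put each vertex into S independently with probability 1/2;
    return the chosen S and the weight of the cut (S, V \\ S)."""
    rng = random.Random(seed)
    S = {v for v in vertices if rng.random() < 0.5}
    weight = sum(w for (u, v, w) in weighted_edges if (u in S) != (v in S))
    return S, weight

# Hypothetical example graph:
vertices = ["a", "b", "c", "d"]
edges = [("a", "b", 2), ("b", "c", 3), ("c", "d", 1), ("d", "a", 4)]
total = sum(w for (_, _, w) in edges)  # = 10

# Each edge crosses the cut with probability 1/2, so the empirical mean
# should be close to total / 2 = 5.
trials = [random_cut(vertices, edges, seed=s)[1] for s in range(20000)]
print(sum(trials) / len(trials))
```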

Local Search

Local Search: Switch the side of a vertex if it increases the cut.

LOCAL-SEARCH(G, w)
 1: Let S be an arbitrary subset of V
 2: do
 3:     flag = 0
 4:     if ∃ u ∈ S with w(S \ {u}, (V \ S) ∪ {u}) > w(S, V \ S) then
 5:         S = S \ {u}
 6:         flag = 1
 7:     end if
 8:     if ∃ u ∈ V \ S with w(S ∪ {u}, (V \ S) \ {u}) > w(S, V \ S) then
 9:         S = S ∪ {u}
10:         flag = 1
11:     end if
12: while flag = 1
13: return S

VIII. MAX-CUT Problem · Simple Algorithms for MAX-CUT · 5

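A direct Python sketch of LOCAL-SEARCH; it uses a strict improvement condition, so each move raises the cut weight and the loop terminates:

```python
def local_search_max_cut(vertices, weighted_edges, initial=None):
    """Move one vertex to the other side while the cut strictly improves."""
    def cut_weight(S):
        return sum(w for (u, v, w) in weighted_edges if (u in S) != (v in S))

    S = set(initial) if initial is not None else set()  # arbitrary starting subset
    improved = True
    while improved:
        improved = False
        for u in vertices:
            moved = S - {u} if u in S else S | {u}
            if cut_weight(moved) > cut_weight(S):
                S, improved = moved, True
    return S, cut_weight(S)

# Hypothetical triangle with unit weights: the optimum cut has weight 2.
verts = ["a", "b", "c"]
tri = [("a", "b", 1), ("b", "c", 1), ("a", "c", 1)]
print(local_search_max_cut(verts, tri)[1])  # 2
```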

Illustration of Local Search

[Figure: example graph on vertices a, b, c, d, e, f, g, h, i; the run of Local Search starting from S = ∅:]

Initially: Cut = 0
Step 1: Move a into S. Cut = 5
Step 2: Move g into S. Cut = 8
Step 3: Move d into S. Cut = 10
Step 4: Move b into S. Cut = 11
Step 5: Move a into V \ S. Cut = 12
After Step 5: Local Search terminates.
A better solution could be found: Cut = 13.

VIII. MAX-CUT Problem · Simple Algorithms for MAX-CUT · 6

Analysis of Local Search (1/2)

Theorem: The cut returned by LOCAL-SEARCH satisfies W ≥ (1/2)W∗.

Proof: At the time of termination, for every vertex u ∈ S:

    Σ_{v ∈ V \ S, v ∼ u} w({u, v}) ≥ Σ_{v ∈ S, v ∼ u} w({u, v}),    (1)

Similarly, for any vertex u ∈ V \ S:

    Σ_{v ∈ S, v ∼ u} w({u, v}) ≥ Σ_{v ∈ V \ S, v ∼ u} w({u, v}).    (2)

Adding up equation (1) for all vertices in S and equation (2) for all vertices in V \ S,

    w(S) ≥ 2 · Σ_{u, v ∈ S, u ∼ v} w({u, v})   and   w(S) ≥ 2 · Σ_{u, v ∈ V \ S, u ∼ v} w({u, v}).

Adding up these two inequalities and dividing by 2 yields

    w(S) ≥ Σ_{u, v ∈ S, u ∼ v} w({u, v}) + Σ_{u, v ∈ V \ S, u ∼ v} w({u, v}).

Every edge appears on one of the two sides, so 2 · w(S) ≥ Σ_{{u,v} ∈ E} w({u, v}) ≥ W∗.

VIII. MAX-CUT Problem · Simple Algorithms for MAX-CUT · 7


Analysis of Local Search (2/2)

Theorem: The cut returned by LOCAL-SEARCH satisfies W ≥ (1/2)W∗.

What is the running time of LOCAL-SEARCH?
Unweighted graphs: the cut increases by at least one in each iteration ⇒ at most n² iterations.
Weighted graphs: could take time exponential in n (not obvious...).

VIII. MAX-CUT Problem · Simple Algorithms for MAX-CUT · 8

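One way to sanity-check the 1/2-approximation guarantee on small instances is to compare against an exhaustive optimum. A sketch (enumeration is exponential in n, so this is for tiny hypothetical graphs only):

```python
from itertools import combinations

def cut_weight(S, weighted_edges):
    return sum(w for (u, v, w) in weighted_edges if (u in S) != (v in S))

def brute_force_max_cut(vertices, weighted_edges):
    """Enumerate all 2^n subsets and return the best cut weight."""
    vs = list(vertices)
    return max(
        cut_weight(set(subset), weighted_edges)
        for k in range(len(vs) + 1)
        for subset in combinations(vs, k)
    )

# Hypothetical weighted 4-cycle; S = {a, c} cuts every edge.
verts = ["a", "b", "c", "d"]
edges = [("a", "b", 2), ("b", "c", 3), ("c", "d", 1), ("d", "a", 4)]
print(brute_force_max_cut(verts, edges))  # 10
```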

Outline

Simple Algorithms for MAX-CUT

A Solution based on Semidefinite Programming

Summary

VIII. MAX-CUT Problem · A Solution based on Semidefinite Programming · 9

Max-Cut Problem

High-Level Approach:
1. Describe the Max-Cut problem as a quadratic optimisation problem.
2. Solve a corresponding semidefinite program that is a relaxation of the original problem.
3. Recover an approximation for the original problem from the approximation for the semidefinite program.

Label the vertices by 1, 2, ..., n and express the weight function etc. as an n × n matrix.

Quadratic program:
    maximize    (1/2) Σ_{(i,j) ∈ E} w_{i,j} · (1 − y_i y_j)
    subject to  y_i ∈ {−1, +1},  i = 1, ..., n.

This models the MAX-CUT problem: S = {i ∈ V : y_i = +1}, V \ S = {i ∈ V : y_i = −1}.

VIII. MAX-CUT Problem · A Solution based on Semidefinite Programming · 10

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 10 2. Solve a corresponding semidefinite program that is a relaxation of the original problem 3. Recover an approximation for the original problem from the approximation for the semidefinite program

Label vertices by 1, 2,..., n and express Quadratic program weight function etc. as a n × n-matrix. 1 X maximize wi,j · (1 − yi yj ) 2 (i,j)∈E

subject to yi ∈ {−1, +1}, i = 1,..., n.

This models the MAX-CUT problem S = {i ∈ V : yi = +1}, V \ S = {i ∈ V : yi = −1}

Max-Cut Problem

High-Level-Approach: 1. Describe the Max-Cut Problem as a quadratic optimisation problem

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 10 3. Recover an approximation for the original problem from the approximation for the semidefinite program

Label vertices by 1, 2,..., n and express Quadratic program weight function etc. as a n × n-matrix. 1 X maximize wi,j · (1 − yi yj ) 2 (i,j)∈E

subject to yi ∈ {−1, +1}, i = 1,..., n.

This models the MAX-CUT problem S = {i ∈ V : yi = +1}, V \ S = {i ∈ V : yi = −1}

Max-Cut Problem

Max-Cut Problem

High-Level Approach:
1. Describe the Max-Cut problem as a quadratic optimisation problem
2. Solve a corresponding semidefinite program that is a relaxation of the original problem
3. Recover an approximation for the original problem from the approximation for the semidefinite program

Label the vertices by 1, 2, ..., n and express the weight function etc. as an n × n matrix.

Quadratic Program

    maximize    (1/2) · Σ_{(i,j)∈E} w_{i,j} · (1 − y_i · y_j)
    subject to  y_i ∈ {−1, +1},  i = 1, ..., n.

This models the MAX-CUT problem: S = {i ∈ V : y_i = +1}, V \ S = {i ∈ V : y_i = −1}.

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 10
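The correspondence between ±1 labellings and cuts is easy to verify by brute force on a tiny instance. The sketch below (the weight matrix is made up for illustration, not taken from the lecture) checks that the quadratic objective of every labelling y ∈ {−1, +1}^n equals the weight of the induced cut, so the optimum of the quadratic program is exactly the maximum cut:

```python
from itertools import product

def qp_objective(w, y):
    """(1/2) * sum over edges of w[i][j] * (1 - y_i * y_j); each edge counted once."""
    n = len(w)
    return 0.5 * sum(w[i][j] * (1 - y[i] * y[j])
                     for i in range(n) for j in range(i + 1, n))

def cut_weight(w, side):
    """Total weight of edges crossing the cut given by the boolean membership list."""
    n = len(w)
    return sum(w[i][j] for i in range(n) for j in range(i + 1, n)
               if side[i] != side[j])

# Tiny symmetric weight matrix (hypothetical example graph).
w = [[0, 3, 1, 0],
     [3, 0, 2, 4],
     [1, 2, 0, 5],
     [0, 4, 5, 0]]

# Every labelling's QP objective equals the corresponding cut weight.
for y in product([-1, 1], repeat=4):
    assert qp_objective(w, y) == cut_weight(w, [yi == 1 for yi in y])

best = max(qp_objective(w, y) for y in product([-1, 1], repeat=4))
print(best)  # 13.0: the maximum cut of this graph, e.g. S = {1, 2}
```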

Relaxation

Vector Programming Relaxation

    maximize    (1/2) · Σ_{(i,j)∈E} w_{i,j} · (1 − v_i · v_j)
    subject to  v_i · v_i = 1,  i = 1, ..., n,
                v_i ∈ R^n.

Any solution of the original program can be recovered by setting v_i = (y_i, 0, 0, ..., 0)!

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 11

Positive Definite Matrices

Definition
A matrix A ∈ R^{n×n} is positive semidefinite iff for all y ∈ R^n, y^T · A · y ≥ 0.

Remark
1. A is symmetric and positive semidefinite iff there exists an n × n matrix B with B^T · B = A.
2. If A is symmetric and positive semidefinite, then the matrix B above can be computed in polynomial time (using the Cholesky decomposition).

Examples:

    A = [17 2; 2 5] = [4 −1; 1 2] · [4 1; −1 2] = B^T · B with B = [4 1; −1 2], so A is SPD.

    A = [1 2; 2 1]: since (1, −1) · [1 2; 2 1] · (1, −1)^T = −2, A is not SPD.

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 12
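Both parts of the Remark can be exercised numerically: NumPy's `cholesky` attempts exactly a factorisation of the form B^T · B. A minimal sketch, assuming NumPy is available (the helper name `is_spd` is mine, not from the slides):

```python
import numpy as np

def is_spd(a, eps=1e-12):
    """Symmetric positive (semi)definite test via attempted Cholesky factorisation."""
    a = np.asarray(a, dtype=float)
    if not np.allclose(a, a.T):
        return False
    try:
        # A tiny diagonal jitter lets the strict-PD routine accept semidefinite A.
        np.linalg.cholesky(a + eps * np.eye(len(a)))
        return True
    except np.linalg.LinAlgError:
        return False

A1 = np.array([[17.0, 2.0], [2.0, 5.0]])
B = np.array([[4.0, 1.0], [-1.0, 2.0]])
print(np.allclose(B.T @ B, A1))            # the factorisation from the example
print(is_spd(A1))                          # True
print(is_spd([[1.0, 2.0], [2.0, 1.0]]))    # False: y = (1, -1) gives y^T A y = -2
```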

Reformulating the Quadratic Program as a Semidefinite Program

Vector Programming Relaxation

    maximize    (1/2) · Σ_{(i,j)∈E} w_{i,j} · (1 − v_i · v_j)
    subject to  v_i · v_i = 1,  i = 1, ..., n,
                v_i ∈ R^n.

Reformulation:
1. Introduce n² variables a_{i,j} = v_i · v_j, which give rise to a matrix A
2. If V is the matrix given by the vectors (v_1, v_2, ..., v_n), then A = V^T · V is symmetric and positive semidefinite

Semidefinite Program

    maximize    (1/2) · Σ_{(i,j)∈E} w_{i,j} · (1 − a_{i,j})
    subject to  A = (a_{i,j}) is symmetric and positive semidefinite,
                and a_{i,i} = 1 for all i = 1, ..., n.

Solve this (which can be done in polynomial time), and recover V using the Cholesky decomposition.

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 13
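The recovery step can be illustrated in a few lines, assuming NumPy is available. The vectors below are random stand-ins for an SDP solver's output; the point is only that a PSD matrix A with unit diagonal always decomposes into unit vectors realising its entries:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a feasible SDP solution directly: unit-norm columns v_1, ..., v_n.
n = 5
V = rng.normal(size=(n, n))
V /= np.linalg.norm(V, axis=0)           # now v_i . v_i = 1 for every column
A = V.T @ V                              # symmetric PSD with unit diagonal

# Cholesky gives A = L L^T, so the ROWS of L are unit vectors u_i with
# u_i . u_j = a_ij -- a (possibly different) set of vectors realising A.
L = np.linalg.cholesky(A + 1e-12 * np.eye(n))
print(np.allclose(L @ L.T, A, atol=1e-5))   # True: rows of L realise A
print(np.allclose(np.diag(A), 1.0))         # True: a_ii = 1 as the SDP requires
```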

Rounding the Vector Program

Vector Programming Relaxation

    maximize    (1/2) · Σ_{(i,j)∈E} w_{i,j} · (1 − v_i · v_j)
    subject to  v_i · v_i = 1,  i = 1, ..., n,
                v_i ∈ R^n.

Rounding by a random hyperplane:
1. Pick a random vector r = (r_1, r_2, ..., r_n) by drawing each component from N(0, 1)
2. Put i ∈ S if v_i · r ≥ 0, and i ∈ V \ S otherwise

Lemma 1
The probability that two vectors v_i, v_j ∈ R^n are separated by the (random) hyperplane given by r equals arccos(v_i · v_j) / π.

(Follows by projecting onto the plane spanned by v_i and v_j.)

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 14
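Lemma 1 is easy to test empirically. The sketch below (standard library only; the function name is mine) rounds two unit vectors at a known angle many times and compares the empirical separation frequency with arccos(v_i · v_j)/π:

```python
import math
import random

random.seed(7)

def round_by_hyperplane(vectors):
    """vectors: list of unit vectors v_i. Returns one boolean side per vertex."""
    dim = len(vectors[0])
    r = [random.gauss(0, 1) for _ in range(dim)]   # random Gaussian direction
    return [sum(a * b for a, b in zip(v, r)) >= 0 for v in vectors]

# Two unit vectors at angle theta, so arccos(v_i . v_j) = theta.
theta = 2.0
vi = [1.0, 0.0]
vj = [math.cos(theta), math.sin(theta)]

trials = 50_000
separated = 0
for _ in range(trials):
    side = round_by_hyperplane([vi, vj])
    separated += side[0] != side[1]

# Empirical separation frequency should be close to theta / pi (Lemma 1).
print(abs(separated / trials - theta / math.pi) < 0.01)
```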

Illustration of the Hyperplane

[Figure: unit vectors v_1, ..., v_5 on the unit circle; the hyperplane normal to the random vector r splits them into the two sides of the cut, labelled S and N.]

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 15

A second (technical) Lemma

Lemma 2
For any x ∈ [−1, 1],

    (1/π) · arccos(x) ≥ 0.878 · (1/2) · (1 − x).

[Figure: plots of (1/π) · arccos(x) and (1/2) · (1 − x) over x ∈ [−1, 1], and of their ratio, which stays above ≈ 0.878.]

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 16
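Lemma 2 can be checked on a dense grid; the minimum of the ratio of the two sides is the Goemans-Williamson constant, a bit above 0.878:

```python
import math

# Grid check of Lemma 2: (1/pi) * arccos(x) >= 0.878 * (1/2) * (1 - x) on [-1, 1].
xs = [-1 + 2 * k / 10_000 for k in range(10_001)]
assert all(math.acos(x) / math.pi >= 0.878 * 0.5 * (1 - x) for x in xs)

# The constant is essentially tight: the ratio dips to about 0.8786 near x = -0.689.
ratio = min((math.acos(x) / math.pi) / (0.5 * (1 - x)) for x in xs if x < 1)
print(round(ratio, 4))
```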

Putting Everything Together

Theorem (Goemans, Williamson'96)
The algorithm has an approximation ratio of 1/0.878 ≈ 1.139.

Proof: Define an indicator variable

    X_{i,j} = 1 if {i, j} ∈ E are on different sides of the hyperplane, and X_{i,j} = 0 otherwise.

Hence for the (random) weight of the computed cut,

    E[w(S)] = E[ Σ_{{i,j}∈E} w_{i,j} · X_{i,j} ]
            = Σ_{{i,j}∈E} w_{i,j} · E[X_{i,j}]
            = Σ_{{i,j}∈E} w_{i,j} · Pr[edge {i, j} is in the cut]
            = Σ_{{i,j}∈E} w_{i,j} · (1/π) · arccos(v_i* · v_j*)          (by Lemma 1)
            ≥ 0.878 · (1/2) · Σ_{{i,j}∈E} w_{i,j} · (1 − v_i* · v_j*)    (by Lemma 2)
            = 0.878 · z* ≥ 0.878 · W*.

(Here v_1*, ..., v_n* is the optimal solution of the vector programming relaxation, z* its objective value, and W* the weight of a maximum cut; z* ≥ W* since the relaxation only enlarges the feasible set.)

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 17
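The chain of (in)equalities can be sanity-checked numerically for arbitrary feasible unit vectors. The vectors below are random stand-ins (no SDP solver is used); because Lemma 2 is applied edge by edge and all weights are non-negative, the bound must hold for any such choice:

```python
import math
import random

random.seed(1)

def unit(dim):
    """A uniformly random unit vector (Gaussian components, normalised)."""
    v = [random.gauss(0, 1) for _ in range(dim)]
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n, dim = 8, 8
vecs = [unit(dim) for _ in range(n)]
edges = [(i, j, random.uniform(0, 5)) for i in range(n) for j in range(i + 1, n)]

# E[w(S)] after hyperplane rounding (Lemma 1), clamped for float safety.
expected_cut = sum(w * math.acos(max(-1.0, min(1.0, dot(vecs[i], vecs[j])))) / math.pi
                   for i, j, w in edges)
# Objective value of these vectors in the relaxation.
relaxation = 0.5 * sum(w * (1 - dot(vecs[i], vecs[j])) for i, j, w in edges)

print(expected_cut >= 0.878 * relaxation)  # True, edge by edge via Lemma 2
```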

MAX-CUT: Concluding Remarks

Theorem (Goemans, Williamson'96)
There is a randomised polynomial-time 1.139-approximation algorithm for MAX-CUT.

(The algorithm can be derandomized, with some effort. A similar approach can be applied to MAX-3-CNF and yields an approximation ratio of 1.345.)

Theorem (Håstad'97)
Unless P=NP, there is no ρ-approximation algorithm for MAX-CUT with ρ ≤ 17/16 = 1.0625.

Theorem (Khot, Kindler, Mossel, O'Donnell'04)
Assuming the so-called Unique Games Conjecture holds, unless P=NP there is no ρ-approximation algorithm for MAX-CUT with

    ρ ≤ max_{−1≤x≤1} [ (1/2) · (1 − x) ] / [ (1/π) · arccos(x) ] ≤ 1.139.

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 18

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 18 This is an additive approximation!

Algorithm (1): 1. Take a sample S of x = O(1/2) vertices chosen uniformly at random 2. For each of the 2x possible cuts, go through vertices in V \ S in random order and place them on the side of the cut which maximizes the crossing edges 3. Output the best cut found

Theorem (Trevisan’08) There is a randomised 1.833-approximation algorithm for MAX-CUT which runs in O(n2 · polylog(n)) time.

Exploits relation between the smallest eigenvalue and the structure of the graph.

Other Approximation Algorithms for MAX-CUT

Theorem (Mathieu, Schudy’08) For any  > 0, there is a randomised algorithm with running time 2 O(n2)2O(1/ ) so that the expected value of the output deviates from the maximum cut value by at most O( · n2).

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 19 Algorithm (1): 1. Take a sample S of x = O(1/2) vertices chosen uniformly at random 2. For each of the 2x possible cuts, go through vertices in V \ S in random order and place them on the side of the cut which maximizes the crossing edges 3. Output the best cut found

Other Approximation Algorithms for MAX-CUT

Theorem (Mathieu, Schudy’08)
For any ε > 0, there is a randomised algorithm with running time O(n^2) · 2^{O(1/ε^2)} such that the expected value of the output deviates from the maximum cut value by at most O(ε · n^2). This is an additive approximation!

Algorithm (1):
1. Take a sample S of x = O(1/ε^2) vertices chosen uniformly at random.
2. For each of the 2^x possible cuts of S, go through the vertices in V \ S in random order and place each on the side of the cut which maximizes the weight of the crossing edges.
3. Output the best cut found.

Theorem (Trevisan’08)
There is a randomised 1.833-approximation algorithm for MAX-CUT which runs in O(n^2 · polylog(n)) time. It exploits the relation between the smallest eigenvalue and the structure of the graph.
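Algorithm (1) above can be sketched in a few lines of Python. This is only an illustrative toy implementation: the function name, the adjacency-matrix input format, and the concrete choice x = round(1/ε^2) are assumptions not fixed by the slide, and the exhaustive 2^x loop makes it feasible only for small x.

```python
import itertools
import random

def mathieu_schudy_cut(adj, eps, rng=random):
    """Sketch of the Mathieu-Schudy additive approximation for MAX-CUT.

    adj: symmetric non-negative weight matrix (list of lists).
    eps: accuracy parameter; sample size is x = O(1/eps^2).
    Returns (cut, weight) where cut is a set of vertices.
    """
    n = len(adj)
    # Step 1: sample x = O(1/eps^2) vertices uniformly at random.
    x = min(n, max(1, round(1 / eps**2)))
    sample = rng.sample(range(n), x)
    rest = [v for v in range(n) if v not in sample]

    best_cut, best_weight = set(), -1.0
    # Step 2: try all 2^x cuts of the sample ...
    for bits in itertools.product([0, 1], repeat=x):
        side = {v: b for v, b in zip(sample, bits)}
        order = rest[:]
        rng.shuffle(order)
        # ... and place each remaining vertex, in random order, on the
        # side that maximizes the weight of its crossing edges so far.
        for v in order:
            w_if_0 = sum(adj[v][u] for u, s in side.items() if s == 1)
            w_if_1 = sum(adj[v][u] for u, s in side.items() if s == 0)
            side[v] = 0 if w_if_0 >= w_if_1 else 1
        cut = {v for v, s in side.items() if s == 0}
        weight = sum(adj[u][v] for u in cut
                     for v in range(n) if v not in cut)
        # Step 3: keep the best cut found.
        if weight > best_weight:
            best_cut, best_weight = cut, weight
    return best_cut, best_weight
```

On a 4-cycle with unit weights, for instance, the algorithm recovers the optimal cut of weight 4, since for such a small graph the sample covers all vertices and every cut is enumerated.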

VIII. MAX-CUT Problem A Solution based on Semidefinite Programming 19

Outline

Simple Algorithms for MAX-CUT

A Solution based on Semidefinite Programming

Summary

VIII. MAX-CUT Problem Summary 20

Spectrum of Approximations

FPTAS: KNAPSACK, SUBSET-SUM
PTAS: SCHEDULING, EUCLIDEAN-TSP
APX: VERTEX-COVER, MAX-3-CNF, MAX-CUT, METRIC-TSP
log-APX: SET-COVER
poly-APX: MAX-CLIQUE

VIII. MAX-CUT Problem Summary 21

The End

Thank you very much and Best Wishes for the Exam!

VIII. MAX-CUT Problem Summary 22