
ORIE 6300 Mathematical Programming I                                November 11, 2008
Lecture 20
Lecturer: Yogeshwer Sharma                                          Scribe: Yucel Altug

1 Decision Problem as a Subset

Definition 1 The set of all binary strings is defined as {0,1}* = {0, 1, 00, 01, 10, 11, 000, ...}. For any π ⊆ Σ*, π̄ denotes its complement in the set Σ*.

Next, let us give some examples of decision problems. Given a graph G = (V, E), let

    ST = {⟨G, s, t⟩ : there is a path from s to t in graph G}.

The complement ST̄ of ST is given by

    ST̄ = {⟨G, s, t⟩ : there is no path from s to t in graph G}.

Given any ⟨G, s, t⟩, deciding whether ⟨G, s, t⟩ ∈ ST is a decision problem.

As another example, consider the Linear Programming (LP) problem:

    LP = {⟨A, b⟩ : ∃x such that Ax ≤ b}.

Here Σ* is the set of all possible encodings ⟨A, b⟩, and the complement LP̄ is given by

    LP̄ = {⟨A, b⟩ : ∀x, Ax ≰ b}.

Hence, given any ⟨A, b⟩ ∈ Σ*, deciding whether ⟨A, b⟩ ∈ LP is a decision problem.

Similarly, a linear optimization problem may also be viewed as a decision problem. Consider the optimization problem

    max c^T x   subject to   Ax ≤ b.

If we define

    π = {⟨A, b, c, t⟩ : there is a solution x such that Ax ≤ b and c^T x ≥ t},

and, beginning from t = -∞ and increasing t, decide whether ⟨A, b, c, t⟩ ∈ π, we can find the optimal value t* as the point where the answer "switches" from YES to NO.

2 Definition of Polynomial Time

If we denote a computational problem by π, then the set of polynomial-time problems, denoted P, is defined as

    P = {π : there is an algorithm that decides π in polynomial time}.

A necessary and sufficient condition for a problem to be in P is

    [π ∈ P] ⟺ [∃A such that the running time of A on all inputs of length n is ≤ n^d, and π = {x ∈ Σ* : A(x) = YES}],

where d is a constant. Such an algorithm A is called a polynomial-time acceptor for π.

Examples: the ST problem defined above, and

    MWST = {⟨G, W⟩ : ∃ a spanning tree of G of weight ≤ W},

where MWST stands for "minimum weight spanning tree".

3 Definition of Non-deterministic Polynomial Time

The following is a necessary and sufficient condition for a computational problem π to be in NP:

    [π ∈ NP] ⟺ [∃B with running time |x|^d such that π = {x ∈ Σ* : ∃c_x with B(x, c_x) = YES}],

where c_x is called a certificate (proof, hint).

Example: π = SAT (satisfiability). Before defining the SAT problem, we fix the following notation:

• Variables: x_1, x_2, ..., x_n.
• Literals: each literal l_i is either a variable or its negation, l_i ∈ {x_i, x̄_i}.
• Clauses: c_1, c_2, ..., c_m, where each clause is a disjunction of literals, c_j = (l_{j_1} ∨ ... ∨ l_{j_k}).
• Formula: Φ = c_1 ∧ ... ∧ c_m.

Now we are ready to define the SAT problem:

    SAT = {Φ : we can assign 0/1 values to the variables of Φ so that all clauses are satisfied},

where a clause is satisfied if it contains a positive literal x_i whose variable is assigned 1, or a negated literal x̄_j whose variable is assigned 0.

Claim 1 SAT ∈ NP.

Proof: We construct an algorithm B with the following parameters:
Input: ⟨Φ, n-bit vector v⟩.
Procedure: check whether the assignment v satisfies Φ.
Output: YES or NO.

A couple of observations. First, B runs in polynomial time; in fact its running time is linear in the size of Φ, since it scans through the clauses once, evaluates each literal under the assignment v, and checks that every clause contains at least one true literal. Furthermore, if Φ ∈ SAT, then there exists v such that B(Φ, v) = YES, namely a satisfying assignment. Moreover, if Φ ∉ SAT, then B(Φ, v) = NO for all v. Combining these three observations, we conclude that SAT ∈ NP. □

Note that the definition of NP is asymmetric:

    Φ ∈ SAT ⟹ ∃c_Φ that leads to acceptance;
    Φ ∉ SAT ⟹ ∀c_Φ, B rejects.
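To make the verifier B concrete, here is a minimal Python sketch of a certificate checker for SAT. The clause encoding (each clause as a list of nonzero integers, +i for x_i and -i for x̄_i) and the name check_sat_certificate are illustrative choices, not part of the lecture.

    def check_sat_certificate(clauses, v):
        """Verifier B for SAT: return True iff the 0/1 assignment v satisfies every clause.

        clauses: list of clauses; each clause is a list of nonzero ints,
                 where +i stands for x_i and -i for the negation of x_i
                 (variables are numbered 1..n).
        v:       list of n values in {0, 1}; v[i-1] is the value assigned to x_i.
        """
        for clause in clauses:
            # A clause is satisfied if at least one of its literals is true under v.
            if not any((v[abs(lit) - 1] == 1) if lit > 0 else (v[abs(lit) - 1] == 0)
                       for lit in clause):
                return False          # this clause has no true literal: reject
        return True                   # every clause is satisfied: accept

    # Example: Phi = (x_1 v ~x_2) ^ (x_2 v x_3), certificate v = (1, 0, 1)
    print(check_sat_certificate([[1, -2], [2, 3]], [1, 0, 1]))   # True

The running time is linear in the total length of the clauses, matching the observation in the proof of Claim 1.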
As another example of a problem in NP, consider the LP problem. Recall that

    LP = {⟨A, b⟩ : the system Ax ≤ b is feasible}.

A certificate for LP is a feasible solution x. An algorithm B computes Ax and checks whether x is feasible. The running time of B is quadratic in the length of the input (recall the complexity of matrix-vector multiplication).

Another example of a problem in NP is

    COMPOSITE = {n ∈ ℕ : n is composite}.

A certificate for the COMPOSITE problem is a pair n_1, n_2 ∈ ℕ such that n_1, n_2 ∉ {1, n} and n_1 n_2 = n.

4 NP, co-NP

Definition 2 For π ⊆ Σ*,   [π ∈ co-NP] ⟺ [π̄ ∈ NP].

As an example of a computational problem in co-NP, consider PRIME, the complement of COMPOSITE. Since COMPOSITE ∈ NP (as argued above), PRIME ∈ co-NP: for any input n that is not prime, there is a short certificate witnessing that n ∉ PRIME.

As another example of a problem in co-NP, consider LP̄ defined as

    LP̄ = {⟨A, b⟩ : Ax ≤ b is not feasible}.

If ⟨A, b⟩ ∉ LP̄, then a feasible solution x is a short certificate verifying that ⟨A, b⟩ ∈ LP; hence LP ∈ NP, which is exactly the statement that LP̄ ∈ co-NP.

Claim 2 LP ∈ NP ∩ co-NP.

Proof: Consider the two systems

    Ax ≤ b,                                      (1)

and

    A^T y = 0,   y ≥ 0,   b^T y < 0.             (2)

By Farkas' lemma, exactly one of (1) and (2) is feasible. In the same way as above, we can write a polynomial-time acceptor for (2): a feasible y for (2) is a short certificate of the infeasibility of (1). Hence detecting infeasibility of an LP is in NP, that is, LP̄ ∈ NP, and therefore LP ∈ co-NP. Together with LP ∈ NP, this gives LP ∈ NP ∩ co-NP. □

Before discussing NP-completeness, we state two long-standing open problems in computer science. First, is P = NP ∩ co-NP? Second, is P = NP? It is clear that P ⊆ NP ∩ co-NP and that P ⊆ NP, but it is unknown whether equality holds in either case.

5 NP-Completeness

Intuitively, NP-complete problems may be considered "the hardest" problems in NP, in the sense that every problem in NP can be reduced to an NP-complete problem in polynomial time. Before stating the definition of NP-completeness, we need the following definition.

Definition 3 For π', π ⊆ Σ*, π' is reducible to π, denoted π' ≤ π, provided that there exists f : Σ* → Σ* such that x ∈ π' ⟺ f(x) ∈ π, and f runs in polynomial time.

Next, we state the rigorous definition of NP-completeness.

Definition 4 π is NP-complete provided that:
1. π ∈ NP.
2. ∀π' ∈ NP, π' ≤ π.

Claim 3 If π is NP-complete and π ∈ P, then P = NP.

Proof: Suppose π is NP-complete and π ∈ P, so there is an algorithm A_π for π that runs in time n^d. Using A_π, we can construct the following algorithm for any π' ∈ NP. Given input x for π' (and letting f be the reduction from π' to π):
- compute f(x);
- run A_π on f(x);
- if A_π(f(x)) = YES, output YES; if A_π(f(x)) = NO, output NO.

By the definition of NP-completeness, this algorithm is correct. Moreover, since π ∈ P, A_π runs in time n^d, and f runs in time n^c (recall the definition of reducibility), so |f(x)| ≤ n^c and the call to A_π takes at most (n^c)^d = n^{cd} steps. The total running time is at most n^c + n^{cd}, which is still polynomial. Therefore π' ∈ P. Since this holds for every π' ∈ NP, we conclude that if some NP-complete π is in P, then NP ⊆ P, and hence P = NP. □

6 Steps for NP-Completeness Proofs

In this part, we show that SAT ≤ 0-1 IP, where IP stands for "integer program". We must construct a polynomial-time algorithm with the following properties:
Input: Φ.
Output: a 0-1 IP.
For each variable x_i of SAT, we have a variable y_i ∈ {0,1} in the 0-1 IP, where y_i = 0 means x_i is false and y_i = 1 means x_i is true. For any clause

    c_j = ⋁_{i ∈ P_j} x_i  ∨  ⋁_{k ∈ N_j} x̄_k,    with P_j, N_j ⊆ [n],

we add the constraint

    ∑_{i ∈ P_j} y_i + ∑_{k ∈ N_j} (1 - y_k) ≥ 1

to the 0-1 IP.
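As an illustration of the reduction just described, here is a small Python sketch that produces the 0-1 IP constraints from a formula in the same clause encoding used earlier (+i for x_i, -i for x̄_i). The representation of each constraint as a (coefficients, right-hand side) pair and the name sat_to_01_ip are illustrative choices, not from the lecture.

    def sat_to_01_ip(clauses, n):
        """Map a SAT formula to a 0-1 IP: one variable y_i in {0,1} per x_i and,
        for each clause, the constraint
            sum_{i in P_j} y_i + sum_{k in N_j} (1 - y_k) >= 1,
        returned in the form  a . y >= rhs  as the pair (a, rhs).
        """
        constraints = []
        for clause in clauses:
            a = [0] * n           # coefficients of y_1, ..., y_n
            rhs = 1               # right-hand side of the >= constraint
            for lit in clause:
                i = abs(lit) - 1
                if lit > 0:       # positive literal x_i contributes + y_i
                    a[i] += 1
                else:             # negated literal contributes (1 - y_k):
                    a[i] -= 1     # move the constant 1 to the right-hand side
                    rhs -= 1
            constraints.append((a, rhs))
        return constraints

    # Example: Phi = (x_1 v ~x_2) ^ (x_2 v x_3) with n = 3 variables
    # gives  y_1 - y_2 >= 0  and  y_2 + y_3 >= 1.
    print(sat_to_01_ip([[1, -2], [2, 3]], 3))   # [([1, -1, 0], 0), ([0, 1, 1], 1)]

Note that the constant contributed by each negated literal is simply moved to the right-hand side, so the output is an ordinary system of linear inequalities over 0/1 variables.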
Observe that the transformation above runs in polynomial time. Therefore SAT ≤ 0-1 IP. Note also that 0-1 IP ∈ NP, since a satisfying 0/1 assignment is itself a short certificate.

Theorem 4 (Cook, Levin '71) SAT is NP-complete.

Since SAT ≤ 0-1 IP (as we have argued above), 0-1 IP ∈ NP, and polynomial-time reductions compose (every π' ∈ NP satisfies π' ≤ SAT ≤ 0-1 IP), 0-1 IP is also NP-complete. Intuitively speaking, it is therefore also one of the "hardest problems in NP".
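To see why 0-1 IP ∈ NP, here is a minimal Python sketch of a verifier: the certificate is a 0/1 vector y, and the verifier simply checks every constraint. The name check_ip_certificate and the (coefficients, right-hand side) constraint format follow the hypothetical sketch above, not anything fixed by the lecture.

    def check_ip_certificate(constraints, y):
        """Verifier for 0-1 IP: return True iff y is a 0/1 vector satisfying
        every constraint  a . y >= rhs  in the list of (a, rhs) pairs."""
        if any(value not in (0, 1) for value in y):
            return False                                 # certificate must be 0/1
        return all(sum(ai * yi for ai, yi in zip(a, y)) >= rhs
                   for a, rhs in constraints)

    # Continuing the example: y = (1, 0, 1) corresponds to x_1 = 1, x_2 = 0, x_3 = 1
    constraints = [([1, -1, 0], 0), ([0, 1, 1], 1)]      # from sat_to_01_ip above
    print(check_ip_certificate(constraints, [1, 0, 1]))  # True

The check runs in time linear in the size of the constraint system, so it is a polynomial-time acceptor in the sense of Section 2.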