<p>CSC221 Data Structures, Spring 2008, Burg</p><p>Computational Complexity</p><p>What is computational complexity? A measure of the efficiency of a solution to a problem (an algorithm). It is expressed as a function of the size of the input, N, and determined by considering the repeated steps in the algorithm – i.e., loops and recursion – and coming up with a function T(N) that describes this number of steps in terms of the input size. One way to express it is in big O notation, as in O(f(N)). We talk about the “order” of an algorithm.</p>
<ul>
<li>f(N) = 1: O(1), constant</li>
<li>f(N) = log₂N: O(log₂N), logarithmic</li>
<li>f(N) = N: O(N), linear</li>
<li>f(N) = N log₂N: O(N log₂N), log-linear (N times logarithmic)</li>
<li>f(N) = N²: O(N²), quadratic</li>
<li>f(N) = N³: O(N³), cubic</li>
<li>f(N) = 2^N: O(2^N), exponential</li>
</ul>
<p>Usually we analyze worst-case complexity, but it’s possible to analyze best-case or average-case as well. The notation you’ll see most often is big O, but you should know the following:</p>
<table>
<tr><th>Notation</th><th>Pronunciation</th><th>Meaning, loosely</th><th>Value of lim of T(N)/f(N) as N → ∞</th></tr>
<tr><td>T(N) is o(f(N))</td><td>little oh of f(N)</td><td>T(N) &lt; f(N) as N gets large, i.e., the run time of the program is faster than f(N) as N gets large</td><td>the limit is 0</td></tr>
<tr><td>T(N) is O(f(N))</td><td>big oh of f(N)</td><td>T(N) ≤ f(N) as N gets large, i.e., the run time of the program is the same as or faster than f(N) as N gets large</td><td>the limit is finite</td></tr>
<tr><td>T(N) is Θ(f(N))</td><td>theta of f(N)</td><td>T(N) = f(N), i.e., the run time of the program is the same as f(N)</td><td>the limit is nonzero and finite</td></tr>
<tr><td>T(N) is Ω(f(N))</td><td>omega of f(N)</td><td>T(N) ≥ f(N), i.e., the run time of the program is the same as or slower than f(N)</td><td>the limit is nonzero</td></tr>
</table>
<p>We can understand these “loosely” as follows:</p>
<table>
<tr><td>T(N) is o(f(N)) means</td><td>An algorithm that takes T(N) steps is better than one that takes f(N) steps (for large N)</td></tr>
<tr><td>T(N) is O(f(N)) means</td><td>An algorithm that takes T(N) steps is as good as or better than one that takes f(N) steps (for large N)</td></tr>
<tr><td>T(N) is Θ(f(N)) means</td><td>An algorithm that takes T(N) steps is essentially equal in run-time efficiency to one that takes f(N) steps (for large N)</td></tr>
<tr><td>T(N) is Ω(f(N)) means</td><td>An algorithm that takes T(N) steps is as good as or worse than one that takes f(N) steps (for large N)</td></tr>
</table>
<p>Example 1:</p>
<p>Suppose you have an algorithm that takes N steps and then a constant 2000 steps. Your estimate of its run time is then N + 2000. The function T(N) that you are using to estimate the run time is N + 2000. That is, T(N) = N + 2000.</p>
<p>The function to which we’re going to compare this run time is N. That is, f(N) = N.</p>
<p>I claim that T(N) is O(N), T(N) is Ω(N), and T(N) is Θ(N). Here’s how to prove it by the definition.</p>
<p>Prove T(N) is O(N).</p>
<p>Find a c and n₀ such that T(N) ≤ c·f(N) for all N ≥ n₀. That is, show N + 2000 ≤ cN for all N ≥ n₀.</p>
<p>Here are graphs of T(N), which is N + 2000, and f(N), which is N.</p>
<p>[Figure: graphs of T(N) = N + 2000 and f(N) = N for N from 0 to 10000.]</p>
<p>It looks like T(N) takes more steps – i.e., is slower than – f(N). But to satisfy the definition of big oh, we need to find an appropriate c and n₀. There isn’t necessarily just one such combination; we just need to find any combination of c and n₀ that satisfies the definition. Here’s one: c = 2 and n₀ = 3000.</p>
<p>[Figure: graphs of 2N and N + 2000, showing that N + 2000 ≤ 2·f(N) for N ≥ 3000. The values used to satisfy the definition of O(N) are c = 2 and n₀ = 3000, so N + 2000 is O(N).]</p>
<p>Prove T(N) is Ω(N).</p>
<p>Find a c and n₀ such that T(N) ≥ c·f(N) for all N ≥ n₀. That is, show N + 2000 ≥ cN for all N ≥ n₀. Here is a c and n₀ combination that works: c = 1 and n₀ = 0.</p>
<p>[Figure: graphs of N + 2000 and N, showing that N + 2000 ≥ N for all N ≥ 0, so N + 2000 is Ω(N) with c = 1 and n₀ = 0.]</p>
<p>Prove T(N) is Θ(N).</p>
<p>Since T(N) is O(N) and T(N) is Ω(N), it follows by the definition of Θ(N) that T(N) is Θ(N).</p>
<p>Example 2:</p>
<p>Suppose you have an algorithm that takes 3N² + 4 steps. Your estimate of its run time is then 3N² + 4. The function T(N) that you are using to estimate the run time is 3N² + 4. That is, T(N) = 3N² + 4.</p>
<p>The function to which we’re going to compare this run time is N². That is, f(N) = N².</p>
<p>I claim that T(N) is O(N²), T(N) is Ω(N²), and T(N) is Θ(N²). Here’s how to prove it by the definition.</p>
<p>Prove T(N) is O(N²).</p>
<p>Here are graphs of T(N) and f(N).</p>
<p>[Figure: graphs of T(N) = 3N² + 4 and f(N) = N² for N from 0 to 1000.]</p>
<p>It looks like 3N² + 4 grows faster than N². To prove that T(N) is O(N²), what c and n₀ combination can you use to make cN² cross over 3N² + 4 at some point? Let’s try c = 5.</p>
<p>[Figure: graphs of 5N², 3N² + 4, and N² for N from 0 to 1000.]</p>
<p>Hmmmm. You can’t see in this graph where 5N² crosses 3N² + 4. Look at the graph just from N = 0 to N = 3.</p>
<p>[Figure: graphs of 5N², 3N² + 4, and N² for N from 0 to 3.]</p>
<p>You can see that around N = 1.5, 5N² crosses 3N² + 4. So with c = 5 and n₀ = 2, we prove T(N) is O(N²) because 3N² + 4 ≤ 5N² for all N ≥ 2.</p>
<p>Prove T(N) is Ω(N²).</p>
<p>It’s easy to show that T(N) is Ω(N²) because 3N² + 4 ≥ N² for all N ≥ 0. The c we use is 1 and the n₀ we use is 0.</p>
<p>Since T(N) is Ω(N²) and T(N) is O(N²), T(N) is Θ(N²).</p>
<p>Another way to prove complexity is by using limits.</p>
<p>Example 3:</p>
<p>Prove N + 2000 is O(N) using limits.</p>
<p>We need to show that the limit of (N + 2000)/N as N → ∞ is finite. As N goes to infinity, the 2000 term has less and less effect on the value of (N + 2000)/N; this value gets closer and closer to 1. The limit is 1. This is a finite number, so we have proven that N + 2000 is O(N).</p>
<p>True or False:</p>
<p>(Defend each answer both by finding an appropriate c and n₀ and by looking at limits.)</p>
<ol>
<li>_____ log₂N is O(N).</li>
<li>_____ N² is O(N).</li>
<li>_____ N² is Ω(N).</li>
<li>_____ N² is Θ(N³).</li>
<li>_____ 2^N + N² is O(2^N).</li>
</ol>
<p>In summary, what you should learn from the exercises above is that:</p>
<ul>
<li>T(N) is o(f(N)) essentially means that the time it takes to run the algorithm is better than f(N).</li>
<li>T(N) is O(f(N)) essentially means that the time it takes to run the algorithm is as good as or better than f(N).</li>
<li>T(N) is Θ(f(N)) essentially means that the time it takes to run the algorithm is the same as f(N).</li>
<li>T(N) is Ω(f(N)) essentially means that the time it takes to run the algorithm is as good as or worse than f(N).</li>
</ul>
<p>Just keep the dominant term in T(N) to determine the complexity, and ignore constant factors. For example, 3N + 2 is Θ(N).</p>
<p>Note that if it’s Θ(f(N)), it’s also O(f(N)) and Ω(f(N)).</p>
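<p>The c and n₀ witnesses from Examples 1 and 2 can be spot-checked numerically. The sketch below (Python, which this handout does not otherwise use; the helper names <code>holds_upper</code> and <code>holds_lower</code> are my own, not part of the notes) tests the defining inequalities over a sample of values of N ≥ n₀. Passing is evidence, not a proof, since only finitely many N are tested.</p>

```python
# Spot check (not a proof) of the c / n0 witnesses used in the examples.
# Big-O:     T(N) <= c * f(N) for all N >= n0
# Big-Omega: T(N) >= c * f(N) for all N >= n0

def holds_upper(T, f, c, n0, n_max=1_000_000):
    """Check T(N) <= c*f(N) on a sample of N in [n0, n_max)."""
    return all(T(N) <= c * f(N) for N in range(n0, n_max, 997))

def holds_lower(T, f, c, n0, n_max=1_000_000):
    """Check T(N) >= c*f(N) on a sample of N in [n0, n_max)."""
    return all(T(N) >= c * f(N) for N in range(n0, n_max, 997))

# Example 1: T(N) = N + 2000 is O(N) with c = 2, n0 = 3000,
# and Omega(N) with c = 1, n0 = 0.
print(holds_upper(lambda N: N + 2000, lambda N: N, c=2, n0=3000))  # True
print(holds_lower(lambda N: N + 2000, lambda N: N, c=1, n0=0))     # True

# Example 2: T(N) = 3N^2 + 4 is O(N^2) with c = 5, n0 = 2,
# and Omega(N^2) with c = 1, n0 = 0.
print(holds_upper(lambda N: 3*N*N + 4, lambda N: N*N, c=5, n0=2))  # True
print(holds_lower(lambda N: 3*N*N + 4, lambda N: N*N, c=1, n0=0))  # True
```

<p>Note that the check would fail if we picked a bad witness, e.g. c = 1, n₀ = 0 for the big-oh claim in Example 1, since N + 2000 &gt; N for every N.</p>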
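<p>The limit argument of Example 3 can also be illustrated numerically: evaluate T(N)/f(N) at a few increasingly large N and watch where the ratio is heading. This is a hypothetical helper of my own for illustration only; a limit statement concerns all sufficiently large N, not a finite sample.</p>

```python
# Numeric illustration of the limit test from Example 3 (a sketch, not a proof).

def ratio_trend(T, f, points=(10**3, 10**5, 10**7)):
    """Evaluate T(N)/f(N) at increasingly large N."""
    return [T(N) / f(N) for N in points]

# (N + 2000) / N settles toward 1, a finite limit, so N + 2000 is O(N).
print(ratio_trend(lambda N: N + 2000, lambda N: N))  # [3.0, 1.02, 1.0002]

# N^2 / N = N grows without bound, so N^2 is not O(N).
print(ratio_trend(lambda N: N * N, lambda N: N))     # [1000.0, 100000.0, 10000000.0]
```

<p>The first ratio tending to a nonzero, finite value is also exactly the condition for Θ(N) in the table above, which matches the claim that N + 2000 is Θ(N).</p>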