Computational Complexity


CSC221 Data Structures, Spring 2008, Burg

What is computational complexity?

- A measure of the efficiency of a solution to a problem (an algorithm).
- Expressed as a function of the size of the input, N.
- Determined by considering repeated steps in the algorithm (i.e., loops and recursion) and coming up with a function T(N) that describes the number of steps in terms of the input size.
- One way to express it is in big-O notation, as in O(f(N)). We talk about the "order" of an algorithm:

  f(N) = 1          O(1)            constant
  f(N) = log2 N     O(log2 N)       logarithmic
  f(N) = N          O(N)            linear
  f(N) = N log2 N   O(N log2 N)     logarithmic * N
  f(N) = N^2        O(N^2)          quadratic
  f(N) = N^3        O(N^3)          cubic
  f(N) = 2^N        O(2^N)          exponential

- Usually, we analyze worst-case complexity, but it's possible to analyze best-case or average-case as well.
- The notation you'll see most often is big O, but you should know the following:

Notation, its pronunciation, its meaning (loosely), and the value of lim T(N)/f(N) as N → ∞:

- T(N) is o(f(N)), "little-oh of f(N)": T(N) < f(N) as N gets large, i.e., the run time of the program is faster than f(N) as N gets large. The limit is 0.

- T(N) is O(f(N)), "big-oh of f(N)": T(N) ≤ f(N) as N gets large, i.e., the run time of the program is the same as or faster than f(N) as N gets large. The limit is finite.

- T(N) is Θ(f(N)), "theta of f(N)": T(N) = f(N), i.e., the run time of the program is the same as f(N). The limit is nonzero and finite.

- T(N) is Ω(f(N)), "omega of f(N)": T(N) ≥ f(N), i.e., the run time of the program is the same as or slower than f(N). The limit is nonzero.

We can understand these "loosely" as follows:

- T(N) is o(f(N)) means an algorithm that takes T(N) steps is better than one that takes f(N) steps (for large N).
- T(N) is O(f(N)) means an algorithm that takes T(N) steps is as good as or better than one that takes f(N) steps (for large N).
- T(N) is Θ(f(N)) means an algorithm that takes T(N) steps is essentially equal in run-time efficiency to one that takes f(N) steps (for large N).
- T(N) is Ω(f(N)) means an algorithm that takes T(N) steps is as good as or worse than one that takes f(N) steps (for large N).
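The "repeated steps" idea from the start of these notes can be made concrete by counting loop iterations. Here is a minimal sketch (the function names are illustrative, not part of the notes):

```python
# Count the repeated steps T(N) performed by loops of different shapes.

def linear_steps(n):
    """One pass over the input: T(N) = N, so O(N)."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def quadratic_steps(n):
    """A loop nested inside a loop: T(N) = N^2, so O(N^2)."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

def logarithmic_steps(n):
    """Halve the input each iteration: about log2(N) halvings, so O(log2 N)."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(linear_steps(1000))       # 1000
print(quadratic_steps(100))     # 10000
print(logarithmic_steps(1024))  # 10
```

Doubling N doubles the linear count, quadruples the quadratic count, and adds only one step to the logarithmic count, which is exactly what the orders above predict.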

Example 1:

Suppose you have an algorithm that takes N steps and then a constant 2000 steps.

Your estimate of its run time is then N + 2000. That is, the function you are using to estimate the run time is T(N) = N + 2000.

The function to which we're going to compare this run time is N. That is, f(N) = N.

I claim that T(N) is O(N), T(N) is Ω(N), and T(N) is Θ(N). Here's how to prove it by the definition.

Prove T(N) is O(N).

Find a c and n0 such that T(N) ≤ c·f(N) for all N ≥ n0. That is, show

N + 2000 ≤ cN for all N ≥ n0.

Here are graphs of T(N), which is N + 2000, and f(N), which is N.

[Graph: T(N) = N + 2000 and f(N) = N, plotted for N from 0 to 10,000.]

It looks like T(N) takes more steps (i.e., is slower) than f(N). But to satisfy the definition of big-oh, we need to find an appropriate c and n0. There isn't necessarily just one such combination; we just need to find any combination of c and n0 that satisfies the definition. Here's one: c = 2 and n0 = 3000.
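These witnesses can be checked mechanically over a range of N; a quick sketch:

```python
# Check the witnesses from the text: with c = 2 and n0 = 3000,
# N + 2000 <= c*N should hold for every N >= n0.
c, n0 = 2, 3000
assert all(n + 2000 <= c * n for n in range(n0, 100_000))

# Below the crossover (N + 2000 = 2N at N = 2000) the bound fails,
# e.g. at N = 1000: 3000 > 2000.
assert not (1000 + 2000 <= c * 1000)
print("c = 2, n0 = 3000 witness that N + 2000 is O(N)")
```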

[Graph: 2·N and N + 2000 for N from 0 to 10,000, showing N + 2000 ≤ 2·f(N) for N ≥ 3000. The values used to satisfy the definition of O(N) are c = 2 and n0 = 3000: N + 2000 is O(N).]

Prove T(N) is Ω(N).

Find a c and n0 such that T(N) ≥ c·f(N) for all N ≥ n0. That is, show

N + 2000 ≥ cN for all N ≥ n0. Here is a c and n0 combination that works: c = 1 and n0 = 0.

[Graph: N + 2000 and N for N from 0 to 10,000. N + 2000 is Ω(N); to prove this by the definition, use c = 1 and n0 = 0.]

Prove T(N) is Θ(N).

Since T(N) is O(N) and T(N) is Ω(N), it follows by the definition of Θ(N) that T(N) is Θ(N).
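Since Θ requires both bounds at once, one sketch can confirm the whole claim, reusing c = 1 from the Ω proof and c = 2, n0 = 3000 from the O proof:

```python
# Theta(N): c1*N <= N + 2000 <= c2*N for all N >= n0,
# with c1 = 1, c2 = 2, n0 = 3000 taken from the two proofs above.
c1, c2, n0 = 1, 2, 3000
assert all(c1 * n <= n + 2000 <= c2 * n for n in range(n0, 100_000))
print("N + 2000 is Theta(N)")
```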

Example 2:

Suppose you have an algorithm that takes 3N^2 + 4 steps.

Your estimate of its run time is then 3N^2 + 4. That is, the function you are using to estimate the run time is T(N) = 3N^2 + 4.

The function to which we're going to compare this run time is N^2. That is, f(N) = N^2.

I claim that T(N) is O(N^2), T(N) is Ω(N^2), and T(N) is Θ(N^2). Here's how to prove it by the definition.

Prove T(N) is O(N^2).

Here are graphs of T(N) and f (N) .

[Graph: T(N) = 3N^2 + 4 and f(N) = N^2, plotted for N from 0 to 1000.]

It looks like 3N^2 + 4 grows faster than N^2. To prove that T(N) is O(N^2), what c and n0 combination can you use to make cN^2 cross over 3N^2 + 4 at some point? Let's try c = 5.

[Graph: 5N^2, T(N) = 3N^2 + 4, and f(N) = N^2, plotted for N from 0 to 1000.]

Hmmmm. You can't see in this graph where 5N^2 crosses 3N^2 + 4.

Look at the graph just from N = 0 to N = 3.

[Graph: 5N^2, 3N^2 + 4, and N^2, plotted for N from 0 to 3.]

You can see that around N = 1.5, 5N^2 crosses 3N^2 + 4. So with c = 5 and n0 = 2, we prove T(N) is O(N^2) because 3N^2 + 4 ≤ 5N^2 for all N ≥ 2.
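Solving 5N^2 = 3N^2 + 4 gives 2N^2 = 4, so the exact crossover is N = sqrt(2) ≈ 1.41, consistent with the graph. A sketch confirming the witnesses:

```python
import math

# Verify the witnesses c = 5, n0 = 2: 3N^2 + 4 <= 5N^2 for all N >= 2.
assert all(3 * n * n + 4 <= 5 * n * n for n in range(2, 10_000))

# The exact crossover solves 5N^2 = 3N^2 + 4, i.e. N = sqrt(2),
# which matches the graph crossing near N = 1.5.
print(math.sqrt(2))
```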

Prove T(N) is (N 2 ) .

It's easy to show that T(N) is Ω(N^2) because 3N^2 + 4 ≥ N^2 for all N ≥ 0. The c we use is 1 and the n0 we use is 0.

Since T(N) is (N 2 ) and T(N) is O(N 2 ) , then T(N) is  (N 2 ) .

Another way to prove complexity is by using limits.

Example 3:

Prove N  2000 is O(N) using limits.

N  2000 We need to show that lim( ) is finite. N  N

N  2000 As N goes to infinity, the 2000 term has less and less effect on the value of . N This value gets closer and closer to 1. The limit is 1. This is a finite number, so we have proven that N  2000 is O(N) .

True or False:

(Defend each answer both by finding an appropriate c and n0 and by looking at limits.)

_____ 1. log2 N is O(N).

_____ 2. N + 2 is O(N).

_____ 3. N + 2 is Θ(N).

_____ 4. N^2 is Ω(N^3).

_____ 5. 2^N + N^2 is O(2^N).

In summary, what you should learn from the exercises above is that:

- T(N) is o(f(N)) essentially means that the time it takes to run the algorithm is better than f(N).
- T(N) is O(f(N)) essentially means that the time it takes to run the algorithm is as good as or better than f(N).
- T(N) is Θ(f(N)) essentially means that the time it takes to run the algorithm is the same as f(N).
- T(N) is Ω(f(N)) essentially means that the time it takes to run the algorithm is as good as or worse than f(N).

Just keep the dominant term in T(N) to determine the complexity, and ignore constant factors. For example, 3N + 2 is Θ(N).
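The dominant-term rule can be sanity-checked the same way; a sketch for 3N + 2:

```python
# (3N + 2)/N settles at the constant 3, showing that the lower-order
# term (+2) and the constant factor (3) don't change the order:
# 3N + 2 is Theta(N).
for n in (10, 1000, 100_000):
    print((3 * n + 2) / n)
# 3.2, 3.002, 3.00002 -> the limit is 3, nonzero and finite
```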

Note that if it's Θ(f(N)), it's also O(f(N)) and Ω(f(N)).

