
Tutor: Prof. Avadhoot S. Joshi | Subject: Design & Analysis of Algorithms | Session II | Unit I - Fundamentals

Session II agenda:
- Algorithm Design – II
- Algorithm as a Technology
- What kinds of problems are solved by algorithms?
- Evolution of Algorithms

Algorithm Design (Cntd.)

The figure briefly shows the sequence of steps one typically goes through in designing and analyzing an algorithm. Figure: Algorithm design and analysis process.

Algorithm Design (Cntd.) Understand the problem: The first task in designing an algorithm is to understand the given problem completely. Read the problem's description carefully and ask questions if you have any doubts about the problem, do a few small examples by hand, think about special cases, and ask questions again if needed. A few types of problems arise in computing applications quite often. If the problem in question is one of them, you might be able to use a known algorithm for solving it. If you have to choose among several available algorithms, it helps to understand how each algorithm works and to know its strengths and weaknesses. But often you will not find a readily available algorithm and will have to design your own. An input to an algorithm specifies an instance of the problem the algorithm solves. It is very important to specify exactly the set of instances the algorithm needs to handle.

Algorithm Design (Cntd.) Ascertaining the Capabilities of the Computational Device: After understanding the problem, you need to ascertain the capabilities of the computational device the algorithm is intended for. The vast majority of algorithms in use today are still destined to be programmed for a computer closely resembling the von Neumann machine, a computer architecture outlined by the prominent Hungarian-American mathematician John von Neumann (1903–1957), in collaboration with A. Burks and H. Goldstine, in 1946. The essence of this architecture is captured by the so-called random-access machine (RAM). Its central assumption is that instructions are executed one after another, one operation at a time. Accordingly, algorithms designed to be executed on such machines are called sequential algorithms.

Algorithm Design (Cntd.)
Ascertaining the Capabilities of the Computational Device (Cntd.): The central assumption of the RAM model does not hold for some newer computers that can execute operations concurrently, i.e., in parallel. Algorithms that take advantage of this capability are called parallel algorithms. The speed and memory of a computer also play a pivotal role, depending on the area in which the algorithm will be used. For scientific work, researchers usually do not worry about a particular computer's speed and memory; but if the algorithm is to be developed as a practical tool, the answer may depend on the problem you need to solve. In many situations you need not worry about a computer being too slow for the task, because even the "slow" computers of today are almost unimaginably fast. There are important problems, however, that are very complex by their nature, or have to process huge volumes of data, or deal with applications where time is critical. In such situations, it is imperative to be aware of the speed and memory available on a particular computer system.

Algorithm Design (Cntd.) Choosing between Exact and Approximate Problem Solving: The next principal decision is to choose between solving the problem exactly or solving it approximately. In the former case, the algorithm is called an exact algorithm; in the latter case, an approximation algorithm. Why would one opt for an approximation algorithm? First, there are important problems that simply cannot be solved exactly for most of their instances. Second, available algorithms for solving a problem exactly can be unacceptably slow because of the problem's intrinsic complexity. Third, an approximation algorithm can be a part of a more sophisticated algorithm that solves a problem exactly.

Algorithm Design (Cntd.)
Algorithm Design Techniques: An algorithm design technique (or “strategy” or “paradigm”) is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing. There are basically 5 fundamental techniques which are used to design an algorithm efficiently: Algorithm Design (Cntd.) Algorithm Design Techniques (Cntd.): Design Description Problems that follows strategy Divide & - Divide & conquer technique is a top-down  Binary search conquer approach to solve a problem. The algorithm  Multiplication of two n-bits which follows divide and conquer technique numbers involves 3 steps:  Quick Sort 1. Divide the original problem into a set of sub  Heap Sort problems.  2. Conquer (or Solve) every sub-problem individually, recursive. 3. Combine the solutions of these sub problems to get the solution of original problem. Algorithm Design (Cntd.) Algorithm Design Techniques (Cntd.): Design Description Problems that follows strategy Greedy - Greedy technique is used to solve an optimization problem.  Knapsack (fractional) Method - An Optimization problem is one in which, we are given a set Problem of input values, which are required to be either maximized or minimized (known as objective function) w. r. t. some  Minimum cost Spanning constraints or conditions. tree - Greedy algorithm always makes the choice (greedy criteria) . Kruskal’s algorithm that looks best at the moment, to optimize a given objective function. That is, it makes a locally optimal . Prim’s algorithm choice in the hope that this choice will lead to a overall  Single source shortest path globally optimal solution. problem - The greedy algorithm does not always guaranteed the optimal solution but it generally produces solutions that . Dijkstra’s algorithm are very close in value to the optimal. Algorithm Design (Cntd.) 
Algorithm Design Techniques (Cntd.):

Dynamic programming
- Dynamic programming is similar to the divide-and-conquer approach: both solve a problem by breaking it down into several subproblems that can be solved recursively. The difference is that in dynamic programming the results obtained from solving smaller subproblems are reused (by maintaining a table of results) in the calculation of larger subproblems. Thus dynamic programming is a bottom-up approach that begins by solving the smaller subproblems, saving these partial results, and then reusing them to solve larger subproblems until the solution to the original problem is obtained.
- Problems that follow this strategy: all-pairs shortest paths (Floyd's algorithm); matrix chain multiplication; longest common subsequence (LCS); 0/1 knapsack problem; traveling salesman problem (TSP).

Algorithm Design (Cntd.) Algorithm Design Techniques (Cntd.):

- Reusing the results of subproblems (by maintaining a table of results) is the major advantage of dynamic programming, because it avoids re-computation (computing the same result twice or more). Dynamic programming therefore takes much less time than naïve or straightforward methods such as divide and conquer, which solves problems top-down and performs many re-computations.
- The dynamic programming approach always guarantees an optimal solution.

Algorithm Design (Cntd.) Algorithm Design Techniques (Cntd.):

Backtracking
- The term "backtrack" was coined by the American mathematician D. H. Lehmer in the 1950s.
- Backtracking can be applied only to problems that admit the concept of a "partial candidate solution" and a relatively quick test of whether that partial candidate can possibly be completed to a valid solution.
- Problems that follow this strategy: N-queens problem; sum of subsets.

Algorithm Design (Cntd.)
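The bottom-up table-filling idea of dynamic programming can be sketched with the longest common subsequence (LCS) problem named above; a minimal Python illustration (function and variable names are ours, not from the slides):

```python
def lcs_length(x, y):
    """Bottom-up dynamic programming: fill a table of subproblem results and reuse them."""
    m, n = len(x), len(y)
    # table[i][j] = length of an LCS of the prefixes x[:i] and y[:j]
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                # matching characters extend a smaller LCS by one
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                # otherwise reuse the two already-computed smaller results
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]
```

Each cell is computed once and looked up thereafter, so the running time is O(mn), whereas the naïve recursive version recomputes the same subproblems exponentially many times.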
Algorithm Design Techniques (Cntd.):

- Backtracking algorithms try each possibility until they find the right one. Backtracking is a depth-first search of the set of possible solutions. During the search, if an alternative doesn't work, the search backtracks to the choice point, the place that presented different alternatives, and tries the next alternative. When the alternatives are exhausted, the search returns to the previous choice point and tries the next alternative there. If there are no more choice points, the search fails.

Algorithm Design (Cntd.) Algorithm Design Techniques (Cntd.):

Branch & bound
- Branch and bound (B&B) is a rather general optimization technique that applies where the greedy method and dynamic programming fail.
- The B&B design strategy is very similar to backtracking in that a state-space tree is used to solve a problem. Branch and bound is a systematic method for solving optimization problems. However, it is much slower: it often leads to exponential time complexity in the worst case. On the other hand, if applied carefully, it can lead to algorithms that run reasonably fast on average.
- The general idea of B&B is a BFS-like search for the optimal solution, but not all nodes get expanded (i.e., have their children generated). Rather, a carefully selected criterion determines which node to expand and when, and another criterion tells the algorithm when an optimal solution has been found.
- B&B is the most widely used tool for solving large-scale NP-hard combinatorial optimization problems.
- Problems that follow this strategy: assignment problem; traveling salesman problem (TSP).

Algorithm Design (Cntd.) Algorithm Design Techniques (Cntd.): These techniques distil a few key ideas that have proven to be useful in designing algorithms.
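The depth-first search with backtracking described above can be illustrated with the N-queens problem mentioned earlier; a minimal Python sketch (assumptions: one queen per column, rows numbered from 0, names are ours):

```python
def solve_n_queens(n):
    """Backtracking: extend a partial placement column by column; undo and retry on failure."""
    solutions = []
    placement = []                      # placement[c] = row of the queen in column c

    def safe(row):
        """Quick test: can the partial candidate possibly be extended with this row?"""
        col = len(placement)
        return all(r != row and abs(r - row) != abs(c - col)
                   for c, r in enumerate(placement))

    def extend():
        if len(placement) == n:         # every column filled: a complete solution
            solutions.append(placement.copy())
            return
        for row in range(n):            # the alternatives at this choice point
            if safe(row):
                placement.append(row)
                extend()                # depth-first search of the next choice point
                placement.pop()         # backtrack: undo and try the next alternative

    extend()
    return solutions
```

Note how the quick `safe` test prunes partial candidates that cannot be completed, which is exactly the property backtracking requires of a problem.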
Learning these techniques is of utmost importance for the following reasons. First, they provide guidance for designing algorithms for new problems, i.e., problems for which there is no known satisfactory algorithm. Second, algorithms are the cornerstone of computer science. Every science is interested in classifying its principal subject, and computer science is no exception. Algorithm design techniques make it possible to classify algorithms according to an underlying design idea; therefore, they can serve as a natural way to both categorize and study algorithms.

Algorithm Design (Cntd.) Designing an Algorithm and Data Structures: While the algorithm design techniques do provide a powerful set of general approaches to algorithmic problem solving, designing an algorithm for a particular problem may still be a challenging task. Some design techniques can be simply inapplicable to the problem in question. Sometimes, several techniques need to be combined, and there are algorithms that are hard to pinpoint as applications of the known design techniques. One should pay close attention to choosing data structures appropriate for the operations performed by the algorithm. Data structures remain crucially important for both the design and the analysis of algorithms.

Algorithm Design (Cntd.) Proving an Algorithm's Correctness: Once an algorithm has been specified, you have to prove its correctness. That is, you have to prove that the algorithm yields the required result for every legitimate input in a finite amount of time. For some algorithms, a proof of correctness is quite easy; for others, it can be quite complex. A common technique for proving correctness is mathematical induction, because an algorithm's iterations provide a natural sequence of steps for such proofs. The notion of correctness for approximation algorithms is less straightforward than it is for exact algorithms.
For an approximation algorithm, we usually would like to be able to show that the error produced by the algorithm does not exceed a predefined limit. Remember that an algorithm's correctness does not necessarily imply anything about its efficiency.

Algorithm Design (Cntd.) Analyzing an Algorithm: We usually want our algorithms to possess several qualities. After correctness, by far the most important is efficiency. In fact, there are two kinds of algorithm efficiency: time efficiency, indicating how fast the algorithm runs, and space efficiency, indicating how much extra memory it uses. Another desirable characteristic of an algorithm is simplicity. Unlike efficiency, which can be precisely defined and investigated with mathematical rigor, simplicity cannot; still, it is an important characteristic to strive for. Why? Because simpler algorithms are easier to understand and easier to program; consequently, the resulting programs usually contain fewer bugs. Yet another desirable characteristic of an algorithm is generality. There are, in fact, two issues here: the generality of the problem the algorithm solves and the set of inputs it accepts.

Algorithm Design (Cntd.) Coding an Algorithm: Programming an algorithm presents both a peril and an opportunity. The peril lies in the possibility of making the transition from an algorithm to a program either incorrectly or very inefficiently.

Algorithm Design (Cntd.) Characteristics of Algorithms: The main characteristics of algorithms are as follows:
- Algorithms must have a unique name.
- Algorithms should have an explicitly defined set of inputs and outputs.
- Algorithms are well-ordered, with unambiguous operations.
- Algorithms halt in a finite amount of time; an algorithm must not run forever, i.e., it must end at some point.

Algorithm as a Technology - Would you have any reason to study algorithms?
The answer is 'Yes': we would like to demonstrate that our solution method terminates and does so with the correct answer; we would probably want our implementation to be within the bounds of good engineering practice, but we would most often use whichever method was the easiest to implement.
- Computers may be fast, but they are not infinitely fast. And memory may be inexpensive, but it is not free. Computing time is therefore a bounded resource, and so is space in memory. You should use these resources wisely, and algorithms that are efficient in terms of time or space will help you do so.

Algorithm as a Technology (Cntd.)
- Efficiency:
  1. Different algorithms devised to solve the same problem often differ dramatically in their efficiency.
  2. These differences can be much more significant than differences due to hardware and software.

Algorithm as a Technology (Cntd.)
- Insertion sort typically has a smaller constant factor than merge sort, so that c1 < c2. Remember that the constant factors can have far less of an impact on the running time than the dependence on the input size n.
- For example, assume a faster computer A (10^10 instructions/sec) running insertion sort against a slower computer B (10^7 instructions/sec) running merge sort. Suppose that c1 = 2, c2 = 50, and n = 10^7.

Algorithm as a Technology (Cntd.)
- To sort 10^7 numbers, computer A takes
  (c1 * n^2) / (instructions per second) = (2 * (10^7)^2) / 10^10 = 20,000 seconds,
  while computer B takes
  (c2 * n * log2 n) / (instructions per second) = (50 * 10^7 * log2 10^7) / 10^7 ≈ 1163 seconds.

Algorithm as a Technology (Cntd.)
- The example above shows that we should consider algorithms, like computer hardware, as a technology. Total system performance depends on choosing efficient algorithms as much as on choosing fast hardware.
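The arithmetic in the insertion sort versus merge sort comparison can be checked directly; this short Python sketch simply reproduces the slide's numbers:

```python
import math

# Scenario from the slides: fast computer A (10**10 instructions/sec) runs
# insertion sort (c1 * n**2 instructions, with c1 = 2), while slow computer B
# (10**7 instructions/sec) runs merge sort (c2 * n * log2(n), with c2 = 50).
n = 10 ** 7                                # 10 million numbers to sort

time_A = 2 * n ** 2 / 10 ** 10             # = 20,000 seconds (more than 5.5 hours)
time_B = 50 * n * math.log2(n) / 10 ** 7   # ≈ 1163 seconds (under 20 minutes)
print(time_A, time_B)
```

Even though computer A executes instructions a thousand times faster, the n^2 versus n log n growth dominates, which is the slide's point about algorithms as a technology.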
Just as rapid advances are being made in other computer technologies, they are being made in algorithms as well. What kinds of problems are solved by algorithms?

Algorithms in various fields… What kinds of problems are solved by algorithms? - Practical applications of algorithms are ubiquitous and include the following examples: The Human Genome Project has made great progress toward the goals of identifying all the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms. What kinds of problems are solved by algorithms? (Cntd.) The Internet enables people all around the world to quickly access and retrieve large amounts of information. With the aid of clever algorithms, sites on the Internet are able to manage and manipulate this large volume of data. Examples of problems that make essential use of algorithms include finding good routes on which the data will travel, and using a search engine to quickly find pages on which particular information resides. What kinds of problems are solved by algorithms? (Cntd.) Electronic commerce enables goods and services to be negotiated and exchanged electronically, and it depends on the privacy of personal information such as credit card numbers, passwords, and bank statements. The core technologies used in electronic commerce include public-key cryptography and digital signatures, which are based on numerical algorithms and number theory. What kinds of problems are solved by algorithms? (Cntd.) Manufacturing and other commercial enterprises often need to allocate scarce resources in the most beneficial way. For example, an oil company may wish to know where to place its wells in order to maximize its expected profit. A political candidate may want to determine where to spend money buying campaign advertising in order to maximize the chances of winning an election.
An airline may wish to assign crews to flights in the least expensive way possible, making sure that each flight is covered and that government regulations regarding crew scheduling are met. An Internet service provider may wish to determine where to place additional resources in order to serve its customers more effectively. All of these are examples of problems that can be solved using linear programming. Evolution of Algorithms

Step by step… Evolution of Algorithms: Indians developed and used the concept of zero and the positional number system. This allowed the development of basic algorithms for arithmetic operations, square roots, cube roots, and the like. The famous Sanskrit grammarian Panini (around the 4th century BCE) gave data structures like the "Maheshwara Sutras", which greatly facilitated compact rules and algorithms for the phonetics, phonology, morphology, and syntax of the Sanskrit language. He gave a formal language theory almost parallel to the modern theory, and gave concepts which are parallel to modern mathematical functions.

Evolution of Algorithms (Cntd.) During the initial days of computer usage, that is during the 1940s and 50s, the emphasis was on building the hardware and developing programming systems so that computers could be used for commercial, scientific, and engineering problems. As systematic methods of coding were required, this led to the concepts of structured programming. The next logical step was seeking proof of the correctness of an algorithm, as many applications were identified where proven correctness was absolutely essential. This led to the search for more efficient algorithms and to the in-depth study of hard algorithms, the field referred to as algorithm analysis.

Evolution of Algorithms (Cntd.) Problems were divided into two classes: tractable (those problems which we can hope to solve in a reasonable time even when scaled up) and non-tractable (those problems for which such hope should not be entertained). This branch of study developed into what we now call "complexity theory". For large real-life problems it was found very difficult to give a proof of correctness once an algorithm was defined or a program was written. The proofs became very lengthy, and there was no assurance that a proof would itself be correct. This led to two developments: 1) automatic proof of correctness - programs which try to generate the detailed proof for a given algorithm with some assistance from a human analyst.
2) And the idea that an algorithm should be developed in parallel, along with its proof of correctness.

Evolution of Algorithms (Cntd.) Before one can give a proof of correctness of an algorithm, the algorithm should be properly specified; that is, there should be a precise specification of the nature of the inputs the algorithm is expected to handle and of the output it is required to produce. Without this precisely defined information, the proof of correctness remains questionable. This led to the development of formal methods of specification. While searching for more efficient algorithms, researchers found that one can trade memory space for CPU time. This led to the study of time/space trade-offs and, further, to the study of the relationships between problems which are tractable time-wise and problems which are tractable space-wise.

Evolution of Algorithms (Cntd.) For many non-tractable but practically essential problems, users were ready to accept even approximate solutions, provided the degree of approximation was known. Another fact which emerged was that randomness can be an aid in solving many difficult problems; here random numbers should be treated as a resource, just like CPU time or memory space (e.g., genetic algorithms, simulated annealing, and statistically defined algorithms).

Timeline of Algorithms

From the medieval period to the current era…

Medieval Period
 Before - Writing about "recipes" (on cooking, rituals, agriculture and other themes)
 c. 1700—2000 BC - Egyptians develop earliest known algorithms for multiplying two numbers
 c. 1600 BC - Babylonians develop earliest known algorithms for factorization and finding square roots
 c. 300 BC - Euclid's algorithm
 c. 200 BC - the Sieve of Eratosthenes
 263 AD - Gaussian elimination described by Liu Hui
 628 - Chakravala method described by Brahmagupta
 c. 820 - Al-Khawarizmi described algorithms for solving linear equations and quadratic equations in his treatise on algebra (Al-Jabr); the word algorithm comes from his name
 825 - Al-Khawarizmi described the algorism, algorithms for using the Hindu-Arabic numerals, in his treatise On the Calculation with Hindu Numerals, which was translated into Latin as Algoritmi de numero Indorum, where "Algoritmi", the translator's rendition of the author's name, gave rise to the word algorithm (Latin algorithmus) with a meaning "calculation method"
 c. 850 - Cryptanalysis and frequency analysis algorithms developed by Al-Kindi (Alkindus) in A Manuscript on Deciphering Cryptographic Messages, which contains algorithms on breaking encryptions and ciphers
 c. 1025 - Ibn al-Haytham (Alhazen) was the first mathematician to derive the formula for the sum of the fourth powers; in turn, he developed an algorithm for determining the general formula for the sum of any integral powers, which was fundamental to the development of integral calculus
 c. 1400 - Ahmad al-Qalqashandi gives a list of ciphers in his Subh al-a'sha which include both substitution and transposition, and, for the first time, a cipher with multiple substitutions for each plaintext letter; he also gives an exposition on, and a worked example of, cryptanalysis, including the use of tables of letter frequencies and sets of letters which cannot occur together in one word

Before 1940

 1614 - John Napier develops method for performing calculations using logarithms
 1671 - Newton–Raphson method developed by Isaac Newton
 1690 - Newton–Raphson method independently developed by Joseph Raphson
 1706 - John Machin develops a quickly converging inverse-tangent series for π and computes π to 100 decimal places
 1789 - Jurij Vega improves Machin's formula and computes π to 140 decimal places
 1805 - FFT-like algorithm known by Carl Friedrich Gauss
 1842 - Ada Lovelace writes the first algorithm for a computing engine
 1903 - A fast Fourier transform algorithm presented by Carle David Tolmé Runge
 1926 - Borůvka's algorithm
 1926 - Primary decomposition algorithm presented by Grete Hermann
 1934 - Delaunay triangulation developed by Boris Delaunay
 1936 - Turing machine, an abstract machine developed by Alan Turing, who with others developed the modern notion of the algorithm

1940’s  1942 - A Fast Fourier Transform algorithm developed by G.C. Danielson and  1945 - Merge sort developed by John von Neumann  1947 - developed by 1950’s  1952 - developed by David A. Huffman  1953 - Simulated annealing introduced by Nicholas Metropolis  1954 - computer algorithm developed by Harold H. Seward  1956 - Kruskal's algorithm developed by Joseph Kruskal  1957 - Prim's algorithm developed by Robert Prim  1957 - Bellman–Ford algorithm developed by Richard E. Bellman and L. R. Ford, Jr.  1959 - Dijkstra's algorithm developed by Edsger Dijkstra  1959 - Shell sort developed by Donald L. Shell  1959 - De Casteljau's algorithm developed by Paul de Casteljau  1959 - QR factorization algorithm developed independently by John G.F. Francis and Vera Kublanovskaya 1960’s  1960 - Karatsuba multiplication  1962 - AVL trees  1962 - developed by C. A. R. Hoare  1962 - Ford–Fulkerson algorithm developed by L. R. Ford, Jr. and D. R. Fulkerson  1962 - Bresenham's line algorithm developed by Jack E. Bresenham  1962 - Gayle-Shapley 'stable-marriage' algorithm developed by and  1964 - developed by J. W. J. Williams  1964 - multigrid methods first proposed by R. P. Fedorenko  1965 - Cooley–Tukey algorithm rediscovered by James Cooley and  1965 - developed by Vladimir Levenshtein  1965 - Cocke–Younger–Kasami (CYK) algorithm independently developed by Tadao Kasami  1965 - Buchberger's algorithm for computing Gröbner bases developed by  1966 - Dantzig algorithm for shortest path in a graph with negative edges  1967 - Viterbi algorithm proposed by  1967 - Cocke–Younger–Kasami (CYK) algorithm independently developed by Daniel H. Younger  1968 - A* graph search algorithm described by Peter Hart, Nils Nilsson, and Bertram Raphael. 
 1968 - Risch algorithm for indefinite integration developed by Robert Henry Risch
 1969 - Strassen algorithm for matrix multiplication developed by Volker Strassen
1970's
 1970 - Dinic's algorithm for computing maximum flow in a flow network by Yefim (Chaim) A. Dinitz
 1970 - Knuth–Bendix completion algorithm developed by Donald Knuth and Peter B. Bendix
 1970 - BFGS method of the quasi-Newton class
 1972 - Graham scan developed by Ronald Graham
 1972 - Red–black trees and B-trees discovered
 1973 - RSA algorithm discovered by Clifford Cocks
 1973 - Jarvis march algorithm developed by R. A. Jarvis
 1973 - Hopcroft–Karp algorithm developed by John Hopcroft and Richard Karp
 1974 - Pollard's p − 1 algorithm developed by John Pollard
 1975 - Genetic algorithms popularized by John Holland
 1975 - Pollard's rho algorithm developed by John Pollard
 1975 - Aho–Corasick string matching algorithm developed by Alfred V. Aho and Margaret J. Corasick
 1975 - Cylindrical algebraic decomposition developed by George E. Collins
 1976 - Salamin–Brent algorithm independently discovered by Eugene Salamin and Richard Brent
 1976 - Knuth–Morris–Pratt algorithm developed by Donald Knuth and Vaughan Pratt and independently by J. H. Morris
 1977 - Boyer–Moore string search algorithm for searching for the occurrence of a string within another string
 1977 - RSA encryption algorithm rediscovered by Ron Rivest, Adi Shamir, and Len Adleman
 1977 - LZ77 algorithm developed by Abraham Lempel and Jacob Ziv
 1977 - Multigrid methods developed independently by Achi Brandt and Wolfgang Hackbusch
 1978 - LZ78 algorithm developed from LZ77 by Abraham Lempel and Jacob Ziv
 1978 - Bruun's algorithm proposed for powers of two by Georg Bruun
 1979 - Khachiyan's ellipsoid method developed by Leonid Khachiyan
 1979 - ID3 decision tree algorithm developed by Ross Quinlan
1980's

 1980 - Brent's algorithm for cycle detection developed by Richard P. Brent
 1981 - Quadratic sieve developed by Carl Pomerance
 1983 - Simulated annealing developed by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi
 1983 - Classification and regression tree (CART) algorithm developed by Leo Breiman, et al.
 1984 - LZW algorithm developed from LZ78 by Terry Welch
 1984 - Karmarkar's interior-point algorithm developed by Narendra Karmarkar
 1985 - Simulated annealing independently developed by V. Cerny
 1985 - Splay trees discovered by Sleator and Tarjan
 1986 - Blum Blum Shub generator proposed by L. Blum, M. Blum, and M. Shub
 1986 - Push–relabel maximum flow algorithm by Andrew Goldberg and Robert Tarjan
 1987 - Fast multipole method developed by Leslie Greengard and Vladimir Rokhlin
 1988 - Special number field sieve developed by John Pollard
1990's
 1990 - General number field sieve developed from SNFS by Carl Pomerance, Joe Buhler, Hendrik Lenstra, and Leonard Adleman
 1991 - Wait-free synchronization developed by Maurice Herlihy
 1992 - Deutsch–Jozsa algorithm proposed by D. Deutsch and Richard Jozsa
 1992 - C4.5 algorithm, a descendant of the ID3 decision tree algorithm, was developed by Ross Quinlan
 1993 - Apriori algorithm developed by Rakesh Agrawal and Ramakrishnan Srikant
 1993 - Karger's algorithm to compute the minimum cut of a connected graph by David Karger
 1994 - Shor's algorithm developed by Peter Shor
 1994 - Burrows–Wheeler transform developed by Michael Burrows and David Wheeler
 1994 - Bootstrap aggregating (bagging) developed by Leo Breiman
 1995 - AdaBoost algorithm, the first practical boosting algorithm, was introduced by Yoav Freund and Robert Schapire
 1995 - Soft-margin support vector machine algorithm published by Corinna Cortes and Vladimir Vapnik. It adds a soft-margin idea to the 1992 algorithm by Boser, Guyon, and Vapnik, and is the algorithm people usually refer to when saying SVM
 1995 - Ukkonen's algorithm for construction of suffix trees
 1996 - Bruun's algorithm generalized to arbitrary even composite sizes by H. Murakami
 1996 - Grover's algorithm developed by Lov K. Grover
 1996 - RIPEMD-160 developed by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel
 1997 - Mersenne Twister, a pseudorandom number generator, developed by Makoto Matsumoto and Takuji Nishimura
 1998 - PageRank algorithm was published by Larry Page
 1998 - rsync algorithm developed by Andrew Tridgell
 1999 - Gradient boosting algorithm developed by Jerome H. Friedman
 1999 - Yarrow algorithm designed by Bruce Schneier, John Kelsey, and Niels Ferguson
2000's

 2000 - Hyperlink-induced topic search (HITS), a hyperlink analysis algorithm, developed by Jon Kleinberg
 2001 - Lempel–Ziv–Markov chain algorithm for compression developed by Igor Pavlov
 2001 - Viola–Jones algorithm for real-time face detection developed by Paul Viola and Michael Jones
 2002 - AKS primality test developed by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena
 2002 - Girvan–Newman algorithm to detect communities in complex systems

To be Continued...