A Realistic Approach to Spatial and Temporal Complexities of Computational Algorithms
AHMED TAREK
Department of Science
Cecil College, University System of Maryland (USM)
One Seahawk Drive, North East, MD 21901-1900
UNITED STATES OF AMERICA
[email protected]

AHMED FARHAN
Owen J. Roberts High School
981 Ridge Road
Pottstown, PA 19465
UNITED STATES OF AMERICA
[email protected]

Abstract: Computational complexity is a widely researched topic in Theoretical Computer Science. A vast majority of the related results are purely of theoretical interest, and only a few treat the model of complexity as applicable to real-life computation. Even when individuals have performed research in complexity, the results are often hard to realize in practice. A number of these papers lack simultaneous consideration of temporal and spatial complexities, which together depict the overall computational scenario. Treating one without the other makes the analysis insufficient, and the model presented may not depict the true computational scenario, since both are of paramount importance in the computer implementation of computational algorithms. This paper overcomes some of these limitations prevailing in the computer science literature through a clearly depicted model of computation and a meticulous analysis of spatial and temporal complexities. The paper also deliberates on the computational complexities due to a wide variety of coding constructs arising frequently in the design and analysis of practical algorithms.

Key-Words: Design and analysis of algorithms, Model of computation, Real-life computation, Spatial complexity, Temporal complexity

1 Introduction

Computation time and memory space requirements are two major constraints in the computer implementation of real-life algorithms. Though there is a wide variety of formal notations for expressing these computational constraints, the big-oh notation is the most customarily used. Big-oh provides the upper bound on the temporal (time) and spatial (space) requirements of a computer algorithm, which is the most difficult obstruction to cross in digital computation. This paper primarily focuses on the upper bounds as expressed through the standard big-oh notation for both temporal and spatial complexities.

Temporal complexity is the quantity of CPU time necessitated by an algorithm for its computer implementation. Spatial complexity is the number of memory cells that an algorithm truly requires for computation. A good algorithm tends to keep both of these requirements as small as possible. However, in reality, it is strenuous to design an algorithm that is both time and space efficient. In that event, there remains a trade-off between these two requirements. In certain situations, not each of the n computer operations requires the same amount of time. Especially, some computations require pre- and/or post-processing, with the remaining operations carried out fast enough. In these computational scenarios, an Amortized Analysis is used. With Amortization, the total time required for n operations is bounded asymptotically from above by a function G(n), and the per-operation cost g(n) ∈ O(G(n)/n). Hence, Amortized Time is an upper bound for the average time of an operation in the worst case. So essentially, the investments in the pre-work and/or post-work during the computation amortize, or average out.
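To make the idea concrete, the following is a minimal C sketch, not taken from the paper, of a common amortized scenario: a dynamic array whose push operation doubles its capacity whenever the array is full. The DynArray structure, the push function and the copy counter are illustrative names introduced only for this example. Most pushes perform a single write, while an occasional push performs O(n) copies as pre-work.

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative sketch (not from the paper): a dynamic array whose push
     * operation doubles the capacity when full. */
    typedef struct {
        int  *data;
        int   size;      /* number of elements stored           */
        int   capacity;  /* number of slots currently allocated */
        long  copies;    /* element copies caused by resizing   */
    } DynArray;

    static void push(DynArray *a, int value) {
        if (a->size == a->capacity) {              /* occasional pre-work */
            int  newcap = 2 * a->capacity;
            int *grown  = malloc(newcap * sizeof(int));
            for (int i = 0; i < a->size; i++) {    /* copy the old contents */
                grown[i] = a->data[i];
                a->copies++;
            }
            free(a->data);
            a->data     = grown;
            a->capacity = newcap;
        }
        a->data[a->size++] = value;                /* the cheap, common case */
    }

    int main(void) {
        DynArray a = { malloc(sizeof(int)), 0, 1, 0 };
        long n = 1000000;
        for (long i = 0; i < n; i++)
            push(&a, (int)i);
        /* The copies triggered by all resizes sum to less than 2n, so the
         * amortized (averaged-out) cost of a push is O(1), even though an
         * individual push may cost O(n). */
        printf("pushes = %ld, resize copies = %ld\n", n, a.copies);
        free(a.data);
        return 0;
    }

For one million pushes the resize copies come to roughly 1.05 million, below the 2n bound, which is exactly the averaging-out that the amortized bound captures.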
Computer memory is a re-usable resource from the operating system standpoint and may be released for further reallocation. Temporal resources are consumable: once spent, there is no return to that point in time. To date, computer algorithms are not capable of time travel, that is, of turning back the clock, whether the algorithm is parallel or purely sequential. Though there is a significant difference between temporal and spatial complexities from the reallocation standpoint, spatial complexity shares many of the same features with temporal complexity.

This paper considers temporal complexity towards the beginning. Commencing with the simplistic coding constructs arising frequently in real-life computation, the paper gradually transitions to the relatively complex models of computation in practice and, for each one of them, provides the upper bound on temporal resource requirements. Following the temporal complexity, the paper deals with the spatial complexity in an elaborate and easy-to-comprehend fashion. In particular, spatial complexity is important to sorting algorithms. Starting with the simple spatial instances, the paper gradually approaches the relatively complex computational models.

In Section 2, specific terms and notations used in this paper are discussed briefly. Section 3 deals with a variety of coding constructs that frequently arise in realistic computations and provides the big-oh time complexity for each one of them. Section 4 considers spatial complexity in an elaborate fashion. Section 5 explores the realistic issues in the temporal and spatial complexity models discussed in this paper. Section 6 deduces conclusions based on the models and analysis in the paper, and explores future research avenues.

2 Terminology and Notations

In this paper, the following notations are used.

n: Denotes the input or instance size.
g(n): Highest-order term in the expression for complexity, without coefficients, determining the complexity class of the algorithm.
f(n): Complexity function with an input size, n.
f'(n): Complexity function without the constant coefficients in f(n).
O(g(n)): Big-oh complexity with problem size, n, which represents a set corresponding to the complexity class, g(n).
T(n): Temporal complexity of an algorithm or a function with an input size, n.
S(n): Spatial complexity of an algorithm or a function with an input of size n.
B(n): Space-time Bandwidth Product for an input of size n.
C: Any constant value.
Space-time Tradeoff: A situation where memory usage can be reduced at the cost of slower program execution, or vice versa: the computation time can be reduced at the cost of increased memory consumption.
DSPACE(f, n): DSPACE stands for Deterministic Space Computation. In this paper, DSPACE(f, n) denotes the total number of memory cells used during the deterministic computation of f(n). Here, f indicates the algorithm or function under consideration, and n is the input of size |n|. It is often abbreviated as DSPACE(f). Hence, DSPACE(f, n) ∈ O(g(n)). However, DSPACE(f) is not defined whenever the computation of f(n) does not halt.
NSPACE(g, n): Denotes the set of all decision problems that can be solved by a Non-deterministic Turing Machine using space in the order of O(f(n)) and unlimited time. Therefore, the computation is done in O(f(n)) space by relaxing the time to unlimited. NSPACE stands for Non-deterministic Space Computation, which is the non-deterministic counterpart of DSPACE.

Please refer to [2] and [4] for the definition of big-oh complexity. The following is the basis of complexity order using big-oh, and provides the complexity class hierarchy:

log_2 n < n < n log_2 n < n^2 < ... < n^k < 2^n < C^n < n!

3 Temporal Complexity of Computational Algorithms

The aggregate time required to execute a program depends on both the compile time and the run time. Compile time does not depend on the dynamic parameters involved during computation. Therefore, the same compiled program may be executed several times, each time with an individual input of a different size. The execution time of an algorithm is the principal focus in temporal complexity analysis.

In determining the temporal complexity, there are two approaches: the operations count and the step count. The operations count is the number of additions, multiplications, comparisons and other operations used during the computation. Success in the operations count depends on the ability to identify the crucial operations that contribute most to the temporal complexity. The step count accounts for all time spent in all parts of the program/function. Since 100 additions may be 1 step, or 200 multiplications may also be 1 step, a program step is a syntactically or semantically meaningful segment of a program for which the execution time is independent of the instance characteristic.

Time complexity calculates the total time consumed for the given instance characteristic. However, the instance characteristic varies from input to input even with the same problem size. For example, the number of swaps performed by Bubble sort depends on both the array size and the array elements. With the same array size n, the array elements may be vastly different, which will result in a different number of swaps performed by the algorithm. As a result, the operation count may not be uniquely determined by the selected instance characteristic. Hence, it is necessary to have the best, the worst and the average number of operation counts, yielding the best, the worst and the average time complexities. For the vast majority of applications, the average complexities suffice.
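As a concrete illustration, not taken from the paper, the short C program below counts the swaps performed by Bubble sort on three arrays of the same size. The function name bubble_sort_swaps and the sample arrays are our own; the point is that the operation count is 0 for an already sorted array (best case), n(n-1)/2 for a reverse-sorted array (worst case), and somewhere in between for a typical input, even though n is identical in all three runs.

    #include <stdio.h>

    /* Illustrative sketch (not from the paper): Bubble sort instrumented
     * to count its swaps on inputs of the same size. */
    static long bubble_sort_swaps(int a[], int n) {
        long swaps = 0;
        for (int i = 0; i < n - 1; i++)
            for (int j = 0; j < n - 1 - i; j++)
                if (a[j] > a[j + 1]) {             /* crucial operation: a swap */
                    int tmp  = a[j];
                    a[j]     = a[j + 1];
                    a[j + 1] = tmp;
                    swaps++;
                }
        return swaps;
    }

    int main(void) {
        enum { N = 6 };
        int sorted[N]   = { 1, 2, 3, 4, 5, 6 };    /* best case:  0 swaps       */
        int reversed[N] = { 6, 5, 4, 3, 2, 1 };    /* worst case: N(N-1)/2 = 15 */
        int mixed[N]    = { 3, 1, 6, 2, 5, 4 };    /* average-like case         */

        printf("sorted   array of size %d: %ld swaps\n", N, bubble_sort_swaps(sorted,   N));
        printf("reversed array of size %d: %ld swaps\n", N, bubble_sort_swaps(reversed, N));
        printf("mixed    array of size %d: %ld swaps\n", N, bubble_sort_swaps(mixed,    N));
        return 0;
    }

Because the swap count ranges from 0 to n(n-1)/2 for the same n, no single instance characteristic determines it uniquely, which is why the best-, worst- and average-case counts are reported separately.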
Big-oh complexity is usually expressed by the fastest-growing term in the complexity function. There are four algorithmic steps in determining the big-oh complexity, and they may be employed as a computer program. The complete algorithm is described below.

Algorithm big_ohComplexity(n)
Purpose: This algorithm determines the big-oh complexity with instance size n (a single instance variable).
Input: Complexity function, f(n), on n with k terms.

1. Sequence of Statements: If c_i is the CPU time consumed to execute statement S_i, for i = 1, 2, ..., k, then the total time consumed is Σ_{i=1}^{k} c_i = C, which is a constant. Applying the algorithm to determine the big-oh complexity, the complexity order is O(1).

2. Simple Loops: The following is a prototype for loop found in many pro-