Toward a Model for Backtracking and Dynamic Programming

Total Pages: 16

File Type: pdf, Size: 1020 KB

Toward a Model for Backtracking and Dynamic Programming

Michael Alekhnovich (Institute for Advanced Study, Princeton, US; supported by CCR grant CCR-0324906)
Allan Borodin, Avner Magen, Toniann Pitassi (Department of Computer Science, University of Toronto; bor,avner,[email protected])
Joshua Buresh-Oppenheim, Russell Impagliazzo (CSE Department, University of California San Diego; bureshop,[email protected]; research partially supported by NSF Award CCR-0098197; some of this research was performed at the Institute for Advanced Study in Princeton, NJ, supported by the State of New Jersey)

Proceedings of the Twentieth Annual IEEE Conference on Computational Complexity (CCC'05), 1093-0159/05 $20.00 © 2005 IEEE

Abstract

We consider a model (BT) for backtracking algorithms. Our model generalizes both the priority model of Borodin, Nielsen and Rackoff, as well as a simple dynamic programming model due to Woeginger, and hence spans a wide spectrum of algorithms. After witnessing the strength of the model, we then show its limitations by providing lower bounds for algorithms in this model for several classical problems such as interval scheduling, knapsack and satisfiability.

1. Introduction

Proving unconditional lower bounds for computing explicit functions remains one of the most challenging problems in computational complexity. Since 1949, when Shannon showed that a random function has large circuit complexity [29], little progress has been made toward proving lower bounds for the size of unrestricted Boolean circuits that compute explicit functions. One explanation for this phenomenon was given by the Natural Proofs approach of Razborov and Rudich [27], who showed that most of the existing lower bound techniques are incapable of proving such lower bounds. One way to investigate the complexity of explicit functions in spite of these difficulties is to study reductions between problems, e.g. to identify a canonical problem (like an NP-complete problem) and show that a given computational task is "not easier" than solving this problem. Another approach would be to restrict the model so as to avoid the inherent Natural Proofs limitations, while preserving a model strong enough to incorporate many "natural" algorithms.

This second direction has recently attracted the attention of many researchers. Khanna, Motwani, Sudan and Vazirani [25] formalize various types of local search paradigms, and in doing so, provide a more precise understanding of local search algorithms. Woeginger [30] defines a class of simple dynamic programming algorithms and provides conditions for when a dynamic programming solution can be used to derive an FPTAS for an optimization problem. Borodin, Nielsen and Rackoff [11] introduce priority algorithms as a model of greedy-like algorithms. Arora, Bollobás and Lovász [8] study wide classes of LP formulations, and prove integrality gaps for vertex cover within these classes. The most popular methods for solving SAT are DPLL algorithms, a family of backtracking algorithms whose complexity has been characterized in terms of resolution proof complexity (see for example [16, 17, 14, 21]). Finally, Chvátal [13] proves a lower bound for Knapsack in an algorithmic model that involves elements of branch-and-bound and dynamic programming.

We continue in this direction by presenting a hierarchy of models for backtracking algorithms for general search and optimization problems. Many well-known algorithms and algorithmic techniques can be simulated within these models, both those that are usually considered backtracking and some that would normally be classified as greedy or dynamic programming. We prove several upper and lower bounds on the capabilities of algorithms in this model, in some cases proving that the known algorithms are essentially the best possible within the model.
The starting point for the BT model is the priority algorithm model [11]. We assume that the input is represented as a set of data items, where each data item is a small piece of information about the problem; it may be a time interval representing a job to be scheduled, a vertex with its list of neighbours in a graph, or a propositional variable with all clauses containing it in a CNF. Priority algorithms consider one item at a time and maintain a single partial solution (based on the items considered thus far) that they continue to extend. What is the order in which items are considered? A fixed order algorithm orders the items up front according to some criterion (e.g., in the case of knapsack, sort the items by their weight-to-value ratio). A more general (adaptive order) approach would be to change the ordering according to the items seen so far. For example, in the greedy set cover algorithm, in every iteration we order the sets according to the number of yet uncovered elements they contain (the distinction between fixed and adaptive orderings has also been recently studied in [19]). Rather than imposing complexity constraints on the allowable orders, we simply require them to be oblivious to the actual content of those input items not yet considered.
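To make the single-pass nature of priority algorithms concrete, here is a minimal sketch (ours, not the paper's) of a fixed order priority algorithm for knapsack: the items are ordered once, up front, by profit-to-weight ratio, and each item receives one irrevocable accept/reject decision.

from typing import List, Tuple

def greedy_knapsack(items: List[Tuple[float, float]], capacity: float):
    """Fixed order priority algorithm: one pass, one partial solution.

    items: list of (weight, profit) data items.
    Returns (total profit, indices of accepted items).
    """
    # Fixed ordering chosen before any item is examined: best profit/weight first.
    order = sorted(range(len(items)), key=lambda i: items[i][1] / items[i][0], reverse=True)
    remaining, profit, accepted = capacity, 0.0, []
    for i in order:
        weight, value = items[i]
        if weight <= remaining:      # irrevocable decision: accept
            remaining -= weight
            profit += value
            accepted.append(i)
        # otherwise, irrevocable decision: reject
    return profit, accepted

print(greedy_knapsack([(3, 10), (4, 4), (2, 7)], capacity=5))  # (17.0, [2, 0])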
By introducing branching, a backtracking algorithm (BT) can pursue a number of different partial solutions. Given a specific input, a BT algorithm then induces a computation tree. Of course, it is possible to solve any properly formulated search or optimization problem in this manner: simply branch on every possible decision for every input item. In other words, there is a tradeoff between the quality of a solution and the complexity of the BT-algorithm. We view the maximum width of a BT program as the number of partial solutions that need to be maintained in parallel in the worst case. As we will see, this extension allows us to model the simple dynamic programming framework of Woeginger [30]. This branching extension can be applied to either the fixed or adaptive order (fixed BT and adaptive BT), and in either case each branch (corresponding to a partial solution) considers the items in the same order. For example, various DP-based optimal and approximate algorithms for the knapsack problem can be seen as fixed or adaptive BT algorithms. In order to model the power of backtracking programs (say, as in DPLL algorithms for SAT)¹ we need to extend the model further. In fully adaptive BT we allow each branch to choose its own ordering of input items. Furthermore, we need to allow algorithms to prioritize (using a depth first traversal of the induced computation tree) the order in which the different partial solutions are pursued. In this setting, we can consider the number of nodes traversed in the computation tree before a solution is found (which may be smaller than the tree's width).

Our results. The problems we consider are all well-studied; namely, interval scheduling, knapsack, and satisfiability. For m-machine interval scheduling we show a tight Ω(n^m)-width lower bound (for optimality) in the adaptive BT model, an inapproximability result in the fixed BT model, and an approximability separation between width-1 BT and width-2 BT in the adaptive model. For knapsack, we show an exponential lower bound (for optimality) on the width of adaptive BT algorithms, and for achieving an FPTAS in the adaptive model we show upper and lower bounds polynomial in 1/ε. For SAT, we show that 2-SAT is solved by a linear-width adaptive BT, but needs exponential width for any fixed order BT, and also that MAX2SAT cannot be efficiently approximated by any fixed BT algorithm. We then show that 3-SAT requires exponential width and exponential depth first size in the fully adaptive BT model. This lower bound in turn gives us an exponential bound on the width of fully adaptive BT algorithms for knapsack by "BT reduction" (but gives no inapproximability result). Wherever proofs are omitted, please see the full version of the paper [3].

2. The Backtracking Model and its Relatives

Let D be an arbitrary data domain that contains objects called data items. Let H be a set representing the set of allowable decisions for a data item. For example, for the knapsack problem, a natural choice for D would be the set of all pairs (x, p) where x is a weight and p is a profit; the natural choice for H is {0, 1}, where 0 is the decision to reject an item and 1 is the decision to accept an item.

A backtracking search/optimization problem P is specified by a pair (D_P, f_P), where D_P is the underlying data domain and f_P is a family of objective functions f_P^n(D_1, ..., D_n, a_1, ..., a_n) → R, where D_1, ..., D_n is a set of n variables that range over D, and a_1, ..., a_n is a set of n variables that range over H. On input I = D_1, ..., D_n ∈ D^n, the goal is to assign each a_i a value in H so as to maximize (or minimize) f_P^n. A search problem is a special case where f_P^n outputs either 1 or 0.

For any domain S we write O(S) for the set of all orderings of the elements of S. We are now ready to define a backtracking algorithm for a backtracking problem.

Definition 1. A backtracking algorithm A for problem P = (D, f_P^n) consists of the ordering functions r_k^A : D^k × H^k → O(D) and the choice functions c_k^A : D^(k+1) × H^k → O(H ∪ {⊥}), where 0 ≤ k ≤ n − 1.²

We separate the following three classes of BT algorithms:

• Fixed algorithms: r_k^A does not depend upon any of its arguments.

• Adaptive algorithms: r_k^A depends on D_1, ..., D_k but not on a_1, ..., a_k.

• Fully adaptive algorithms: r_k^A depends on both D_1, ..., D_k and a_1, ..., a_k.

¹ The BT model encompasses DPLL in many situations where access to the input is limited.

² All of our lower bound results will apply to non-uniform BT algorithms that know n, the number of input items, and hence more formally, …
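To illustrate the notion of width (our sketch; the function name and the dominance-pruning rule are ours, not the paper's): the classical pseudo-polynomial knapsack DP can be read as a fixed order BT that branches on accept/reject for every item and keeps only non-dominated partial solutions. The number of partial solutions alive after each item is exactly the width of the induced computation.

from typing import List, Tuple

def knapsack_bt_width(items: List[Tuple[int, int]], capacity: int):
    """Fixed order BT view of the knapsack DP: branch, then prune dominated states.

    A state (weight, profit) is dominated if another state has
    weight <= it and profit >= it. Returns (optimal profit, max width reached).
    """
    states = [(0, 0)]                      # one partial solution: the empty one
    max_width = 1
    for weight, profit in items:           # every branch sees items in the same order
        # branch: reject (state unchanged) or accept (if it still fits)
        extended = states + [(w + weight, p + profit)
                             for (w, p) in states if w + weight <= capacity]
        # prune dominated partial solutions: sweep by weight, keep profit increases
        extended.sort(key=lambda s: (s[0], -s[1]))
        pruned, best = [], -1
        for w, p in extended:
            if p > best:
                pruned.append((w, p))
                best = p
        states = pruned
        max_width = max(max_width, len(states))
    return max(p for _, p in states), max_width

print(knapsack_bt_width([(3, 10), (4, 4), (2, 7)], capacity=5))  # (17, 4)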
Recommended publications
  • Metaheuristics
    METAHEURISTICS
    Kenneth Sörensen (University of Antwerp, Belgium) and Fred Glover (University of Colorado and OptTek Systems, Inc., USA)
    1 Definition
    A metaheuristic is a high-level problem-independent algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms (Sörensen and Glover, to appear). Notable examples of metaheuristics include genetic/evolutionary algorithms, tabu search, simulated annealing, and ant colony optimization, although many more exist. A problem-specific implementation of a heuristic optimization algorithm according to the guidelines expressed in a metaheuristic framework is also referred to as a metaheuristic. The term was coined by Glover (1986) and combines the Greek prefix meta- (metá, beyond, in the sense of high-level) with heuristic (from the Greek heuriskein or euriskein, to search). Metaheuristic algorithms, i.e., optimization methods designed according to the strategies laid out in a metaheuristic framework, are, as the name suggests, always heuristic in nature. This fact distinguishes them from exact methods, which do come with a proof that the optimal solution will be found in a finite (although often prohibitively large) amount of time. Metaheuristics are therefore developed specifically to find a solution that is "good enough" in a computing time that is "small enough". As a result, they are not subject to combinatorial explosion – the phenomenon where the computing time required to find the optimal solution of NP-hard problems increases as an exponential function of the problem size. Metaheuristics have been demonstrated by the scientific community to be a viable, and often superior, alternative to more traditional (exact) methods of mixed-integer optimization such as branch and bound and dynamic programming.
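    As a concrete instance of such a framework, here is a minimal simulated annealing skeleton (our illustrative sketch; the objective, neighbour move, and cooling schedule are placeholder choices, not taken from the text above):

    import math, random

    def simulated_annealing(objective, neighbour, x0, t0=1.0, cooling=0.995, steps=10000):
        # Metaheuristic skeleton: the framework is problem-independent; the
        # objective and neighbour functions carry all problem-specific knowledge.
        x, fx = x0, objective(x0)
        best, fbest = x, fx
        t = t0
        for _ in range(steps):
            y = neighbour(x)
            fy = objective(y)
            # Always accept improvements; accept worsenings with prob. e^((fx-fy)/t).
            if fy <= fx or random.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling                 # geometric cooling schedule
        return best, fbest

    # Toy usage: minimize a 1-D quadratic by random local perturbation.
    sol, val = simulated_annealing(lambda x: (x - 3.0) ** 2,
                                   lambda x: x + random.uniform(-0.5, 0.5), x0=0.0)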
  • Lecture 4 Dynamic Programming
    Lecture 4: Dynamic Programming. Last update: Jan 19, 2021. References: Algorithms, Jeff Erickson, Chapter 3; Algorithms, Gopal Pandurangan, Chapter 6.
    Dynamic Programming. Backtracking is incredibly powerful in solving all kinds of hard problems, but it can often be very slow; usually exponential. Example: the Fibonacci numbers are defined by the recurrence F_n = 0 if n = 0, F_n = 1 if n = 1, and F_n = F_{n-1} + F_{n-2} otherwise. A direct translation into a recursive program to compute the Fibonacci number is:

    RecFib(n):
      if n = 0: return 0
      if n = 1: return 1
      return RecFib(n-1) + RecFib(n-2)

    Fibonacci Number. The recursive program has horrible time complexity. How bad? Let's try to compute it. Denote by T(n) the time complexity of computing RecFib(n). Based on the recursion, we have the recurrence T(n) = T(n-1) + T(n-2) + 1, with T(0) = T(1) = 1. Solving this recurrence, we get T(n) = O(φ^n), where φ = (√5 + 1)/2 ≈ 1.618. So the RecFib(n) program runs in exponential time.
    RecFib Recursion Tree. Intuitively, why does RecFib() run exponentially slowly? Problem: redundant computation! How about memoizing the intermediate computation results to avoid recomputation?
    Fib: Memoization. To optimize the performance of RecFib, we can memoize the intermediate F_n into some kind of cache, and look it up when we need it again:

    MemFib(n):
      if n = 0 or n = 1: return n
      if F[n] is undefined: F[n] = MemFib(n-1) + MemFib(n-2)
      return F[n]

    How much does it improve upon RecFib()? Assuming accessing F[n] takes constant time, at most n additions will be performed (we never recompute).
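    A runnable version of the memoized scheme above (our translation to Python; the dictionary plays the role of the F[] cache, and lru_cache is the standard-library shortcut):

    from functools import lru_cache

    def mem_fib(n, cache={0: 0, 1: 1}):
        # Look the value up first; compute and store it only once.
        # (The mutable default argument deliberately persists across calls.)
        if n not in cache:
            cache[n] = mem_fib(n - 1) + mem_fib(n - 2)
        return cache[n]

    # Equivalent, using the standard-library memoization decorator:
    @lru_cache(maxsize=None)
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    assert mem_fib(30) == fib(30) == 832040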
  • Exhaustive Recursion and Backtracking
    CS106B Handout #19, J Zelenski, Feb 1, 2008. Exhaustive recursion and backtracking. In some recursive functions, such as binary search or reversing a file, each recursive call makes just one recursive call. The "tree" of calls forms a linear line from the initial call down to the base case. In such cases, the performance of the overall algorithm is dependent on how deep the function stack gets, which is determined by how quickly we progress to the base case. For reversing a file, the stack depth is equal to the size of the input file, since we move one closer to the empty-file base case at each level. Binary search bottoms out more quickly, by dividing the remaining input in half at each level of the recursion. Both of these can be done relatively efficiently. Now consider a recursive function such as subsets or permutations that makes not just one recursive call, but several. The tree of function calls has multiple branches at each level, which in turn have further branches, and so on down to the base case. Because of the multiplicative factors being carried down the tree, the number of calls can grow dramatically as the recursion goes deeper. Thus, these exhaustive recursion algorithms have the potential to be very expensive. Often the different recursive calls made at each level represent a decision point, where we have choices such as what letter to choose next or what turn to make when reading a map. Might there be situations where we can save some time by focusing on the most promising options, without committing to exploring them all? In some contexts, we have no choice but to exhaustively examine all possibilities, such as when trying to find some globally optimal result. But what if we are interested in finding any solution, whichever one works out first? At each decision point, we can choose one of the available options, and sally forth, hoping it works out.
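    The "return the first solution that works out" pattern looks like this in code (our sketch, using subset sum as a stand-in decision problem; the names are ours):

    def find_subset(nums, target, chosen=None, start=0):
        # Exhaustive recursion with early exit: return the first subset of
        # nums[start:] that, together with `chosen`, sums to `target`.
        if chosen is None:
            chosen = []
        if target == 0:
            return chosen                    # success: propagate the solution up
        for i in range(start, len(nums)):    # each loop iteration is one choice
            if nums[i] <= target:            # prune branches that overshoot
                result = find_subset(nums, target - nums[i], chosen + [nums[i]], i + 1)
                if result is not None:       # first success wins; skip the rest
                    return result
        return None                          # this decision point is exhausted

    print(find_subset([8, 6, 7, 5, 3], 16))  # [8, 5, 3]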
  • Dynamic Programming via Static Incrementalization
    Dynamic Programming via Static Incrementalization. Yanhong A. Liu and Scott D. Stoller. Abstract: Dynamic programming is an important algorithm design technique. It is used for solving problems whose solutions involve recursively solving subproblems that share subsubproblems. While a straightforward recursive program solves common subsubproblems repeatedly and often takes exponential time, a dynamic programming algorithm solves every subsubproblem just once, saves the result, reuses it when the subsubproblem is encountered again, and takes polynomial time. This paper describes a systematic method for transforming programs written as straightforward recursions into programs that use dynamic programming. The method extends the original program to cache all possibly computed values, incrementalizes the extended program with respect to an input increment to use and maintain all cached results, prunes out cached results that are not used in the incremental computation, and uses the resulting incremental program to form an optimized new program. Incrementalization statically exploits semantics of both control structures and data structures and maintains as invariants equalities characterizing cached results. The principle underlying incrementalization is general for achieving drastic program speedups. Compared with previous methods that perform memoization or tabulation, the method based on incrementalization is more powerful and systematic. It has been implemented and applied to numerous problems and succeeded on all of them. 1 Introduction. Dynamic programming is an important technique for designing efficient algorithms [2, 44, 13]. It is used for problems whose solutions involve recursively solving subproblems that overlap.
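    In the spirit of the transformation described above, here is a hand-incrementalized Fibonacci (our sketch, not the paper's system output): the pair (F(k-1), F(k)) is the pruned cache, maintained as an invariant across the input increment k -> k+1.

    def fib_incremental(n):
        # Invariant: (prev, cur) = (F[k-1], F[k]) after k iterations.
        # The two cached values are exactly what the incremental step needs;
        # everything else a full memo table would hold has been pruned away.
        prev, cur = 1, 0                    # F[-1] = 1, F[0] = 0 bootstraps the invariant
        for _ in range(n):
            prev, cur = cur, prev + cur     # increment: advance the invariant by one
        return cur

    assert [fib_incremental(i) for i in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]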
  • Context-Free Grammar and Backtrack Parsing
    Context-free Grammar and Backtrack Parsing. Martin Kay, Stanford University and University of the Saarland.
    Problems with Regular Language: Is English a regular language? Bad question! We do not even know what English is! Two eggs and bacon make(s) a big breakfast. Can you slide me the salt? He didn't ought to do that. But—No! I put the wine you brought in the fridge. I put the wine you brought for Sandy in the fridge. Should we bring the wine you put in the fridge out now? You said you thought nobody had the right to claim that they were above the law, bracketed: [You said you thought [nobody had the right [to claim that [they were above the law]]]].
    Is English morphology a regular language? Bad question! We do not even know what English morphology is! They sell collectables of all sorts. This concerns unredecontaminatability. This really is an untiable knot. But—Probably! (Not sure about Swahili, though.)
    Context-free Grammar: Nonterminal symbols ~ grammatical categories. Terminal symbols ~ words. Productions ~ (unordered) (rewriting) rules. Distinguished symbol. Not all that important: terminals and nonterminals are disjoint; there is a distinguished symbol.
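    A backtracking recognizer for a context-free grammar fits in a few lines (our sketch with a toy grammar, not Kay's code; it assumes the grammar has no left recursion):

    def parses(grammar, symbol, words):
        # Backtracking recognizer: return the list of word-list remainders
        # left over after `symbol` derives a prefix of `words`.
        if symbol not in grammar:                  # terminal: must match one word
            return [words[1:]] if words and words[0] == symbol else []
        remainders = []
        for rhs in grammar[symbol]:                # try every production...
            rests = [words]
            for sym in rhs:                        # ...matching its symbols in turn
                rests = [r2 for r in rests for r2 in parses(grammar, sym, r)]
            remainders.extend(rests)               # failed branches simply die out
        return remainders

    toy = {"S": [["NP", "VP"]],
           "NP": [["they"], ["the", "salt"]],
           "VP": [["sell", "NP"], ["slide"]]}
    # Accepted iff some derivation consumes the whole sentence (empty remainder).
    print([] in parses(toy, "S", "they sell the salt".split()))  # True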
  • Automatic Code Generation Using Dynamic Programming Techniques
    Automatic Code Generation using Dynamic Programming Techniques. Master's thesis (MASTERARBEIT) submitted for the academic degree Diplom-Ingenieur in the master's programme INFORMATIK. Submitted by: Igor Böhm, 0155477. Completed at: Institut für System Software. Advisor: o.Univ.-Prof. Dipl.-Ing. Dr. Dr.h.c. Hanspeter Mössenböck. Linz, October 2007.
    Abstract: Building compiler back ends from declarative specifications that map tree-structured intermediate representations onto target machine code is the topic of this thesis. Although many tools and approaches have been devised to tackle the problem of automated code generation, there is still room for improvement. In this context we present Hburg, an implementation of a code generator generator that emits compiler back ends from concise tree pattern specifications written in our code generator description language. The language features attribute grammar style specifications and allows for great flexibility with respect to the placement of semantic actions. Our main contribution is to show that these language features can be integrated into automatically generated code generators that perform optimal instruction selection based on tree pattern matching combined with dynamic programming. In order to substantiate claims about the usefulness of our language we provide two complete examples that demonstrate how to specify code generators for Risc and Cisc architectures.
    Abstract (translated from the German Kurzfassung): This thesis describes Hburg, a tool that automatically generates a code generator for a target machine from a specification of a program's abstract syntax tree and a specification of the desired target machine. Mappings between abstract syntax trees and a target machine are defined by tree patterns. For this purpose we have developed a declarative description language that makes it possible to attach attributes to tree patterns, so that the patterns can, in effect, be parameterized.
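    The combination the abstract describes, tree pattern matching plus dynamic programming for instruction selection, can be sketched by hand (our toy, not Hburg's output; the IR shape and the "madd" pattern are invented for illustration):

    # Toy IR: ("+", l, r) and ("*", l, r) tuples over integer leaves.
    # Hypothetical tiles, each costing 1: load, add, mul, and a fused
    # multiply-add "madd" that covers a "+" whose left child is a "*".

    def best_tile(node):
        # DP over the tree: cost of the cheapest tiling of `node`,
        # plus the tile chosen at the root of the subtree.
        if isinstance(node, int):
            return 1, "load"
        op, l, r = node
        best = best_tile(l)[0] + best_tile(r)[0] + 1   # generic add/mul tile
        choice = "add" if op == "+" else "mul"
        if op == "+" and isinstance(l, tuple) and l[0] == "*":
            # madd tile covers the + and the * in a single instruction
            alt = best_tile(l[1])[0] + best_tile(l[2])[0] + best_tile(r)[0] + 1
            if alt < best:
                best, choice = alt, "madd"
        return best, choice

    print(best_tile(("+", ("*", 2, 3), 4)))   # (4, 'madd'): 3 loads + 1 madd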
  • Lecture #3: Dynamic Programming II
    15-451/651: Design & Analysis of Algorithms. September 3, 2019. Lecture #3: Dynamic Programming II (last changed: August 30, 2019). In this lecture we continue our discussion of dynamic programming, focusing on using it for a variety of path-finding problems in graphs. Topics in this lecture include: • The Bellman-Ford algorithm for single-source (or single-sink) shortest paths. • Matrix-product algorithms for all-pairs shortest paths. • Algorithms for all-pairs shortest paths, including Floyd-Warshall and Johnson. • Dynamic programming for the Travelling Salesperson Problem (TSP). 1 Introduction. As a reminder of basic terminology: a graph is a set of nodes or vertices, with edges between some of the nodes. We will use V to denote the set of vertices and E to denote the set of edges. If there is an edge between two vertices, we call them neighbors. The degree of a vertex is the number of neighbors it has. Unless otherwise specified, we will not allow self-loops or multi-edges (multiple edges between the same pair of nodes). As is standard when discussing graphs, we will use n = |V| and m = |E|, and we will let V = {1, ..., n}. The above describes an undirected graph. In a directed graph, each edge now has a direction (and as we said earlier, we will sometimes call the edges in a directed graph arcs). For each node, we can now talk about out-neighbors (and out-degree) and in-neighbors (and in-degree). In a directed graph you may have both an edge from u to v and an edge from v to u.
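    One of the all-pairs algorithms listed above, Floyd-Warshall, is short enough to show in full (our sketch; the three-vertex graph is a made-up example):

    INF = float("inf")

    def floyd_warshall(n, edges):
        # dist[u][v] starts as the direct edge weight and is relaxed by allowing
        # intermediate vertices 0..k in turn: the classic all-pairs DP.
        dist = [[0 if u == v else INF for v in range(n)] for u in range(n)]
        for u, v, w in edges:
            dist[u][v] = min(dist[u][v], w)
        for k in range(n):
            for u in range(n):
                for v in range(n):
                    if dist[u][k] + dist[k][v] < dist[u][v]:
                        dist[u][v] = dist[u][k] + dist[k][v]
        return dist

    print(floyd_warshall(3, [(0, 1, 4), (1, 2, -2), (0, 2, 5)])[0][2])  # 2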
  • Backtracking / Branch-And-Bound
    Backtracking / Branch-and-Bound. Optimisation problems are problems that have several valid solutions; the challenge is to find an optimal solution. How optimal is defined depends on the particular problem. Examples of optimisation problems are: Traveling Salesman Problem (TSP). We are given a set of n cities, with the distances between all cities. A traveling salesman, who is currently staying in one of the cities, wants to visit all other cities and then return to his starting point, and he is wondering how to do this. Any tour of all cities would be a valid solution to his problem, but our traveling salesman does not want to waste time: he wants to find a tour that visits all cities and has the smallest possible length of all such tours. So in this case, optimal means: having the smallest possible length. 1-Dimensional Clustering. We are given a sorted list x_1, ..., x_n of n numbers, and an integer k between 1 and n. The problem is to divide the numbers into k subsets of consecutive numbers (clusters) in the best possible way. A valid solution is now a division into k clusters, and an optimal solution is one that has the nicest clusters. We will define this problem more precisely later. Set Partition. We are given a set V of n objects, each having a certain cost, and we want to divide these objects among two people in the fairest possible way. In other words, we are looking for a subdivision of V into two subsets V_1 and V_2 such that Σ_{v∈V_1} cost(v) − Σ_{v∈V_2} cost(v) is as small as possible.
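    A small branch-and-bound for the set partition problem just described (our sketch; the bound uses the simple observation that the objects not yet assigned can shrink the current gap by at most their total cost):

    def partition_bnb(costs):
        # Minimize |sum(V1) - sum(V2)| by deciding each object in turn.
        best = [sum(costs)]                     # incumbent: everything on one side

        def branch(i, diff):
            if i == len(costs):
                best[0] = min(best[0], abs(diff))
                return
            remaining = sum(costs[i:])
            # Bound: even moving all remaining weight to the lighter side
            # cannot shrink the gap below abs(diff) - remaining.
            if abs(diff) - remaining >= best[0]:
                return
            branch(i + 1, diff + costs[i])      # object i goes to V1
            branch(i + 1, diff - costs[i])      # object i goes to V2

        branch(0, 0)
        return best[0]

    print(partition_bnb([8, 7, 6, 5, 4]))  # 0, e.g. {8, 7} vs {6, 5, 4}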
  • Language and Compiler Support for Dynamic Code Generation by Massimiliano A. Poletto
    Language and Compiler Support for Dynamic Code Generation by Massimiliano A. Poletto. S.B., Massachusetts Institute of Technology (1995); M.Eng., Massachusetts Institute of Technology (1995). Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY, September 1999. © Massachusetts Institute of Technology 1999. All rights reserved. Certified by: M. Frans Kaashoek, Associate Professor of Electrical Engineering and Computer Science, Thesis Supervisor. Accepted by: Arthur C. Smith, Chairman, Departmental Committee on Graduate Students.
    Abstract: Dynamic code generation, also called run-time code generation or dynamic compilation, is the creation of executable code for an application while that application is running. Dynamic compilation can significantly improve the performance of software by giving the compiler access to run-time information that is not available to a traditional static compiler. A well-designed programming interface to dynamic compilation can also simplify the creation of important classes of computer programs. Until recently, however, no system combined efficient dynamic generation of high-performance code with a powerful and portable language interface. This thesis describes a system that meets these requirements, and discusses several applications of dynamic compilation.
  • Bellman-Ford Algorithm
    The many cases of finding shortest paths. Dynamic programming: Bellman-Ford algorithm. Tyler Moore, CSE 3353, SMU, Dallas, TX. Lecture 18. We've already seen how to calculate the shortest path in an unweighted graph (BFS traversal). We'll now study how to compute the shortest path in different circumstances for weighted graphs: (1) single-source shortest path on a weighted DAG; (2) single-source shortest path on a weighted graph with nonnegative weights (Dijkstra's algorithm); (3) single-source shortest path on a weighted graph including negative weights (Bellman-Ford algorithm). Some slides created by or adapted from Dr. Kevin Wayne; for more information see http://www.cs.princeton.edu/~wayne/kleinberg-tardos. Some code reused from Python Algorithms by Magnus Lie Hetland.
    Shortest path problem: given a digraph G = (V, E) with arbitrary edge weights or costs c_vw, find the cheapest path from node s to node t. (Further slide topics: sequence alignment, Hirschberg's algorithm, Bellman-Ford, distance vector protocols, negative cycles in a digraph.) Dijkstra's algorithm can fail if there are negative edge weights, and reweighting (adding a constant to every edge weight) can also fail. Definition: a negative cycle is a directed cycle W such that the sum of its edge weights is negative: c(W) = Σ_{e∈W} c_e < 0. [Example graphs and edge-weight diagrams from the original slides are omitted.]
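    A compact Bellman-Ford of the kind the slides cover (our sketch, not Hetland's exact code): relax every edge |V| - 1 times, then make one more pass to detect a reachable negative cycle.

    INF = float("inf")

    def bellman_ford(n, edges, source):
        # edges: list of (u, v, weight); vertices are 0..n-1.
        dist = [INF] * n
        dist[source] = 0
        for _ in range(n - 1):                 # n-1 rounds of relaxing every edge
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
        for u, v, w in edges:                  # one extra pass: any improvement
            if dist[u] + w < dist[v]:          # means a reachable negative cycle
                raise ValueError("negative cycle detected")
        return dist

    print(bellman_ford(4, [(0, 1, 1), (1, 2, -3), (2, 3, 2), (0, 3, 5)], 0))
    # [0, 1, -2, 0]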
  • Notes on Dynamic Programming Algorithms & Data Structures
    Notes on Dynamic Programming. Algorithms & Data Structures. Dr Mary Cryan. These notes are to accompany lectures 10 and 11 of ADS. 1 Introduction. The technique of Dynamic Programming (DP) could be described as "recursion turned upside-down". However, it is not usually used as an alternative to recursion. Rather, dynamic programming is used (if possible) for cases when a recurrence for an algorithmic problem will not run in polynomial time if it is implemented recursively. So in fact Dynamic Programming is a more powerful technique than basic Divide-and-Conquer. Designing, analysing and implementing a dynamic programming algorithm is (like Divide-and-Conquer) highly problem specific. However, there are particular features shared by most dynamic programming algorithms, and we describe them below on page 2 (dp1(a), dp1(b), dp2, dp3). It will be helpful to carry along an introductory example problem to illustrate these features. The introductory problem will be the problem of computing the nth Fibonacci number, where F(n) is defined as follows: F_0 = 0; F_1 = 1; F_n = F_{n-1} + F_{n-2} (for n ≥ 2). Since Fibonacci numbers are defined recursively, the definition suggests a very natural recursive algorithm to compute F(n):

    Algorithm REC-FIB(n)
    1. if n = 0 then
    2.   return 0
    3. else if n = 1 then
    4.   return 1
    5. else
    6.   return REC-FIB(n - 1) + REC-FIB(n - 2)

    First note that were we to implement REC-FIB, we would not be able to use the Master Theorem to analyse its running time. The recurrence for the running time T_REC-FIB(n) that we would get would be

    T_REC-FIB(n) = T_REC-FIB(n - 1) + T_REC-FIB(n - 2) + Θ(1),   (1)

    where the Θ(1) comes from the fact that, at most, we have to do enough work to add two values together (on line 6).
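    "Recursion turned upside-down" in code (our sketch): the same recurrence evaluated bottom-up, filling a table so that each subproblem is solved exactly once.

    def dp_fib(n):
        # Bottom-up DP: table F[0..n] filled in increasing order, so both
        # subproblems F[i-1] and F[i-2] are ready when F[i] is computed.
        if n < 2:
            return n
        F = [0] * (n + 1)
        F[1] = 1
        for i in range(2, n + 1):
            F[i] = F[i - 1] + F[i - 2]
        return F[n]

    assert dp_fib(10) == 55   # n+1 table entries, one addition each: Theta(n)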
  • Parallel Technique for the Metaheuristic Algorithms Using Devoted Local Search and Manipulating the Solutions Space
    applied sciences. Article: Parallel Technique for the Metaheuristic Algorithms Using Devoted Local Search and Manipulating the Solutions Space. Dawid Połap 1,*, Karolina Kęsik 1, Marcin Woźniak 1 and Robertas Damaševičius 2. 1 Institute of Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland; [email protected] (K.K.); [email protected] (M.W.). 2 Department of Software Engineering, Kaunas University of Technology, Studentu 50, LT-51368 Kaunas, Lithuania; [email protected]. * Correspondence: [email protected]. Received: 16 December 2017; Accepted: 13 February 2018; Published: 16 February 2018. Abstract: The increasing exploration of alternative methods for solving optimization problems means that parallelization and modification of existing algorithms are necessary. Obtaining the right solution using a metaheuristic algorithm may require a long running time or a large number of iterations or individuals in a population. The higher the number, the longer the running time. In order to minimize not only the time but also the values of these parameters, we suggest three propositions to increase the efficiency of classical methods. The first is to use a method of searching through the neighborhood in order to minimize the exploration of the solution space. Moreover, task distribution between threads and CPU cores can affect the speed of the algorithm and therefore make it work more efficiently. The second proposition involves manipulating the solution space to minimize the number of calculations. The third proposition is the combination of the previous two. All propositions have been described, tested, and analyzed using various test functions.
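    The thread/core distribution idea reads naturally as independent local searches run in parallel (our sketch; the sphere test function and all parameter values are our stand-ins, not the paper's method):

    import random
    from concurrent.futures import ProcessPoolExecutor

    def sphere(x):
        # Classic optimization test function: global minimum 0 at the origin.
        return sum(v * v for v in x)

    def local_search(seed, dim=5, steps=2000, step_size=0.1):
        # One devoted local search: random start, keep improving neighbours.
        rng = random.Random(seed)
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        fx = sphere(x)
        for _ in range(steps):
            y = [v + rng.uniform(-step_size, step_size) for v in x]
            fy = sphere(y)
            if fy < fx:                      # accept only improvements
                x, fx = y, fy
        return fx, x

    if __name__ == "__main__":
        # Distribute independent searches across CPU cores; keep the best result.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(local_search, range(8)))
        print(min(results)[0])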