Introduction to the Theory of Complexity

Total Pages: 16

File Type: PDF, Size: 1020 KB

Introduction to the Theory of Complexity

Daniel Pierre Bovet
Pierluigi Crescenzi

The information in this book is distributed on an “As is” basis, without warranty. Although every precaution has been taken in the preparation of this work, the authors shall not have any liability to any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the information contained in this work.

First electronic edition: June 2006

Contents

1 Mathematical preliminaries
  1.1 Sets, relations and functions
  1.2 Set cardinality
  1.3 Three proof techniques
  1.4 Graphs
  1.5 Alphabets, words and languages
2 Elements of computability theory
  2.1 Turing machines
  2.2 Machines and languages
  2.3 Reducibility between languages
3 Complexity classes
  3.1 Dynamic complexity measures
  3.2 Classes of languages
  3.3 Decision problems and languages
  3.4 Time-complexity classes
  3.5 The pseudo-Pascal language
4 The class P
  4.1 The class P
  4.2 The robustness of the class P
  4.3 Polynomial-time reducibility
  4.4 Uniform diagonalization
5 The class NP
  5.1 The class NP
  5.2 NP-complete languages
  5.3 NP-intermediate languages
  5.4 Computing and verifying a function
  5.5 Relativization of the P ≠ NP conjecture
6 The complexity of optimization problems
  6.1 Optimization problems
  6.2 Underlying languages
  6.3 Optimum measure versus optimum solution
  6.4 Approximability
  6.5 Reducibility and optimization problems
7 Beyond NP
  7.1 The class coNP
  7.2 The Boolean hierarchy
  7.3 The polynomial hierarchy
  7.4 Exponential-time complexity classes
8 Space-complexity classes
  8.1 Space-complexity classes
  8.2 Relations between time and space
  8.3 Nondeterminism, determinism and space
  8.4 Nondeterminism, complement and space
  8.5 Logarithmic space
  8.6 Polynomial space
9 Probabilistic algorithms and complexity classes
  9.1 Some probabilistic algorithms
  9.2 Probabilistic Turing machines
  9.3 Probabilistic complexity classes
10 Interactive proof systems
  10.1 Interactive proof systems
  10.2 The power of IP
  10.3 Probabilistic checking of proofs
11 Models of parallel computers
  11.1 Circuits
  11.2 The PRAM model
  11.3 PRAM memory conflicts
  11.4 A comparison of the PRAM models
  11.5 Relations between circuits and PRAMs
  11.6 The parallel computation thesis
12 Parallel algorithms
  12.1 The class NC
  12.2 Examples of NC problems
  12.3 Probabilistic parallel algorithms
  12.4 P-complete problems revisited

Preface

The birth of the theory of computational complexity can be set in the early 1960s, when the first users of electronic computers started to pay increasing attention to the performance of their programs. As in the theory of computation, where the concept of a model of computation had led to that of an algorithm and of an algorithmically solvable problem, similarly, in the theory of computational complexity, the concept of a resource used by a computation led to that of an efficient algorithm and of a computationally feasible problem. Since these preliminary stages, many more results have been obtained and, as stated by Hartmanis (1989), ‘the systematic study of computational complexity theory has developed into one of the central and most active research areas of computer science.
It has grown into a rich and exciting mathematical theory whose development is motivated and guided by computer science needs and technological advances.’

The aim of this introductory book is to review in a systematic way the most significant results obtained in this new research area. The main goals of computational complexity theory are to introduce classes of problems which have similar complexity with respect to a specific computation model and complexity measure, and to study the intrinsic properties of such classes.

In this book, we will follow a balanced approach which is partly algorithmic and partly structuralist. From an algorithmic point of view, we will first present some ‘natural’ problems and then illustrate algorithms which solve them. Since the aim is merely to prove that the problem belongs to a specific class, we will not always give the most efficient algorithm and we will occasionally give preference to an algorithm which is simpler to describe and analyse. From a structural point of view, we will be concerned with intrinsic properties of complexity classes, including relationships between classes, implications between several hypotheses about complexity classes, and identification of structural properties of sets that affect their computational complexity.

The reader is assumed to have some basic knowledge of the theory of computation (as taught in an undergraduate course on Automata Theory, Logic, Formal Language Theory, or Theory of Computation) and of programming languages and techniques. Some mathematical knowledge is also required. The first eight chapters of the book can be taught in a senior undergraduate course. The whole book, together with an exhaustive discussion of the problems, should be suitable for a postgraduate course.

Let us now briefly review the contents of the book and the choices made in selecting the material. The first part (Chapters 1-3) provides the basic tools which will enable us to study topics in complexity theory. Chapter 1 includes a series of definitions and notations related to classic mathematical concepts such as sets, relations and languages (this chapter can be skipped and referred to when needed). Chapter 2 reviews some important results of computability theory. Chapter 3 provides the basic tools of complexity theory: dynamic complexity measures are introduced, the concept of classes of languages is presented, the strict correspondence between such classes and decision problems is established, and techniques used to study the properties of such classes are formulated.

The second part (Chapters 4-8) studies, in a detailed way, the properties of some of the most significant complexity classes. Those chapters represent the ‘heart’ of complexity theory: by placing suitable restrictions on the power of the computation model, and thus on the amount of resources allowed for the computation, it becomes possible to define a few fundamental complexity classes and to develop a series of tools enabling us to identify, for most computational problems, the complexity class to which they belong.

The third part (Chapters 9-10) deals with probabilistic algorithms and with the corresponding complexity classes. Probabilistic Turing machines are introduced in Chapter 9 and a few probabilistic algorithms for such machines are analysed. In Chapter 10, a more elaborate computation model denoted as an interactive proof system is considered and a new complexity class based on such a model is studied.
The last part (Chapters 11 and 12) is dedicated to the complexity of parallel computations. As a result of advances in hardware technology, computers with thousands of processors are now available; it thus becomes important, not only from a theoretical point of view but also from a practical one, to be able to specify which problems are best suited to be run on parallel machines. Chapter 11 describes in detail a few important and widely differing models of parallel computers and shows how their performance can be considered roughly equivalent. Chapter 12 introduces the concept of a problem solvable by a fast parallel algorithm and the complementary one of a problem with no fast parallel algorithm, and illustrates examples of both types of problems.

While selecting material to be included in the book, we followed a few guidelines. First, we have focused our attention on results obtained in the past two decades, mentioning without proof or leaving as problems some well-known results obtained in the 1960s. Second, whenever a proof of a theorem uses a technique described in a previous proof, we have provided an outline, leaving the complete proof as a problem for the reader. Finally, we have systematically avoided stating without proof specialized results from other fields in order to make the book as self-contained as possible.

Acknowledgements

This book originated from a course on Algorithms and Complexity given at the University of Rome ‘La Sapienza’ by D.P. Bovet and P. Crescenzi since 1986. We would like to thank the students who were exposed to the preliminary versions of the chapters and who contributed their observations to improve the quality of the presentation. We would also like to thank R. Silvestri for pointing out many corrections and for suggesting simpler and clearer proofs of some results.

Chapter 1 Mathematical preliminaries

In this chapter some preliminary definitions, notations and proof techniques which are going to be used in the rest of the book will be introduced.

1.1 Sets, relations and functions

Intuitively, a set A is any collection of elements. If a, b and c are arbitrary elements, then the set A consisting of elements a, b and c is represented as A = {a, b, c}. A set cannot contain more than one copy or instance of the same element; furthermore, the order in which the elements of the set appear is irrelevant. Thus we define two sets A and B as equal (in symbols, A = B) if every element of A is also an element of B and vice versa. Two sets A and B are not equal (in symbols, A ≠ B) when A = B does not hold true. The symbols ∈ and ∉ denote, respectively, the fact that an element belongs or does not belong to a set.
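As a quick illustration of these conventions (a minimal sketch, not part of the original text), Python's built-in sets behave in exactly the way just described: duplicates collapse, order is irrelevant, and == checks mutual membership.

```python
# Sets ignore duplicates and ordering, so {a, b, c} equals {c, b, a, a}.
A = {"a", "b", "c"}
B = {"c", "b", "a", "a"}          # the duplicate "a" collapses; order is irrelevant
print(A == B)                      # True: every element of A is in B and vice versa
print("a" in A, "d" not in A)      # membership (∈) and non-membership (∉): True True
```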
Recommended publications
  • On Physical Problems That Are Slightly More Difficult Than QMA
    On physical problems that are slightly more difficult than QMA
    Andris Ambainis, University of Latvia and IAS, Princeton. Email: [email protected]
    arXiv:1312.4758v2 [quant-ph] 10 Apr 2014

    Abstract. We study the complexity of computational problems from quantum physics. Typically, they are studied using the complexity class QMA (the quantum counterpart of NP), but some natural computational problems appear to be slightly harder than QMA. We introduce new complexity classes consisting of problems that are solvable with a small number of queries to a QMA oracle and use these complexity classes to quantify the complexity of several natural computational problems (for example, the complexity of estimating the spectral gap of a Hamiltonian).

    1 Introduction

    Quantum Hamiltonian complexity [30] is a new field that combines quantum physics with computer science, by using notions from computational complexity to study the complexity of problems that appear in quantum physics. One of the central notions of Hamiltonian complexity is the complexity class QMA [25, 24, 40, 21, 3], which is the quantum counterpart of NP. QMA consists of all computational problems whose solutions can be verified in polynomial time on a quantum computer, given a quantum witness (a quantum state on a polynomial number of qubits). QMA captures the complexity of several interesting physical problems. For example, estimating the ground state energy of a physical system (described by a Hamiltonian) is a very important task in quantum physics. We can characterize the complexity of this problem by showing that it is QMA-complete, even if we restrict it to natural classes of Hamiltonians.
  • Computability of Fraïssé Limits
    COMPUTABILITY OF FRAÏSSÉ LIMITS
    BARBARA F. CSIMA, VALENTINA S. HARIZANOV, RUSSELL MILLER, AND ANTONIO MONTALBÁN

    Abstract. Fraïssé studied countable structures S through analysis of the age of S, i.e., the set of all finitely generated substructures of S. We investigate the effectiveness of his analysis, considering effectively presented lists of finitely generated structures and asking when such a list is the age of a computable structure. We focus particularly on the Fraïssé limit. We also show that degree spectra of relations on a sufficiently nice Fraïssé limit are always upward closed unless the relation is definable by a quantifier-free formula. We give some sufficient or necessary conditions for a Fraïssé limit to be spectrally universal. As an application, we prove that the computable atomless Boolean algebra is spectrally universal.

    Contents
    1. Introduction
      1.1. Classical results about Fraïssé limits and background definitions
    2. Computable Ages
    3. Computable Fraïssé limits
      3.1. Computable properties of Fraïssé limits
      3.2. Existence of computable Fraïssé limits
    4. Examples
    5. Upward closure of degree spectra of relations
    6. Necessary conditions for spectral universality
      6.1. Local finiteness
      6.2. Finite realizability
    7. A sufficient condition for spectral universality
      7.1. The countable atomless Boolean algebra
    References

    1. Introduction

    Computable model theory studies the algorithmic complexity of countable structures, of their isomorphisms, and of relations on such structures. Since algorithmic properties often depend on data presentation, in computable model theory classically isomorphic structures can have different computability-theoretic properties.
  • On Uniformity Within NC¹
    On Uniformity Within NC¹
    David A. Mix Barrington (University of Massachusetts), Neil Immerman (University of Massachusetts), Howard Straubing (Boston College)
    Journal of Computer and System Sciences

    Abstract. In order to study circuit complexity classes within NC¹ in a uniform setting, we need a uniformity condition which is more restrictive than those in common use. Two such conditions, stricter than NC¹ uniformity [Ru, Co], have appeared in recent research: Immerman's families of circuits defined by first-order formulas [Ima, Imb] and a uniformity corresponding to Buss' deterministic log-time reductions [Bu]. We show that these two notions are equivalent, leading to a natural notion of uniformity for low-level circuit complexity classes. We show that recent results on the structure of NC¹ [Ba] still hold true in this very uniform setting. Finally, we investigate a parallel notion of uniformity, still more restrictive, based on the regular languages. Here we give characterizations of subclasses of the regular languages based on their logical expressibility, extending recent work of Straubing, Thérien and Thomas [STT]. A preliminary version of this work appeared as [BIS].

    1 Introduction: Circuit Complexity

    Computer scientists have long tried to classify problems, defined as Boolean predicates or functions, by the size or depth of Boolean circuits needed to solve them.
  • Slides 6, HT 2019 Space Complexity
    Computational Complexity; slides 6, HT 2019
    Space complexity
    Prof. Paul W. Goldberg (Dept. of Computer Science, University of Oxford), HT 2019

    Road map. I mentioned classes like LOGSPACE (usually called L), SPACE(f(n)), etc. How do they relate to each other, and to the time complexity classes? Next: various inclusions can be proved, some more easily than others; let's begin with "low-hanging fruit"... e.g., I have noted: TIME(f(n)) is a subset of SPACE(f(n)) (easy!). We will see, e.g., that L is a proper subset of PSPACE, although it is unknown how they relate to various intermediate classes, e.g. P, NP. Various interesting problems are complete for PSPACE, EXPTIME, and some of the others.

    Space Complexity. So far, we have measured the complexity of problems in terms of the time required to solve them. Alternatively, we can measure the space/memory required to compute a solution. Important difference: space can be re-used.

    Convention: In this section we will be using Turing machines with a designated read-only input tape. So, "logarithmic space" becomes meaningful.
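    To keep the relationships straight, here is a compact summary of the standard inclusions the slides allude to (an added note restating well-known facts; which of the intermediate inclusions are strict is open):

    ```latex
    % Known relationships among the classes mentioned above.
    % L \subsetneq PSPACE follows from the space hierarchy theorem.
    \[
      \mathrm{TIME}(f(n)) \subseteq \mathrm{SPACE}(f(n)), \qquad
      \mathrm{L} \subseteq \mathrm{P} \subseteq \mathrm{NP} \subseteq \mathrm{PSPACE} \subseteq \mathrm{EXPTIME}, \qquad
      \mathrm{L} \subsetneq \mathrm{PSPACE}.
    \]
    ```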
  • Algebraic Models of Computation (Nitin Saurabh, The Institute of Mathematical Sciences, Chennai)
    ALGEBRAIC MODELS OF COMPUTATION
    By Nitin Saurabh
    The Institute of Mathematical Sciences, Chennai

    A thesis submitted to the Board of Studies in Mathematical Sciences in partial fulfillment of the requirements for the Degree of Master of Science of HOMI BHABHA NATIONAL INSTITUTE, April 2012.

    CERTIFICATE

    Certified that the work contained in the thesis entitled Algebraic models of Computation, by Nitin Saurabh, has been carried out under my supervision and that this work has not been submitted elsewhere for a degree.

    Meena Mahajan
    Theoretical Computer Science Group
    The Institute of Mathematical Sciences, Chennai

    ACKNOWLEDGEMENTS

    I would like to thank my advisor Prof. Meena Mahajan for her invaluable guidance and continuous support since my undergraduate days. Her expertise and ideas helped me comprehend new techniques. Her guidance during the preparation of this thesis has been invaluable. I also thank her for always being there to discuss and clarify any matter. I am extremely grateful to all the faculty members of the theory group at IMSc and CMI for their continuous encouragement and for giving me an opportunity to learn from them. I would like to thank all my friends, at IMSc and CMI, for making my stay in Chennai a memorable one. Most of all, I take this opportunity to thank my parents, my uncle and my brother.

    Abstract

    Valiant [Val79, Val82] proposed an analogue of the theory of NP-completeness in an entirely algebraic framework to study the complexity of polynomial families. Arithmetic circuits form the most standard model for studying the complexity of polynomial computations. In a note [Val92], Valiant argued that in order to prove lower bounds for Boolean circuits, obtaining lower bounds for arithmetic circuits should be a first step.
  • NP-Completeness: Reductions Tue, Nov 21, 2017
    CMSC 451 (Dave Mount)
    Lecture 19: NP-Completeness: Reductions. Tue, Nov 21, 2017
    Reading: Chapt. 8 in KT and Chapt. 8 in DPV. Some of the reductions discussed here are not in either text.

    Recap: We have introduced a number of concepts on the way to defining NP-completeness:

    Decision Problems/Language recognition: are problems for which the answer is either yes or no. These can also be thought of as language recognition problems, assuming that the input has been encoded as a string. For example:
    HC = {G | G has a Hamiltonian cycle}
    MST = {(G, c) | G has a MST of cost at most c}.

    P: is the class of all decision problems which can be solved in polynomial time. While MST ∈ P, we do not know whether HC ∈ P (but we suspect not).

    Certificate: is a piece of evidence that allows us to verify in polynomial time that a string is in a given language. For example, for the language HC above, a certificate could be a sequence of vertices along the cycle. (If the string is not in the language, the certificate can be anything.)

    NP: is defined to be the class of all languages that can be verified in polynomial time. (Formally, it stands for Nondeterministic Polynomial time.) Clearly, P ⊆ NP. It is widely believed that P ≠ NP.

    To define NP-completeness, we need to introduce the concept of a reduction.

    Reductions: The class of NP-complete problems consists of a set of decision problems (languages) (a subset of the class NP) that no one knows how to solve efficiently, but if there were a polynomial time solution for even a single NP-complete problem, then every problem in NP would be solvable in polynomial time.
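    To make the notion of a certificate concrete, here is a minimal illustrative sketch (not part of the lecture notes): a polynomial-time verifier that checks whether a proposed vertex sequence is a valid Hamiltonian-cycle certificate for a given graph.

    ```python
    def verify_hc(graph, cycle):
        """Polynomial-time check that `cycle` is a Hamiltonian cycle of `graph`.

        `graph` maps each vertex to the set of its neighbours;
        `cycle` is the certificate: a proposed ordering of the vertices.
        """
        n = len(graph)
        # The certificate must list every vertex exactly once.
        if len(cycle) != n or set(cycle) != set(graph):
            return False
        # Consecutive vertices (wrapping around) must be adjacent.
        return all(cycle[(i + 1) % n] in graph[cycle[i]] for i in range(n))

    G = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}   # a triangle
    print(verify_hc(G, [1, 2, 3]))          # True: a Hamiltonian cycle
    print(verify_hc(G, [1, 2, 2]))          # False: not a permutation of the vertices
    ```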
  • Computational Complexity: A Modern Approach
    Computational Complexity: A Modern Approach
    Draft of a book, dated January 2007. Comments welcome!
    Sanjeev Arora and Boaz Barak, Princeton University, [email protected]
    Not to be reproduced or distributed without the authors’ permission.
    This is an Internet draft. Some chapters are more finished than others. References and attributions are very preliminary and we apologize in advance for any omissions (but hope you will nevertheless point them out to us). Please send us bugs, typos, missing references or general comments to [email protected]. Thank you!

    Chapter 9: Complexity of counting

    “It is an empirical fact that for many combinatorial problems the detection of the existence of a solution is easy, yet no computationally efficient method is known for counting their number.... for a variety of problems this phenomenon can be explained.” L. Valiant, 1979

    The class NP captures the difficulty of finding certificates. However, in many contexts, one is interested not just in a single certificate, but actually in counting the number of certificates. This chapter studies #P (pronounced “sharp p”), a complexity class that captures this notion.

    Counting problems arise in diverse fields, often in situations having to do with estimations of probability. Examples include statistical estimation, statistical physics, network design, and more. Counting problems are also studied in a field of mathematics called enumerative combinatorics, which tries to obtain closed-form mathematical expressions for counting problems. To give an example, in the 19th century Kirchhoff showed how to count the number of spanning trees in a graph using a simple determinant computation. Results in this chapter will show that for many natural counting problems, such efficiently computable expressions are unlikely to exist.
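    As a small, self-contained sketch of the determinant computation mentioned above (an illustrative addition, not taken from the draft), the matrix-tree theorem says that the number of spanning trees equals any cofactor of the graph's Laplacian:

    ```python
    import numpy as np

    def count_spanning_trees(adj):
        """Matrix-tree theorem: #spanning trees = any cofactor of the Laplacian."""
        A = np.array(adj, dtype=float)
        L = np.diag(A.sum(axis=1)) - A   # Laplacian = degree matrix - adjacency matrix
        minor = L[1:, 1:]                # delete the first row and column
        return int(round(np.linalg.det(minor)))

    # The triangle K3 has exactly 3 spanning trees.
    K3 = [[0, 1, 1],
          [1, 0, 1],
          [1, 1, 0]]
    print(count_spanning_trees(K3))  # 3
    ```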
  • 31 Summary of Computability Theory
    CS:4330 Theory of Computation, Spring 2018
    Computability Theory Summary
    Haniel Barbosa

    Readings for this lecture: Chapters 3-5 and Section 6.2 of [Sipser 1996], 3rd edition.

    A hierarchy of languages:
    - Regular: a^n b^m
    - Deterministic context-free: a^n b^n
    - Context-free: a^n b^n ∪ a^n b^2n
    - Turing decidable: a^n b^n c^n (see the decider sketch after this excerpt)
    - Turing recognizable: A_TM

    Why TMs? In 1900 Hilbert posed 23 “challenge problems” in Mathematics. The 10th problem: Devise a process according to which it can be decided by a finite number of operations if a given polynomial has an integral root. It became necessary to have a formal definition of “algorithms” to define their expressivity.

    Church-Turing Thesis: In 1936 Church and Turing independently defined “algorithm”: the λ-calculus and Turing machines. Intuitive notion of algorithms = Turing machine algorithms. “Any process which could be naturally called an effective procedure can be realized by a Turing machine.” We now know: Hilbert’s 10th problem is undecidable!

    Algorithm as Turing Machine. Definition (Algorithm): An algorithm is a decider TM in the standard representation.
    - The input to a TM is always a string.
    - If we want an object other than a string as input, we must first represent that object as a string.
    - Strings can easily represent polynomials, graphs, grammars, automata, and any combination of these objects.

    How to determine decidability / Turing-recognizability? Decidable / Turing-recognizable:
    - Present a TM that decides (recognizes) the language
    - If A is mapping reducible to
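    The following minimal sketch (an added illustration, written as an ordinary program rather than a Turing machine) is a decider for the language { a^n b^n c^n } from the hierarchy above: it always halts and answers yes or no.

    ```python
    def decide_anbncn(w: str) -> bool:
        """Decide membership in { a^n b^n c^n : n >= 0 }; always halts."""
        if len(w) % 3 != 0:
            return False
        n = len(w) // 3
        return w == "a" * n + "b" * n + "c" * n

    print(decide_anbncn("aabbcc"))  # True
    print(decide_anbncn("abcabc"))  # False
    ```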
  • The Complexity Zoo
    The Complexity Zoo
    Scott Aaronson, www.ScottAaronson.com
    LaTeX translation by Chris Bourke, [email protected]
    417 classes and counting

    Contents
    1 About This Document
    2 Introductory Essay
      2.1 Recommended Further Reading
      2.2 Other Theory Compendia
      2.3 Errors?
    3 Pronunciation Guide
    4 Complexity Classes
    5 Special Zoo Exhibit: Classes of Quantum States and Probability Distributions
    6 Acknowledgements
    7 Bibliography

    1 About This Document

    What is this? Well, it’s a PDF version of the website www.ComplexityZoo.com typeset in LaTeX using the complexity package. Well, what’s that? The original Complexity Zoo is a website created by Scott Aaronson which contains a (more or less) comprehensive list of complexity classes studied in the area of theoretical computer science known as Computational Complexity. I took on the (mostly painless, thank god for regular expressions) task of translating the Zoo’s HTML code to LaTeX for two reasons. First, as a regular Zoo patron, I thought, “what better way to honor such an endeavor than to spruce up the cages a bit and typeset them all in beautiful LaTeX.” Second, I thought it would be a perfect project to develop complexity, a LaTeX package I’ve created that defines commands to typeset (almost) all of the complexity classes you’ll find here (along with some handy options that allow you to conveniently change the fonts with a single option parameter). To get the package, visit my own home page at http://www.cse.unl.edu/~cbourke/.
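    As a rough illustration of how such a package is used (a minimal sketch; the macro names \NP, \coNP and \PSPACE are assumptions based on the package description above, so consult the package manual for the actual commands and options):

    ```latex
    % Hypothetical minimal document using the complexity package.
    % The class macros below are assumed to exist; check the manual.
    \documentclass{article}
    \usepackage{complexity}
    \begin{document}
    It is widely believed that $\NP \neq \coNP$ and known that $\NP \subseteq \PSPACE$.
    \end{document}
    ```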
  • Notes on Space Complexity of Integration of Computable Real Functions
    Notes on space complexity of integration of computable real functions in the Ko–Friedman model
    Sergey V. Yakhontov
    arXiv:1408.2364v3 [cs.CC] 17 Nov 2014

    Abstract. In the present paper it is shown that the real function g(x) = ∫_0^x f(t) dt is a linear-space computable real function on the interval [0, 1] if f is a linear-space computable C²[0, 1] real function on the interval [0, 1], and this result does not depend on any open question in computational complexity theory. The time complexity of computable real functions and of integration of computable real functions is considered in the context of the Ko–Friedman model, which is based on the notion of Cauchy functions computable by Turing machines. In addition, a computable real function f is given such that ∫_0^1 f ∈ FDSPACE(n²)_C[a,b] but ∫_0^1 f ∉ FP_C[a,b] if FP ≠ #P.

    Keywords: Computable real functions, Cauchy function representation, polynomial-time computable real functions, linear-space computable real functions, C²[0, 1] real functions, integration of computable real functions.

    Contents
    1 Introduction
      1.1 CF computable real numbers and functions
      1.2 Integration of FP computable real functions
    2 Upper bound of the time complexity of integration
    3 A function in FDSPACE(n²)_C[a,b] that is not in FP_C[a,b] if FP ≠ #P
    4 Conclusion

    1 Introduction

    In the present paper, we consider computable real numbers and functions that are represented by Cauchy functions computable by Turing machines [1]. Main results regarding computable real numbers and functions can be found in [1–4]; main results regarding the computational complexity of computations on Turing machines can be found in [5].
  • LSPACE VS NP IS AS HARD AS P VS NP
    LSPACE VS NP IS AS HARD AS P VS NP
    FRANK VEGA

    Abstract. The P versus NP problem is a major unsolved problem in computer science. It consists in knowing the answer to the following question: Is P equal to NP? Other major complexity classes are LSPACE, PSPACE, ESPACE, E and EXP. Whether LSPACE = P is a fundamental question that is as important as it is unresolved. We show that if P = NP, then LSPACE = NP. Consequently, if LSPACE is not equal to NP, then P is not equal to NP. According to Lance Fortnow, it seems that LSPACE versus NP is easier to prove. However, with this proof we show this problem is as hard as P versus NP. Moreover, we prove the complexity class P is not equal to PSPACE as a direct consequence of this result. Furthermore, we demonstrate that if PSPACE is not equal to EXP, then P is not equal to NP. In addition, if E = ESPACE, then P is not equal to NP.

    1. Introduction

    In complexity theory, a function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem [6]. A functional problem F is defined as a binary relation R of pairs (x, y) over strings of an arbitrary alphabet Σ:

    R ⊂ Σ∗ × Σ∗.

    A Turing machine M solves F if for every input x such that there exists a y satisfying (x, y) ∈ R, M produces one such y, that is, M(x) = y [6].
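    To illustrate the definition (a small added sketch, not from the paper), take the relation R = {(x, y) : y is a nontrivial divisor of x}; a machine M solving this function problem must output some valid y whenever one exists.

    ```python
    # Illustrative function problem: R(x, y) holds iff y is a nontrivial divisor of x.
    def R(x: str, y: str) -> bool:
        n, d = int(x), int(y)
        return 1 < d < n and n % d == 0

    # A brute-force "machine" M solving the problem: on input x, output some y
    # with (x, y) in R, if such a y exists.
    def M(x: str):
        n = int(x)
        for d in range(2, n):
            if n % d == 0:
                return str(d)
        return None  # no witness: n is prime (or too small)

    print(M("15"), R("15", M("15")))  # 3 True
    ```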
  • Boolean Hierarchies
    Boolean Hierarchies: On Collapse Properties and Query Order

    Dissertation submitted for the academic degree of doctor rerum naturalium (Dr. rer. nat.) to the Council of the Faculty of Mathematics and Computer Science of the Friedrich Schiller University Jena, by Diplom-Mathematiker Harald Hempel, born in August in Jena.

    Reviewers: Prof. Dr. Gerd Wechsung, Prof. Edith Hemaspaandra, Prof. Dr. Klaus Wagner
    Date of the Rigorosum:
    Date of the public defence:

    To my family

    Acknowledgements

    Words cannot express my deep gratitude to my advisor, Professor Gerd Wechsung. Generously he offered support, guidance and encouragement throughout the past four years. Learning from him and working with him was, and still is, a pleasure and privilege I much appreciate. Through all the ups and downs of my research, his optimism and humane warmth have made the downs less frustrating and the ups more encouraging. I want to express my deep gratitude to Professor Lane Hemaspaandra and Professor Edith Hemaspaandra. Allowing me to become part of so many joint projects has been a wonderful learning experience, and I much benefited from their scientific expertise. Their generous help and advice helped me to gain insights into how research is done and made this thesis possible. For serving as referees for this thesis I am grateful to Professor Edith Hemaspaandra and Professor Klaus Wagner. I want to thank all my colleagues at Jena, especially Haiko Müller, Dieter Kratsch, Jörg Rothe, Johannes Waldmann and Maren Hinrichs, for generously offering help and support regarding the many little things