Tackling OS Complexity with Declarative Techniques

Total Pages: 16

File Type: PDF, Size: 1020 KB

DISS. ETH NO. 20930

TACKLING OS COMPLEXITY WITH DECLARATIVE TECHNIQUES

A dissertation submitted to ETH ZURICH for the degree of Doctor of Sciences

presented by
ADRIAN LAURENT SCHÜPBACH
Master of Science ETH in Computer Science, ETH Zurich
born 19 April 1981
citizen of Landiswil, BE

accepted on the recommendation of
Prof. Dr. Timothy Roscoe
Prof. Dr. Gustavo Alonso
Prof. Dr. Hermann Härtig

2012

Kurzfassung

This dissertation shows that the increased operating system complexity that arises from the need to adapt to a large number of different computer systems can be reduced significantly by means of declarative techniques.

Modern hardware is increasingly diverse and complex, and this development is likely to continue in the future. It makes operating system construction harder: operating systems must adapt optimally to the computer architecture and exploit the machine's full functionality in order to guarantee the best performance of the complete system. Hardware functionality that the operating system leaves unused, uses suboptimally, or even uses incorrectly leads to lower overall system performance. Traditional operating systems adapt to the computer architecture through predefined policies that are spread throughout the operating system and mixed with the actual operating system functionality.

In this work I argue that, for two reasons, it is no longer possible to mix predefined policies for a set of known computer architectures into the operating system functionality. First, predefined policies do not guarantee that all hardware features are fully exploited. Second, this means that the policies must be adapted for every new computer architecture, while the policies for previous architectures must be kept. This leads to considerable operating system complexity and, ultimately, to an enormous adaptation effort. To avoid this, the operating system must build up knowledge about the computer architecture at runtime and derive from it, through logical reasoning, the best possible adaptation policies. This leads to simpler, more understandable, more maintainable and more easily adaptable operating system functionality, and it ensures that the hardware's capabilities are fully exploited.

Building up knowledge and deriving adaptation policies through logical reasoning involves high programming complexity, especially if low-level programming languages such as C are used. Declarative techniques, in contrast, allow the desired policies to be derived from a simple and understandable description of the intended kind of adaptation, based on knowledge about the computer architecture. The natural description in a high-level language greatly reduces programming complexity.

To demonstrate the benefit of declarative techniques with respect to complexity and adaptability in operating systems, I present several case studies in this dissertation that, based on declarative techniques, derive policies for adapting to the computer architecture by means of logical reasoning. The case studies do not assume any knowledge about the computer architecture; instead, they acquire it at runtime.
The knowledge is built up in a central knowledge service of the operating system, and policies are derived in this knowledge service through logical reasoning. Because the case studies, and thus the various operating system components, can use this knowledge service, their complexity is reduced significantly once more. It is therefore not necessary for every single operating system component to concern itself with knowledge gathering and policy derivation. With this implementation I demonstrate the practical applicability of declarative techniques in operating systems.

Abstract

This thesis argues that tackling increased operating system complexity with declarative techniques significantly reduces the code complexity involved in adapting to a wide range of modern hardware.

Modern hardware is increasingly diverse and complex, and this trend is likely to continue. It complicates operating system construction: operating systems have to adapt to the hardware architecture and exploit all of its features to guarantee the best possible overall system performance. Not exploiting all hardware features, or using them in a suboptimal or even wrong way, results in lower overall system performance. Traditionally, operating systems adapt to the underlying architecture through predefined policies, which are intermingled with the core operating system functionality.

In this thesis I argue that it is no longer possible to encode predefined policies for a set of known hardware architectures into the operating system, for two reasons. First, predefined policies do not automatically guarantee that hardware features are fully exploited on all hardware platforms. Second, as a consequence, predefined policies would need to be ported to many different hardware platforms while the policies suited to older platforms are kept in place. This leads to significant operating system complexity and, finally, to high engineering effort when porting the operating system to new hardware platforms. To avoid this problem, the operating system must gain hardware knowledge at runtime and derive policies suited to the current architecture through online reasoning about the hardware. This leads to simpler, more understandable, more maintainable and more easily portable operating system code, while ensuring that the operating system exploits the hardware features as well as possible.

Reasoning about hardware and deriving policies is a complex task, especially if low-level languages like C are used. Declarative techniques, in contrast, allow policies to be derived from a simple description of how to adapt to the hardware, based on hardware knowledge gathered at runtime. The natural description in a high-level declarative language reduces code complexity significantly.

To prove the usefulness of declarative techniques in the context of operating system adaptability and complexity, I present several case studies in this thesis. The case studies are based on declarative techniques: they reason about hardware and derive policies from hardware knowledge. They do not assume any a priori knowledge about the current hardware platform; instead, they gain knowledge at runtime by online reasoning about the hardware. A central knowledge service stores hardware knowledge and allows policies to be derived according to declarative rules.
Because the case studies, and therefore the operating system components, can use this central service, their complexity is reduced further. It is not necessary for every single component to deal with knowledge gathering and policy derivation by itself; each component pushes this work to the knowledge service. With this implementation I prove the practical feasibility of applying declarative techniques in real operating systems.

Contents

1 Introduction
  1.1 Motivation
    1.1.1 Diversity
    1.1.2 The interconnect network
    1.1.3 Managing Hardware
    1.1.4 Managing Applications
  1.2 Problem Statement and Hypothesis
  1.3 Goals
  1.4 Contributions
  1.5 Structure

2 Background
  2.1 Declarative Techniques
    2.1.1 What is declarative programming?
    2.1.2 Declarative languages
    2.1.3 Constraint logic programming
    2.1.4 CLP programming in ECLiPSe
  2.2 Barrelfish
    2.2.1 The Multikernel
    2.2.2 A Barrelfish “node”
    2.2.3 Explicit access to physical resources
    2.2.4 Messaging
    2.2.5 Drivers and services
  2.3 Reasoning in operating systems
    2.3.1 Hardware representation
    2.3.2 Declarative hardware access and configuration
    2.3.3 Resource allocation
  2.4 Declarative reasoning in networks
  2.5 Summary

3 The system knowledge base
  3.1 Introduction
  3.2 Background
    3.2.1 Knowledge
    3.2.2 Knowledge bases
  3.3 How does the SKB help the operating system?
    3.3.1 Purpose
    3.3.2 Examples
    3.3.3 Common patterns of resource allocation descriptions
    3.3.4 When to use the SKB
  3.4 Design
    3.4.1 Design principles
    3.4.2 Overall architecture
    3.4.3 Core
    3.4.4 Interface
    3.4.5 Facts, schema and queries
    3.4.6 Data gathering
    3.4.7 Algorithms
    3.4.8 A note on security
  3.5 Implementation
    3.5.1 Implementation of the SKB server
    3.5.2 Facts and schema
    3.5.3 Datagatherer
    3.5.4 Common queries
    3.5.5 Startup
  3.6 Client library
    3.6.1 Using and initializing the library
    3.6.2 Interacting with the SKB
  3.7 Evaluation
    3.7.1 Code complexity
    3.7.2 Memory overhead
    3.7.3 Performance
  3.8 Discussion
    3.8.1 Advantages
    3.8.2 Disadvantages
    3.8.3 Approaching a configuration problem in CLP
  3.9 Summary

4 Coordination
  4.1 Introduction
  4.2 Background
  4.3 Approach
    4.3.1 Design principles
    4.3.2 Octopus
    4.3.3 Records and Record Queries
    4.3.4 Record Store …
Recommended publications
  • Skip Lists: A Probabilistic Alternative to Balanced Trees
    Skip Lists: A Probabilistic Alternative to Balanced Trees

    Skip lists are a data structure that can be used in place of balanced trees. Skip lists use probabilistic balancing rather than strictly enforced balancing and as a result the algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees.

    William Pugh

    Binary trees can be used for representing abstract data types such as dictionaries and ordered lists. They work well when the elements are inserted in a random order. Some sequences of operations, such as inserting the elements in order, produce degenerate data structures that give very poor performance. If it were possible to randomly permute the list of items to be inserted, trees would work well with high probability for any input sequence. In most cases queries must be answered on-line, so randomly permuting the input is impractical. Balanced tree algorithms re-arrange the tree as operations are performed to maintain certain balance conditions and assure good performance.

    Also giving every fourth node a pointer four ahead (Figure 1c) requires that no more than n/4 + 2 nodes be examined. If every (2^i)th node has a pointer 2^i nodes ahead (Figure 1d), the number of nodes that must be examined can be reduced to log₂ n while only doubling the number of pointers. This data structure could be used for fast searching, but insertion and deletion would be impractical. A node that has k forward pointers is called a level k node. If every (2^i)th node has a pointer 2^i nodes ahead, then levels of nodes are distributed in a simple pattern: 50% are level 1, 25% are level 2, 12.5% are level 3 and so on.
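    A minimal sketch of the structure just described, assuming a node holds an array of forward pointers (one per level) and levels are chosen with coin flips; names and constants are illustrative, not Pugh's original code:

```python
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * level  # forward[i] points to the next node at level i+1

class SkipList:
    MAX_LEVEL = 16
    P = 0.5  # probability used when choosing a node's level

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)
        self.level = 1

    def _random_level(self):
        # Yields the pattern above: ~50% level 1, 25% level 2, 12.5% level 3, ...
        lvl = 1
        while random.random() < self.P and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def search(self, key):
        x = self.head
        # Start at the highest level and drop a level whenever the next
        # key would overshoot; expected cost is O(log n).
        for i in range(self.level - 1, -1, -1):
            while x.forward[i] is not None and x.forward[i].key < key:
                x = x.forward[i]
        x = x.forward[0]
        return x is not None and x.key == key

    def insert(self, key):
        update = [self.head] * self.MAX_LEVEL
        x = self.head
        for i in range(self.level - 1, -1, -1):
            while x.forward[i] is not None and x.forward[i].key < key:
                x = x.forward[i]
            update[i] = x  # last node visited at level i
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        node = Node(key, lvl)
        for i in range(lvl):
            node.forward[i] = update[i].forward[i]
            update[i].forward[i] = node
```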
  • Skip B-Trees
    Skip B-Trees

    Ittai Abraham (The Institute of Computer Science, The Hebrew University of Jerusalem), James Aspnes (Department of Computer Science, Yale University), and Jian Yuan (Google)

    Abstract. We describe a new data structure, the Skip B-Tree, that combines the advantages of skip graphs with features of traditional B-trees. A skip B-tree provides efficient search, insertion and deletion operations. The data structure is highly fault tolerant even to adversarial failures, and allows for particularly simple repair mechanisms. Related resource keys are kept in blocks near each other enabling efficient range queries. Using this data structure, we describe a new distributed peer-to-peer network, the Distributed Skip B-Tree. Given m data items stored in a system with n nodes, the network allows to perform a range search operation for r consecutive keys that costs only O(log_b m + r/b) where b = Θ(m/n). In addition, our distributed Skip B-tree search network has provable polylogarithmic costs for all its other basic operations like insert, delete, and node join. To the best of our knowledge, all previous distributed search networks either provide a range search operation whose cost is worse than ours or may require a linear cost for some basic operation like insert, delete, and node join.

    ⋆ Supported in part by NSF grants CCR-0098078, CNS-0305258, and CNS-0435201.

    1 Introduction

    Peer-to-peer systems provide a decentralized way to share resources among machines. An ideal peer-to-peer network should have such properties as decentralization, scalability, fault-tolerance, self-stabilization, load-balancing, dynamic addition and deletion of nodes, efficient query searching and exploiting spatial as well as temporal locality in searches.
  • On the Cost of Persistence and Authentication in Skip Lists*
    On the Cost of Persistence and Authentication in Skip Lists⋆

    Michael T. Goodrich (Department of Computer Science, University of California, Irvine), Charalampos Papamanthou (Department of Computer Science, Brown University), and Roberto Tamassia (Department of Computer Science, Brown University)

    Abstract. We present an extensive experimental study of authenticated data structures for dictionaries and maps implemented with skip lists. We consider realizations of these data structures that allow us to study the performance overhead of authentication and persistence. We explore various design decisions and analyze the impact of garbage collection and virtual memory paging, as well. Our empirical study confirms the efficiency of authenticated skip lists and offers guidelines for incorporating them in various applications.

    1 Introduction

    A proven paradigm from distributed computing is that of using a large collection of widely distributed computers to respond to queries from geographically dispersed users. This approach forms the foundation, for example, of the DNS system. Of course, we can abstract the main query and update tasks of such systems as simple data structures, such as distributed versions of dictionaries and maps, and easily characterize their asymptotic performance (with most operations running in logarithmic time). There are a number of interesting implementation issues concerning practical systems that use such distributed data structures, however, including the additional features that such structures should provide. For instance, a feature that can be useful in a number of real-world applications is that distributed query responders provide authenticated responses, that is, answers that are provably trustworthy. An authenticated response includes both an answer (for example, a yes/no answer to the query “is item x a member of the set S?”) and a proof of this answer, equivalent to a digital signature from the data source.
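    The idea of an authenticated response (an answer plus a proof that can be checked against a digest published by the data source) can be illustrated with a generic hash-path verification. This is a hedged sketch of the general pattern, not the paper's authenticated skip-list scheme, and all names are illustrative:

```python
import hashlib

def _h(*parts: bytes) -> bytes:
    """Collision-resistant hash over the concatenated parts."""
    return hashlib.sha256(b"|".join(parts)).digest()

def verify_membership(answer: bytes, proof: list, root_digest: bytes) -> bool:
    """Recompute the published digest from the answer and the proof path.

    `proof` is a list of (sibling_digest, side) pairs leading from the answer
    up to the root; `side` says whether the sibling is hashed on the left or
    the right. The response is trusted only if the recomputed digest matches
    the digest signed by the data source.
    """
    digest = _h(answer)
    for sibling, side in proof:
        digest = _h(sibling, digest) if side == "left" else _h(digest, sibling)
    return digest == root_digest
```

    In this pattern only the small root digest needs to be signed by the data source; any (possibly untrusted) responder can then serve answers together with their proofs.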
  • Toward Harnessing High-Level Language Virtual Machines for Further Speeding up Weak Mutation Testing
    2012 IEEE Fifth International Conference on Software Testing, Verification and Validation

    Toward Harnessing High-level Language Virtual Machines for Further Speeding up Weak Mutation Testing

    Vinicius H. S. Durelli (Computer Systems Department, Universidade de São Paulo, São Carlos, SP, Brazil), Jeff Offutt (Software Engineering, George Mason University, Fairfax, VA, USA), Marcio E. Delamaro (Computer Systems Department, Universidade de São Paulo, São Carlos, SP, Brazil)

    Abstract—High-level language virtual machines (HLL VMs) are now widely used to implement high-level programming languages. To a certain extent, their widespread adoption is due to the software engineering benefits provided by these managed execution environments, for example, garbage collection (GC) and cross-platform portability. Although HLL VMs are widely used, most research has concentrated on high-end optimizations such as dynamic compilation and advanced GC techniques. Few efforts have focused on introducing features that automate or facilitate certain software engineering activities, including software testing. This paper suggests that HLL VMs provide a reasonable basis for building an integrated software testing environment. As …

    … have tried to exploit the control that HLL VMs exert over running programs to facilitate and speedup software engineering activities. Thus, this research suggests that software testing activities can benefit from HLL VMs support. Test tools are usually built on top of HLL VMs. However, they often end up tampering with the emergent computation. Using features within the HLL VMs can avoid such problems. Moreover, embedding testing tools within HLL VMs can significantly speedup computationally expensive techniques such as mutation testing [6].
  • Exploring the Duality Between Skip Lists and Binary Search Trees
    Exploring the Duality Between Skip Lists and Binary Search Trees

    Brian C. Dean and Zachary H. Jones (School of Computing, Clemson University, Clemson, SC)

    ABSTRACT

    Although skip lists were introduced as an alternative to balanced binary search trees (BSTs), we show that the skip list can be interpreted as a type of randomly-balanced BST whose simplicity and elegance is arguably on par with that of today’s most popular BST balancing mechanisms. In this paper, we provide a clear, concise description and analysis of the “BST” interpretation of the skip list, and compare it to similar randomized BST balancing mechanisms. In addition, we show that any rotation-based BST balancing mechanism can be implemented in a simple fashion using a skip list.

    … statically optimal [5] skip lists have already been independently derived in the literature). That the skip list can be interpreted as a type of randomly-balanced tree is not particularly surprising, and this has certainly not escaped the attention of other authors [2, 10, 9, 8]. However, essentially every tree interpretation of the skip list in the literature seems to focus entirely on casting the skip list as a randomly-balanced multiway branching tree (e.g., a randomized B-tree [4]). Messeguer [8] calls this structure the skip tree. Since there are several well-known ways to represent a multiway branching search tree as a BST (e.g., replace each multiway branching node with a miniature balanced BST, or replace (first child, next sibling) with (left child, right child) pointers), it is clear that the skip …
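    A small sketch of the (first child, next sibling) to (left child, right child) encoding mentioned above, with illustrative class names; it shows the general transformation, not the paper's construction:

```python
class MultiwayNode:
    def __init__(self, keys, children=None):
        self.keys = keys                  # keys stored in this multiway node
        self.children = children or []    # ordered child subtrees

class BinaryNode:
    def __init__(self, keys):
        self.keys = keys
        self.left = None    # encodes: first child
        self.right = None   # encodes: next sibling

def encode(node):
    """Encode a multiway tree as a binary tree: the left pointer holds the
    first child, the right pointer chains through the remaining siblings."""
    if node is None:
        return None
    b = BinaryNode(node.keys)
    if node.children:
        b.left = encode(node.children[0])
        prev = b.left
        for child in node.children[1:]:
            prev.right = encode(child)
            prev = prev.right
    return b
```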
  • Investigation of a Dynamic k-d Search Skip List Requiring Θ(kn) Space
    A new dynamic k-d data structure (supporting insertion and deletion) labeled the k-d Search Skip List is developed and analyzed. The time for k-d semi-infinite range search on n k-d data points without space restriction is shown to be Ω(k log n) and O(kn + kt), and there is a space-time trade-off represented as (log T′ + log log n) = Ω(n(log n)^θ), where θ = 1 for k = 2, θ = 2 for k ≥ 3, and T′ is the scaled query time. The dynamic update time for the k-d Search Skip List is O(k log n).
  • Comparison of Dictionary Data Structures
    A Comparison of Dictionary Implementations

    Mark P Neyer, April 10, 2009

    1 Introduction

    A common problem in computer science is the representation of a mapping between two sets. A mapping f : A → B is a function taking as input a member a ∈ A, and returning b, an element of B. A mapping is also sometimes referred to as a dictionary, because dictionaries map words to their definitions. Knuth [?] explores the map / dictionary problem in Volume 3, Chapter 6 of his book The Art of Computer Programming. He calls it the problem of ‘searching,’ and presents several solutions. This paper explores implementations of several different solutions to the map / dictionary problem: hash tables, Red-Black Trees, AVL Trees, and Skip Lists. This paper is inspired by the author’s experience in industry, where a dictionary structure was often needed, but the natural C# hash table-implemented dictionary was taking up too much space in memory. The goal of this paper is to determine what data structure gives the best performance, in terms of both memory and processing time. AVL and Red-Black Trees were chosen because Pfaff [?] has shown that they are the ideal balanced trees to use. Pfaff did not compare hash tables, however. Also considered for this project were Splay Trees [?].

    2 Background

    2.1 The Dictionary Problem

    A dictionary is a mapping between two sets of items, K and V. It must support the following operations:

    1. Insert an item v for a given key k. If key k already exists in the dictionary, its item is updated to be v.
  • An Experimental Study of Dynamic Biased Skip Lists
    AN EXPERIMENTAL STUDY OF DYNAMIC BIASED SKIP LISTS

    by Yiqun Zhao

    Submitted in partial fulfillment of the requirements for the degree of Master of Computer Science at Dalhousie University, Halifax, Nova Scotia, August 2017

    © Copyright by Yiqun Zhao, 2017

    Table of Contents

    List of Tables
    List of Figures
    Abstract
    List of Abbreviations and Symbols Used
    Acknowledgements

    Chapter 1  Introduction
      1.1 Dynamic Data Structures
        1.1.1 Move-to-front Lists
        1.1.2 Red-black Trees
        1.1.3 Splay Trees
        1.1.4 Skip Lists
        1.1.5 Other Biased Data Structures
      1.2 Our Work
      1.3 Organization of The Thesis

    Chapter 2  A Review of Different Search Structures
      2.1 Move-to-front Lists
      2.2 Red-black Trees
      2.3 Splay Trees
        2.3.1 Original Splay Tree
        2.3.2 Randomized Splay Tree
        2.3.3 W-splay Tree
      2.4 Skip Lists

    Chapter 3  Dynamic Biased Skip Lists
      3.1 Biased Skip List
      3.2 Modifications
        3.2.1 Improved Construction Scheme
        3.2.2 Lazy Update Scheme
      3.3 Hybrid Search Algorithm

    Chapter 4  Experiments
      4.1 Experimental Setup
      4.2 Analysis of Different C1 Sizes
      4.3 Analysis of Varying Value of t in H-DBSL
      4.4 Comparison of Data Structures using Real-World Data
      4.5 Comparison with Data Structures using Generated Data

    Chapter 5  Conclusions …
  • A Set Intersection Algorithm Via X-Fast Trie
    Journal of Computers

    A Set Intersection Algorithm Via x-Fast Trie

    Bangyu Ye (National Engineering Laboratory for Information Security Technologies, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, 100091, China)

    Manuscript submitted February 9, 2015; accepted May 10, 2015. doi: 10.17706/jcp.11.2.91-98

    Abstract: This paper proposes a simple intersection algorithm for two sorted integer sequences. Our algorithm is designed based on the x-fast trie since it provides efficient find and successor operators. We present that our algorithm outperforms the skip list based algorithm when one of the sets to be intersected is relatively ‘dense’ while the other one is (relatively) ‘sparse’. Finally, we propose some possible approaches which may optimize our algorithm further.

    Key words: Set intersection, algorithm, x-fast trie.

    1. Introduction

    Fast set intersection is a key operation in the context of several fields when it comes to the big data era [1], [2]. For example, modern search engines use set intersection over inverted posting lists, which are a standard data structure in information retrieval, to return relevant documents. So it has been studied in many domains and fields [3]-[8]. One of the most typical situations is the Boolean query, which is required to retrieve the documents that contain all the terms in the query. Besides, set intersection is naturally a key component of calculating the Jaccard coefficient of two sets, which is defined by the fraction of the size of the intersection and the union. In this paper, we propose a new algorithm via the x-fast trie, which is a kind of advanced data structure.
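    A rough sketch of the access pattern described in the abstract: walk the sparse set and probe the dense set with successor queries. Here Python's bisect over a sorted list stands in for the x-fast trie's find/successor operations, so this illustrates the idea rather than the paper's data structure:

```python
from bisect import bisect_left

def intersect(sparse, dense):
    """Intersect two sorted integer lists by probing the dense list with
    successor queries driven by the elements of the sparse list."""
    result = []
    i = 0
    for x in sparse:
        # Successor query: index of the first element of `dense` that is >= x.
        i = bisect_left(dense, x, i)
        if i == len(dense):
            break
        if dense[i] == x:
            result.append(x)
    return result

if __name__ == "__main__":
    print(intersect([3, 17, 42], list(range(0, 100, 2))))  # -> [42]
```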
  • Leftist Heap: A Binary Tree with the Normal Heap Ordering Property, but the Tree Is Not Balanced. In Fact It Attempts to Be Very Unbalanced!
    Leftist heap: a binary tree with the normal heap ordering property, but the tree is not balanced. In fact it attempts to be very unbalanced!

    Definition: the null path length npl(x) of node x is the length of the shortest path from x to a node without two children. The null path length of any node is 1 more than the minimum of the null path lengths of its children. (Let npl(nil) = −1.)

    Only the tree on the left is leftist. Null path lengths are shown in the nodes.

    Definition: the leftist heap property is that for every node x in the heap, the null path length of the left child is at least as large as that of the right child. This property biases the tree to get deep towards the left. It may generate very unbalanced trees, which facilitates merging! It also means that the right path down a leftist heap is as short as any path in the heap. In fact, the right path in a leftist tree of N nodes contains at most lg(N+1) nodes. We perform all the work on this right path, which is guaranteed to be short.

    Merging on a leftist heap. (Notice that an insert can be considered as a merge of a one-node heap with a larger heap.)

    1. (Magically and recursively) merge the heap with the larger root (6) with the right subheap (rooted at 8) of the heap with the smaller root, creating a leftist heap. Make this new heap the right child of the root (3) of h1.
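    A compact sketch of the merge operation just described, for a min-ordered leftist heap; the code is illustrative and not taken from the original text:

```python
class LeftistNode:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right
        self.npl = 0  # null path length; npl(None) is treated as -1

def npl(node):
    return node.npl if node is not None else -1

def merge(h1, h2):
    """Merge two leftist min-heaps and return the new root."""
    if h1 is None:
        return h2
    if h2 is None:
        return h1
    # Ensure h1 has the smaller root, then recursively merge the other
    # heap into h1's right subheap.
    if h2.key < h1.key:
        h1, h2 = h2, h1
    h1.right = merge(h1.right, h2)
    # Restore the leftist property: left child's npl >= right child's npl.
    if npl(h1.left) < npl(h1.right):
        h1.left, h1.right = h1.right, h1.left
    h1.npl = npl(h1.right) + 1
    return h1

def insert(heap, key):
    # An insert is just a merge with a one-node heap.
    return merge(heap, LeftistNode(key))
```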
  • Invitation to a Guest Lecture on Friday, 26 Nov. 2010
    Department of Computer Sciences (Fachbereich Computerwissenschaften)

    INVITATION to a guest lecture on Friday, 26 Nov. 2010, 15:15, room T01, Institute Building, Jakob-Haringer-Str. 2, Itzling

    Doug Simon, Sun Microsystems Laboratories

    Topic: “What a meta-circular JVM buys you - and what not!”

    Abstract: Since the open source release of the Maxine VM, it has progressed to the point where it can now run application servers such as Glassfish and WebLogic. With the recent addition of a new compiler that leverages the mature design behind the HotSpot client compiler (aka C1), the VM is on track to deliver performance on par with the HotSpot client compiler and better. At the same time, its adoption by VM researchers and enthusiasts is increasing. That is, we believe the productivity advantages of system-level programming in Java are being realized. This talk will highlight and demonstrate the advantages of both the Maxine architecture and of meta-circular JVM development in general.

    Bio: Doug Simon is a Principal Member of Technical Staff at Oracle working in Sun Labs and is currently leading the Maxine project, an open source virtual machine for the Java™ platform written in Java. Prior to Maxine, Doug co-developed Squawk, a CLDC-compliant JVM implemented also in Java. His first project at the labs was investigating secure, fine-grained dynamic provisioning of applications on small devices. Once again, this work was in the context of a VM, this time a modified version of the KVM, which was the last non-Java-in-Java VM Doug hacked on! Doug obtained a Bachelors in Information Technology from the University of Queensland in 1997, graduating with first class honors.
  • Chapter 10: Efficient Collections (Skip Lists, Trees)
    Chapter 10: Efficient Collections (skip lists, trees)

    If you performed the analysis exercises in Chapter 9, you discovered that selecting a bag-like container required a detailed understanding of the tasks the container will be expected to perform. Consider the following chart:

                  Dynamic array   Linked list   Ordered array
      add         O(1)+           O(1)          O(n)
      contains    O(n)            O(n)          O(log n)
      remove      O(n)            O(n)          O(n)

    If we are simply considering the cost to insert a new value into the collection, then nothing can beat the constant time performance of a simple dynamic array or linked list. But if searching or removals are common, then the O(log n) cost of searching an ordered list may more than make up for the slower cost to perform an insertion. Imagine, for example, an on-line telephone directory. There might be several million search requests before it becomes necessary to add or remove an entry. The benefit of being able to perform a binary search more than makes up for the cost of a slow insertion or removal.

    What if all three bag operations are more-or-less equal? Are there techniques that can be used to speed up all three operations? Are arrays and linked lists the only ways of organizing data for a bag? Indeed, they are not. In this chapter we will examine two very different implementation techniques for the Bag data structure. In the end they both have the same effect, which is providing O(log n) execution time for all three bag operations.
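    To make the contains row of the chart concrete, a small illustrative comparison (an assumed example, not code from the chapter):

```python
from bisect import bisect_left

def contains_linear(items, value):
    """O(n) scan, as in a dynamic array or linked list."""
    return any(x == value for x in items)

def contains_sorted(sorted_items, value):
    """O(log n) binary search, as in an ordered array."""
    i = bisect_left(sorted_items, value)
    return i < len(sorted_items) and sorted_items[i] == value
```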