Design and Provability of a Statically Configurable Hypervisor


Faculty of Electrical Engineering and Computer Science
Institute of Software Engineering and Theoretical Computer Science
Chair of Security in Telecommunications

Design and Provability of a Statically Configurable Hypervisor

Dissertation submitted by Dipl.-Inform. Jan Nordholz, born in Stadthagen, and accepted by Faculty IV – Electrical Engineering and Computer Science of Technische Universität Berlin for the degree of Doktor der Ingenieurwissenschaften (Dr.-Ing.).

Doctoral committee:
Chair: Prof. Dr. Hans-Ulrich Heiß, Technische Universität Berlin
Reviewer: Prof. Dr. Jean-Pierre Seifert, Technische Universität Berlin
Reviewer: Prof. Fabio Massacci, Ph.D., Università di Trento
Reviewer: Prof. Dr.-Ing. Stefan Jähnichen, Technische Universität Berlin

Date of the scientific defence: 2 June 2017
Berlin 2017

Abstract

In this thesis we develop a novel, minimalist design for Type I hypervisors and present a fully working prototype for the ARMv7 and ARMv8 architectures. We introduce its key design paradigm, the Principle of Staticity, elaborate on its consequences for the resulting implementation, and discuss scenarios where not all dynamicity can be eliminated. We also describe, implement, and evaluate the configuration framework necessary to build this specific kind of hypervisor. Even though the compiled hypervisor images are fully static, we show that the framework is powerful enough to support various system-on-chip target platforms, different address translation regimes, and even completely unrelated architectures. We further demonstrate its versatility by extending the framework to support even embedded processors which only contain a memory protection unit.

After performing a selection of benchmarks to substantiate the competitiveness of our implementation, we subject our hypervisor to an analysis using a specialized form of symbolic model checking. We succeed in deriving integrity properties for the hypervisor as well as among virtual machines with surprisingly little effort, but show that complexity ramps up immediately once dynamic components like a shadow paging unit are added to our design.

Finally, we demonstrate the extensibility of our design despite our postulated Principle of Staticity. As an optimization example, we introduce the concept of "lightweight VMs" and present a novel use case for a recent feature of the ARM architecture, thereby reducing the switch latency between virtual machines. We conclude with an overview of the published research efforts based on or heavily inspired by our hypervisor and discuss future research questions.

Contents

1. Introduction
2. Background
   2.1. A Brief History of Hypervisors
   2.2. A Brief History of Microkernels
   2.3. Converging Concepts
3. Structural Design
   3.1. Principle of Staticity
   3.2. Applying the Principle of Staticity
        3.2.1. Memory Management
        3.2.2. Virtualization Interface
        3.2.3. Scheduler
        3.2.4. Interrupts
        3.2.5. Emulated Devices
        3.2.6. Events and Timers
        3.2.7. Inter-VM Communication
        3.2.8. Virtual TLB
   3.3. Related Decisions
        3.3.1. Multicore Architecture
        3.3.2. Execution Model
        3.3.3. Preemptibility
   3.4. Phidias
4. Static Configuration
   4.1. Configurable Elements
        4.1.1. Hypervisor Core
        4.1.2. Virtual Machines
   4.2. Completing and Sanitizing
   4.3. Resolving Recursive Dependencies
   4.4. Generating the Data Structures
   4.5. Generating the Page Tables
   4.6. Schism
   4.7. Discussion
        4.7.1. Code Deduplication and Extensibility
        4.7.2. Complexity Shift
5. Evaluation
   5.1. Microbenchmarks
   5.2. Macrobenchmarks
   5.3. Worst-Case Latency
6. Provability
   6.1. Methodology
   6.2. Symbolic Execution Engine Design
   6.3. Implementation
   6.4. Evaluation
   6.5. Corner Cases
        6.5.1. Multicore Interaction
        6.5.2. Preemptibility
   6.6. Verifying the VTLB
7. Lightweight Virtual Machines
   7.1. Concept
   7.2. Hardware Virtualization Analysis
        7.2.1. VT-x and AMD-V
        7.2.2. ARM VE
   7.3. Implementation
        7.3.1. Technical Design
        7.3.2. Experimental Setup
   7.4. Evaluation
        7.4.1. Microbenchmarks
        7.4.2. VGIC Optimization
        7.4.3. Discussion
   7.5. Proof Implications
   7.6. Conclusion
8. Derived Work
   8.1. Direct Descendants
        8.1.1. XNPro
        8.1.2. RkDetect
   8.2. Technologically Related Articles
        8.2.1. Usurping Hypervisor Mode
        8.2.2. Covert Channels in Microkernels
   8.3. Ongoing Projects
9. Conclusion and Future Research
A. Code Fragments
   A.1. XML Architecture Definitions
   A.2. XML Sample Scenario Definition
   A.3. Proof Example Trace
B. Open Source Repositories
C. Glossary
D. List of Figures
E. List of Tables
F. Bibliography

1. Introduction

Microkernels had spent more than a decade on the sideline of academic research when Jochen Liedtke revived the idea by formulating his famous minimality principle [62] and creating the L3 microkernel and its successor L4. These embodiments of his principle became the ancestors of a large family of microkernels, and their success brought attention back to a seemingly uninteresting research field.

Since those days, the evolution of computing platforms has radically changed the circumstances. Contemporary personal computers easily run several virtual machines under the control of the host operating system, and even architectures with a clear focus on the embedded devices market, like ARM and MIPS, have entered the virtualization business. Hypervisors, the pieces of control software in charge of managing virtualization, follow a design paradigm that is not completely unlike that of microkernels; their history, however, is very different.

Due to the recent trend of consolidating disparate workloads onto a single compute platform, both designs increasingly find their way into highly critical environments, e.g. avionics, train control, and road vehicle electronics. All these environments require strict isolation of the individual workloads placed on the computing device. In addition to coding in special-purpose dialects like MISRA C and following safety-specific development processes like IEC 61508, many developers of microkernels and hypervisors have begun subjecting their existing designs to formal methods. Ideally these analyses result in proofs of formal properties of the corresponding implementation such as integrity, absence of covert channels, or even correctness; however, endeavours like seL4 [58] have demonstrated that generating such a proof is an arduous task which should not be underestimated. The complexity of a proof is significantly influenced by the amount of dynamicity in the underlying system design.
Dynamic subsystems like memory allocators, kernel object factories, balancing schedulers, and memory balloon mechanisms are in a way the only challenge to the creation of a formal model: each of these requires the specification of an invariant, a proof that the invariant holds under all possible operations and circumstances, especially the exhaustion case, and, if global resources are being managed, additional reasoning that the mechanism cannot be abused to covertly transmit data. Algorithms which operate on static data sets, on the other hand, like an iteration over a fixed number of elements of a linked list which is a priori guaranteed to be loop-free, can easily be expressed in formal terms.

We strongly believe that the path to a provably secure hypervisor implementation does not necessarily lead to the generation of a proof for one of the big existing general-purpose kernels or hypervisor cores. We argue instead that for most highly critical environments a far simpler implementation is sufficient. At the core of this simplification we propose to reduce dynamicity as much as possible by pregenerating data structures and choosing algorithmic parameters already at the compilation stage. The reduced complexity results in less code, retaining only the relevant device drivers and event handlers for the desired configuration, as well as in "simpler code" in terms of provability. This design paradigm does not even impose any versatility limitations on the resulting implementation, quite the contrary: we envision that a hypervisor designed in this way can still support multiple platforms, even multiple different architectures, with the build process carefully selecting the desired pieces of functionality and generating the necessary corresponding data structures to form an easily provable and perfectly tailored code subset.

We claim that this approach is viable because embedded computing platforms such as those designed for highly critical environments are meticulously specified in terms of processing power, available memory resources, and peripheral devices, as opposed to the construction-kit mentality of present-day desktop platforms. Likewise, the software components for such embedded devices usually have strict scheduling requirements and require a constant amount of CPU time to perform their tasks, whereas desktop computing is characterized by a vast variability of system utilization. We argue that the advantages of a static system configuration have not yet been fully recognized.

Another topic which poses extraordinary difficulty to formal methods is the correct and consistent modelling of multiprocessor interactions.
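To make the earlier contrast between dynamic subsystems and static data sets concrete, the fragment below sketches in plain C what a pregenerated, immutable descriptor table and its bounded traversal could look like. The names (vm_desc, vm_table, NUM_VMS) and the field layout are illustrative assumptions of this sketch, not the actual Phidias data structures.

    /* Illustrative sketch only: every object the hypervisor touches at runtime
     * exists as a compile-time constant, so every loop has a statically known
     * bound and no allocator invariant needs to be proven. */
    #include <stdint.h>
    #include <stddef.h>

    #define NUM_VMS 3u  /* fixed by the build-time scenario description */

    struct vm_desc {
        uint32_t  id;            /* immutable identity */
        uintptr_t pagetable_pa;  /* physical address of a pregenerated page table */
        uint32_t  vcpu_count;    /* number of virtual CPUs, fixed at build time */
    };

    /* Emitted by the configuration tool into a read-only section. */
    static const struct vm_desc vm_table[NUM_VMS] = {
        { .id = 0, .pagetable_pa = 0x40000000u, .vcpu_count = 1 },
        { .id = 1, .pagetable_pa = 0x40100000u, .vcpu_count = 1 },
        { .id = 2, .pagetable_pa = 0x40200000u, .vcpu_count = 2 },
    };

    /* Lookup with a trivially bounded loop: NUM_VMS iterations, no allocation,
     * no failure mode beyond "not found". */
    const struct vm_desc *vm_lookup(uint32_t id)
    {
        for (size_t i = 0; i < NUM_VMS; i++)
            if (vm_table[i].id == id)
                return &vm_table[i];
        return NULL;
    }

Because the table is immutable and its length is a compile-time constant, the only properties left to argue about such a lookup are its loop bound and its read-only accesses, which is exactly the kind of reasoning that remains easy to express in formal terms.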
Recommended publications
  • Improving the Reliability of Commodity Operating Systems
    Improving the Reliability of Commodity Operating Systems
    Michael M. Swift, Brian N. Bershad, and Henry M. Levy, University of Washington
    Despite decades of research in extensible operating system technology, extensions such as device drivers remain a significant cause of system failures. In Windows XP, for example, drivers account for 85% of recently reported failures. This paper describes Nooks, a reliability subsystem that seeks to greatly enhance OS reliability by isolating the OS from driver failures. The Nooks approach is practical: rather than guaranteeing complete fault tolerance through a new (and incompatible) OS or driver architecture, our goal is to prevent the vast majority of driver-caused crashes with little or no change to existing driver and system code. Nooks isolates drivers within lightweight protection domains inside the kernel address space, where hardware and software prevent them from corrupting the kernel. Nooks also tracks a driver's use of kernel resources to facilitate automatic clean-up during recovery. To prove the viability of our approach, we implemented Nooks in the Linux operating system and used it to fault-isolate several device drivers. Our results show that Nooks offers a substantial increase in the reliability of operating systems, catching and quickly recovering from many faults that would otherwise crash the system. Under a wide range and number of fault conditions, we show that Nooks recovers automatically from 99% of the faults that otherwise cause Linux to crash. While Nooks was designed for drivers, our techniques generalize to other kernel extensions. We demonstrate this by isolating a kernel-mode file system and an in-kernel Internet service.
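    The resource-tracking idea summarized above can be illustrated with a short sketch. The wrapper and per-driver table below are hypothetical constructions for this illustration, not Nooks' actual interfaces; they merely stand in for an interposition layer that records a driver's kernel allocations so they can be released after a fault.

        /* Hypothetical illustration of per-driver resource tracking: every
         * allocation a driver makes goes through a wrapper that records it, so a
         * recovery path can release everything the failed driver still held. */
        #include <stdio.h>
        #include <stdlib.h>

        #define MAX_TRACKED 256

        struct driver_domain {
            const char *name;
            void *objects[MAX_TRACKED];   /* kernel objects handed to this driver */
            int count;
        };

        /* Wrapper around the allocator: forward the call, remember the result. */
        static void *domain_alloc(struct driver_domain *d, size_t size)
        {
            void *obj = malloc(size);     /* stands in for the kernel allocator */
            if (obj && d->count < MAX_TRACKED)
                d->objects[d->count++] = obj;
            return obj;
        }

        /* Recovery: release everything the driver obtained, then restart it. */
        static void domain_recover(struct driver_domain *d)
        {
            for (int i = 0; i < d->count; i++)
                free(d->objects[i]);
            d->count = 0;
            printf("driver %s: resources reclaimed, ready for restart\n", d->name);
        }

        int main(void)
        {
            struct driver_domain nic = { .name = "nic_driver" };
            domain_alloc(&nic, 128);      /* driver requests a buffer */
            domain_alloc(&nic, 64);
            domain_recover(&nic);         /* simulate recovery after a driver fault */
            return 0;
        }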
  • A Binary Decision Diagram Based Approach on Improving Probabilistic Databases
    A Binary Decision Diagram based approach on improving Probabilistic Databases
    Kaj Derek van Rijn, University of Twente, P.O. Box 217, 7500AE Enschede, The Netherlands, [email protected]
    ABSTRACT: This research focusses on improving Binary Decision Diagram algorithms in the scope of probabilistic databases. The driving factor is to create probabilistic databases that scale; the first order logic formulae generated when retrieving data from these databases create a bottleneck in the scalability in the number of random variables. It is believed that Binary Decision Diagrams are capable of manipulating these formulae in a more scalable way. This research studies the complexity of existing BDD algorithms with regards to characteristics such as the depth and breadth of a tree, the scale and other metrics. These results will be evaluated and compared to the characteristics of equations typically produced by probabilistic databases.
    … to be insufficient [5]. This research hopes to come up with algorithms that can create, evaluate and perform logical operations on BDDs in a more efficient way.
    2. BACKGROUND
    2.1 Probabilistic Databases
    Many real world databases contain uncertainty in the correctness of the data; probabilistic databases deal with such uncertainty. We assign probability values (between 0 and 1) to a group of entries in the database; the probabilities in this group add to one. For example, entries A and B have probability values of 0.4 and 0.6 respectively. Probabilistic databases keep track of multiple states and play out different scenarios; a query will not just return a single right
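    The group semantics described above can be made concrete with a tiny sketch; the struct and query below are illustrative stand-ins chosen for this summary, not the paper's actual representation.

        /* Illustrative only: one "group" of mutually exclusive alternatives whose
         * probabilities sum to one, and the probability that a query matches. */
        #include <stdio.h>
        #include <string.h>

        struct alternative { const char *value; double p; };

        /* Probability that the uncertain entry equals the queried value. */
        static double prob_equals(const struct alternative *g, int n, const char *q)
        {
            double p = 0.0;
            for (int i = 0; i < n; i++)
                if (strcmp(g[i].value, q) == 0)
                    p += g[i].p;
            return p;
        }

        int main(void)
        {
            struct alternative group[] = { { "A", 0.4 }, { "B", 0.6 } };
            printf("P(entry = A) = %.1f\n", prob_equals(group, 2, "A"));
            return 0;
        }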
  • Xtratum User Manual
    XtratuM Hypervisor for LEON4, Volume 2: XtratuM User Manual
    Miguel Masmano, Alfons Crespo, Javier Coronel. November, 2012. Reference: xm-4-usermanual-047d
    DOCUMENT CONTROL PAGE
    TITLE: XtratuM Hypervisor for LEON4: Volume 2: XtratuM User Manual
    AUTHOR/S: Miguel Masmano, Alfons Crespo, Javier Coronel
    LAST PAGE NUMBER: 134
    VERSION OF SOURCE CODE: XtratuM 4 for LEON3 ()
    REFERENCE ID: xm-4-usermanual-047d
    SUMMARY: This guide describes the fundamental concepts and the features provided by the API of the XtratuM hypervisor.
    DISCLAIMER: This documentation is currently under active development. Therefore, there are no explicit or implied warranties regarding any properties, including, but not limited to, correctness and fitness for purpose. Contributions to this documentation (new material, suggestions or corrections) are welcome.
    REFERENCING THIS DOCUMENT:
    @techreport{xm-4-usermanual-047d,
      title       = {XtratuM Hypervisor for LEON4: Volume 2: XtratuM User Manual},
      author      = {Miguel Masmano and Alfons Crespo and Javier Coronel},
      institution = {Universidad Politécnica de Valencia},
      number      = {xm-4-usermanual-047d},
      year        = {November, 2012},
    }
    Copyright © November, 2012 Miguel Masmano, Alfons Crespo, Javier Coronel. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".
    Changes:
    Version 0.1, November 2011 [xm-4-usermanual-047]: Initial document
    Version 0.2, March 2012 [xm-4-usermanual-047b]: IOMMU included.
  • Access Control and Operating System Security
    Access Control and Operating System Security – John Mitchell
    Outline (may not finish in one lecture):
    • Access Control Concepts: matrix, ACL, capabilities; multi-level security (MLS)
    • OS Mechanisms: Multics (ring structure); Amoeba (distributed, capabilities); Unix (file system, setuid); Windows (file system, tokens, EFS); SE Linux (role-based, domain type enforcement)
    • Secure OS: methods for resisting stronger attacks
    • Assurance: Orange Book, TCSEC; Common Criteria; Windows 2000 certification
    • Some Limitations: information flow; covert channels
    Access control – common assumptions:
    • The system knows who the user is (the user has entered a name and password, or other info).
    • Access requests pass through a gatekeeper; the OS must be designed so that the reference monitor cannot be bypassed.
    [slide figure: user process -> reference monitor -> resource; the monitor decides whether the user can apply the operation to the resource]
    Access control matrix [Lampson]: subjects (User 1 … User m) as rows, objects (File 1 … File n) as columns, e.g.
                File 1   File 2   File 3   …       File n
      User 1    read     write    -        -       read
      User 2    write    write    write    -       -
      User 3    -        -        -        read    read
      …
      User m    read     write    read     write   read
    Two implementation concepts:
    • Access control list (ACL): store a column of the matrix with the resource, e.g.
                File 1   File 2   …
      User 1    read     write    -
      User 2    write    write    -
      User 3    -        -        read
      …
      User m    read     write    write
    • Capability: the user holds a "ticket" for each resource; two variations – store
    Capabilities: an operating system concept ("… of the future and always will be …"). Examples: Dennis and van Horn, MIT PDP-1 Timesharing; Hydra, StarOS, Intel iAPX 432, Eros, …; Amoeba: distributed, unforgeable tickets. References
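    As a concrete reading of the "two implementation concepts" above, the fragment below sketches the same matrix stored column-wise (an ACL kept with each object) and row-wise (capabilities held by each subject). The structures are illustrative for this summary and are not taken from any of the systems listed.

        /* Illustrative only: one column of the Lampson matrix stored with the
         * object (an ACL) versus one row stored with the subject (capabilities). */
        #include <stdio.h>

        enum { R = 1, W = 2 };                           /* rights bitmask */

        struct acl_entry  { int user;   int rights; };   /* kept with the object  */
        struct capability { int object; int rights; };   /* kept with the subject */

        /* ACL check: walk the object's list looking for the requesting user. */
        static int acl_allows(const struct acl_entry *acl, int n, int user, int want)
        {
            for (int i = 0; i < n; i++)
                if (acl[i].user == user && (acl[i].rights & want) == want)
                    return 1;
            return 0;
        }

        /* Capability check: the "ticket" itself names the object and the rights. */
        static int cap_allows(const struct capability *cap, int object, int want)
        {
            return cap->object == object && (cap->rights & want) == want;
        }

        int main(void)
        {
            struct acl_entry file1_acl[] = { { 1, R }, { 2, R | W } };  /* File 1's ACL */
            struct capability user1_cap  = { 1, R };                    /* User 1's ticket for File 1 */

            printf("ACL: may user 2 write File 1? %d\n", acl_allows(file1_acl, 2, 2, W));
            printf("Cap: may user 1 write File 1? %d\n", cap_allows(&user1_cap, 1, W));
            return 0;
        }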
  • Research Purpose Operating Systems – a Wide Survey
    GESJ: Computer Science and Telecommunications 2010, No. 3(26), ISSN 1512-1232
    RESEARCH PURPOSE OPERATING SYSTEMS – A WIDE SURVEY
    Pinaki Chakraborty, School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi – 110067, India. E-mail: [email protected]
    Abstract: Operating systems constitute a class of vital software. A plethora of operating systems, of different types and developed by different manufacturers over the years, are available now. This paper concentrates on research purpose operating systems because many of them have high technological significance and have been vividly documented in the research literature. Thirty-four academic and research purpose operating systems are briefly reviewed in this paper. It was observed that the microkernel-based architecture is widely used to design research purpose operating systems. It was also noticed that object-oriented operating systems are emerging as a promising option. Hence, the paper concludes by suggesting a study of the scope of microkernel-based object-oriented operating systems.
    Keywords: operating system, research purpose operating system, object oriented operating system, microkernel
    1. Introduction
    An operating system is software that manages all the resources of a computer, both hardware and software, and provides an environment in which a user can execute programs in a convenient and efficient manner [1]. However, the principles and concepts used in operating systems were not standardized in a day. In fact, operating systems have been evolving through the years [2]. There were no operating systems in the early computers. In those systems, every program required a full hardware specification to execute correctly and perform each trivial task, and its own drivers for peripheral devices like card readers and line printers.
  • TECOM Project Results Leaflet
    Project Results: TECOM (ITEA2 ~ 06038)
    Delivering trust in embedded systems – offering secure and dependable solutions for a wide range of applications
    The TECOM (Trusted Embedded Computing) project has developed architectures and solutions combining embedded trust services and trusted operating system technologies to ensure security and dependability in a wide range of complex and dynamic embedded systems. The project focused on enabling multiple applications to be run safely on the same systems and processors while acting totally independently of each other.
    ABSTRACT ARCHITECTURES: TECOM focused on the growing demand for execution platforms in embedded systems that address both integrity and security concerns. It developed abstract architectures based on generic modules involving on one side an embedded trust services layer offering hardware security, and on the other trusted operating system technology involving system and middleware space. The result can be customised to a specific application. Applications range from protecting film rights in video-on-demand applications to ensuring bug-free software upgrades in domestic appliances.
    The state-of-the-art approach to trust at systems level, close to the processor, is some form of hypervisor or virtualisation application for securely partitioning the applications. While this has already been developed for use on PCs, where it was possible to run two windows independently at the same time, it has not been available for embedded systems.
    Industry and society are increasingly dependent on embedded systems that are becoming ever more complex, dynamic and open, while interacting with progressively more demanding and heterogeneous
    Partners: Atego, EADS DS, Fagor, Ikerlan, Technicolor, Technikon, Trialog, Universidad Politécnica de Madrid, Universidad Politécnica de Valencia, Visual Tools
    Countries involved: Austria, France, Spain
    Project start: September 2007 – Project end: August 2010
  • An Introduction to Binary Decision Diagrams
    An Introduction to Binary Decision Diagrams
    Henrik Reif Andersen
    [cover figure: a small BDD over the variables x, y, z with terminal nodes 1 and 0]
    Lecture notes for Efficient Algorithms and Programs, Fall 1999. E-mail: [email protected]. Web: http://www.itu.dk/people/hra. The IT University of Copenhagen, Glentevej 67, 2400 Copenhagen NV.
    Preface
    This note is a short introduction to Binary Decision Diagrams. It provides some background knowledge and describes the core algorithms. More details can be found in Bryant's original paper on Reduced Ordered Binary Decision Diagrams [Bry86] and the survey paper [Bry92]. A recent extension called Boolean Expression Diagrams is described in [AH97]. This note is a revision of an earlier version from fall 1996 (based on versions from 1995 and 1994). The major differences are as follows: Firstly, ROBDDs are now viewed as nodes of one global graph with one fixed ordering to reflect the state of the art of efficient BDD packages. The algorithms have been changed (and simplified) to reflect this fact. Secondly, a proof of the canonicity lemma has been added. Thirdly, the sections presenting the algorithms have been completely restructured. Finally, the project proposal has been revised.
    Acknowledgements
    Thanks to the students on the courses of fall 1994, 1995, and 1996 for helping me debug and improve the notes. Thanks are also due to Hans Rischel, Morten Ulrik Sørensen, Niels Maretti, Jørgen Staunstrup, Kim Skak Larsen, Henrik Hulgaard, and various people on the Internet who found typos and suggested improvements.
    Contents
    1 Boolean Expressions
    2 Normal Forms
    3 Binary Decision Diagrams
    4 Constructing and Manipulating ROBDDs
    4.1 MK
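    The MK operation that section 4.1 of these notes covers can be sketched briefly; the fixed-size node table and linear-scan uniqueness check below are simplifications chosen for this sketch, not the package design from the notes.

        /* Minimal sketch of ROBDD nodes and the MK ("make node") operation:
         * reuse an existing node whenever possible so the global graph stays
         * reduced and canonical. */
        #include <stdio.h>

        #define MAX_NODES 1024

        struct bdd_node { int var, low, high; };   /* variable index, child node ids */

        static struct bdd_node T[MAX_NODES] = {
            { 9999, -1, -1 },                      /* node 0: terminal "false" */
            { 9999, -1, -1 },                      /* node 1: terminal "true"  */
        };
        static int node_count = 2;

        static int mk(int var, int low, int high)
        {
            if (low == high)                       /* redundant test: skip the node */
                return low;
            for (int u = 2; u < node_count; u++)   /* unique-table lookup */
                if (T[u].var == var && T[u].low == low && T[u].high == high)
                    return u;
            T[node_count] = (struct bdd_node){ var, low, high };
            return node_count++;
        }

        int main(void)
        {
            int a = mk(1, 0, 1);                   /* the function "x1" */
            int b = mk(1, 0, 1);                   /* same triple: same node returned */
            printf("a=%d b=%d shared=%s\n", a, b, a == b ? "yes" : "no");
            return 0;
        }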
  • Spatial Isolation Refers to the System Ability to Detect and Avoid the Possibility That a Partition Can Access Another Partition for Reading or Writing
    Virtualización (Virtualization) – Apolinar González, Alfons Crespo
    OUTLINE: Introduction; Virtualisation techniques; Hypervisors and real-time; TSP roles and functions; Scheduling issues; Case study: XtratuM
    Preliminary concepts:
    • Virtual machine (VM): software that implements a machine (computer) with the same behaviour as the real one.
    • Hypervisor (also virtual machine monitor, VMM): a software layer (or combination of software and hardware) that allows several independent execution environments, or partitions, to run on one computer.
    • Partition: an execution environment for programs. Examples: Linux + applications; a real-time operating system + tasks; …
    INTRODUCTION: Isolation
    Temporal isolation refers to the system's ability to execute several partitions while guaranteeing:
    • the timing constraints of the partition tasks, and
    • that the execution of each partition does not depend on the temporal behaviour of other partitions.
    Temporal isolation is enforced by means of a scheduling policy: cyclic scheduling (as in ARINC 653), a periodic priority server, an EDF server, or priorities.
    PARTITIONED SYSTEMS – Temporal Isolation
    [slide figure: a MAF (Major Frame) divided into slots (temporal windows), each defined by an origin relative to the MAF, a duration and an assigned partition, e.g. slot id = 3, start = 400 ms, duration = 100 ms, partition P1; the sequence P1 P2 P1 P3 P2 P2 P2 repeats over MAF 1, MAF 2, MAF 3, MAF 4]
    INTRODUCTION: Isolation
    Spatial isolation refers to the system's ability to detect and avoid the possibility that a partition can access another partition for reading or writing. The hardware
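    The slot attributes from the slide (origin relative to the MAF, duration, assigned partition) map naturally onto a static table; the sketch below shows one possible plain-C representation with a bounded dispatch lookup. The names and the 500 ms major frame are assumptions of this sketch, not XtratuM's actual configuration format.

        /* Minimal sketch of an ARINC 653-style cyclic plan as a static table. */
        #include <stdio.h>
        #include <stdint.h>

        struct slot {
            uint32_t start_ms;     /* offset from the start of the major frame */
            uint32_t duration_ms;  /* length of the temporal window */
            uint32_t partition;    /* partition scheduled in this slot */
        };

        #define MAF_MS 500u        /* major frame length */

        static const struct slot plan[] = {
            {   0, 200, 1 },       /* P1 */
            { 200, 200, 2 },       /* P2 */
            { 400, 100, 1 },       /* P1, cf. "slot id = 3" on the slide */
        };

        /* Given a time within the major frame, find the active partition. */
        static uint32_t active_partition(uint32_t t_ms)
        {
            uint32_t off = t_ms % MAF_MS;
            for (size_t i = 0; i < sizeof(plan) / sizeof(plan[0]); i++)
                if (off >= plan[i].start_ms && off < plan[i].start_ms + plan[i].duration_ms)
                    return plan[i].partition;
            return 0;              /* idle */
        }

        int main(void)
        {
            for (uint32_t t = 0; t < 2 * MAF_MS; t += 100)
                printf("t=%4u ms -> partition %u\n",
                       (unsigned)t, (unsigned)active_partition(t));
            return 0;
        }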
  • Virtualization Technologies Overview Course: CS 490 by Mendel Rosenblum
    Virtualization technologies overview. Course: CS 490, by Mendel Rosenblum.
    [flattened comparison table: for each system the original compares whether it can boot an OS on another disk partition as guest, USB support, GUI, live memory allocation, 3D acceleration (e.g. KVM and QEMU via VMGL), snapshots of a running system, and live migration. Systems covered: Bochs, Containers, Cooperative Linux, Denali, DOSBox, DOSEMU, FreeVPS, GXemul, Hercules, Hyper-V, iCore Virtual Accounts, Imperas OVP Tools, Integrity Virtual Machines, Jail, KVM, Linux-VServer, LynxSecure, Mac-on-Linux, Mac-on-Mac, OpenVZ, Oracle VM, OVPsim, Padded Cell for x86 and for PowerPC (Green Hills Software), Parallels Desktop for Mac, Parallels Workstation, PearPC, POWER Hypervisor (PHYP), QEMU, and QEMU with the kqemu module. The individual cell values are not reliably recoverable from this extraction.]
  • Mixed-Criticality Scheduling and Resource Sharing for High-Assurance Operating Systems
    Mixed-Criticality Scheduling and Resource Sharing for High-Assurance Operating Systems
    Anna Lyons. Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy, School of Computer Science and Engineering, University of New South Wales, Sydney, Australia, September 2018.
    Abstract
    Criticality of a software system refers to the severity of the impact of a failure. In a high-criticality system, failure risks significant loss of life or damage to the environment. In a low-criticality system, failure may risk a downgrade in user experience. As the criticality of a software system increases, so too do the cost and time to develop that software: raising the criticality also raises the assurance level, with the highest levels requiring extensive, expensive, independent certification. For modern cyber-physical systems, including autonomous aircraft and other vehicles, the traditional approach of isolating systems of different criticality by using completely separate physical hardware is no longer practical, being both restrictive and inefficient. The result is mixed-criticality systems, where software applications with different criticalities execute on the same hardware. Sufficient mechanisms are required to ascertain that software in mixed-criticality systems is sufficiently isolated; otherwise, all software on that hardware is promoted to the highest criticality level, driving up costs to impractical levels. For mixed-criticality systems to be viable, both spatial and temporal isolation are required. Current aviation standards allow for mixed-criticality systems where temporal and spatial resources are strictly and statically partitioned in time and space, allowing some improvement over fully isolated hardware. However, further improvements are not only possible, but required for future innovation in cyber-physical systems.
  • Legacy Reuse
    Faculty of Computer Science, Institute for System Architecture, Operating Systems Group
    LEGACY REUSE – Carsten Weinhold
    THIS LECTURE
    • So far: basic microkernel concepts; drivers, resource management
    • Today: how to provide legacy OS personalities; how to reuse existing infrastructure; how to make applications happy
    VIRTUALIZATION
    • Virtualization: reuse legacy OS + applications; run applications in their natural environment
    • Problem: applications trapped in VMs; different resource pools and namespaces; cooperation is cumbersome (network, ...); a full legacy OS in a VM adds overhead; multiple desktops give a bad user experience
    MAKING THE CUT
    • Hardware level (next week): virtualize the legacy OS on top of the new OS
    • Operating system personality: legacy OS interfaces reimplemented on top of, or ported to, the new OS
    • Hybrid operating systems (today): run the legacy OS virtualized, but tightly integrated with the new OS
    OS PERSONALITY
    • Idea: adapt the OS/application boundary; (re-)implement legacy APIs, not the whole OS; may need to recompile the application
    • Benefits: get the desired applications and established APIs; good integration (namespaces, files, ...); smaller overhead than virtualization; flexible and configurable, but more effort?
    MONOLITHIC KERNELS
    [slide figure: applications on top of a monolithic kernel containing the system call entry, Ext2, VFAT, an IP stack, a disk driver and a NIC driver]
    DECOMPOSITION
    [slide figure, cut off in this preview: App, App, App, App, Monolithic …]
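    The "reimplement legacy APIs, not the whole OS" idea from the slides can be illustrated with a small sketch: a POSIX-style open() provided as a thin personality shim over a new kernel's native call. new_os_open() is a hypothetical stand-in, stubbed out here so the sketch compiles on its own, and the shim is named legacy_open to avoid clashing with the host libc.

        /* Hedged sketch of an OS personality shim: translate a legacy API call
         * into the new OS's native interface and error convention. */
        #include <stdio.h>
        #include <errno.h>

        /* Stub for the new OS's native call (negative return = error code). */
        static long new_os_open(const char *path, unsigned long flags)
        {
            (void)flags;
            return path ? 3 : -ENOENT;   /* pretend we opened the file as handle 3 */
        }

        /* Legacy-style entry point offered to (possibly recompiled) applications. */
        static int legacy_open(const char *path, int flags)
        {
            long handle = new_os_open(path, (unsigned long)flags);
            if (handle < 0) {
                errno = (int)-handle;    /* map the native error convention onto errno */
                return -1;
            }
            return (int)handle;          /* hand back a legacy-style file descriptor */
        }

        int main(void)
        {
            printf("fd = %d\n", legacy_open("/etc/motd", 0));
            return 0;
        }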
  • M.Tech-DSCE- Academic Regulations & Syllabus – MLR18
    ACADEMIC REGULATIONS AND COURSE STRUCTURE – CHOICE BASED CREDIT SYSTEM (MLR20)
    EMBEDDED SYSTEMS, for Master of Technology (M.Tech) – Regular Two Year Degree Program (for batches admitted from the academic year 2020-2021)
    MLR Institute of Technology (Autonomous), Laxman Reddy Avenue, Dundigal, Hyderabad – 500043, Telangana State. www.mlrinstitutions.ac.in, Email: [email protected]
    FOREWORD
    The autonomy conferred on MLR Institute of Technology by the UGC is based on its performance as well as its future commitment and competency to impart quality education. It is a mark of its ability to function independently in accordance with the set norms of monitoring bodies like the UGC and AICTE. It reflects the confidence of the UGC that the autonomous institution will uphold and maintain the standards it is expected to deliver on its own behalf, and the UGC thus awards degrees on behalf of the college. An autonomous institution is therefore given the freedom to have its own curriculum, examination system and monitoring mechanism, independent of the affiliating University but under its observance. MLR Institute of Technology is proud to win the credence of all the above bodies monitoring the quality in education and has gladly accepted the responsibility of sustaining, if not improving upon, the standards and ethics for which it has been striving for more than a decade in reaching its present standing in the arena of contemporary technical education. As a follow-up, statutory bodies like the Academic Council and Boards of Studies are constituted with the guidance of the Governing Body of the College and recommendations of JNTU Hyderabad to frame the regulations, course structure and syllabi under autonomous status.