A Hardware Virtualization Based Component Sandboxing Architecture
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Allgemeines Abkürzungsverzeichnis (General List of Abbreviations)
Effects and Opportunities of Native Code Extensions for Computationally Demanding Web Applications
Dissertation for the academic degree of Dr. phil. in Library and Information Science, submitted to the Philosophische Fakultät I, Humboldt-Universität zu Berlin, by Dipl.-Inform. Dennis Jarosch. President of Humboldt-Universität zu Berlin: Prof. Dr. Jan-Hendrik Olbertz. Dean of the Philosophische Fakultät I: Prof. Michael Seadle, Ph.D. Reviewers: 1. Prof. Dr. Robert Funk, 2. Prof. Michael Seadle, Ph.D. Submitted: 28.10.2011. Oral examination: 16.12.2011.

Abstract: The World Wide Web is amidst a transition from interactive websites to web applications. An increasing number of users perform their daily computing tasks entirely within the web browser — turning the Web into an important platform for application development. The Web as a platform, however, lacks the computational performance of native applications. This problem has motivated the inception of Microsoft Xax and Google Native Client (NaCl), two independent projects that facilitate the development of native web applications. Native web applications allow the extension of conventional web applications with compiled native code, while maintaining operating system portability. This dissertation determines the benefits and drawbacks of native web applications. It also addresses the question of how the performance of JavaScript web applications compares to that of native applications and native web applications. Four application benchmarks are introduced that focus on different performance aspects: number crunching (serial and parallel), 3D graphics performance, and data processing. A performance analysis is undertaken in order to determine and compare the performance characteristics of native C applications, JavaScript web applications, and NaCl native web applications.
Improving System Security Through TCB Reduction
Bernhard Kauer, March 31, 2015. Dissertation submitted to Technische Universität Dresden, Faculty of Computer Science, for the academic degree of Doktoringenieur (Dr.-Ing.). First reviewer: Prof. Dr. rer. nat. Hermann Härtig, Technische Universität Dresden. Second reviewer: Prof. Dr. Paulo Esteves Veríssimo, University of Luxembourg. Defense: 15.12.2014.

Abstract: The OS (operating system) is the primary target of today's attacks. A single exploitable defect can be sufficient to break the security of the system and give full control over all the software on the machine. Because current operating systems are too large to be defect free, the best approach to improving system security is to reduce their code to more manageable levels. This work shows how the security-critical part of the OS, the so-called TCB (Trusted Computing Base), can be reduced from millions to less than a hundred thousand lines of code to achieve these security goals. Shrinking the software stack by more than an order of magnitude is an open challenge, since no single technique can currently achieve this. We therefore followed a holistic approach and improved the design as well as the implementation of several system layers, starting with a new OS called NOVA. NOVA provides a small TCB both for newly written applications and for legacy code running inside virtual machines. Virtualization is thereby the key technique to ensure that compatibility requirements will not increase the minimal TCB of our system. The main contribution of this work is to show how the virtual machine monitor for NOVA was implemented with significantly fewer lines of code without affecting the performance of its guest OS.
Understanding the Microsoft Office 2013 Protected-View Sandbox
MWRI PUBLIC. By Yong Chuan, Koh (@yongchuank), 2015/07/09. mwrinfosecurity.com | © MWR InfoSecurity.

Table of Contents: 1. Introduction; 2. Sandbox Internals; 2.1 Architecture; 2.1.1 Interception Component; 2.1.2 Elevation Policy Manager; 2.1.3 Inter-Process Communication; 2.2 Sandbox Restrictions; 2.2.1 Sandbox Initialization; 2.2.2 File Locations; 2.2.3 Registry Keys; 2.2.4 Network Connections
Protected Mode - Wikipedia
In computing, protected mode, also called protected virtual address mode,[1] is an operational mode of x86-compatible central processing units (CPUs). It allows system software to use features such as virtual memory, paging and safe multi-tasking designed to increase an operating system's control over application software.[2][3] When a processor that supports x86 protected mode is powered on, it begins executing instructions in real mode, in order to maintain backward compatibility with earlier x86 processors.[4] Protected mode may only be entered after the system software sets up one descriptor table and enables the Protection Enable (PE) bit in the control register 0 (CR0).[5] Protected mode was first added to the x86 architecture in 1982,[6] with the release of Intel's 80286 (286) processor, and later extended with the release of the 80386 (386) in 1985.[7] Due to the enhancements added by protected mode, it has become widely adopted and has become the foundation for all subsequent enhancements to the x86 architecture,[8] although many of those enhancements, such as added instructions and new registers, also brought benefits to the real mode.

Contents: History; The 286; The 386; 386 additions to protected mode; Entering and exiting protected mode; Features; Privilege levels; Real mode application compatibility; Virtual 8086 mode; Segment addressing; Protected mode; 286; 386; Structure of segment descriptor entry; Paging; Multitasking; Operating systems; See also; References; External links.

https://en.wikipedia.org/wiki/Protected_mode
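As a rough companion to the CR0 step described in the excerpt above, the following fragment is a minimal sketch (not taken from any of the listed documents) of setting the Protection Enable bit with GCC inline assembly. It assumes ring 0, i.e. a boot-loader or kernel context on an x86 target, and that a descriptor table has already been loaded with lgdt and a far jump follows to reload CS; none of that is shown here.

```c
/* Minimal, illustrative sketch only: flips CR0.PE as described above.
 * Must run at ring 0; a GDT must already be loaded via lgdt, and a far
 * jump must follow to reload CS. Neither step is shown. */
static inline void enable_protection(void)
{
    unsigned long cr0;

    __asm__ volatile ("mov %%cr0, %0" : "=r"(cr0));   /* read CR0 */
    cr0 |= 1UL;                                       /* set PE, bit 0 */
    __asm__ volatile ("mov %0, %%cr0" : : "r"(cr0));  /* write it back */
}
```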
Lock-in-Pop: Securing Privileged Operating System Kernels by Keeping on the Beaten Path
Yiwen Li, Brendan Dolan-Gavitt, Sam Weber, and Justin Cappos, New York University. https://www.usenix.org/conference/atc17/technical-sessions/presentation/li-yiwen This paper is included in the Proceedings of the 2017 USENIX Annual Technical Conference (USENIX ATC ’17), July 12–14, 2017, Santa Clara, CA, USA. ISBN 978-1-931971-38-6. Open access to the Proceedings of the 2017 USENIX Annual Technical Conference is sponsored by USENIX.

Abstract: Virtual machines (VMs) that try to isolate untrusted code are widely used in practice. However, it is often possible to trigger zero-day flaws in the host Operating System (OS) from inside of such virtualized systems. In this paper, we propose a new security metric showing strong correlation between “popular paths” and kernel vulnerabilities. We verify that the OS kernel paths accessed by popular applications in everyday use contain …

… code needed to run such a system increases the odds that flaws will be present, and, in turn, that tens of millions of user machines could be at risk [25]. Furthermore, isolation will not work if a malicious program can access even a small portion of the host OS’s kernel that contains a zero-day flaw [12]. Both of these drawbacks reveal the key underlying weakness in designing OSVM systems – a lack of information as to which parts of the host kernel can be safely exported to user programs.
A+ Certification for Dummies, 2nd Edition
A+ Certification for Dummies, Second Edition by Ron Gilster. ISBN: 0764508121 | Hungry Minds © 2001, 567 pages. Your fun and easy guide to Exams 220-201 and 220-202!

A+ Certification For Dummies by Ron Gilster. Published by Hungry Minds, Inc., 909 Third Avenue, New York, NY 10022. www.hungryminds.com, www.dummies.com. Copyright © 2001 Hungry Minds, Inc. All rights reserved. No part of this book, including interior design, cover design, and icons, may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording, or otherwise) without the prior written permission of the publisher. Library of Congress Control Number: 2001086260. ISBN: 0-7645-0812-1. Printed in the United States of America. 10 9 8 7 6 5 4 3 2 1. 2O/RY/QU/QR/IN. Distributed in the United States by Hungry Minds, Inc. Distributed by CDG Books Canada Inc. for Canada; by Transworld Publishers Limited in the United Kingdom; by IDG Norge Books for Norway; by IDG Sweden Books for Sweden; by IDG Books Australia Publishing Corporation Pty. Ltd. for Australia and New Zealand; by TransQuest Publishers Pte Ltd. for Singapore, Malaysia, Thailand, Indonesia, and Hong Kong; by Gotop Information Inc. for Taiwan; by ICG Muse, Inc. for Japan; by Intersoft for South Africa; by Eyrolles for France; by International Thomson Publishing for Germany, Austria and Switzerland; by Distribuidora Cuspide for Argentina; by LR International for Brazil; by Galileo Libros for Chile; by Ediciones ZETA S.C.R. Ltda. for Peru; by WS Computer Publishing Corporation, Inc., for the Philippines; by Contemporanea de Ediciones for Venezuela; by Express Computer Distributors for the Caribbean and West Indies; by Micronesia Media Distributor, Inc.
Open Benchmark Alignment
Benchmark Selection Guide, Vol. 1: Doing it Right. Art Morgan, Performance Engineer, Intel Corporation, Spring 2015.

Legal Disclaimers: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance. Intel is a sponsor and member of the BenchmarkXPRT* Development Community, and was the major developer of the XPRT* family of benchmarks. Principled Technologies* is the publisher of the XPRT family of benchmarks. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases. © 2015 Intel Corporation. Intel, Intel Inside and other logos are trademarks or registered trademarks of Intel Corporation, or its subsidiaries in the United States and other countries. * Other names and brands may be claimed as the property of others.

About this Guide: This guide uses benchmark comparison tables to help you select the right set of tools for Windows*, Android* and Chrome* platform evaluations. Sections covered: General Guidance on Platform Evaluations; Benchmark Selection for Windows; Benchmark Selection for Android. Benchmarks covered in this guide: 3DMark* 1.2.0, ANDEBench* PRO, AnTuTu* v5.6.1, BaseMark* CL, BatteryXPRT* 2014, CompuBench* CL 1.5 for OpenCL*, CompuBench* RS 2.0 for RenderScript*, CrXPRT* 2015, Geekbench* 3.3.1, …
NaClDroid: Native Code Isolation for Android Applications
Elias Athanasopoulos (Vrije Universiteit Amsterdam, The Netherlands), Vasileios P. Kemerlis (Brown University, Providence, RI, USA), Georgios Portokalidis (Stevens Institute of Technology, Hoboken, NJ, USA), and Angelos D. Keromytis (Columbia University, New York, NY, USA).

Abstract: Android apps frequently incorporate third-party libraries that contain native code; this not only facilitates rapid application development and distribution, but also provides new ways to generate revenue. As a matter of fact, one in two apps in Google Play are linked with a library providing ad network services. However, linking applications with third-party code can have severe security implications: malicious libraries written in native code can exfiltrate sensitive information from a running app, or completely modify the execution runtime, since all native code is mapped inside the same address space with the execution environment, namely the Dalvik/ART VM. We propose NaClDroid, a framework that addresses these problems, while still allowing apps to include third-party code. NaClDroid prevents malicious native-code libraries from hijacking Android applications using Software Fault Isolation. More specifically, we place all native code in a Native Client sandbox that prevents unconstrained reads, or writes, inside the process address space. NaClDroid has little overhead; for native code running inside the NaCl sandbox the slowdown is less than 10% on average.

Keywords: SFI, NaCl, Android.

1 Introduction: Android is undoubtedly the most rapidly growing platform for mobile devices, with estimates predicting one billion Android-based smartphones shipping in 2017 [12].
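To make the Software Fault Isolation idea in the abstract above concrete, here is a small, self-contained C sketch of SFI-style address confinement. It is illustrative only and is not NaCl's actual mechanism (NaCl validates and rewrites the module's machine code); the region size and the masking scheme are assumptions made for the example.

```c
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

/* Illustrative sketch of SFI-style address confinement (not NaCl's real
 * code generator): guest addresses are masked and rebased so every access
 * stays inside one reserved region of the process address space. */
#define SANDBOX_SIZE (1UL << 28)          /* 256 MiB guest region (assumed) */
#define SANDBOX_MASK (SANDBOX_SIZE - 1)

static uint8_t *sandbox_base;             /* start of the confined region */

static inline uint8_t *sfi_confine(uint64_t guest_addr)
{
    /* whatever the guest computes, the effective address is forced
       into [sandbox_base, sandbox_base + SANDBOX_SIZE) */
    return sandbox_base + (guest_addr & SANDBOX_MASK);
}

int main(void)
{
    sandbox_base = mmap(NULL, SANDBOX_SIZE, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (sandbox_base == MAP_FAILED)
        return 1;

    uint64_t wild_guest_pointer = 0xDEADBEEFCAFEULL;  /* arbitrary guest value */
    *sfi_confine(wild_guest_pointer) = 42;            /* write stays in-sandbox */

    printf("guest write landed at sandbox offset 0x%llx\n",
           (unsigned long long)(wild_guest_pointer & SANDBOX_MASK));
    return 0;
}
```

Real SFI systems enforce the equivalent masking on every load, store, and indirect branch emitted by the untrusted module, rather than through a helper function the guest could simply bypass.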
Vx32: Lightweight User-level Sandboxing on the x86
Bryan Ford and Russ Cox, Massachusetts Institute of Technology, {baford,rsc}@pdos.csail.mit.edu.

Abstract: Code sandboxing is useful for many purposes, but most sandboxing techniques require kernel modifications, do not completely isolate guest code, or incur substantial performance costs. Vx32 is a multipurpose user-level sandbox that enables any application to load and safely execute one or more guest plug-ins, confining each guest to a system call API controlled by the host application and to a restricted memory region within the host's address space. Vx32 runs guest code efficiently on several widespread operating systems without kernel extensions or special privileges; it protects the host program from both reads and writes by its guests; and it allows the host to restrict the instruction set available to guests. The key to vx32's combination of portability, flexibility, and efficiency …

1 Introduction: A sandbox is a mechanism by which a host software system may execute arbitrary guest code in a confined environment, so that the guest code cannot compromise or affect the host other than according to a well-defined policy. Sandboxing is useful for many purposes, such as running untrusted Web applets within a browser [6], safely extending operating system kernels [5, 32], and limiting potential damage caused by compromised applications [19, 22]. Most sandboxing mechanisms, however, either require guest code to be (re-)written in a type-safe language [5,6], depend on special OS-specific facilities [8, 15, 18, 19], allow guest code unrestricted read access to the host's state [29, 42], or entail a substantial performance cost [33,34,37].
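The abstract's "system call API controlled by the host application" can be pictured with the following self-contained C sketch. All names are invented for illustration and are not the vx32 API: the host exports an explicit table of services, and anything not in that table is simply unreachable from guest code.

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical host-side dispatch table (invented names, not the vx32 API):
 * the guest can only request services the host chose to export, so the host,
 * not the OS, defines the guest's entire "system call" surface. */
typedef long (*vcall_fn)(long, long);

static long v_log(long a, long b)   { printf("guest logged: %ld %ld\n", a, b); return 0; }
static long v_ticks(long a, long b) { (void)a; (void)b; return 123456; }

static const struct { const char *name; vcall_fn fn; } vcalls[] = {
    { "log",   v_log   },
    { "ticks", v_ticks },
};

/* the only way out of the sandbox: look the request up in the host's table */
static long host_dispatch(const char *name, long a, long b)
{
    for (size_t i = 0; i < sizeof vcalls / sizeof vcalls[0]; i++)
        if (strcmp(name, vcalls[i].name) == 0)
            return vcalls[i].fn(a, b);
    return -1;  /* not exported by the host: denied */
}

int main(void)
{
    host_dispatch("log", 7, 9);                                /* allowed */
    printf("unlink -> %ld\n", host_dispatch("unlink", 0, 0));  /* denied */
    return 0;
}
```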
2018 WEBIST LNBIP (20)
Client-Side Cornucopia: Comparing the Built-In Application Architecture Models in the Web Browser. Antero Taivalsaari (Nokia Bell Labs, Tampere, Finland), Tommi Mikkonen (University of Helsinki, Helsinki, Finland), Cesare Pautasso (University of Lugano, Lugano, Switzerland), and Kari Systä (Tampere University, Tampere, Finland).

Abstract: The programming capabilities of the Web can be viewed as an afterthought, designed originally by non-programmers for relatively simple scripting tasks. This has resulted in a cornucopia of partially overlapping options for building applications. Depending on one's viewpoint, a generic standards-compatible web browser supports three, four or five built-in application rendering and programming models. In this paper, we give an overview and comparison of these built-in client-side web application architectures in light of the established software engineering principles. We also reflect on our earlier work in this area, and provide an expanded discussion of the current situation. In conclusion, while the dominance of the base HTML/CSS/JS technologies cannot be ignored, we expect Web Components and WebGL to gain more popularity as the world moves towards increasingly complex web applications, including systems supporting virtual and augmented reality.

Keywords: Web programming, single page web applications, web components, web application architectures, rendering engines, web rendering, web browser.

1 Introduction: The World Wide Web has become such an integral part of our lives that it is often forgotten that the Web has existed only about thirty years. The original design sketches related to the World Wide Web date back to the late 1980s.
Chapter 3 Protected-Mode Memory Management
This chapter describes the Intel 64 and IA-32 architecture's protected-mode memory management facilities, including the physical memory requirements, segmentation mechanism, and paging mechanism. See also: Chapter 5, "Protection" (for a description of the processor's protection mechanism) and Chapter 20, "8086 Emulation" (for a description of memory addressing protection in real-address and virtual-8086 modes).

3.1 MEMORY MANAGEMENT OVERVIEW

The memory management facilities of the IA-32 architecture are divided into two parts: segmentation and paging. Segmentation provides a mechanism of isolating individual code, data, and stack modules so that multiple programs (or tasks) can run on the same processor without interfering with one another. Paging provides a mechanism for implementing a conventional demand-paged, virtual-memory system where sections of a program's execution environment are mapped into physical memory as needed. Paging can also be used to provide isolation between multiple tasks. When operating in protected mode, some form of segmentation must be used. There is no mode bit to disable segmentation. The use of paging, however, is optional. These two mechanisms (segmentation and paging) can be configured to support simple single-program (or single-task) systems, multitasking systems, or multiple-processor systems that use shared memory. As shown in Figure 3-1, segmentation provides a mechanism for dividing the processor's addressable memory space (called the linear address space) into smaller protected address spaces called segments. Segments can be used to hold the code, data, and stack for a program or to hold system data structures (such as a TSS or LDT).
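As a concrete companion to the segmentation overview above, the sketch below shows one plausible C representation of the 8-byte segment descriptor entry that the chapter goes on to define, together with a flat 32-bit code-segment example. Field names are illustrative rather than Intel's, and the bitfield layout assumes GCC-style bitfield rules on a little-endian x86 target.

```c
#include <stdint.h>
#include <stdio.h>

/* One possible C view of an 8-byte IA-32 segment descriptor: a 32-bit base,
 * a 20-bit limit, and access/flag bits. Field names are illustrative. */
struct segment_descriptor {
    uint16_t limit_low;        /* limit bits 0-15 */
    uint16_t base_low;         /* base bits 0-15 */
    uint8_t  base_mid;         /* base bits 16-23 */
    uint8_t  access;           /* type, S, DPL, P (present) */
    uint8_t  limit_high : 4;   /* limit bits 16-19 */
    uint8_t  flags      : 4;   /* AVL, L, D/B, G (granularity) */
    uint8_t  base_high;        /* base bits 24-31 */
} __attribute__((packed));

/* reassemble the full 32-bit segment base from its three pieces */
static uint32_t descriptor_base(const struct segment_descriptor *d)
{
    return (uint32_t)d->base_low
         | ((uint32_t)d->base_mid  << 16)
         | ((uint32_t)d->base_high << 24);
}

int main(void)
{
    /* classic flat-model 32-bit code segment: base 0, 4 GiB limit */
    struct segment_descriptor code = {
        .limit_low = 0xFFFF, .limit_high = 0xF,  /* limit = 0xFFFFF units */
        .base_low  = 0,      .base_mid = 0, .base_high = 0,
        .access    = 0x9A,   /* present, DPL 0, code, execute/read */
        .flags     = 0xC,    /* G = 4 KiB granularity, D/B = 32-bit */
    };
    printf("descriptor size = %zu bytes, base = %#x\n",
           sizeof code, descriptor_base(&code));
    return 0;
}
```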