Intel Architecture Overview by Ren-Hung Hwang – System Architecture Overview
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
GeekOS Overview
CMSC 412 GeekOS Overview. A. Udaya Shankar [email protected], Jeffrey K. Hollingsworth [email protected]. 2/12/2014.
Abstract: This document gives an overview of the GeekOS distribution (for 412, Spring 2014) and related background on QEMU and x86. It describes some operations in GeekOS in more detail, in particular, initialization, low-level interrupt handling and context switching, thread creation, and user program spawning.
Contents: 1. Introduction; 2. Qemu; 3. Intel x86 real mode; 4. Intel x86 protected mode; 5. Booting and kernel initialization; 6. Context switching; 6.1. Context state -
SOS Internals
Understanding a Simple Operating System. SOS is a Simple Operating System designed for the 32-bit x86 architecture. Its purpose is to understand basic concepts of operating system design. These notes are meant to help you recall the class discussions.
Chapter 1: Starting Up SOS
Registers in the IA-32 x86 Architecture; BIOS (Basic Input/Output System) Routines; Real Mode Addressing; Organization of SOS on Disk and Memory; Master Boot Record; SOS Startup; A20 Line; 32-bit Protected Mode Addressing; Privilege Level; Global Descriptor Table (GDT); More on Privilege Levels; The GDT Setup; Enabling Protected Mode; Calling main()
Chapter 2: SOS Initializations
In main(); Disk Initialization; Display Initialization; Setting Up Interrupt Handlers; Interrupt Descriptor Table (IDT); A Default Interrupt Handler; Load IDT; The Programmable Interrupt Controller (PIC); The Keyboard Interrupt Handler; Starting the Console; Putting It All Together
Chapter 3: SOS1 – Single-tasking SOS
Running User Programs; GDT Entries; Default Exception Handler; User Programs; Executable Format; System Calls; Creating User Programs; The run Command; The DUMB Memory Manager; Program Address Space; Process Control Block; Switching to a User Program; Kernel-Mode Stack
Chapter 4: SOS2 – Multi-tasking SOS
Running Multiple User Programs; NAÏVE Memory Manager; Programmable Interval Timer (PIT); Process States; Timer Interrupt Handler; Sleep System Call; The run Command; Process Queue; The Scheduler; The Complete Picture; ps Command
Chapter 5: SOS3 – Paging in SOS -
Assignment No. 6 Aim: Write x86/64 ALP to
Assignment No. 6 (Marks: Att 2, Perm 5, Oral 3, Total 10, Sign)
Aim: Write an x86/64 ALP to switch from real mode to protected mode and display the values of the GDTR, LDTR, IDTR, TR and MSW registers.
6.1 Theory:
Real Mode: Real mode, also called real address mode, is an operating mode of all x86-compatible CPUs. Real mode is characterized by a 20-bit segmented memory address space (giving exactly 1 MiB of addressable memory) and unlimited direct software access to all addressable memory, I/O addresses and peripheral hardware. Real mode provides no support for memory protection, multitasking, or code privilege levels.
Protected Mode: In computing, protected mode, also called protected virtual address mode, is an operational mode of x86-compatible central processing units (CPUs). It allows system software to use features such as virtual memory, paging and safe multi-tasking designed to increase an operating system's control over application software. When a processor that supports x86 protected mode is powered on, it begins executing instructions in real mode, in order to maintain backward compatibility with earlier x86 processors. Protected mode may only be entered after the system software sets up several descriptor tables and enables the Protection Enable (PE) bit in control register 0 (CR0).
Control Register.
Global Descriptor Table Register: This register holds the 32-bit base address and 16-bit segment limit for the global descriptor table (GDT). When a reference is made to data in memory, a segment selector is used to find a segment descriptor in the GDT or LDT. -
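As a rough companion to this assignment excerpt, the five registers named in the aim (GDTR, LDTR, IDTR, TR and MSW) can be stored with the x86 instructions SGDT, SIDT, SLDT, STR and SMSW. The C program below is an illustrative sketch, not the assignment's assembly solution: it wraps those instructions in GCC inline assembly. On a modern OS they may fault or return emulated values if UMIP is enabled; in the assignment, the same instructions would run right after entering protected mode.

```c
/* Illustrative sketch: read GDTR, IDTR, LDTR, TR and MSW via inline asm. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

struct dtr {                 /* memory operand layout written by SGDT/SIDT */
    uint16_t limit;          /* 16-bit table limit                         */
    uint64_t base;           /* base address (4 bytes are written in       */
} __attribute__((packed));   /*   32-bit mode, 8 bytes in 64-bit mode)     */

int main(void)
{
    struct dtr gdtr = {0}, idtr = {0};
    uint16_t ldtr = 0, tr = 0, msw = 0;

    __asm__ volatile ("sgdt %0" : "=m"(gdtr));   /* store GDTR                */
    __asm__ volatile ("sidt %0" : "=m"(idtr));   /* store IDTR                */
    __asm__ volatile ("sldt %0" : "=r"(ldtr));   /* store LDT selector (LDTR) */
    __asm__ volatile ("str  %0" : "=r"(tr));     /* store task register (TR)  */
    __asm__ volatile ("smsw %0" : "=r"(msw));    /* store MSW (low 16 of CR0) */

    printf("GDTR base=%#" PRIx64 " limit=%#06x\n", gdtr.base, gdtr.limit);
    printf("IDTR base=%#" PRIx64 " limit=%#06x\n", idtr.base, idtr.limit);
    printf("LDTR=%#06x TR=%#06x MSW=%#06x\n", ldtr, tr, msw);
    return 0;
}
```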
Chapter 3 System Calls, Exceptions, and Interrupts
DRAFT as of September 29, 2009: Copyright 2009 Cox, Kaashoek, Morris. Chapter 3: System calls, exceptions, and interrupts. An operating system must handle system calls, exceptions, and interrupts. With a system call a user program can ask for an operating system service, as we saw at the end of the last chapter. Exceptions are illegal program actions that generate an interrupt. Examples of illegal program actions include divide by zero, attempts to access memory outside segment bounds, and so on. Interrupts are generated by hardware devices that need the attention of the operating system. For example, a clock chip may generate an interrupt every 100 msec to allow the kernel to implement time sharing. As another example, when the disk has read a block from disk, it generates an interrupt to alert the operating system that the block is ready to be retrieved. In all three cases, the operating system must arrange for the following to happen. The system must save user state for a future transparent resume. The system must be set up for continued execution in the kernel. The system must choose a place for the kernel to start executing. The kernel must be able to retrieve information about the event, including arguments. It must all be done securely; the system must maintain isolation of user processes and the kernel. To achieve this goal the operating system must be aware of the details of how the hardware handles system calls, exceptions, and interrupts. In most processors these three events are handled by a single hardware mechanism. -
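Since this excerpt centers on the single hardware mechanism that dispatches system calls, exceptions, and interrupts, a small sketch of the 32-bit x86 gate-descriptor bookkeeping may help. It is an illustrative C fragment in the spirit of the kernel code the chapter discusses, not xv6's actual source; the vector number, selector value, and names are assumptions, and a real kernel would also execute lidt to point the CPU at the table.

```c
/* Illustrative: describe one IDT entry and install a handler for a vector. */
#include <stdio.h>
#include <stdint.h>

struct gate_desc {
    uint16_t off_low;    /* handler offset, bits 0..15                  */
    uint16_t selector;   /* kernel code-segment selector                */
    uint8_t  zero;       /* unused, must be 0                           */
    uint8_t  type_attr;  /* P | DPL | 0 | gate type (0xE = 32-bit intr) */
    uint16_t off_high;   /* handler offset, bits 16..31                 */
} __attribute__((packed));

struct gate_desc idt[256];

/* dpl = 3 for the system-call vector (user code may INT into it),
 * dpl = 0 for exceptions and device interrupts.                      */
void set_gate(int vector, void (*handler)(void), uint16_t code_sel, int dpl)
{
    uint32_t off = (uint32_t)(uintptr_t)handler;  /* 32-bit kernel assumption */
    idt[vector].off_low   = off & 0xFFFF;
    idt[vector].selector  = code_sel;
    idt[vector].zero      = 0;
    idt[vector].type_attr = 0x80 | ((dpl & 3) << 5) | 0x0E; /* present, intr gate */
    idt[vector].off_high  = off >> 16;
}

static void divide_error_stub(void) { /* would save registers, then call trap() */ }

int main(void)
{
    set_gate(0, divide_error_stub, 0x08, 0);  /* vector 0 = #DE; 0x08 is an assumed kernel CS */
    printf("vector 0: selector=%#06x type=%#04x offset=%#06x%04x\n",
           idt[0].selector, idt[0].type_attr, idt[0].off_high, idt[0].off_low);
    return 0;
}
```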
Bringing Virtualization to the x86 Architecture with the Original VMware Workstation
Bringing Virtualization to the x86 Architecture with the Original VMware Workstation. EDOUARD BUGNION, Stanford University; SCOTT DEVINE, VMware Inc.; MENDEL ROSENBLUM, Stanford University; JEREMY SUGERMAN, Talaria Technologies, Inc.; EDWARD Y. WANG, Cumulus Networks, Inc. This article describes the historical context, technical challenges, and main implementation techniques used by VMware Workstation to bring virtualization to the x86 architecture in 1999. Although virtual machine monitors (VMMs) had been around for decades, they were traditionally designed as part of monolithic, single-vendor architectures with explicit support for virtualization. In contrast, the x86 architecture lacked virtualization support, and the industry around it had disaggregated into an ecosystem, with different vendors controlling the computers, CPUs, peripherals, operating systems, and applications, none of them asking for virtualization. We chose to build our solution independently of these vendors. As a result, VMware Workstation had to deal with new challenges associated with (i) the lack of virtualization support in the x86 architecture, (ii) the daunting complexity of the architecture itself, (iii) the need to support a broad combination of peripherals, and (iv) the need to offer a simple user experience within existing environments. These new challenges led us to a novel combination of well-known virtualization techniques, techniques from other domains, and new techniques. VMware Workstation combined a hosted architecture with a VMM. The hosted architecture enabled a simple user experience and offered broad hardware compatibility. Rather than exposing I/O diversity to the virtual machines, VMware Workstation also relied on software emulation of I/O devices. The VMM combined a trap-and-emulate direct execution engine with a system-level dynamic binary translator to efficiently virtualize the x86 architecture and support most commodity operating systems. -
Virtual Memory in x86
Fall 2017 :: CSE 306. Virtual Memory in x86. Nima Honarmand.
x86 Processor Modes:
• Real mode – walks and talks like a really old x86 chip: the state at boot; 20-bit address space with direct physical memory access; 1 MB of usable memory; no paging; no user mode (the processor has only one protection level).
• Protected mode – standard 32-bit x86 mode: a combination of segmentation and paging; privilege levels (separate user and kernel); 32-bit virtual addresses; 32-bit physical addresses (36-bit if the Physical Address Extension (PAE) feature is enabled).
• Long mode – 64-bit mode (aka amd64, x86_64, etc.): very similar to 32-bit (protected) mode, but with a bigger address space; 48-bit virtual address space; 52-bit physical address space; restricted segmentation use. There are even more obscure modes we won't discuss today. xv6 uses protected mode w/o PAE (i.e., 32-bit virtual and physical addresses).
Virt. & Phys. Addr. Spaces in the x86 Processor:
• Both RAM and hardware devices (disk, NIC, etc.) are connected to the system bus and mapped to different parts of the physical address space by the BIOS.
• You can talk to a device by performing read/write operations on its physical addresses; devices are free to interpret reads and writes in any way they want (the driver knows).
• [Figure: the CPU core issues virtual addresses, which the MMU translates into physical addresses used by the cache; DRAM, the network card, and the disk on the system interconnect (bus) all see physical addresses.]
Virt-to-Phys Translation in x86:
• Virtual Address (e.g., 0xdeadbeef) -> Segmentation -> Linear Address (e.g., 0x0eadbeef) -> Paging -> Physical Address (e.g., 0x6eadbeef); protected/long mode only.
• Segmentation cannot be disabled! But it can be made a no-op (a.k.a. -
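To make the translation pipeline at the end of this excerpt concrete, here is a small, self-contained C sketch of the paging half for the configuration the slides attribute to xv6 (32-bit, two-level page tables, no PAE). The simulated RAM, table locations, and helper names are illustrative, and large (4 MB) pages are ignored; on real hardware the MMU performs this walk itself.

```c
/* Software mirror of the 32-bit (non-PAE) linear-to-physical page walk. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define PTE_P 0x001u                           /* present bit */

static uint8_t ram[1 << 20];                   /* tiny simulated physical memory */

static inline uint32_t *phys_mem(uint32_t pa)  /* phys addr -> pointer into simulation */
{
    return (uint32_t *)(void *)&ram[pa];
}

/* Walk the two-level page tables rooted at cr3 for linear address lin. */
bool translate_linear(uint32_t cr3, uint32_t lin, uint32_t *pa)
{
    uint32_t dir = (lin >> 22) & 0x3FF;        /* top 10 bits: page-directory index */
    uint32_t tbl = (lin >> 12) & 0x3FF;        /* next 10 bits: page-table index    */
    uint32_t off =  lin        & 0xFFF;        /* low 12 bits: offset within page   */

    uint32_t pde = phys_mem(cr3 & ~0xFFFu)[dir];
    if (!(pde & PTE_P)) return false;          /* would raise a page fault          */

    uint32_t pte = phys_mem(pde & ~0xFFFu)[tbl];
    if (!(pte & PTE_P)) return false;          /* would raise a page fault          */

    *pa = (pte & ~0xFFFu) | off;               /* frame base + page offset          */
    return true;
}

int main(void)
{
    /* Build one mapping in the simulated RAM: directory at 0x1000, page
     * table at 0x2000, mapping linear 0x00400000 -> physical 0x5000.     */
    uint32_t cr3 = 0x1000;
    phys_mem(cr3)[0x00400000u >> 22] = 0x2000 | PTE_P;
    phys_mem(0x2000)[(0x00400000u >> 12) & 0x3FF] = 0x5000 | PTE_P;

    uint32_t pa;
    if (translate_linear(cr3, 0x00400123u, &pa))
        printf("linear 0x00400123 -> physical 0x%08x\n", pa);  /* prints 0x00005123 */
    return 0;
}
```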
Supervisor-Mode Virtualization for x86 in VDebug
Supervisor-Mode Virtualization for x86 in VDebug Prashanth P. Bungale, Swaroop Sridhar, and Jonathan S. Shapiro { prash, swaroop, shap } @ cs.jhu.edu Systems Research Laboratory The Johns Hopkins University Baltimore, MD 21218, U. S. A. March 10, 2004 Abstract Machine virtualization techniques offer many ways to improve both debugging and performance analysis facilities available to kernel developers. A minimal hardware interposition, exposing as much as possible of the underlying hardware device model, would enable high-level debugging of almost all parts of an operating system. Existing emulators either lack a debugging interface, impose excessive performance penalties, or obscure details of machine-specific hardware, all of which impede their value as debugging platforms. Because it is a paragon of complexity, techniques for emulating the protection state of today's most popular processor – the Pentium – have not been widely published. This paper presents the design of supervisor-mode virtualization in the VDebug kernel debugger system, which uses dynamic translation and dynamic shadowing to provide translucent CPU emulation. By running directly on the bare machine, VDebug is able to achieve an unusually low translation overhead and is able to exploit the hardware protection mechanisms to provide interposition and protection. We focus here on our design for supervisor-mode emulation. We also identify some of the fundamental challenges posed by dynamic translation of supervisor-mode code, and propose new ways of overcoming them. While the work described is not yet running fully, it is far enough along to have confidence in the design presented here, and several of the techniques used have not previously been published. -
Windows Security Hardening Through Kernel Address Protection
Windows Security Hardening Through Kernel Address Protection. Mateusz "j00ru" Jurczyk. August 2011.
Abstract: As more defense-in-depth protection schemes like Windows Integrity Control or sandboxing technologies are deployed, threats affecting local system components become a relevant issue in terms of the overall operating system user's security plan. In order to address the continuous development of Elevation of Privileges exploitation techniques, Microsoft started to enhance the Windows kernel security, by hardening the most sensitive system components, such as Kernel Pools with the Safe Unlinking mechanism introduced in Windows 7 [19]. At the same time, the system supports numerous services, both official and undocumented, providing valuable information regarding the current state of the kernel memory layout. In this paper, we discuss the potential threats and problems concerning unprivileged access to the system address space information. In particular, we also present how subtle information leakages can prove useful in practical attack scenarios. Further in the document, we conclusively provide some suggestions as to how problems related to kernel address information availability can be mitigated, or entirely eliminated.
1 Introduction: Communication between distinct executable modules running at different privilege levels or within separate security domains takes place most of the time, in numerous fields of modern computing. Both hardware- and software-enforced privilege separation mechanisms are designed to control the access to certain resources - grant it to modules with higher rights, while ensuring that unauthorized entities are not able to reach the protected data. The discussed architecture is usually based on setting up a trusted set of modules (further referred to as "the broker") in the privileged area, while having the potentially malicious code (also called "the guest") executed in a controlled environment. -
Chapter 3 Protected-Mode Memory Management
CHAPTER 3: PROTECTED-MODE MEMORY MANAGEMENT. This chapter describes the Intel 64 and IA-32 architecture's protected-mode memory management facilities, including the physical memory requirements, segmentation mechanism, and paging mechanism. See also: Chapter 5, "Protection" (for a description of the processor's protection mechanism) and Chapter 20, "8086 Emulation" (for a description of memory addressing protection in real-address and virtual-8086 modes).
3.1 MEMORY MANAGEMENT OVERVIEW. The memory management facilities of the IA-32 architecture are divided into two parts: segmentation and paging. Segmentation provides a mechanism of isolating individual code, data, and stack modules so that multiple programs (or tasks) can run on the same processor without interfering with one another. Paging provides a mechanism for implementing a conventional demand-paged, virtual-memory system where sections of a program's execution environment are mapped into physical memory as needed. Paging can also be used to provide isolation between multiple tasks. When operating in protected mode, some form of segmentation must be used. There is no mode bit to disable segmentation. The use of paging, however, is optional. These two mechanisms (segmentation and paging) can be configured to support simple single-program (or single-task) systems, multitasking systems, or multiple-processor systems that use shared memory. As shown in Figure 3-1, segmentation provides a mechanism for dividing the processor's addressable memory space (called the linear address space) into smaller protected address spaces called segments. Segments can be used to hold the code, data, and stack for a program or to hold system data structures (such as a TSS or LDT). -
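As a concrete companion to the segmentation half described in this excerpt, the C sketch below decodes an 8-byte segment descriptor's scattered base and limit fields and turns a logical (descriptor, offset) pair into a linear address. The field layout follows the IA-32 descriptor format, but the helper names and the example flat descriptor are illustrative, and real hardware additionally checks the access-rights byte.

```c
/* Sketch of the segmentation step: base/limit decoding plus offset check. */
#include <stdio.h>
#include <stdint.h>

struct seg_desc {              /* one 8-byte GDT/LDT entry               */
    uint16_t limit_low;        /* limit bits 15:0                        */
    uint16_t base_low;         /* base bits 15:0                         */
    uint8_t  base_mid;         /* base bits 23:16                        */
    uint8_t  access;           /* P, DPL, S, type                        */
    uint8_t  limit_hi_flags;   /* G, D/B, L, AVL, limit bits 19:16       */
    uint8_t  base_high;        /* base bits 31:24                        */
} __attribute__((packed));

uint32_t desc_base(const struct seg_desc *d)
{
    return (uint32_t)d->base_low |
           ((uint32_t)d->base_mid  << 16) |
           ((uint32_t)d->base_high << 24);
}

uint32_t desc_limit(const struct seg_desc *d)
{
    uint32_t limit = d->limit_low | ((uint32_t)(d->limit_hi_flags & 0x0F) << 16);
    if (d->limit_hi_flags & 0x80)          /* G bit: limit counted in 4-KByte units */
        limit = (limit << 12) | 0xFFF;
    return limit;
}

/* Logical offset within the segment -> linear address (returns 0 on success). */
int seg_translate(const struct seg_desc *d, uint32_t offset, uint32_t *linear)
{
    if (offset > desc_limit(d))
        return -1;                         /* would raise #GP on real hardware */
    *linear = desc_base(d) + offset;
    return 0;
}

int main(void)
{
    /* A flat 4-GByte code descriptor (base 0, limit 0xFFFFF, G = 1), as
     * commonly loaded into the GDT by simple protected-mode kernels.     */
    struct seg_desc flat = { 0xFFFF, 0x0000, 0x00, 0x9A, 0xCF, 0x00 };
    uint32_t lin;
    if (seg_translate(&flat, 0x1234, &lin) == 0)
        printf("offset 0x1234 -> linear 0x%08x\n", lin);
    return 0;
}
```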
Intel 64 and IA-32 Architectures Software Developer's Manual
SYSTEM ARCHITECTURE OVERVIEW. [Figure 2-1. IA-32 System-Level Registers and Data Structures: the EFLAGS register; control registers CR0–CR4 and XCR0 (XFEM); segment selectors and code, data, and stack segments; the GDTR, IDTR, LDTR, and task register; the GDT, LDT, and IDT with their segment descriptors, TSS descriptors, and interrupt, trap, task, and call gates; the task-state segment (TSS); a call gate leading to a protected procedure; and the linear-to-physical page mapping through the page directory and page table rooted at CR3. The page-mapping example is for 4-KByte pages and the normal 32-bit physical address size.] [Figure 2-4. The EFLAGS register: ID (Identification Flag), VIP (Virtual Interrupt Pending), VIF (Virtual Interrupt Flag), AC (Alignment Check), VM (Virtual-8086 Mode), RF (Resume Flag), NT (Nested Task Flag), IOPL (I/O Privilege Level), IF (Interrupt Enable Flag), TF (Trap Flag), plus the status flags and reserved bits.] -
Chapter 1: Introduction
Access Control in Practice. CS461/ECE422, Fall 2009.
Reading: Computer Security, Chapter 15.
Outline: Evolution of OS; Object Access Control (access control lists, capabilities).
In the Beginning...
• The program owned the machine: it could access all the power of the hardware, and could really mess things up.
• Executives emerged to gather common functionality.
• Multi-user systems required greater separation: Multics, the source of much early OS development.
Protecting objects
• Desire to protect logical entities: memory, files or data sets, an executing program, a file directory, a particular data structure like a stack, operating system control structures, privileged instructions.
Access Control Matrix
• The Access Control Matrix (ACM) and related concepts provide a very basic abstraction: map different systems to a common form for comparison; enable standard proof techniques; not directly used in implementation.
Definitions
• Protection state of a system: describes the current settings and values of the system relevant to protection.
• Access control matrix: describes the protection state precisely; a matrix describing the rights of subjects; state transitions change elements of the matrix.
Description
• Subjects S = { s1, …, sn }; Objects O = { o1, …, om }; Rights R = { r1, …, rk }.
• Entries A[si, oj] ⊆ R; A[si, oj] = { rx, …, ry } means subject si has rights rx, …, ry over object oj.
• [Figure: a matrix with subjects s1 … sn as rows and objects (entities) o1 … om as columns.]
Practical object access control
• Can slice the logical ACM two ways: by row (store with subject), or by column (store with object). -
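A tiny C sketch may make the A[si, oj] ⊆ R notation above concrete: each matrix cell holds a set (here, a bitmask) of rights, and an access check tests membership. The subjects, objects, and rights are invented for illustration; as the final slide notes, real systems store the matrix by column (access control lists) or by row (capabilities) rather than as a dense matrix.

```c
/* Illustrative dense access control matrix: A[s][o] is a set of rights. */
#include <stdio.h>
#include <stdint.h>

enum right { R_READ = 1u << 0, R_WRITE = 1u << 1, R_EXEC = 1u << 2 };

#define N_SUBJECTS 2   /* s1 = alice, s2 = bob      (hypothetical) */
#define N_OBJECTS  2   /* o1 = file,  o2 = program  (hypothetical) */

static uint32_t acm[N_SUBJECTS][N_OBJECTS];

static int check(int s, int o, uint32_t r)    /* is right r in A[s][o]? */
{
    return (acm[s][o] & r) == r;
}

int main(void)
{
    acm[0][0] = R_READ | R_WRITE;   /* alice may read and write the file */
    acm[0][1] = R_EXEC;             /* alice may execute the program     */
    acm[1][0] = R_READ;             /* bob may only read the file        */

    printf("alice write file: %s\n", check(0, 0, R_WRITE) ? "granted" : "denied");
    printf("bob   write file: %s\n", check(1, 0, R_WRITE) ? "granted" : "denied");
    return 0;
}
```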
Downloads/Kornau-Tim--Diplomarbeit--Rop.Pdf [34] Sebastian Krahmer
CFI CaRE: Hardware-supported Call and Return Enforcement for Commercial Microcontrollers. Thomas Nyman (Aalto University, Finland; Trustonic, Finland; thomas.nyman@trustonic.com), Jan-Erik Ekberg (Trustonic, Finland), Lucas Davi (University of Duisburg-Essen, Germany; lucas.davi@wiwinf.uni-due.de), N. Asokan (Aalto University, Finland).
Abstract: With the increasing scale of deployment of Internet of Things (IoT), concerns about IoT security have become more urgent. In particular, memory corruption attacks play a predominant role as they allow remote compromise of IoT devices. Control-flow integrity (CFI) is a promising and generic defense technique against these attacks. However, given the nature of IoT deployments, existing protection mechanisms for traditional computing environments (including CFI) need to be adapted to the IoT setting. In this paper, we describe the challenges of enabling CFI on microcontroller (MCU) based IoT devices. We then present CaRE, the first interrupt-aware CFI
CFI (Section 3.1) is a well-explored technique for resisting code-reuse attacks such as Return-Oriented Programming (ROP) [47] that allow attackers in control of data memory to subvert the control flow of a program. CFI commonly takes the form of inlined enforcement, where CFI checks are inserted at points in the program code where control flow changes occur. For legacy applications CFI checks must be introduced by instrumenting the pre-built binary. Such binary instrumentation necessarily modifies the memory layout of the code, requiring memory addresses referenced by the program to be adjusted accordingly [28]. This is typically done through load-time dynamic binary rewriting software [14, 39].
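To illustrate the "inlined enforcement" idea described in this excerpt, the C sketch below inserts a check before an indirect call so that only statically allowed targets are reached. This is a generic illustration of inlined CFI, not CaRE's actual mechanism, and the handler names and target set are invented.

```c
/* Generic inlined CFI check guarding an indirect call site. */
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

typedef void (*handler_fn)(void);

static void handler_a(void) { puts("handler_a"); }
static void handler_b(void) { puts("handler_b"); }

/* Set of legitimate indirect-call targets for this call site. */
static const handler_fn allowed_targets[] = { handler_a, handler_b };

static void cfi_checked_call(handler_fn target)
{
    for (size_t i = 0; i < sizeof(allowed_targets) / sizeof(allowed_targets[0]); i++) {
        if (target == allowed_targets[i]) {
            target();              /* control-flow transfer passed the check */
            return;
        }
    }
    fputs("CFI violation: unexpected indirect-call target\n", stderr);
    abort();                       /* fail closed, as an inlined check would */
}

int main(void)
{
    cfi_checked_call(handler_a);   /* allowed target, executes normally */
    /* An attacker-corrupted pointer, e.g. (handler_fn)0xdeadbeef,
     * would be rejected by the check above and abort the program.    */
    return 0;
}
```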