Introduction to the X86 Microprocessor
Recommended publications
Memory Management
Memory Management. Reading: Silberschatz chapter 9; Stallings chapter 7 (EEL 358 lecture slides).
Outline:
- Background
- Issues in Memory Management
- Logical vs. Physical address, MMU
- Dynamic Loading
- Memory Partitioning: Placement Algorithms, Dynamic Partitioning
- Buddy System
- Paging
- Memory Segmentation
- Example: Intel Pentium
Background:
- Main memory is fast, relatively high cost, and volatile.
- Secondary memory has large capacity, is slower and cheaper than main memory, and is usually non-volatile.
- The CPU fetches a program's instructions and data from memory; therefore, the program and its data must reside in main (RAM and ROM) memory.
- In multiprogramming systems, main memory must be subdivided to accommodate several processes.
- This subdivision is carried out dynamically by the OS and is known as memory management.
Issues in Memory Management:
- Relocation: active processes are swapped in and out of main memory to maximize CPU utilization. A process may not be placed back in the same main memory region, so the system must be able to relocate it to a different area of memory.
- Protection: protection against unwanted interference by another process; this must be ensured by the processor (hardware) rather than the OS.
- Sharing: flexibility to allow several processes to access the same portions of main memory.
- Efficiency: memory must be allocated fairly for high processor utilization, with a systematic flow of information between main and secondary memory.
Binding of Instructions and Data to Memory: the compiler generates object code; the linker combines the object code into …
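The relocation and protection requirements above are classically met in hardware with a per-process base (relocation) register and a limit register. The following is a minimal C sketch of that check; the register values and the translate() helper are illustrative assumptions, not code from the EEL 358 slides.

```c
/* Minimal sketch of base/limit (relocation register) translation with a
 * hardware-style protection check. Values are illustrative, not from the slides. */
#include <stdio.h>
#include <stdint.h>

struct mmu_regs {
    uint32_t base;   /* where the process was loaded (relocation register) */
    uint32_t limit;  /* size of the process's region in bytes */
};

/* Translate a logical address; returns 0 and sets *phys on success,
 * -1 on a protection violation (address outside the process's region). */
int translate(const struct mmu_regs *r, uint32_t logical, uint32_t *phys)
{
    if (logical >= r->limit)
        return -1;               /* trap: addressing outside the partition */
    *phys = r->base + logical;   /* relocate at run time */
    return 0;
}

int main(void)
{
    struct mmu_regs r = { .base = 0x40000, .limit = 0x10000 };  /* 64 KiB region */
    uint32_t phys;

    if (translate(&r, 0x1234, &phys) == 0)
        printf("logical 0x1234 -> physical 0x%x\n", phys);
    if (translate(&r, 0x20000, &phys) != 0)
        printf("logical 0x20000 -> protection fault\n");
    return 0;
}
```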
GeekOS Overview
CMSC 412 GeekOS Overview. A. Udaya Shankar ([email protected]), Jeffrey K. Hollingsworth ([email protected]). 2/12/2014.
Abstract: This document gives an overview of the GeekOS distribution (for 412, Spring 2014) and related background on QEMU and x86. It describes some operations in GeekOS in more detail, in particular, initialization, low-level interrupt handling and context switching, thread creation, and user program spawning.
Contents:
1. Introduction
2. Qemu
3. Intel x86 real mode
4. Intel x86 protected mode
5. Booting and kernel initialization
6. Context switching
6.1. Context state …
SOS Internals
Understanding a Simple Operating System. SOS is a Simple Operating System designed for the 32-bit x86 architecture. Its purpose is to help understand basic concepts of operating system design. These notes are meant to help you recall the class discussions.
Chapter 1, Starting Up SOS: Registers in the IA-32 x86 Architecture; BIOS (Basic Input/Output System) Routines; Real Mode Addressing; Organization of SOS on Disk and Memory; Master Boot Record; SOS Startup; A20 Line; 32-bit Protected Mode Addressing; Privilege Level; Global Descriptor Table (GDT); More on Privilege Levels; The GDT Setup; Enabling Protected Mode; Calling main()
Chapter 2, SOS Initializations: In main(); Disk Initialization; Display Initialization; Setting Up Interrupt Handlers; Interrupt Descriptor Table (IDT); A Default Interrupt Handler; Load IDT; The Programmable Interrupt Controller (PIC); The Keyboard Interrupt Handler; Starting the Console; Putting It All Together
Chapter 3, SOS1 (Single-tasking SOS): Running User Programs; GDT Entries; Default Exception Handler; User Programs; Executable Format; System Calls; Creating User Programs; The run Command; The DUMB Memory Manager; Program Address Space; Process Control Block; Switching to a User Program; Kernel-Mode Stack
Chapter 4, SOS2 (Multi-tasking SOS): Running Multiple User Programs; NAÏVE Memory Manager; Programmable Interval Timer (PIT); Process States; Timer Interrupt Handler; Sleep System Call; The run Command; Process Queue; The Scheduler; The Complete Picture; ps Command
Chapter 5, SOS3 (Paging in SOS) …
Assignment No. 6 Aim: Write X86/64 ALP to Switch from Real Mode to Protected Mode and Display GDTR, LDTR, IDTR, TR and MSW Registers
Assignment No. 6. Marks: Att (2), Perm (5), Oral (3), Total (10), Sign.
Aim: Write an x86/64 ALP to switch from real mode to protected mode and display the values of the GDTR, LDTR, IDTR, TR and MSW registers.
6.1 Theory:
Real Mode: Real mode, also called real address mode, is an operating mode of all x86-compatible CPUs. It is characterized by a 20-bit segmented memory address space (giving exactly 1 MiB of addressable memory) and unlimited direct software access to all addressable memory, I/O addresses, and peripheral hardware. Real mode provides no support for memory protection, multitasking, or code privilege levels.
Protected Mode: Protected mode, also called protected virtual address mode, is an operational mode of x86-compatible central processing units (CPUs). It allows system software to use features such as virtual memory, paging, and safe multitasking, designed to increase an operating system's control over application software. When a processor that supports x86 protected mode is powered on, it begins executing instructions in real mode in order to maintain backward compatibility with earlier x86 processors. Protected mode may only be entered after the system software sets up several descriptor tables and enables the Protection Enable (PE) bit in control register 0 (CR0).
Control Register:
Global Descriptor Table Register: This register holds the 32-bit base address and 16-bit segment limit for the global descriptor table (GDT). When a reference is made to data in memory, a segment selector is used to find a segment descriptor in the GDT or LDT.
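The assignment itself asks for an assembly-language program, but as a rough illustration of what "displaying GDTR, LDTR, IDTR, TR and MSW" involves, here is a hedged C sketch that reads the store forms of these registers (SGDT, SIDT, SLDT, STR, SMSW) via GCC inline assembly from user mode. The struct layout and the user-mode approach are assumptions for illustration; on kernels with UMIP enabled some of these instructions fault or return kernel-emulated values, and this is not the assignment's real-mode-to-protected-mode switch.

```c
/* Hedged sketch (not the assignment solution): read descriptor-table registers
 * from a user-mode program with GCC inline assembly.
 * Note: SGDT/SIDT/SLDT/STR/SMSW may fault or be emulated when CR4.UMIP is set. */
#include <stdio.h>
#include <stdint.h>

#pragma pack(push, 1)
struct dtr {            /* memory image stored by SGDT/SIDT */
    uint16_t limit;     /* 16-bit table limit */
    uint64_t base;      /* base address (lower 32 bits used in 32-bit mode) */
};
#pragma pack(pop)

int main(void)
{
    struct dtr gdtr = {0}, idtr = {0};
    uint16_t ldtr = 0, tr = 0, msw = 0;

    __asm__ volatile("sgdt %0" : "=m"(gdtr));   /* store GDTR */
    __asm__ volatile("sidt %0" : "=m"(idtr));   /* store IDTR */
    __asm__ volatile("sldt %0" : "=r"(ldtr));   /* store LDTR selector */
    __asm__ volatile("str  %0" : "=r"(tr));     /* store task register selector */
    __asm__ volatile("smsw %0" : "=r"(msw));    /* store machine status word (low CR0) */

    printf("GDTR base=%#llx limit=%#x\n", (unsigned long long)gdtr.base, gdtr.limit);
    printf("IDTR base=%#llx limit=%#x\n", (unsigned long long)idtr.base, idtr.limit);
    printf("LDTR=%#x TR=%#x MSW=%#x\n", ldtr, tr, msw);
    return 0;
}
```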
Chapter 3 System Calls, Exceptions, and Interrupts
DRAFT as of September 29, 2009. Copyright 2009 Cox, Kaashoek, Morris.
Chapter 3: System calls, exceptions, and interrupts
An operating system must handle system calls, exceptions, and interrupts. With a system call a user program can ask for an operating system service, as we saw at the end of the last chapter. Exceptions are illegal program actions that generate an interrupt. Examples of illegal program actions include dividing by zero, attempting to access memory outside segment bounds, and so on. Interrupts are generated by hardware devices that need the attention of the operating system. For example, a clock chip may generate an interrupt every 100 msec to allow the kernel to implement time sharing. As another example, when the disk has read a block from disk, it generates an interrupt to alert the operating system that the block is ready to be retrieved.
In all three cases, the operating system design must arrange for the following to happen. The system must save user state for a future transparent resume. The system must be set up for continued execution in the kernel. The system must choose a place for the kernel to start executing. The kernel must be able to retrieve information about the event, including arguments. It must all be done securely; the system must maintain isolation of user processes and the kernel. To achieve this goal the operating system must be aware of the details of how the hardware handles system calls, exceptions, and interrupts. In most processors these three events are handled by a single hardware mechanism.
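To make the "single hardware mechanism" concrete, here is a minimal sketch (not xv6's actual trap code) of a unified trap handler that distinguishes the three event classes; the trap numbers, trapframe layout, and the tiny driver in main() are illustrative assumptions.

```c
/* Hedged sketch of a unified trap handler; trap numbers and the trapframe
 * layout here are illustrative assumptions, not xv6's real definitions. */
#include <stdint.h>
#include <stdio.h>

#define T_DIVIDE   0    /* divide-by-zero exception (illustrative) */
#define T_TIMER   32    /* clock interrupt vector (assumption) */
#define T_SYSCALL 64    /* software interrupt used for system calls (assumption) */

/* Registers saved by the low-level entry stub so the user program can be
 * resumed transparently later. */
struct trapframe {
    uint32_t eax, ebx, ecx, edx, esi, edi, ebp;
    uint32_t trapno;            /* which event caused the trap */
    uint32_t err;               /* hardware error code, if any */
    uint32_t eip, cs, eflags, esp, ss;
};

void trap(struct trapframe *tf)
{
    switch (tf->trapno) {
    case T_SYSCALL:
        /* System call: number and arguments come from the saved registers. */
        printf("syscall %u requested\n", tf->eax);
        break;
    case T_TIMER:
        /* Device interrupt: acknowledge the hardware, maybe switch processes. */
        printf("timer tick\n");
        break;
    case T_DIVIDE:
    default:
        /* Exception: an illegal action by the program; kill or signal it. */
        printf("exception %u at eip=%#x\n", tf->trapno, tf->eip);
        break;
    }
}

int main(void)                  /* tiny driver just to exercise the dispatcher */
{
    struct trapframe tf = { .eax = 7, .trapno = T_SYSCALL };
    trap(&tf);
    return 0;
}
```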
Chapter 4 Memory Management and Virtual Memory, Asst.Prof.Dr. Supakit Nootyaskool
Chapter 4: Memory Management and Virtual Memory. Asst. Prof. Dr. Supakit Nootyaskool, IT-KMITL.
Objectives:
- To discuss the principles of memory management.
- To understand why memory partitions are important for system organization.
- To describe the concepts of paging and segmentation.
4.1 Differences in main memory use in the system
- A uniprogramming system (for example, DOS) allows only one program to be present in memory at a time, and all resources are provided to that program. Memory then holds: 1) the operating system (in kernel space) and 2) a running program (in user space).
- A multiprogramming system differs from the above by allowing several programs to be present in memory at a time; all resources must be organized and shared among the programs. With two programs in memory, it holds: 1) the operating system (in kernel space), 2) the running program (in user space), and 3) programs waiting for a signal to execute on the CPU (in user space). The multiprogramming system uses memory management to organize the placement of these programs.
4.2 Memory management terms
- Frame: a fixed-length block of main memory.
- Page: a fixed-length block of data that resides in secondary memory. A page of data may temporarily be copied into a frame of main memory.
- Segment: a variable-length block of data that resides in secondary memory. A segment may temporarily be copied into an available region of main memory, or the segment may be divided into pages which can be individually copied into main memory.
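A small sketch can make the frame/page terms concrete: split an address into a page number and an offset, then reuse the offset inside the frame that backs the page. The page size and the toy page table below are invented for the example, not taken from the chapter.

```c
/* Toy illustration of the frame/page terms: split a virtual address into
 * (page number, offset) and map the page to a frame with a one-level table.
 * Page size and table contents are made up for the example. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE  4096u                 /* 4 KiB pages and frames */
#define NUM_PAGES  8u

/* page_table[p] holds the frame number backing page p (toy mapping). */
static const uint32_t page_table[NUM_PAGES] = { 5, 2, 7, 0, 1, 3, 6, 4 };

int main(void)
{
    uint32_t vaddr  = 0x2A10;                        /* some virtual address */
    uint32_t page   = vaddr / PAGE_SIZE;             /* which page */
    uint32_t offset = vaddr % PAGE_SIZE;             /* where inside the page */

    if (page < NUM_PAGES) {
        uint32_t frame = page_table[page];
        uint32_t paddr = frame * PAGE_SIZE + offset; /* same offset inside the frame */
        printf("vaddr 0x%x = page %u + offset 0x%x -> frame %u -> paddr 0x%x\n",
               vaddr, page, offset, frame, paddr);
    }
    return 0;
}
```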
Chapter 7 Memory Management Roadmap
Operating Systems: Internals and Design Principles, 6/E, William Stallings. Chapter 7: Memory Management. Slides by Dave Bremer, Otago Polytechnic, N.Z. ©2009, Prentice Hall.
Roadmap:
- Basic requirements of memory management
- Memory partitioning
- Basic blocks of memory management: paging and segmentation
The need for memory management:
- Memory is cheap today, and getting cheaper, but applications are demanding more and more memory; there is never enough.
- Memory management involves swapping blocks of data from secondary storage.
- Memory I/O is slow compared to a CPU, so the OS must cleverly time the swapping to maximise the CPU's efficiency.
Memory management: memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time.
Memory management requirements: relocation, protection, sharing, logical organisation, physical organisation.
Requirements, relocation: the programmer does not know where the program will be placed in memory when it is executed; it may be swapped to disk and return to main memory at a different location (relocated). Memory references must therefore be translated to the actual physical memory address.
Table 7.1 Memory Management Terms:
- Frame: fixed-length block of main memory.
- Page: fixed-length block of data in secondary memory (e.g. on disk).
- Segment: variable-length block of data that resides in secondary memory.
Requirements, protection: processes should not be able to reference memory locations in another process without permission …
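The memory partitioning topic on the roadmap is usually illustrated with placement algorithms; below is a hedged C sketch of first-fit placement over a list of free partitions. The hole sizes and the first_fit() helper are invented for the example and are not code from the Stallings slides.

```c
/* Sketch of the first-fit placement algorithm for dynamic partitioning:
 * scan the free list and carve the request out of the first hole big enough.
 * Hole sizes are invented for the example. */
#include <stdio.h>
#include <stddef.h>

struct hole { size_t start, size; };

/* Returns the start address of the allocation, or (size_t)-1 if no hole fits. */
static size_t first_fit(struct hole *holes, size_t n, size_t request)
{
    for (size_t i = 0; i < n; i++) {
        if (holes[i].size >= request) {
            size_t addr = holes[i].start;
            holes[i].start += request;   /* shrink the hole from the front */
            holes[i].size  -= request;
            return addr;
        }
    }
    return (size_t)-1;                   /* would trigger swapping or compaction */
}

int main(void)
{
    struct hole holes[] = { {0x1000, 0x800}, {0x4000, 0x3000}, {0x9000, 0x1000} };
    size_t addr = first_fit(holes, 3, 0x1000);      /* ask for 4 KiB */

    if (addr != (size_t)-1)
        printf("placed 4 KiB block at 0x%zx\n", addr);  /* expected: 0x4000 */
    return 0;
}
```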
Bringing Virtualization to the X86 Architecture with the Original VMware Workstation
Bringing Virtualization to the x86 Architecture with the Original VMware Workstation. EDOUARD BUGNION, Stanford University; SCOTT DEVINE, VMware Inc.; MENDEL ROSENBLUM, Stanford University; JEREMY SUGERMAN, Talaria Technologies, Inc.; EDWARD Y. WANG, Cumulus Networks, Inc.
This article describes the historical context, technical challenges, and main implementation techniques used by VMware Workstation to bring virtualization to the x86 architecture in 1999. Although virtual machine monitors (VMMs) had been around for decades, they were traditionally designed as part of monolithic, single-vendor architectures with explicit support for virtualization. In contrast, the x86 architecture lacked virtualization support, and the industry around it had disaggregated into an ecosystem, with different vendors controlling the computers, CPUs, peripherals, operating systems, and applications, none of them asking for virtualization. We chose to build our solution independently of these vendors.
As a result, VMware Workstation had to deal with new challenges associated with (i) the lack of virtualization support in the x86 architecture, (ii) the daunting complexity of the architecture itself, (iii) the need to support a broad combination of peripherals, and (iv) the need to offer a simple user experience within existing environments. These new challenges led us to a novel combination of well-known virtualization techniques, techniques from other domains, and new techniques.
VMware Workstation combined a hosted architecture with a VMM. The hosted architecture enabled a simple user experience and offered broad hardware compatibility. Rather than exposing I/O diversity to the virtual machines, VMware Workstation also relied on software emulation of I/O devices. The VMM combined a trap-and-emulate direct execution engine with a system-level dynamic binary translator to efficiently virtualize the x86 architecture and support most commodity operating systems.
Virtual Memory in X86
Fall 2017 :: CSE 306. Virtual Memory in x86. Nima Honarmand.
x86 Processor Modes:
- Real mode: walks and talks like a really old x86 chip. This is the state at boot; 20-bit address space with direct physical memory access (1 MB of usable memory); no paging; no user mode, the processor has only one protection level.
- Protected mode: standard 32-bit x86 mode. A combination of segmentation and paging; privilege levels (separate user and kernel); 32-bit virtual address; 32-bit physical address (36-bit if the Physical Address Extension (PAE) feature is enabled).
- Long mode: 64-bit mode (aka amd64, x86_64, etc.). Very similar to 32-bit protected mode, but with a bigger address space: 48-bit virtual address space, 52-bit physical address space, and restricted segmentation use. There are even more obscure modes we won't discuss today.
xv6 uses protected mode without PAE (i.e., 32-bit virtual and physical addresses).
Virtual & Physical Address Spaces in the x86 Processor:
- Both RAM and hardware devices (disk, NIC, etc.) are connected to the system bus and mapped to different parts of the physical address space by the BIOS. (Diagram: a core with its MMU and cache issues virtual addresses that become physical addresses on the system interconnect, which connects DRAM, the network card, and the disk; inside the core all addresses are virtual, on the bus all addresses are physical.)
- You can talk to a device by performing read/write operations on its physical addresses. Devices are free to interpret reads/writes in any way they want (the driver knows).
Virtual-to-Physical Translation in x86 (protected/long mode only): virtual address -> (segmentation) -> linear address -> (paging) -> physical address; e.g., 0xdeadbeef -> 0x0eadbeef -> 0x6eadbeef.
- Segmentation cannot be disabled! But it can be made a no-op (a.k.a. …
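The translation pipeline on the last slide (virtual to linear via segmentation, linear to physical via paging) can be simulated in a few lines of C. The sketch below assumes the standard 32-bit non-PAE split (10-bit directory index, 10-bit table index, 12-bit offset); the segment base, table contents, and example addresses are invented for illustration and do not reproduce the lecture's figure exactly. Real hardware reads the PDE and PTE from memory starting at CR3; the single static table here only mimics the index arithmetic.

```c
/* Simulation of the x86 32-bit (non-PAE) translation pipeline:
 * virtual -> linear via a segment base, linear -> physical via a two-level
 * page walk. The segment base and table entries are invented for illustration. */
#include <stdio.h>
#include <stdint.h>

#define PTE_P 0x1u   /* present bit */

static uint32_t page_dir[1024];     /* toy page directory */
static uint32_t page_tab[1024];     /* one toy page table */

static uint32_t segmentation(uint32_t vaddr, uint32_t seg_base)
{
    return seg_base + vaddr;        /* linear = base + offset (limit check omitted) */
}

static int paging(uint32_t lin, uint32_t *paddr)
{
    uint32_t pde = page_dir[lin >> 22];            /* top 10 bits */
    if (!(pde & PTE_P)) return -1;                 /* page fault */
    /* In real hardware the PDE points at a page table in memory; here we
     * just use our single toy table. */
    uint32_t pte = page_tab[(lin >> 12) & 0x3FF];  /* next 10 bits */
    if (!(pte & PTE_P)) return -1;
    *paddr = (pte & 0xFFFFF000u) | (lin & 0xFFFu); /* frame | offset */
    return 0;
}

int main(void)
{
    uint32_t vaddr = 0x00001EEF, seg_base = 0x00400000, paddr;

    page_dir[(seg_base + vaddr) >> 22] = PTE_P;              /* mark PDE present */
    page_tab[((seg_base + vaddr) >> 12) & 0x3FF] = 0x06E00000u | PTE_P;

    uint32_t lin = segmentation(vaddr, seg_base);
    if (paging(lin, &paddr) == 0)
        printf("virtual 0x%08x -> linear 0x%08x -> physical 0x%08x\n",
               vaddr, lin, paddr);
    return 0;
}
```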
A+ Certification for Dummies, 2nd Edition.pdf
A+ Certification for Dummies, Second Edition, by Ron Gilster. ISBN: 0764508121. Hungry Minds © 2001, 567 pages. Your fun and easy guide to Exams 220-201 and 220-202!
A+ Certification For Dummies by Ron Gilster. Published by Hungry Minds, Inc., 909 Third Avenue, New York, NY 10022 (www.hungryminds.com, www.dummies.com). Copyright © 2001 Hungry Minds, Inc. All rights reserved. No part of this book, including interior design, cover design, and icons, may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording, or otherwise) without the prior written permission of the publisher. Library of Congress Control Number: 2001086260. ISBN: 0-7645-0812-1. Printed in the United States of America. 10 9 8 7 6 5 4 3 2 1. 2O/RY/QU/QR/IN.
Distributed in the United States by Hungry Minds, Inc. Distributed by CDG Books Canada Inc. for Canada; by Transworld Publishers Limited in the United Kingdom; by IDG Norge Books for Norway; by IDG Sweden Books for Sweden; by IDG Books Australia Publishing Corporation Pty. Ltd. for Australia and New Zealand; by TransQuest Publishers Pte Ltd. for Singapore, Malaysia, Thailand, Indonesia, and Hong Kong; by Gotop Information Inc. for Taiwan; by ICG Muse, Inc. for Japan; by Intersoft for South Africa; by Eyrolles for France; by International Thomson Publishing for Germany, Austria and Switzerland; by Distribuidora Cuspide for Argentina; by LR International for Brazil; by Galileo Libros for Chile; by Ediciones ZETA S.C.R. Ltda. for Peru; by WS Computer Publishing Corporation, Inc., for the Philippines; by Contemporanea de Ediciones for Venezuela; by Express Computer Distributors for the Caribbean and West Indies; by Micronesia Media Distributor, Inc. …
Windows Security Hardening Through Kernel Address Protection
Windows Security Hardening Through Kernel Address Protection. Mateusz "j00ru" Jurczyk. August 2011.
Abstract: As more defense-in-depth protection schemes like Windows Integrity Control or sandboxing technologies are deployed, threats affecting local system components become a relevant issue in terms of the overall operating system user's security plan. In order to address the continuous development of Elevation of Privileges exploitation techniques, Microsoft started to enhance the Windows kernel security by hardening the most sensitive system components, such as Kernel Pools with the Safe Unlinking mechanism introduced in Windows 7 [19]. At the same time, the system supports numerous both official and undocumented services, providing valuable information regarding the current state of the kernel memory layout. In this paper, we discuss the potential threats and problems concerning unprivileged access to the system address space information. In particular, we also present how subtle information leakages can prove useful in practical attack scenarios. Further in the document, we conclusively provide some suggestions as to how problems related to kernel address information availability can be mitigated, or entirely eliminated.
1 Introduction
Communication between distinct executable modules running at different privilege levels or within separate security domains takes place most of the time, in numerous fields of modern computing. Both hardware- and software-enforced privilege separation mechanisms are designed to control the access to certain resources: grant it to modules with higher rights, while ensuring that unauthorized entities are not able to reach the protected data. The discussed architecture is usually based on setting up a trusted set of modules (further referred to as "the broker") in the privileged area, while having the potentially malicious code (also called "the guest") executed in a controlled environment.
Chapter 3 Protected-Mode Memory Management
CHAPTER 3: PROTECTED-MODE MEMORY MANAGEMENT
This chapter describes the Intel 64 and IA-32 architecture's protected-mode memory management facilities, including the physical memory requirements, segmentation mechanism, and paging mechanism. See also: Chapter 5, "Protection" (for a description of the processor's protection mechanism) and Chapter 20, "8086 Emulation" (for a description of memory addressing protection in real-address and virtual-8086 modes).
3.1 MEMORY MANAGEMENT OVERVIEW
The memory management facilities of the IA-32 architecture are divided into two parts: segmentation and paging. Segmentation provides a mechanism for isolating individual code, data, and stack modules so that multiple programs (or tasks) can run on the same processor without interfering with one another. Paging provides a mechanism for implementing a conventional demand-paged, virtual-memory system where sections of a program's execution environment are mapped into physical memory as needed. Paging can also be used to provide isolation between multiple tasks. When operating in protected mode, some form of segmentation must be used. There is no mode bit to disable segmentation. The use of paging, however, is optional.
These two mechanisms (segmentation and paging) can be configured to support simple single-program (or single-task) systems, multitasking systems, or multiple-processor systems that use shared memory. As shown in Figure 3-1, segmentation provides a mechanism for dividing the processor's addressable memory space (called the linear address space) into smaller protected address spaces called segments. Segments can be used to hold the code, data, and stack for a program or to hold system data structures (such as a TSS or LDT).
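As a small companion to this overview, the sketch below decodes a 16-bit segment selector into its requested privilege level (bits 0-1), table indicator (bit 2), and descriptor index (bits 3-15), and computes the byte offset of the corresponding 8-byte descriptor within the table; the selector value used is an arbitrary example, not one prescribed by the manual.

```c
/* Decode a 16-bit segment selector: bits 0-1 = RPL, bit 2 = table indicator
 * (0 = GDT, 1 = LDT), bits 3-15 = descriptor index. Each descriptor is 8 bytes.
 * The selector value used here is an arbitrary example. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t selector = 0x001B;              /* e.g. a user-code-style selector */
    unsigned rpl   = selector & 0x3;         /* requested privilege level */
    unsigned ti    = (selector >> 2) & 0x1;  /* 0 = GDT, 1 = LDT */
    unsigned index = selector >> 3;          /* which descriptor */

    printf("selector 0x%04x: index=%u table=%s RPL=%u, descriptor at table offset %u\n",
           selector, index, ti ? "LDT" : "GDT", rpl, index * 8);
    return 0;
}
```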