Main Memory

Electrical and Computer Engineering, Stephen Kim ([email protected]), ECE/IUPUI RTOS & APPS

Outline
- Background
- Swapping
- Contiguous allocation
- Paging
- Segmentation
- Segmentation with paging

Background
- A program must be brought into memory and placed within a process for it to be run.
- Input queue: the collection of processes on disk that are waiting to be brought into memory to run.
- User programs go through several steps before being run.
- From the memory unit's viewpoint, the CPU generates a stream of memory addresses; it does not matter how they were generated or what they are for.

Binding of Instructions and Data to Memory
- Most systems allow a user program to reside in any part of physical memory, so how can a user program know where its data are?
- Address binding of instructions and data to memory addresses can happen at three different stages:
  - Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
  - Load time: relocatable code must be generated if the memory location is not known at compile time.
  - Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This requires hardware support for address maps (e.g., base and limit registers).

Multistep Processing of a User Program
(figure: compilation, linking, loading, and execution-time binding steps)

Logical vs. Physical Address Space
- The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
  - Logical address: generated by the CPU; also referred to as a virtual address.
  - Physical address: the address seen by the memory unit.
- Logical and physical addresses are the same in compile-time and load-time address-binding schemes; they differ in the execution-time address-binding scheme.

Memory-Management Unit (MMU)
- A hardware device that maps virtual addresses to physical addresses.
- In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time the address is sent to memory.
- The user program deals with logical addresses; it never sees the real physical addresses.

Dynamic Relocation Using a Relocation Register
(figure)

Dynamic Loading
- A routine is not loaded until it is called.
- Better memory-space utilization: an unused routine is never loaded.
  - All routines remain on disk in a relocatable load format. The main program is loaded into memory and executed. When a routine needs to call another routine, the caller first checks whether the other routine has been loaded. If not, the relocatable linking loader is called to load the desired routine into memory and to update the program's address table to reflect the change. Then control is passed to the newly loaded routine.
- Useful when large amounts of code are needed to handle infrequently occurring cases (e.g., error-handling routines).
- No special support from the operating system is required; dynamic loading is implemented through program design.
  - It is the responsibility of users to design their programs to take advantage of such a method.
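The slides describe dynamic loading at the design level; as a concrete illustration (not from the slides), POSIX systems expose this check-and-load pattern through the dlopen/dlsym API. The library name libtrig.so and the symbol heavy_compute below are hypothetical.

```c
#include <stdio.h>
#include <dlfcn.h>   /* POSIX dynamic loading: dlopen, dlsym, dlclose */

int main(void) {
    /* Load the library only when the infrequent case actually occurs;
     * until this call, none of its code occupies memory. */
    void *handle = dlopen("libtrig.so", RTLD_LAZY);   /* hypothetical library */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine by name, much as the relocatable linking
     * loader consults the program's address table. */
    double (*heavy_compute)(double) =
        (double (*)(double))dlsym(handle, "heavy_compute"); /* hypothetical symbol */
    if (heavy_compute)
        printf("result = %f\n", heavy_compute(2.0));

    dlclose(handle);
    return 0;
}
```

Link with -ldl on Linux (cc main.c -ldl). The operating system only supplies the loader; deciding what to defer remains the programmer's job, exactly as the slide notes.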
Dynamic Linking
- Similar to dynamic loading, but linking is postponed until execution time.
- A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
- The stub replaces itself with the address of the routine and executes the routine.
- All processes that use a language library execute only one copy of the library code.
- Operating-system support is needed to check whether the routine is in the process's memory address space.
- Dynamic linking is particularly useful for libraries.
  - A library may be replaced by a new version, and all programs that reference the library will automatically use the new version.

Overlays
- Allow a process to be larger than its allocated memory.
- Keep in memory only those instructions and data that are needed at any given time.
- Implemented by the user; no special support is needed from the operating system, but the programming design of an overlay structure is complex.
- Example: a two-pass assembler
  - Pass 1: 70 KB, Pass 2: 80 KB, symbol table: 20 KB, common routines: 30 KB.
  - Without overlays, 200 KB is required.
  - The overlay driver itself needs some memory, say 10 KB.

Overlays for a Two-Pass Assembler
(figure) With the overlay, the two passes never coexist in memory, so 20 + 30 + 10 + max(70, 80) = 140 KB is enough.

Swapping (1)
- A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
- Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.

Swapping (2)
- The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
  - For example: a 1 MB process, a 5 MB/sec transfer rate, no head seeks, and an average latency of 8 msec.
  - The total swap time (swap-out plus swap-in) is 2 * (1 MB / (5 MB/sec) + 8 msec) = 2 * 208 msec = 416 msec.
  - If the system uses round-robin scheduling, the time quantum must be much larger than 416 msec.
- Modified versions of swapping are found on many systems, e.g., UNIX, Linux, and Windows.

Schemes of Memory Allocation
- Contiguous allocation
- Paging
- Segmentation
- Segmentation with paging

Contiguous Allocation
- Main memory is usually divided into two partitions:
  - the resident operating system, usually held in low memory;
  - user processes, held in high memory.
- Why is the OS in low memory? Because that is where the interrupt vector table is located.
- Single-partition allocation
  - A relocation-register scheme is used to protect user processes from each other, and from changes to operating-system code and data.
  - The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses. Each logical address must be less than the limit register.

CA, Hardware Support for Relocation and Limit Registers
- The MMU maps each logical address dynamically: the address is first compared with the limit register, and if it is not smaller, a trap (addressing error) is raised; otherwise the value in the relocation register is added to it, and the result is sent to memory as the physical address.
(figure: CPU -> logical address -> limit check -> + relocation register -> memory)
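A minimal sketch of that check-then-relocate step, assuming made-up register values; this is an illustration of the mechanism, not code from the course.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Per-process MMU state: the limit and relocation registers. */
typedef struct {
    uint32_t limit;      /* length of the process's logical address space */
    uint32_t relocation; /* smallest physical address of its partition */
} mmu_t;

/* Translate a logical address, trapping on an out-of-range access. */
static uint32_t translate(const mmu_t *mmu, uint32_t logical) {
    if (logical >= mmu->limit) {      /* the "no" branch in the figure */
        fprintf(stderr, "trap: addressing error (logical 0x%x, limit 0x%x)\n",
                (unsigned)logical, (unsigned)mmu->limit);
        exit(EXIT_FAILURE);
    }
    return logical + mmu->relocation; /* the "yes" branch: relocate */
}

int main(void) {
    mmu_t mmu = { .limit = 0x4000, .relocation = 0x14000 }; /* made-up values */
    printf("logical 0x1234 -> physical 0x%x\n",
           (unsigned)translate(&mmu, 0x1234));
    return 0;
}
```

Because every reference passes through the addition, a process can be moved at run time simply by copying its image and updating the relocation register, which is what makes the compaction discussed later in this section possible.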
CA, Memory Allocation
- The OS maintains a table of allocated memory blocks and available memory blocks.
- Initially, all memory is available, so there is one large block of available memory: one big hole.
- In general, holes of various sizes are scattered throughout memory.
  - When a process arrives, it is allocated memory from a hole large enough to accommodate it.
  - If the hole is larger than needed, it is split in two: one part is allocated to the new process, and the other is returned to the pool of holes.
  - When a process terminates, it releases its block, which is placed back into the pool of holes.

CA, Dynamic Storage-Allocation Problem
- How to satisfy a request from the set of available holes:
  - Random-fit: choose an arbitrary hole among those large enough; utilization is not good.
  - First-fit: allocate the first hole that is big enough. On average the search examines N/2 holes, where N is the number of holes.
  - Best-fit: allocate the smallest hole that is big enough; the entire list must be searched, unless it is ordered by size. Produces the smallest leftover hole. Time complexity is O(N).
  - Worst-fit: allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole. Utilization is worse than first-fit and best-fit.
  - Next-fit: like first-fit, but the search starts at the hole where the last placement was made, treating the list of holes as a circular list. Time complexity is O(N).
- utilization = occupied_memory / total_memory

CA, Knuth's Findings
- For most placement policies, including best-fit and first-fit, the number of holes is, on average, approximately equal to half the number of segments:
  - N segments produce about N/2 unusable holes.
- Best-fit and first-fit give about the same utilization.

CA, Fragmentation
- External fragmentation: enough total memory space exists to satisfy a request, but no single hole can hold it.
- Internal fragmentation:
  - To address external fragmentation, memory is partitioned into fixed-size blocks and allocated in units of the block size, not in bytes.
  - Allocated memory may then be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.

CA, Compaction
- Reduce external fragmentation by compaction.
- Shuffle the memory contents to place all free memory together in one large block.
- Compaction is possible only if relocation is dynamic and is done at execution time.

CA, Binary Buddy Systems, Method
- The system keeps lists of holes, one list for each of the sizes 2^m, 2^(m+1), ..., 2^n, where 2^m is the smallest segment size.
- To place a segment of size s:
  - Round s up to the next power of 2, i.e., to 2^k such that 2^(k-1) < s <= 2^k.
  - Look in the free list for size 2^k.
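To make the rounding rule concrete, here is a small sketch (my own illustration, not from the slides) that computes k for a request of size s, assuming the smallest segment size is 2^5 = 32 bytes.

```c
#include <stdio.h>
#include <stddef.h>

#define MIN_ORDER 5  /* hypothetical smallest segment: 2^5 = 32 bytes */

/* Round a request of size s up to the next power of two 2^k,
 * so that 2^(k-1) < s <= 2^k, and never below the minimum order. */
static unsigned order_for(size_t s) {
    unsigned k = MIN_ORDER;
    while (((size_t)1 << k) < s)  /* grow 2^k until it covers s */
        k++;
    return k;
}

int main(void) {
    size_t requests[] = { 20, 32, 33, 100 };
    for (size_t i = 0; i < sizeof requests / sizeof requests[0]; i++) {
        unsigned k = order_for(requests[i]);
        printf("s = %3zu -> rounded to 2^%u = %3zu bytes (free list %u)\n",
               requests[i], k, (size_t)1 << k, k - MIN_ORDER);
    }
    return 0;
}
```

The allocator then searches the free list for size 2^k; in a full buddy system, a miss at level k is satisfied by recursively splitting a hole from level k+1 into two buddies.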