Lecture 8: Memory Management — CSE 120: Principles of Operating Systems


Lecture 8: Memory Management
CSE 120: Principles of Operating Systems
UC San Diego: Summer Session I, 2009
Frank Uyeda

Announcements
• PeerWise questions due tomorrow.
• Project 2 is due on Friday.
  – Milestone on Tuesday night (tonight).
• Homework 3 is due next Monday.

PeerWise
• A base register contains:
  – The first available physical memory address for the system
  – The beginning physical memory address of a process's address space
  – The base (i.e., base 2 for binary, etc.) that is used for determining page sizes
  – The minimum size of physical memory that will be assigned to each process

PeerWise
• If the base register holds 640000 and the limit register is 200000, then what range of addresses can the program legally access?
  – 440000 through 640000
  – 200000 through 640000
  – 0 through 200000
  – 640000 through 840000

Goals for Today
• Understand paging
  – Address translation
  – Page tables
• Understand segmentation
  – How it combines features of other techniques

Recap: Virtual Memory
• Memory Management Unit (MMU)
  – Hardware unit that translates a virtual address to a physical address
  – Each memory reference is passed through the MMU
  – Translates the virtual address to a physical address
• Translation Lookaside Buffer (TLB)
  – Essentially a cache for the MMU's virtual-to-physical translation table
  – Not needed for correctness, but a source of significant performance gain
(Diagram: the CPU issues a virtual address; the MMU, with its TLB, consults the translation table and sends the physical address to memory.)

Recap: Paging
• Paging solves the external fragmentation problem by using fixed-sized units in both physical and virtual memory.
(Diagram: virtual memory pages 1..N mapped onto physical memory pages.)

Memory Management in Nachos
(Diagram: a process's virtual memory, with code at 0x00000000 followed by data and stack up to 0x….., mapped onto physical pages 1..N; physical memory runs from 0x00000000 to 0x00040000.)

Memory Management in Nachos
(Same diagram, with questions:)
• What if the code section isn't a multiple of the page size?
• What if 0x….. (the size of the virtual address space) is larger than physical memory?
• What if 0x….. is smaller than physical memory?

Memory Management in Nachos
• How do we know how large a program is?
  – A lot is done for you in userprog/UserProcess.java
    • Your applications will be written in "C"
    • Your "C" programs compile into .coff files
    • Coff.java and CoffSection.java are provided to partition the coff file
  – If there is not enough physical memory, exec() returns an error
    • Nachos exec() is different from Unix exec()!!!!!
• Virtual memory maps exactly to physical memory?
  – Who needs a page table, TLB, or MMU?
  – What are the implications of this for multiprogramming?

Project 2: Multiprogramming
• Need to support multiprogramming
  – Programs coexist in physical memory
  – How do we do this?
(Diagram: physical memory holding processes P1 through P5, each with a base register and limit register; a virtual address is compared against the limit, and if it is in range it is added to the base, otherwise a protection fault is raised.)
• Would fixed partitions work? Would variable partitions work?
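The base-and-limit check in the slide above can be written out in a few lines of C. This is a minimal sketch, not Nachos or hardware code; the register values are taken from the PeerWise question (base 640000, limit 200000), so the legal physical range works out to 640000 through 839999, which corresponds to the "640000 through 840000" answer choice.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Translate a virtual address using a base/limit register pair.
 * Returns false (a protection fault) if the address is out of range. */
static bool translate(uint32_t vaddr, uint32_t base, uint32_t limit,
                      uint32_t *paddr)
{
    if (vaddr >= limit)
        return false;            /* out of range: protection fault     */
    *paddr = base + vaddr;       /* relocate into the process's region */
    return true;
}

int main(void)
{
    uint32_t base = 640000, limit = 200000, paddr;

    /* Legal virtual addresses are 0..199999, so the legal physical
     * range is 640000..839999. */
    if (translate(123456, base, limit, &paddr))
        printf("virtual 123456 -> physical %u\n", (unsigned)paddr);
    if (!translate(250000, base, limit, &paddr))
        printf("virtual 250000 -> protection fault\n");
    return 0;
}
```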
Paging
(Diagram: Process 1's virtual address space (pages 1..N) and Process 2's virtual address space (pages 1..Q) are both mapped through the MMU, its TLB, and a page table onto physical memory pages 1..M, which runs from 0x00000000 to 0x00040000.)

MMU and TLB
(This slide repeats the Recap: Virtual Memory slide above: the MMU translates every memory reference from a virtual to a physical address, and the TLB caches the MMU's virtual-to-physical translations; it is not needed for correctness but is a source of significant performance gain.)

Paging: Translations
• Translating addresses
  – A virtual address has two parts: the virtual page number and the offset
  – The virtual page number (VPN) is an index into a page table
  – The page table determines the page frame number (PFN)
  – The physical address is PFN::offset
• Example: virtual address 0xBAADF00D = VPN 0xBAADF, offset 0x00D; the translation table maps virtual page number 0xBAADF to physical page number (page frame number) 0x900DF.

Page Table Entries (PTEs)
PTE fields: M (1 bit), R (1 bit), V (1 bit), Prot (2 bits), Page Frame Number (20 bits)
• Page table entries control the mapping
  – The Modify bit says whether or not the page has been written
    • It is set when a write to the page occurs
  – The Reference bit says whether the page has been accessed
    • It is set when a read or write to the page occurs
  – The Valid bit says whether or not the PTE can be used
    • It is checked each time the virtual address is used
  – The Protection bits say what operations are allowed on the page
    • Read, write, execute
  – The page frame number (PFN) determines the physical page
• Note: when you do exercises in exams or homework, we'll often tell you to ignore the M, R, V, and Prot bits when calculating the size of structures

Paging Example Revisited
(Diagram: virtual address 0xBAADF00D is split into page number 0xBAADF and offset 0x00D; the page table entry for that page number supplies page frame 0x900DF, giving physical address 0x900DF00D within physical memory 0x00000000–0xFFFFFFFF.)

Paging
(Diagram: the two-process picture from before, now showing each process's page table as a list of valid bits and page frames, e.g. V=1 with frames 0x04, 0x01, 0x05, 0x07.)

Example
• Assume we are using paging and:
  – Memory access = 5 µs
  – TLB search = 500 ns
• What is the average memory access time without the TLB?
• What is the average memory access time with a 50% TLB hit rate?
• What is the average memory access time with a 90% TLB hit rate?

Paging Advantages
• Easy to allocate memory
  – Memory comes from a free list of fixed-size chunks
  – Allocating a page is just removing it from the list
  – External fragmentation is not a problem
• Easy to swap out chunks of a program
  – All chunks are the same size
  – Pages are a convenient multiple of the disk block size
  – How do we know if a page is in memory or not?

Paging Limitations
• Can still have internal fragmentation
  – A process may not use memory in multiples of pages
• Memory reference overhead
  – 2 references per address lookup (page table, then memory)
  – Solution: use a hardware cache of lookups (TLB)
• Memory required to hold page tables can be significant
  – Need one PTE per page
  – 32-bit address space w/ 4 KB pages = up to ____ PTEs
  – 4 bytes/PTE = _____ MB page table
  – 25 processes = ______ MB just for page tables!
  – Solution: page the page tables (more later)

Paging: Linear Address Space
(Diagram: a single linear address space running from 0x00……. (starting address) to 0xFFF….. (ending address), laid out as text segment, data segment, heap, and stack.)
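The VPN/offset translation on the slides above can be sketched in C. This is not the Nachos implementation; a 4 KB page size (12-bit offset) is assumed as on the slides, and the single mapping installed below simply reproduces the 0xBAADF00D to 0x900DF00D example.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT  12u                          /* 4 KB pages            */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1u)    /* low 12 bits = 0xFFF   */

/* One page-frame number per virtual page number (2^20 entries). */
static uint32_t page_table[1u << 20];

static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;       /* 0xBAADF for 0xBAADF00D */
    uint32_t offset = vaddr &  OFFSET_MASK;      /* 0x00D                  */
    uint32_t pfn    = page_table[vpn];           /* e.g. 0x900DF           */
    return (pfn << PAGE_SHIFT) | offset;         /* 0x900DF00D             */
}

int main(void)
{
    page_table[0xBAADF] = 0x900DF;               /* hypothetical mapping   */
    printf("0x%08X -> 0x%08X\n", 0xBAADF00Du,
           (unsigned)translate(0xBAADF00Du));
    return 0;
}
```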
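The PTE fields can likewise be packed into a 32-bit word. The slide gives only the field widths (1 + 1 + 1 + 2 + 20 bits), so the bit positions chosen below are an assumption made for illustration, not a real hardware layout.

```c
#include <stdint.h>
#include <stdio.h>

typedef uint32_t pte_t;

#define PTE_PFN_MASK   0x000FFFFFu     /* assumed: PFN in bits 0-19   */
#define PTE_PROT_SHIFT 20              /* assumed: Prot in bits 20-21 */
#define PTE_VALID      (1u << 22)      /* assumed: V bit              */
#define PTE_REF        (1u << 23)      /* assumed: R bit              */
#define PTE_MOD        (1u << 24)      /* assumed: M bit              */

/* Build a valid PTE from a page frame number and protection bits. */
static pte_t make_pte(uint32_t pfn, uint32_t prot)
{
    return (pfn & PTE_PFN_MASK) | ((prot & 0x3u) << PTE_PROT_SHIFT) | PTE_VALID;
}

int main(void)
{
    pte_t pte = make_pte(0x900DF, 0x1 /* e.g. read-only */);
    printf("PFN   = 0x%05X\n", (unsigned)(pte & PTE_PFN_MASK));
    printf("valid = %u\n", (unsigned)((pte & PTE_VALID) != 0));
    return 0;
}
```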
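One way to work out the average-memory-access-time questions from the Example slide, assuming the TLB is probed on every reference and a miss then costs a full page-table access in memory before the actual access (the slide itself leaves the answers as an exercise):

```c
#include <stdio.h>

int main(void)
{
    const double mem = 5000.0;                 /* memory access: 5 us, in ns */
    const double tlb = 500.0;                  /* TLB search: 500 ns         */
    const double hit_rates[] = { 0.5, 0.9 };

    /* Without a TLB: page-table access + actual access = 2 * 5 us. */
    printf("no TLB       : %.1f us\n", 2.0 * mem / 1000.0);

    for (int i = 0; i < 2; i++) {
        double p = hit_rates[i];
        /* hit:  TLB search + one memory access
         * miss: TLB search + page-table access + memory access */
        double amat = p * (tlb + mem) + (1.0 - p) * (tlb + 2.0 * mem);
        printf("%2.0f%% hit rate : %.1f us\n", p * 100.0, amat / 1000.0);
    }
    return 0;
}
```

Under these assumptions the answers come out to 10 µs with no TLB, 8 µs at a 50% hit rate, and 6 µs at a 90% hit rate.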
Segmentation: Multiple Address Spaces
(Diagram: separate address spaces for the text, data, heap, and stack segments.)

Segmentation
(Diagram: a virtual address is a <segment #, offset> pair; the segment # indexes a segment table of (limit, base) entries; the offset is compared against the limit, and if it is in range it is added to the base to form the physical memory address, otherwise a protection fault is raised.)

Segmentation
• Segmentation is a technique that partitions memory into logically related data units
  – Module, procedure, stack, data, file, etc.
  – Virtual addresses become <segment #, offset>
  – Units of memory from the user's perspective
• Natural extension of variable-sized partitions
  – Variable-sized partitions = 1 segment/process
  – Segmentation = many segments/process
• Hardware support
  – Multiple base/limit pairs, one per segment (segment table)
  – Segments named by #, used to index into the table

Segment Table
• Extensions
  – Can have one segment table per process
    • Segment #s are then process-relative (why do this?)
  – Can easily share memory
    • Put the same translation into the base/limit pair
    • Can share with different protections (same base/limit, different prot)
    • Why is this different from pure paging?
  – Can define protection by segment
• Problems
  – Cross-segment addresses
    • Segments need to have the same #s for pointers to them to be shared among processes
  – Large segment tables
    • Keep in main memory, use a hardware cache for speed

Segmentation Example
(Figure: external fragmentation in segmentation. Image courtesy of Tanenbaum, MOS 3/e.)

Paging vs. Segmentation
(Comparison table for paging vs. segmentation, to be filled in; adapted from Tanenbaum, MOS 3/e. Considerations:)
• Need the programmer be aware that this technique is being used?
• How many linear address spaces are there?
• Can the total address space exceed the size of physical memory?
• Can procedures and data be distinguished and separately protected?
• Is sharing of procedures between users facilitated?

Segmentation and Paging
• Can combine segmentation and paging
  – The x86 supports segments and paging
• Use segments to manage logically related units
  – Module, procedure, stack, file, data, etc.
  – Segments vary in size, but are usually large (multiple pages)
• Use pages to partition segments into fixed-sized chunks
  – Makes segments easier to manage within physical memory
  – Segments become "pageable" – rather than moving segments into and out of memory, just move the page portions of a segment
  – Need to allocate page table entries only for those pieces of the segments that have themselves been allocated
• Tends to be complex…

Virtual Memory Summary
• Virtual memory
  – Processes use virtual addresses
  – OS + hardware translate virtual addresses into physical addresses
• Various techniques
  – Fixed partitions – easy to use, but internal fragmentation
  – Variable partitions – more efficient, but external fragmentation
  – Paging – use small, fixed-size chunks; efficient for the OS
  – Segmentation – manage in chunks from the user's perspective
  – Combine paging and segmentation to get the benefits of both

Managing Page Tables
• We computed the size of the page table for a 32-bit address space with 4K pages to be ___ MB
  – This …
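For reference, the blanks on the Paging Limitations slide and the "___ MB" above work out as follows. This is plain arithmetic for a 32-bit address space, 4 KB pages, 4-byte PTEs, and 25 processes, ignoring the M, R, V, and Prot bits as the slides suggest.

```c
#include <stdio.h>

int main(void)
{
    unsigned long long ptes  = 1ULL << (32 - 12);  /* 2^20 = 1,048,576 PTEs per process */
    unsigned long long table = ptes * 4ULL;        /* 4 bytes/PTE -> 4 MB page table    */
    unsigned long long total = table * 25ULL;      /* 25 processes -> 100 MB            */

    printf("PTEs per process     : %llu\n", ptes);
    printf("page table size (MB) : %llu\n", table >> 20);
    printf("25 processes (MB)    : %llu\n", total >> 20);
    return 0;
}
```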
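Finally, the segment-table translation drawn on the Segmentation slides above (the segment # selects a limit/base pair, the offset is checked against the limit, and the physical address is base + offset) can be sketched in C. The segment table contents here are invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct segment { uint32_t base, limit; };

/* Hypothetical per-process segment table. */
static const struct segment seg_table[] = {
    { 0x00010000, 0x4000 },   /* 0: text  */
    { 0x00050000, 0x2000 },   /* 1: data  */
    { 0x00080000, 0x1000 },   /* 2: stack */
};

/* Translate <segment #, offset>; false means a protection fault. */
static bool translate(uint32_t seg, uint32_t offset, uint32_t *paddr)
{
    if (seg >= sizeof(seg_table) / sizeof(seg_table[0]))
        return false;                        /* no such segment  */
    if (offset >= seg_table[seg].limit)
        return false;                        /* offset > limit   */
    *paddr = seg_table[seg].base + offset;
    return true;
}

int main(void)
{
    uint32_t paddr;
    if (translate(1, 0x0100, &paddr))        /* <segment 1, offset 0x100> */
        printf("<1, 0x100>  -> 0x%08X\n", (unsigned)paddr);
    if (!translate(1, 0x3000, &paddr))
        printf("<1, 0x3000> -> protection fault\n");
    return 0;
}
```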