In Lieu of Swap: Analyzing Compressed RAM in Mac OS X and Linux


In Lieu of Swap: Analyzing Compressed RAM in Mac OS X and Linux

Golden G. Richard III, University of New Orleans and Arcane Alloy, LLC ([email protected])
Andrew Case, The Volatility Project ([email protected])

Who?

• Professor of Computer Science and University Research Professor; Director, GNOCIA, University of New Orleans (http://www.cs.uno.edu/~golden). Digital forensics, OS internals, reverse engineering, offensive computing, pushing students to the brink of destruction, et al.
• Founder, Arcane Alloy, LLC (http://www.arcanealloy.com). Digital forensics, reverse engineering, malware analysis, security research, tool development, training.
• Co-Founder, Partner / Photographer, High ISO Music, LLC (http://www.highisomusic.com). Rock stars. Heavy metal. Earplugs.

© 2014 by Golden G. Richard III and Andrew Case

Forensics, Features, Privacy

• New OS features may negatively impact forensics, e.g.:
  – Native whole disk encryption
  – Encrypted swap
  – Secure Recycle Bin / Trash facilities
  – Secure erasure during disk formatting
• Today's topic: compressed RAM
• Claim: compressed RAM positively impacts forensics
• First, some background

Where's the Evidence?

• "Traditional" storage forensics: files and filesystem metadata, application metadata, the Windows registry, deleted files, print spool files, hibernation files, temp files, log files, browser caches, network traces, slack space, and swap files
• Memory analysis: RAM, i.e., OS and application data structures
Memory Analysis on a RAM Dump

• Processes (dead or alive), e.g., discover unauthorized programs
• Open files, e.g., detect keystroke loggers, hidden processes
• Network connections, e.g., find backdoors, connections to contraband sites
• Volatile registry contents
• Volatile application data, e.g., plaintext for encrypted material, chat messages, email fragments
• Other OS structures, e.g., analysis of memory allocators to retrieve interesting structures, filesystem cache, etc.
• Encryption keys, e.g., keys for whole disk encryption schemes
• What's missing?

Memory Analysis: Swap Files

• "Traditional" forensics: capture the contents of storage devices, including the swap file on disk
• Live forensics / memory analysis: capture RAM contents
• The swap file contains RAM "overflow": if you don't have this data, you probably don't have a complete memory dump
• If you acquire both RAM and disk contents, memory smearing is likely and the swap file may be (very) out of sync with the memory dump
(Slide figure: a 2GB physical RAM dump, page by page, next to the swap file on disk.)

Forensically, Swap Files are a Mess

• Sensitive data (an SSN such as 554-88-2345, an email address, a search term like "murder", kernel structures such as netstat's IP data) gets trapped in unsanitized blocks allocated to the swap file, in no particular order
• Swap may also be encrypted: OS X, for example, initializes a random AES key at boot (XNU's swap_crypt_ctx_initialize()):

    void
    swap_crypt_ctx_initialize(void)
    {
        unsigned int i;

        if (swap_crypt_ctx_initialized == FALSE) {
            for (i = 0; i < (sizeof(swap_crypt_key) /
                             sizeof(swap_crypt_key[0])); i++) {
                swap_crypt_key[i] = random();
            }
            aes_encrypt_key((const unsigned char *) swap_crypt_key,
                            SWAP_CRYPT_AES_KEY_SIZE,
                            &swap_crypt_ctx.encrypt);
            ...
            swap_crypt_ctx_initialized = TRUE;
        }
    }
Background: Memory Management

• Need to allocate RAM to:
  – The operating system
  – Individual applications
  – All applications collectively, possibly fairly, or by some other policy
• OSes need to manage memory efficiently to feed RAM-hungry apps
• Ever-increasing "need" for more memory

Virtual Memory

• Provides a contiguous, linear address space for each process
• Virtual address space is divided into fixed-size pages
  – Physical memory is divided into frames of the same size
  – On Intel x86, these are normally 4K
• OS keeps track of free and allocated pages
• Processes get only as many pages as they need

Logical vs. Physical Addresses

• Virtual address: an address whose scope is a particular process or the OS kernel
• Physical address: a location in physical RAM
(Slide figure: processes P1, P2, and P3, each with a 4GB virtual address space, sharing 2GB of physical RAM.)

Paging: (Very) Simplified View

• Page tables (per process) are used to translate virtual to physical addresses
• Pages belonging to a particular process are generally not contiguous in RAM
(Slide figure: P1's page table maps its virtual pages to out-of-order physical frames.)

64-bit Mode: 4-level Page Tables: 4K Pages

(Slide figure: the four-level x86-64 page table walk.)

Stretching RAM

• Want to allocate more memory to processes than the total amount of RAM
• Extend RAM by swapping pages to disk and retrieving them as necessary
• Allows more (and bigger) processes to execute
• Workable, because:
  – Not all executing processes are active at once
  – Processes typically have "working sets" of pages, i.e., memory they're actively using "now"
Virtual Memory with Swapping

• For good performance, you can't overdo it: the pipe between RAM and a spinning disk is small, roughly 100MB/sec. Otherwise: THRASHING
• SSDs substantially increase the size of the pipe: roughly 500MB/sec on a SATA SSD
• PCIe flash storage reaches 1.2GB/sec+ (e.g., the new Mac Pro, the newest iMacs)
(Slide figures: P1's page table with some entries marked "not present," backed by a swap file on disk, SSD, or PCIe flash.)

Avoiding Swapping by Compressing RAM

• Optimization:
  – "Hoard" some RAM, making it unavailable to processes
  – When RAM is running low, compress pages in memory instead of swapping to disk
• Why bother?
  – SSDs are fast
  – PCIe solid state storage is "crazy" fast
  – Memory is ever cheaper
• So… why?

It's Worth It

• Ultraportable laptops are still RAM limited
  – e.g., current MacBook Air: base 4GB, max 8GB RAM
• Virtualization, even on laptops
  – Malware analysts running VMware Workstation, Fusion, etc.
• Swapping can cause serious performance issues
  – New Mac Pro memory bandwidth: 60GB/sec
  – Max swap bandwidth: 1.2GB/sec
• Modern CPUs compress and decompress very efficiently

Virtual Memory with Compressed RAM

In Mac OS X Mavericks and Linux, the OS sets aside some RAM and compresses pages that would otherwise have been swapped out due to memory pressure. (Slide figure: P1's page table, with "not present" pages now backed by a pool of compressed pages in RAM as well as a swap file on SSD.)

It's Back

• RAM Doubler did this ~20 years ago
• Historically, RAM compression was not very effective:
  – Processors were too slow to offset compress/decompress costs
  – Poor integration with the OS
• An academic paper from 2003 surveys compression schemes for RAM:
  – M. Simpson, R. Barua, and S. Biswas, "Analysis of Compression Algorithms for Program Data," Tech. rep., University of Maryland, ECE Department, 2003.
• The compression scheme in Mavericks (WKdm) was invented in 1999:
  – P. R. Wilson, S. F. Kaplan, and Y. Smaragdakis, "The Case for Compressed Caching in Virtual Memory Systems," Proceedings of the USENIX Annual Technical Conference, 1999.
• Now: super fast CPUs, lots of cores, tight OS integration
Compressed RAM: Forensic Impact

• Currently, "simultaneous" capture of RAM and the swap file on disk results in lots of smearing
• With compressed RAM, there may be little or no data swapped out: capturing RAM captures some or all of "swap"!
• No support in current tools: compressed data is opaque or ignored
• Our tools enable analysis of compressed RAM
(Slide figure: physical RAM, page by page, with compressed regions highlighted, next to the swap file on disk.)