APPLICATION NOTE XIP Linux for RZ/A1


RZ/A1
EU_00181 Rev.1.10
Jun 13, 2016

Introduction

Target Device

Contents

1. Frame of Reference
2. What is an XIP Linux Kernel
3. RZ/A1 XIP SPI Flash Hardware Support
4. Updating the kernel image
5. Kernel RAM usage
6. Simple Benchmarks
7. Kernel vs Userland
8. File Systems and Storage
9. u-boot Modifications

1. Frame of Reference

Since the Linux kernel and the open source community are constantly changing, please keep in mind that this document was written in August of 2014, and the kernel references are to the Linux-3.14 code base.

2. What is an XIP Linux Kernel

When any executable program is compiled and linked, the different portions of the program are combined in the resulting binary image. For the Linux kernel, the order is basically: text (i.e., code), read-only data, initialized data variables, uninitialized BSS variables. You can see this by examining the System.map file. For a traditional Linux kernel, this entire image is placed in RAM. The reason is that the systems that generally run Linux are either PCs or high-end embedded MPU designs where code is intended to run from high-speed RAM (DDR memory).

A couple of years ago, source code and linker scripts within the kernel were modified so that ROM and RAM sections could be explicitly defined, as opposed to simply letting the RAM sections follow the ROM sections. The main target platform was the PowerPC with parallel NOR flash, and the main purpose was a faster boot time: the kernel no longer had to be decompressed and copied into RAM before execution could begin, but could instead begin executing immediately. The tradeoff, however, was that NOR execution was slower than DDR execution, and NOR flash cost more than DDR. Later, some patches were submitted to the mainline kernel for a TI OMAP device (ARM based); again, the assumption was execution from parallel NOR flash. It should be noted that while traditional kernel utilities like mkimage were modified to provide some level of support for creating XIP kernel images that could be launched using the 'bootm' command in u-boot, that support was specific to the original PowerPC experiment (and a bit of a hack). TI did release an app note on getting around these nuances, but it too was somewhat of a workaround for the PowerPC-specific booting behavior.

It is also worth mentioning that the kernel defines a section called 'init' whose scope is limited to boot time. Any functions or data structures that are needed only during the boot process, and can be assumed to run only once, can be assigned to this section. The benefit is that the final operation the Linux kernel performs during boot, before handing control off to application space, is to 'free' the init sections, reclaiming valuable RAM that would otherwise be wasted holding code that will never run again. Therefore, one modification for the XIP kernel build was to recognize that while the init section RAM data can still be freed, the init section code cannot (since it is located in ROM).
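As a brief illustration of how code and data end up in the init section (the driver and symbol names here are invented for the example, not taken from this app note):

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    /* Boot-only data: placed in .init.data, which lives in RAM, so it
     * can be reclaimed after boot even in an XIP kernel. */
    static int mydrv_default_mode __initdata = 1;

    /* Boot-only code: placed in .init.text. A RAM-based kernel frees
     * this after boot; an XIP kernel cannot, because it sits in ROM. */
    static int __init mydrv_init(void)
    {
        pr_info("mydrv: one-time setup, mode %d\n", mydrv_default_mode);
        return 0;
    }
    module_init(mydrv_init);

    MODULE_LICENSE("GPL");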
For a RAM-based kernel on the ARM architecture, the kernel is generally located at a virtual base address of 0xC0000000. Virtual memory mapping is used to remap the RAM's physical location, say external SDRAM at 0x08000000 (CS2) or internal RAM at 0x20000000 for the RZ/A1. For an XIP kernel on ARM, the kernel's ROM sections (code and constants) are mapped 16MB below the beginning of RAM, i.e., at 0xBF000000. This is the same area that is used for loaded modules; see the definition of the macro XIP_VIRT_ADDR in arch/arm/include/asm/memory.h. If you examine the System.map file of an XIP kernel build, you can easily identify which portions are accessed directly from ROM flash (0xBFxxxxxx) and which reside in RAM (0xC0xxxxxx).
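For reference, in the Linux-3.14 sources the relevant definitions in arch/arm/include/asm/memory.h look roughly like this (MMU case):

    /* Modules (and XIP kernel text) live in the 16MB below PAGE_OFFSET. */
    #define MODULES_VADDR           (PAGE_OFFSET - SZ_16M)
    /* 0xBF000000 when PAGE_OFFSET is the usual 0xC0000000. */
    #define XIP_VIRT_ADDR(physaddr) (MODULES_VADDR + ((physaddr) & 0x000fffff))

The split is then easy to spot in System.map; the addresses below are illustrative, not from a real build:

    bf000040 T stext     <- ROM: code and constants, fetched from flash
    c0008000 D _sdata    <- RAM: initialized data
    c005a2e0 B _end      <- RAM: end of BSS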
One more thing to mention: when driver modules are loaded at run time, they are loaded into RAM yet accessed using 0xBFxxxxxx addresses. This may be a little confusing, because we just said that 0xBFxxxxxx is the location of the XIP kernel ROM, but that is the beauty of virtual address mapping. Also, if you have driver code that needs to run as fast as possible, building it as a module ensures that all of its code executes out of RAM, which gives better performance than a static driver that is part of an XIP kernel executing out of flash ROM.

3. RZ/A1 XIP SPI Flash Hardware Support

The RZ/A1 has the ability to memory map the contents of SPI flash into linearly accessible/executable memory using a peripheral block called the "SPI Multi I/O Bus Controller". Basically, this means that when the CPU attempts to read data or fetch code from a specific address range, hardware automatically uses the SPI channel to read the corresponding data from the SPI flash. Additionally, the hardware has 16 cache lines (8 bytes each) that can be used to prefetch data from flash in order to reduce latency. There is also an option to automatically fill more than one cache line with contiguous flash data on a cache miss in order to anticipate future reads. In experiments with the XIP Linux kernel, filling 2 cache lines automatically (i.e., always reading 16 bytes of SPI flash) yielded the best performance.

Other features of the XIP interface include support for both 2-bit and 4-bit address/data interfaces to the SPI flash, which greatly increases the speed at which data can be retrieved. Additionally, there are Double Data Rate (DDR) capabilities, where data is read on every clock edge, so the address is sent and the data read at twice the clock's operating speed. Lastly, the 2 channels of this specialized SPI flash interface can be used in conjunction with each other, retrieving data twice as fast because both SPI flash devices respond to the same flash address: channel 0 holds all the odd-addressed bytes and channel 1 holds all the even-addressed bytes. Therefore, in theory it is possible to use 2 SPI flash devices with the 4-bit wide interface and the DDR option to retrieve 16 bits of flash data for each 50MHz SPI clock cycle (the maximum clock frequency of the RZ/A1).
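To put numbers on that best case: two devices each driving 4 data lines yields 8 bits per clock edge, and DDR transfers on both edges, so 16 bits move per clock cycle. At the RZ/A1's 50MHz maximum, that works out to 800 Mbits/s, or roughly 100 MB/s of raw flash read bandwidth (before any cache-miss or command overhead).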
4. Updating the kernel image

The use of an XIP kernel does not restrict the file system or media device you would like to use. The only exception is that you cannot reprogram the flash device you are currently running out of. For example, if the RZ/A1 is running in XIP mode using the Quad SPI interface, you cannot modify (erase/write) that SPI flash device, since doing so would require taking the SPI peripheral out of XIP mode and putting it back into SPI mode, which would in turn crash your system. Instead, to update your kernel you would first have to save it someplace else and then reboot into u-boot or some other custom bootloader that executes out of RAM. It might be possible, however, to create a loadable kernel module in which you first load the data you want to program into a memory buffer (in kernel space) and then disable all interrupts. Since we know the entire module will be loaded into system RAM at run time, we can then switch the SPI peripheral from XIP mode to SPI mode and erase/program our data. Of course, during that window we need to make sure no functions outside of the module are used, including any kernel utility functions, since all access to kernel code in flash must be completely avoided. A minimal sketch of this idea follows.
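The sketch below is only an outline of that approach, under this section's assumptions: spibsc_set_spi_mode(), spibsc_set_xip_mode() and flash_erase_program() are hypothetical placeholders for module-local code (the actual SPI Multi I/O Bus Controller register programming is not shown), and IMAGE_SIZE and KERNEL_OFFSET are made-up constants, not a real RZ/A1 driver API.

    #include <linux/init.h>
    #include <linux/irqflags.h>
    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/slab.h>

    #define IMAGE_SIZE    (256 * 1024)  /* hypothetical staging size */
    #define KERNEL_OFFSET 0x100000      /* hypothetical flash offset */

    /* Placeholder helpers. They must live in this module so that every
     * instruction executed while XIP is disabled resides in RAM. */
    static void spibsc_set_spi_mode(void)
    {
        /* Real code: switch the controller from XIP to plain SPI mode. */
    }

    static void spibsc_set_xip_mode(void)
    {
        /* Real code: restore external-address (XIP) read mode. */
    }

    static int flash_erase_program(u32 offset, const u8 *buf, size_t len)
    {
        /* Real code: issue erase/program commands over plain SPI. */
        return 0;
    }

    static int __init flashupdate_init(void)
    {
        unsigned long flags;
        u8 *buf;
        int ret;

        /* 1. Stage the new data in a kernel-space RAM buffer. */
        buf = kmalloc(IMAGE_SIZE, GFP_KERNEL);
        if (!buf)
            return -ENOMEM;
        /* ... copy the new image data into buf here ... */

        /* 2. Disable interrupts: a handler fetched from XIP flash would
         * fault the moment the controller leaves XIP mode. */
        local_irq_save(flags);

        /* 3. Between these two calls nothing outside this module may
         * run -- no printk(), no kernel utility functions -- because
         * the kernel text in flash is unreachable. */
        spibsc_set_spi_mode();
        ret = flash_erase_program(KERNEL_OFFSET, buf, IMAGE_SIZE);
        spibsc_set_xip_mode();

        local_irq_restore(flags);
        kfree(buf);
        return ret;
    }
    module_init(flashupdate_init);

    MODULE_LICENSE("GPL");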
5. Kernel RAM usage

Since the main motivation for moving to an XIP kernel is saving RAM, here are some numbers showing what an XIP kernel uses in terms of RAM. For a traditional kernel, all code and data are kept in RAM, so the majority of the kernel image that gets loaded into RAM is static code, which obviously does not require a read/write medium to reside in. Here is a comparison of compiling the same kernel as XIP vs. standard. Of course, the kernel has many options and drivers that can be turned on and off; for this build only a small number of drivers were included, the largest being the Ethernet driver and TCP/IP stack. The System.map file was used to determine the amount of RAM used, basically by looking at the address of the very last symbol, '_end'.

SDRAM kernel: 3,803 KB
XIP kernel: 332 KB

6. Simple Benchmarks

The following simple benchmarks were performed to understand the performance differences between an XIP kernel running out of quad SPI flash and one running out of SDRAM.

6.1 Boot Time

To measure boot time, the time was measured from the point in u-boot where the kernel boot process starts to the point at which the log message "Freeing unused kernel memory" is displayed, because at that point the file system is mounted and the rest of the boot time depends on which applications you choose to start.