Getting U-Boot FIT for Xen


Robbie VanVossen, Karl Apsite, Paul Skentzos, Joshua Whitehead
Xen Developers Summit 2015

Presentation Outline
□ U-Boot
□ FIT Image
□ FIT Image Source File
□ Loadables
□ Building & Booting
□ Benefits of Using FIT

U-Boot: The Universal Bootloader

U-Boot - History
□ Open source bootloader for embedded devices
□ Originally 8xxROM
□ 1999 - PPCBoot
□ 2000 - Publicly released v0.4.1
  □ Strictly for the PowerPC architecture
□ 2002
  □ Forked into a product called ARMBoot
  □ Renamed to Das U-Boot (Universal Bootloader) to reflect new architecture support
□ 2003 - Added MIPS32, MIPS64, ColdFire, Altera NIOS-32
□ 2008 - Added Flattened Image Tree support

U-Boot - Features
□ Multiple loading methods
  □ TFTP, MMC devices, various flash devices, PXE, IDE, SATA, USB
□ Many supported architectures
  □ 68k, ARM, AVR32, Blackfin, MicroBlaze, MIPS, Nios, PPC, and x86
□ Filesystem handling
  □ Including Cramfs, ext2, ext3, ext4, FAT, FDOS, JFFS2, ReiserFS, UBIFS, and ZFS
□ Network handling
  □ ping, DHCP, TFTP
□ FDT handling
□ Direct memory reads and writes

FIT Image: Flattened Image Tree

FIT Image - Introduction
□ Uses a tree-like structure
□ Flexible, monolithic binary that includes everything needed for booting
□ Benefits
  □ Image hashing
  □ Multiple configurations
□ Limitation: a FIT configuration cannot load more than one kernel

FIT Image - Build Requirements
□ Utilities
  □ mkimage
  □ dtc - the device tree compiler
□ Binaries
  □ Device tree binaries
  □ Kernels
  □ Optional ramdisks
  □ Optional loadables (DornerWorks contribution)
□ Image source file (*.its)

FIT Image - Generation

    Image Source File (*.its) --\
                                 +--> mkimage --> FIT Image (*.itb)
    Binaries (kernels, fdts,  --/
    ramdisks)

Image Source File: FIT Configuration (before our updates)

Image Source File - Example.its

    /dts-v1/;

    / {
        description = "Single Linux kernel and FDT blob";
        #address-cells = <1>;

        images {
            kernel@1 {
                description = "Vanilla Linux kernel";
                data = /incbin/("./vmlinux.bin.gz");
                type = "kernel";
                arch = "arm";
                os = "linux";
                compression = "gzip";
                load = <00000000>;
                entry = <00000000>;
                hash@1 {
                    algo = "crc32";
                };
                hash@2 {
                    algo = "sha1";
                };
            };

            fdt@1 {
                description = "Flattened Device Tree blob";
                data = /incbin/("./target.dtb");
                type = "flat_dt";
                arch = "arm";
                compression = "none";
                load = <00700000>;
                hash@1 {
                    algo = "crc32";
                };
                hash@2 {
                    algo = "sha1";
                };
            };

            fdt@2 {
                description = "Flattened Device Tree blob";
                data = /incbin/("./target2.dtb");
                type = "flat_dt";
                arch = "arm";
                compression = "none";
                load = <00700000>;
            };

Image Source File - Example.its (cont)

            ramdisk@1 {
                description = "ramdisk";
                data = /incbin/("./ramdisk");
                type = "ramdisk";
                arch = "arm";
                os = "linux";
                compression = "gzip";
                load = <00800000>;
                entry = <00800000>;
                hash@1 {
                    algo = "sha1";
                };
            };
        };

        configurations {
            default = "conf@1";

            conf@1 {
                description = "Linux kernel with FDT blob";
                kernel = "kernel@1";
                fdt = "fdt@1";
            };

            conf@2 {
                description = "Linux kernel, fdt, & ramdisk";
                kernel = "kernel@1";
                ramdisk = "ramdisk@1";
                fdt = "fdt@2";
            };
        };
    };
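Given a source file like this, the FIT image can be generated and then inspected without booting it; a minimal sketch, assuming the listing above is saved as example.its:

    $ mkimage -f example.its example.itb
    $ mkimage -l example.itb

mkimage -l prints the parsed image tree, which is a quick way to confirm that the hashes and configurations were emitted as intended.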
Image Source File - Format
□ The image source file format is defined here: http://git.denx.de/?p=u-boot.git;a=blob_plain;f=doc/uImage.FIT/source_file_format.txt;hb=HEAD

Loadables: New Configuration Property

Loadables - Problem
□ To use a FIT image to load a Xen system, we need a configuration for
  □ The Xen kernel
  □ The Xen Device Tree Blob (DTB)
  □ The Dom0 kernel
□ The existing configuration properties
  □ Allowed only one image per configuration
  □ Performed specific tasks and checks based on the image type
    □ The kernel property is the image that gets executed, so it needs to be the Xen kernel
    □ The fdt property needs to be set to our DTB
    □ The ramdisk property can't be used for our Dom0 kernel
□ Therefore, we need a way to load a generic image (a "loadable") to a specific location

Loadables - Solution
□ Created a new configuration property: loadables
  □ A list of image sub-nodes of any type
  □ The loadables property doesn't trigger any extra tasks or checks
  □ Supports multiple images
□ Now there is a configuration property that can carry the Dom0 Linux kernel

Updating U-Boot - U-Boot
□ Updated U-Boot to look for and handle the new loadables field
□ Added bootm_find_loadables()/boot_get_loadable()
□ Refactored the relevant functions for readability and simplicity
□ Added tests for loadables
□ Merged into U-Boot mainline at tag v2015.07

Updating U-Boot - mkimage
□ We modified mkimage to include the new loadables field.

    $ ./u-boot/tools/mkimage -f xen.its /tftpboot/xen.itb
    FIT description: Configuration to load a Xen Kernel
    Default Configuration: 'config@1'
    Configuration 0 (config@1)
      Description:  Xen 4.6.0-one loadable
      Kernel:       xen_kernel@1
      FDT:          fdt@1
      Loadables:    linux_kernel@1
    Configuration 1 (config@2)
      Description:  Plain Linux
      Kernel:       linux_kernel@1
      FDT:          fdt@2

Loadables - xen.its

    /dts-v1/;

    / {
        description = "Configuration to load a Xen Kernel";
        #address-cells = <1>;

        images {
            xen_kernel@1 {
                description = "xen-4.6.0-unstable";
                data = /incbin/("./xen");
                type = "kernel";
                arch = "arm";
                os = "linux";
                compression = "none";
                load = <0xaea00000>;
                entry = <0xaea00000>;
                hash@1 {
                    algo = "md5";
                };
            };

            fdt@1 {
                description = "Cubietruck Xen tree blob";
                data = /incbin/("./xen.dtb");
                type = "flat_dt";
                arch = "arm";
                compression = "none";
                load = <0xaec00000>;
                hash@1 {
                    algo = "md5";
                };
            };

            fdt@2 {
                description = "Cubietruck tree blob";
                data = /incbin/("./sun7i-a20-cubietruck.dtb");
                type = "flat_dt";
                arch = "arm";
                compression = "none";
                load = <0xaec00000>;
                hash@1 {
                    algo = "md5";
                };
            };

Loadables - xen.its (cont)

            linux_kernel@1 {
                description = "Linux zImage";
                data = /incbin/("./vmlinuz");
                type = "kernel";
                arch = "arm";
                os = "linux";
                compression = "none";
                load = <0xaf600000>;
                entry = <0xaf600000>;
                hash@1 {
                    algo = "md5";
                };
            };
        };

        configurations {
            default = "config@1";

            config@1 {
                description = "Xen 4.6.0-one loadable";
                kernel = "xen_kernel@1";
                fdt = "fdt@1";
                loadables = "linux_kernel@1";
            };

            config@2 {
                description = "Plain Linux";
                kernel = "linux_kernel@1";
                fdt = "fdt@2";
            };
        };
    };
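Because loadables is a list of image names, a configuration is not limited to a single extra image. A hypothetical third configuration that stages both the Dom0 kernel and a ramdisk might look like the following sketch (the ramdisk@1 node is an assumption for illustration, not part of the presentation's xen.its):

    config@3 {
        description = "Xen with Dom0 kernel and ramdisk";
        kernel = "xen_kernel@1";
        fdt = "fdt@1";
        loadables = "linux_kernel@1", "ramdisk@1";
    };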
Building & Booting: Example of how to build and boot from a FIT image

Building the FIT Image
□ Build U-Boot with FIT support (CONFIG_FIT=y)
□ Build the images (Dom0 kernel, Xen kernel, and DTB) as usual
□ Place the images as specified in the image source file
□ Build the output FIT image

    $ mkimage -f xen.its xen.itb

□ Copy U-Boot to where it needs to be for your board
□ Move the FIT image to where you will be booting from
  □ We will use an SD card, with a boot partition, as an example

Booting the FIT Image
□ Place the SD card in your board
□ Boot the board (our example uses the Cubietruck)
□ Stop the U-Boot autoboot
□ Load the FIT image into memory

    sunxi# fatload mmc 0 0x80000000 /xen.itb

□ Boot configuration 1

    sunxi# bootm 0x80000000#config@1

□ U-Boot will then check the specified hashes for those images, move them to their proper load addresses, and boot into the specified kernel, Xen
□ Before booting a configuration, you can get information about your FIT image with the following command

    sunxi# iminfo 0x80000000

Booting - Example Output

    sunxi# bootm 0x80000000#config@1
    ## Loading kernel from FIT Image at 80000000 ...
       Using 'config@1' configuration
       Trying 'xen_kernel@1' kernel subimage
         Description:  xen-4.6.0-unstable
         Type:         Kernel Image
         Compression:  uncompressed
         Data Start:   0x800000dc
         Data Size:    688912 Bytes = 672.8 KiB
         Architecture: ARM
         OS:           Linux
         Load Address: 0xaea00000
         Entry Point:  0xaea00000
         Hash algo:    md5
         Hash value:   51424697da4d1523a8c87150c7cbad00
       Verifying Hash Integrity ... md5+ OK
    ## Loading fdt from FIT Image at 80000000 ...
       Using 'config@1' configuration
       Trying 'fdt@1' fdt subimage
         Description:  Cubietruck Xen tree blob
         Type:         Flat Device Tree
         Compression:  uncompressed
         Data Start:   0x800a84d8
         Data Size:    21940 Bytes = 21.4 KiB
         Architecture: ARM
         Hash algo:    md5
         Hash value:   3c27715e5c19226064186193e3f30bc4
       Verifying Hash Integrity ... md5+ OK
       Loading fdt from 0x800a84d8 to 0xaec00000
       Booting using the fdt blob at 0xaec00000
    ## Loading loadables from FIT Image at 80000000 ...
       Trying 'linux_kernel@1' loadables subimage
         Description:  Linux zImage
         Type:         Kernel Image
         Compression:  uncompressed
         Data Start:   0x800b3108
         Data Size:    5247832 Bytes = 5 MiB

Booting - Example Output (cont)

         Architecture: ARM
         OS:           Linux
         Load Address: 0xaf600000
         Entry Point:  0xaf600000
         Hash algo:    md5
         Hash value:   84c9630522c9737f6ded803177c967e8
       Verifying Hash Integrity ... md5+ OK
       Loading loadables from 0x800b3108 to 0xaf600000

    - Setting up control registers -
    - Turning on paging -
    - Ready -
    Checking for initrd in /chosen
    [...]
    Placing Xen at 0x00000000bfe00000-0x00000000c0000000
    Xen heap: 000000009e000000-
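Once the configuration boots correctly, the manual load and boot steps can be stored in the U-Boot environment so the board comes up in Xen unattended; a sketch, assuming environment saving is enabled in your board's U-Boot configuration:

    sunxi# setenv bootcmd 'fatload mmc 0 0x80000000 /xen.itb; bootm 0x80000000#config@1'
    sunxi# saveenv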