Kdump: Smarter, Easier, Trustier


Vivek Goyal (IBM), Neil Horman (Red Hat, Inc.), Ken’ichi Ohmichi (NEC Soft), Maneesh Soni (IBM), Ankita Garg (IBM)

Abstract

Kdump, a kexec-based kernel crash dumping mechanism, has witnessed a lot of significant new development activity recently. Features like the relocatable kernel, dump filtering, and initrd-based dumping have enhanced kdump's capabilities and are important steps towards making it a more reliable and easier-to-use solution. New tools like the Linux Kernel Dump Test Module (LKDTM) provide an opportunity to automate kdump testing, which can be especially useful for distributions to detect and weed out regressions relatively quickly. This paper presents the various newly introduced features and provides implementation details wherever appropriate. It also briefly discusses future directions, such as early boot crash dumping.

1 Introduction

Kdump is a kernel crash dumping mechanism in which a pre-loaded kernel is booted to capture the crash dump after a system crash [12]. This pre-loaded kernel, often called the capture kernel, runs from a different physical memory area than the production (regular) kernel. As of today, the capture kernel is specifically compiled and linked for a specific memory location, and is shipped as an extra kernel just to capture the dump. A relocatable kernel implementation removes the requirement to run the kernel from the address it has been compiled for; instead, one can load the kernel at a different address and run it from there. Effectively, distributions and kdump users no longer have to build an extra kernel to capture the dump, enhancing ease of use. Section 2 provides the details of the relocatable kernel implementation.

Modern machines are being shipped with bigger and bigger RAMs, and a high-end configuration can possess a terabyte of RAM. Capturing the contents of the entire RAM would result in a proportionately large core file, and managing a terabyte file can be difficult. One does not need the contents of the entire RAM to debug a kernel problem, and many pages, such as userspace pages, can be filtered out. An open source userspace utility is now available for dump filtering, and Section 3 discusses the working and internals of this utility.
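The filtering idea can be sketched briefly. The utility discussed in Section 3 lets the user choose a dump level, a bitmask selecting which classes of pages to drop from the core file. The snippet below is only a conceptual illustration of that bitmask check, not code from the utility; the class names and bit values are assumptions made for this example.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative page classes a dump-filtering tool might exclude.
 * Names and bit values are assumptions for this example only. */
enum page_class {
    PAGE_ZERO    = 1 << 0,  /* pages filled with zeros                 */
    PAGE_CACHE   = 1 << 1,  /* page cache pages without private data   */
    PAGE_PRIVATE = 1 << 2,  /* page cache pages with private data      */
    PAGE_USER    = 1 << 3,  /* user process data pages                 */
    PAGE_FREE    = 1 << 4,  /* pages on the free list                  */
};

/* Return true if pages of the given class should be left out of the
 * core file for the chosen dump level (a bitmask of classes). */
static bool exclude_page(unsigned int dump_level, enum page_class cls)
{
    return (dump_level & cls) != 0;
}

int main(void)
{
    unsigned int dump_level = 31;   /* exclude all five classes above */

    printf("exclude user pages? %s\n",
           exclude_page(dump_level, PAGE_USER) ? "yes" : "no");
    printf("exclude free pages? %s\n",
           exclude_page(dump_level, PAGE_FREE) ? "yes" : "no");
    return 0;
}

Dropping user, free, and zero-filled pages in this way is what allows the dump of a machine with a terabyte of RAM to shrink to a core file of manageable size.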
Currently, a kernel crash dump is captured with the help of init scripts in userspace in the capture kernel. This approach has some drawbacks. Firstly, it assumes that the root filesystem did not get corrupted and is still mountable in the second kernel. Secondly, minimal work should be done in the second kernel, and one should not have to run the various userspace init scripts. This led to the idea of building a custom initial ram disk (initrd) to capture the dump and improve the reliability of the operation. Various implementation details of initrd-based dumping are presented in Section 4.

Section 5 discusses the Linux Kernel Dump Test Module (LKDTM), a kernel module which allows one to set up and trigger crashes from various kernel code paths at run time. It can be used to automate the kdump testing procedure to identify bugs and eliminate regressions with less effort. This paper also gives a brief update on device driver hardening efforts in Section 6 and concludes with future work in Section 7.

2 Relocatable bzImage

Generally, the Linux® kernel is compiled for a fixed address and it runs from that address. Traditionally, for the i386 and x86_64 architectures, it has been compiled for and run from the 1MB physical memory location. Later, Eric W. Biederman introduced a config option, CONFIG_PHYSICAL_START, which allowed a kernel to be compiled for a different address. This option effectively shifted the kernel in the virtual address space, and one could compile and run the kernel from a physical address of, say, 16MB. Kdump used this feature and built a separate kernel for capturing the dump. This kernel was specifically built to run from a reserved memory location.

The requirement of building an extra kernel for dump capturing has not gone over too well with distributions, as they end up shipping an extra kernel binary. Apart from the disk space requirements, it also led to increased effort in terms of supporting and testing this extra kernel. Also, from a user's perspective, building an extra kernel is cumbersome.

The solution to the above problem is a relocatable kernel, where the same kernel binary can be loaded and run from a different address than the one it has been compiled for. Jan Kratochvil had posted a set of prototype patches to kick off the discussion on the fastboot mailing list [6]. Later, Eric W. Biederman came up with another set of patches and posted them to LKML for review [8]. Finally, Vivek Goyal picked up Eric's patches, cleaned them up, fixed a number of bugs, incorporated various review comments, and went through multiple rounds of reposting to LKML for inclusion into the mainline kernel. The relocatable kernel implementation is very architecture dependent, and support for the i386 architecture has been merged in version 2.6.20 of the mainline kernel. Patches for x86_64 have been posted on LKML [11] and are now part of -mm. Hopefully, these will be merged into the mainline kernel soon.

2.1 Design Approaches

The following design approaches have been discussed for the relocatable bzImage implementation.

• Modify kernel text/data mapping at run time
At run time, the kernel determines where it has been loaded by the boot-loader and updates its page tables to reflect the right mapping between kernel virtual and physical addresses for kernel text and data. This approach has been adopted by the x86_64 implementation.

• Relocate using relocation information
This approach forces the linker to generate relocation information. These relocations are processed and packed into the bzImage. The uncompressed kernel code decompresses the kernel, performs the relocations, and transfers control to the protected mode kernel. This approach has been adopted by the i386 implementation.

2.2 Design Details (i386)

In i386, kernel text and data are part of the linearly mapped region, which has hard-coded assumptions about the virtual-to-physical address mapping. Hence, it is probably difficult to adopt the page-table-modification approach for implementing a relocatable kernel. Instead, a simpler, non-intrusive approach is to ask the linker to generate relocation information, pack this relocation information into the bzImage, and have the uncompressed kernel code process these relocations before jumping to the 32-bit kernel entry point (startup_32()).

2.2.1 Relocation Information Generation

Relocation information can be generated in many ways. The initial experiment was to compile the kernel as a shared object file (-shared), which generated the relocation entries. Eric had posted patches for this approach [9], but it was found that the linker also generated relocation entries for absolute symbols (for some historical reason) [1]. By definition, absolute symbols are not to be relocated, but with this approach, absolute symbols also ended up being relocated. Hence this method did not prove to be a viable one.

Later, a different approach was taken, where the i386 kernel is built with the linker option --emit-relocs. This option builds an executable vmlinux and still retains the relocation information in the fully linked executable. This increases the size of vmlinux by around 10%, though this information is discarded at runtime. The kernel build process goes through these relocation entries and filters out PC-relative relocations, as these don't have to be adjusted if the bzImage is loaded at a different physical address. It also filters out the relocations generated with respect to absolute symbols, because absolute symbols don't have to be relocated. The rest of the relocation offsets are packed into the compressed vmlinux. Figure 1 depicts the new i386 bzImage build process.

[Figure 1: i386 bzImage build process — vmlinux is linked with the --emit-relocs option; its relocations are processed (relocs.c) into vmlinux.relocs, which is concatenated with vmlinux.bin into vmlinux.bin.all and compressed into vmlinux.bin.gz; this becomes piggy.o, which is linked with misc.o and head.o, stripped to vmlinux.bin, and the real mode code is appended to produce the bzImage.]
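The filtering step can be sketched with standard <elf.h> definitions. This is not the kernel's build-time relocs tool itself, only a simplified, hypothetical illustration of the same idea: keep plain 32-bit (R_386_32) relocations, and skip PC-relative relocations and relocations against absolute symbols.

#include <elf.h>
#include <stddef.h>
#include <stdio.h>

/* Walk a table of ELF32 relocations and print the offsets that would
 * need adjusting if the kernel runs from a different physical address.
 * PC-relative relocations (R_386_PC32) and relocations against absolute
 * symbols (st_shndx == SHN_ABS) are skipped, mirroring the filtering
 * described in Section 2.2.1. */
static size_t emit_relocs(const Elf32_Rel *rels, size_t nrels,
                          const Elf32_Sym *syms)
{
    size_t kept = 0;

    for (size_t i = 0; i < nrels; i++) {
        unsigned int type   = ELF32_R_TYPE(rels[i].r_info);
        unsigned int symidx = ELF32_R_SYM(rels[i].r_info);

        if (type == R_386_PC32)
            continue;               /* position independent, no fixup */
        if (syms[symidx].st_shndx == SHN_ABS)
            continue;               /* absolute symbol, never moved   */
        if (type != R_386_32)
            continue;               /* keep only plain 32-bit slots   */

        printf("0x%08x\n", rels[i].r_offset);
        kept++;
    }
    return kept;
}

int main(void)
{
    /* One fake symbol and one fake absolute relocation against it. */
    Elf32_Sym syms[1] = { { .st_shndx = SHN_UNDEF } };
    Elf32_Rel rels[1] = {
        { .r_offset = 0xc0100000, .r_info = ELF32_R_INFO(0, R_386_32) }
    };

    size_t kept = emit_relocs(rels, 1, syms);
    printf("%zu relocation(s) kept\n", kept);
    return 0;
}

In the real build, the surviving offsets are what get appended to vmlinux.bin and compressed into the bzImage, as shown in Figure 1.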
2.2.2 In-place Decompression

The code for decompressing the kernel has been changed so that decompression can be done in-place. Now the kernel is not first decompressed to any other location.

[Figure 2: In-place bzImage decompression — the compressed bzImage, loaded by the boot-loader (usually at 1MB), is (1) moved to a safe location and then (2) decompressed to the location the kernel will run from.]

2.2.3 Perform Relocations

After decompression, all the relocations are performed. The uncompressed code calculates the difference between the address for which the kernel was compiled and the address at which it is loaded. This difference is added to the locations specified by the relocation offsets, and control is transferred to the 32-bit kernel entry point; the sketch at the end of this section illustrates the adjustment.

2.2.4 Kernel Config Options

Several new config options have been introduced for the relocatable kernel implementation. CONFIG_RELOCATABLE controls whether the resulting bzImage is relocatable or not. If this option is not set, no relocation information is generated in the vmlinux. Generally, the bzImage decompresses itself to the address it has been compiled for (CONFIG_PHYSICAL_START) and runs from there. But if CONFIG_RELOCATABLE is set, then it runs from the address it has been loaded at by the boot-loader and ignores the compile-time address.
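To make the adjustment described in Section 2.2.3 concrete, here is a minimal, hypothetical sketch. The actual work is done by the decompressor code inside the bzImage before it jumps to startup_32(); the function below only demonstrates the arithmetic: compute the difference between the compile-time address and the load address, and add it to every location recorded in the packed relocation table.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Apply relocations after in-place decompression (conceptual sketch).
 * 'reloc_table' holds the offsets packed into the bzImage at build time
 * (Section 2.2.1); each entry names a 32-bit slot inside the kernel
 * image that stores an absolute address and must be patched. */
static void apply_relocs(uint8_t *image, uint32_t compiled_addr,
                         uint32_t loaded_addr, const uint32_t *reloc_table,
                         size_t count)
{
    uint32_t delta = loaded_addr - compiled_addr;

    for (size_t i = 0; i < count; i++) {
        /* Translate the recorded compile-time address into an offset
         * within the loaded image, then patch the slot by 'delta'. */
        uint32_t *slot =
            (uint32_t *)(image + (reloc_table[i] - compiled_addr));
        *slot += delta;
    }
}

int main(void)
{
    /* Fake 16-byte "image" with one absolute pointer stored in word 1. */
    uint32_t image_words[4] = { 0 };
    uint8_t *image = (uint8_t *)image_words;
    uint32_t compiled_addr = 0x00100000;     /* compiled for 1MB        */
    uint32_t loaded_addr   = 0x01000000;     /* loaded at 16MB          */

    image_words[1] = 0x00100008;             /* points into the image   */
    uint32_t reloc_table[] = { 0x00100004 }; /* that slot's compile addr */

    apply_relocs(image, compiled_addr, loaded_addr, reloc_table, 1);
    printf("patched value: 0x%08x\n", image_words[1]);
    return 0;
}

With CONFIG_RELOCATABLE set, this delta-based fixup is what lets the same kernel binary run correctly from wherever the boot-loader placed it.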