Designing a True Direct-Access File System with Devfs

Total Pages: 16

File Type: PDF, Size: 1020 KB

Designing a True Direct-Access File System with DevFS

Sudarsun Kannan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau, University of Wisconsin–Madison; Yuangang Wang, Jun Xu, and Gopinath Palani, Huawei Technologies

https://www.usenix.org/conference/fast18/presentation/kannan

This paper is included in the Proceedings of the 16th USENIX Conference on File and Storage Technologies. February 12–15, 2018 • Oakland, CA, USA. ISBN 978-1-931971-42-3. Open access to the Proceedings of the 16th USENIX Conference on File and Storage Technologies is sponsored by USENIX.

Abstract

We present DevFS, a direct-access file system embedded completely within a storage device. DevFS provides direct, concurrent access without compromising file system integrity, crash consistency, and security. A novel reverse-caching mechanism enables the usage of host memory for inactive objects, thus reducing memory load upon the device. Evaluation of an emulated DevFS prototype shows more than 2x higher I/O throughput with direct access and up to a 5x reduction in device RAM utilization.

1 Introduction

The world of storage, after decades of focus on hard-drive technologies, is finally opening up towards a new era of fast solid-state storage devices. Flash-based SSDs have become standard technology, forming a new performance tier in the modern datacenter [7, 32]. New, faster flash memory technologies such as NVMe [20] and storage class memory (SCM) such as Intel's 3D X-point [1] promise to revolutionize how we access and store persistent data [10, 13, 50, 53]. State-of-the-art flash memory technologies have reduced storage-access latency to tens of microseconds compared to milliseconds in the hard-drive era [34, 52, 58].

To fully realize the potential of these storage devices, a careful reconsideration of the software storage stack is required. The traditional storage stack requires applications to trap into the OS and interact with multiple software layers such as the in-memory buffer cache, file system, and device drivers. While spending millions of cycles is not a significant problem for slow storage devices such as hard drives [3, 13, 58], for modern ultra-fast storage, software interactions substantially amplify access latencies, thus preventing applications from exploiting hardware benefits [3, 9, 34, 50]. Even the simple act of trapping into and returning from the OS is too costly for modern storage hardware [14, 49, 58].

To reduce OS-level overheads and provide direct storage access for applications, prior work such as Arrakis [34], Moneta-D [8], Strata [26], and others [20, 49, 50] split the file system into user-level and kernel-level components. The user-level component handles all data-plane operations (thus bypassing the OS), and the trusted kernel is used only for control-plane operations such as permission checking. However, prior approaches fail to deliver several important file-system properties. First, using untrusted user-level libraries to maintain file system metadata shared across multiple applications can seriously compromise file-system integrity and crash consistency. Second, unlike user-level networking [51], in file systems, data-plane operations (e.g., read or write to a file) are closely intertwined with control-plane operations (e.g., block allocation); bypassing the OS during data-plane operations can compromise the security guarantees of a file system. Third, most of these approaches require OS support when sharing data across applications even for data-plane operations.

To address these limitations, and realize a true user-level direct-access file system, we propose DevFS, a device-level file system inside the storage hardware. The DevFS design uses the compute capability and device-level RAM to provide applications with a high-performance direct-access file system that does not compromise integrity, concurrency, crash consistency, or security. With DevFS, applications use a traditional POSIX interface without trapping into the OS for control-plane and data-plane operations. In addition to providing direct storage access, a file system inside the storage hardware provides direct visibility to hardware features such as device-level capacitance and support for processing data from multiple I/O queues. With capacitance, DevFS can safely commit data even after a system crash and also reduce file system overhead for supporting crash consistency. With knowledge of multiple I/O queues, DevFS can increase file system concurrency by providing each file with its own I/O queue and journal.
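The introduction above argues that, at microsecond-scale device latencies, merely trapping into and returning from the OS is a first-order cost. Below is a minimal, hedged sketch of the kind of user-space measurement that motivates this claim; the file path and iteration count are illustrative assumptions, not something from the paper. Because the same offset is read each time, the data stays in the page cache and the loop approximates the software-stack cost per call rather than device time.

    /* Hedged sketch: time repeated 4 KB pread() calls to estimate the
     * per-operation cost of crossing into the kernel on the read path.
     * Assumes a pre-existing test file; path and sizes are illustrative. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/tmp/devfs_latency_probe";   /* hypothetical file */
        const int iters = 100000;
        char buf[4096];

        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < iters; i++) {
            /* Same offset every iteration: the data stays cached, so the
             * loop is dominated by the syscall path, not the device. */
            if (pread(fd, buf, sizeof(buf), 0) < 0) { perror("pread"); return 1; }
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
        printf("average cost per pread(): %.0f ns\n", ns / iters);
        close(fd);
        return 0;
    }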
A file system inside device hardware also introduces new challenges. First, even modern SSDs have limited RAM capacity due to cost ($/GB) and power constraints. In DevFS, we address this dilemma by introducing reverse caching, an approach that aggressively moves inactive file system data structures off the device to the host memory. Second, a file system inside a device is a separate runtime and lacks visibility to OS state (such as process credentials) required for secured file access. To overcome this limitation, we extend the OS to coordinate with DevFS: the OS performs down-calls and shares process-level credentials without impacting direct storage access for applications.

To the best of our knowledge, DevFS is the first design to explore the benefits and implications of a file system inside the device to provide direct user-level access to applications. Due to a lack of real hardware, we implement and emulate DevFS at the device-driver level. Evaluation of benchmarks on the emulated DevFS prototype with direct storage access shows more than 2x higher write and 1.6x higher read throughput as compared to a kernel-level file system. DevFS memory-reduction techniques reduce file system memory usage by up to 5x. Evaluation of a real-world application, Snappy compression [11], shows 22% higher throughput.

In Section 2, we first categorize file systems, and then discuss the limitations of state-of-the-art user-level file systems. In Section 3, we make a case for a device-level file system. In Section 4, we detail the DevFS design and implementation, followed by experimental evaluations in Section 5. In Section 6, we describe the related literature, and finally present our conclusions in Section 7.

2 Motivation

Advancements in storage hardware performance have motivated the need to bypass the OS stack and provide applications with direct access to storage. We first discuss hardware and software trends, followed by a brief history of user-level file systems and their limitations.

2.1 H/W and S/W for User-Level Access

Prior work has explored user-level access for PCI-based solid state drives (SSD) and nonvolatile memory technologies.

Solid-state drives. Solid-state drives (SSD) have become the de facto storage device for consumer electronics as well as enterprise computing. As SSDs have evolved, their bandwidth has significantly increased along with a reduction in access latencies [6, 57]. To [...] large pool of I/O queues to which software can concurrently submit requests for higher parallelism.

With advancements in SSD hardware performance, bottlenecks have shifted to software. To reduce software overheads on the data path and exploit device-level parallelism, new standards such as NVMe [54] have been adopted. NVMe allows software to bypass device driver software and directly program device registers with simple commands for reading and writing the device [18].

Storage class memory technologies. Storage class memory (SCM), such as Intel's 3D Xpoint [1] and HP's memristors, are an emerging class of nonvolatile memory (NVM) that provide byte-addressable persistence and provide access via the memory controller. SCMs have properties that resemble DRAM more than a block device. SCMs can provide 2-4x higher capacity than DRAMs, with variable read (100-200ns) and write (400-800ns) latency. SCM bandwidth ranges from 8 GB/s to 20 GB/s, which is significantly faster than state-of-the-art SSDs. Importantly, reads (loads) and writes (stores) to SCMs happen via the processor cache, which plays a vital role in application performance.

Several software solutions that include kernel-level file systems [10, 13, 55, 56], user-level file systems [49], and object storage [17, 50] libraries have been proposed for SCM. Kernel-level file systems retain the POSIX-based block interface and focus on thinning the OS stack by replacing page cache and block layers with simple byte-level load and store operations. Alternative approaches completely bypass the kernel by using an object-based or POSIX-compatible interface over memory-mapped files [49].

2.2 File System Architectures

We broadly categorize file systems into three types: (1) kernel-level file systems, (2) hybrid user-level file systems, and (3) true user-level direct-access file systems. Figure 1 shows these categories, and how control-plane and data-plane operations are managed by each. The figure additionally shows a hybrid user-level file system with a trusted server and a Fuse-based file system.

Kernel-level traditional file systems. Kernel-level file systems act as a central entity managing data and metadata operations as well as control-plane operations [13, 28, 56]. As shown in Figure 1, kernel-level file systems preserve the integrity of metadata and provide crash consistency. Applications using kernel-level file systems trap into the OS for both data-plane and control-plane operations.

Hybrid user-level file systems.
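Section 2.1 above points out that SCM is reached with ordinary loads and stores that travel through the processor cache, which is why SCM-oriented kernel file systems replace the page cache and block layer with byte-level operations. The following is a minimal user-space sketch of that access model over a DAX-style memory-mapped file; the path, sizes, and the clflush/sfence sequence are assumptions for illustration and are not DevFS code.

    /* Hedged sketch: persist a small record on byte-addressable storage that
     * is memory-mapped into the address space (e.g., a file on a DAX-mounted
     * file system). Stores go through the CPU cache, so dirty lines must be
     * flushed and fenced before the update can be considered durable. */
    #include <emmintrin.h>   /* _mm_clflush, _mm_sfence (x86) */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define CACHELINE 64

    static void persist(const void *addr, size_t len)
    {
        const char *p = (const char *)addr;
        for (size_t off = 0; off < len; off += CACHELINE)
            _mm_clflush(p + off);   /* write dirty cache lines back to media */
        _mm_sfence();               /* order the flushes before later stores */
    }

    int main(void)
    {
        int fd = open("/mnt/pmem/log", O_RDWR);        /* hypothetical DAX file */
        if (fd < 0) return 1;

        char *base = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) return 1;

        const char record[] = "byte-addressable update";
        memcpy(base, record, sizeof(record));          /* plain stores, no syscall */
        persist(base, sizeof(record));                 /* make the stores durable */

        munmap(base, 4096);
        close(fd);
        return 0;
    }

A kernel-level SCM file system performs essentially the same store-and-flush sequence on behalf of write(), which is why thinning the stack down to byte-level operations pays off.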
Recommended publications
  • Copy on Write Based File Systems Performance Analysis and Implementation
    Copy On Write Based File Systems: Performance Analysis and Implementation. Sakis Kasampalis, Kongens Lyngby 2010, IMM-MSC-2010-63. Technical University of Denmark, Department of Informatics, Building 321, DK-2800 Kongens Lyngby, Denmark. Phone +45 45253351, Fax +45 45882673, [email protected], www.imm.dtu.dk

    Abstract: In this work I am focusing on Copy On Write based file systems. Copy On Write is used on modern file systems for providing (1) metadata and data consistency using transactional semantics, (2) cheap and instant backups using snapshots and clones. This thesis is divided into two main parts. The first part focuses on the design and performance of Copy On Write based file systems. Recent efforts aiming at creating a Copy On Write based file system are ZFS, Btrfs, ext3cow, Hammer, and LLFS. My work focuses only on ZFS and Btrfs, since they support the most advanced features. The main goals of ZFS and Btrfs are to offer a scalable, fault tolerant, and easy to administrate file system. I evaluate the performance and scalability of ZFS and Btrfs. The evaluation includes studying their design and testing their performance and scalability against a set of recommended file system benchmarks.

    Most computers are already based on multi-core and multiple processor architectures. Because of that, the need for using concurrent programming models has increased. Transactions can be very helpful for supporting concurrent programming models, which ensure that system updates are consistent. Unfortunately, the majority of operating systems and file systems either do not support transactions at all, or they simply do not expose them to the users.
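    The abstract above rests on the core copy-on-write idea: an update never overwrites live data in place; a new copy is written and a pointer is switched, which is what makes transactional consistency and instant snapshots cheap. A small, self-contained sketch of that pointer-swap discipline (in-memory only, with invented names, not code from ZFS or Btrfs) might look like this:

        /* Hedged sketch of copy-on-write at the granularity of one "block":
         * an update allocates a fresh copy and republishes the root pointer,
         * so a reader holding the old root still sees a consistent snapshot. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct block {
            char data[64];
        };

        /* Never touches *old; returns a new block holding the updated data. */
        static struct block *cow_update(const struct block *old, const char *new_data)
        {
            struct block *copy = malloc(sizeof(*copy));
            if (!copy) abort();
            *copy = *old;                       /* start from the old version */
            strncpy(copy->data, new_data, sizeof(copy->data) - 1);
            copy->data[sizeof(copy->data) - 1] = '\0';
            return copy;
        }

        int main(void)
        {
            struct block *root = calloc(1, sizeof(*root));   /* current version */
            strcpy(root->data, "v1");

            struct block *snapshot = root;      /* a snapshot is just the old root */
            root = cow_update(root, "v2");      /* the "commit": publish new root  */

            printf("current: %s, snapshot: %s\n", root->data, snapshot->data);
            free(root);
            free(snapshot);
            return 0;
        }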
  • The Linux Kernel Module Programming Guide
    The Linux Kernel Module Programming Guide Peter Jay Salzman Michael Burian Ori Pomerantz Copyright © 2001 Peter Jay Salzman 2007−05−18 ver 2.6.4 The Linux Kernel Module Programming Guide is a free book; you may reproduce and/or modify it under the terms of the Open Software License, version 1.1. You can obtain a copy of this license at http://opensource.org/licenses/osl.php. This book is distributed in the hope it will be useful, but without any warranty, without even the implied warranty of merchantability or fitness for a particular purpose. The author encourages wide distribution of this book for personal or commercial use, provided the above copyright notice remains intact and the method adheres to the provisions of the Open Software License. In summary, you may copy and distribute this book free of charge or for a profit. No explicit permission is required from the author for reproduction of this book in any medium, physical or electronic. Derivative works and translations of this document must be placed under the Open Software License, and the original copyright notice must remain intact. If you have contributed new material to this book, you must make the material and source code available for your revisions. Please make revisions and updates available directly to the document maintainer, Peter Jay Salzman <[email protected]>. This will allow for the merging of updates and provide consistent revisions to the Linux community. If you publish or distribute this book commercially, donations, royalties, and/or printed copies are greatly appreciated by the author and the Linux Documentation Project (LDP).
  • File Systems and Disk Layout I/O: the Big Picture
    File Systems and Disk Layout

    I/O: The Big Picture. [Block diagram: processor and cache on a memory bus, an I/O bridge to the main I/O bus, with disk, graphics, and network controllers and their devices attached.]

    Rotational Media. [Diagram of disk anatomy: track, sector, arm, cylinder, platter, head.] Access time = seek time + rotational delay + transfer time. Seek time is 5-15 milliseconds to move the disk arm and settle on a cylinder; rotational delay is 8 milliseconds for a full rotation at 7200 RPM (average delay = 4 ms); transfer time is 1 millisecond for an 8KB block at 8 MB/s. Bandwidth utilization is less than 50% for any noncontiguous access at a block grain.

    Disks and Drivers. Disk hardware and driver software provide basic facilities for nonvolatile secondary storage (block devices). 1. The OS views the block devices as a collection of volumes. A logical volume may be a partition of a single disk or a concatenation of multiple physical disks (e.g., RAID). 2. The OS accesses each volume as an array of fixed-size sectors. It identifies a sector (or block) by a unique (volume ID, sector ID) pair. Read/write operations DMA data to/from physical memory. 3. The device interrupts the OS on I/O completion; the ISR wakes up the process, updates internal records, etc.

    Using Disk Storage. Typical operating systems use disks in three different ways: 1. System calls allow user programs to access a "raw" disk. Unix: a special device file identifies the volume directly. Any process that can open the device file can read or write any specific sector in the disk volume. 2. The OS uses the disk as backing storage for virtual memory. The OS manages the volume transparently as an "overflow area" for VM contents that do not "fit" in physical memory.
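    The access-time formula above is easy to evaluate with the lecture's own example numbers; the short sketch below does the arithmetic (a 10 ms average seek is assumed as the midpoint of the quoted 5-15 ms range) and also shows why noncontiguous 8 KB accesses use only a small fraction of the disk's raw bandwidth.

        /* Hedged sketch: access time = seek + rotational delay + transfer,
         * using the example figures from the notes above. The 10 ms seek is
         * an assumed midpoint of the quoted 5-15 ms range. */
        #include <stdio.h>

        int main(void)
        {
            double seek_ms     = 10.0;  /* assumed average of the 5-15 ms range */
            double rotation_ms = 4.0;   /* half of an 8 ms rotation at 7200 RPM */
            double transfer_ms = 1.0;   /* 8 KB block at 8 MB/s                 */

            double access_ms = seek_ms + rotation_ms + transfer_ms;   /* 15 ms */

            /* Effective bandwidth of random 8 KB reads vs. the 8 MB/s raw rate. */
            double block_mb      = 8.0 / 1024.0;
            double effective_mbs = block_mb / (access_ms / 1000.0);
            double utilization   = effective_mbs / 8.0 * 100.0;

            printf("access time: %.1f ms\n", access_ms);
            printf("effective bandwidth: %.2f MB/s (%.1f%% of raw)\n",
                   effective_mbs, utilization);
            return 0;
        }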
  • Linux Scheduler Documentation
    Linux Scheduler Documentation. The kernel development community, Jul 14, 2020.

    Chapter One: Completions - "wait for completion" barrier APIs

    1.1 Introduction: If you have one or more threads that must wait for some kernel activity to have reached a point or a specific state, completions can provide a race-free solution to this problem. Semantically they are somewhat like a pthread_barrier() and have similar use-cases. Completions are a code synchronization mechanism which is preferable to any misuse of locks/semaphores and busy-loops. Any time you think of using yield() or some quirky msleep(1) loop to allow something else to proceed, you probably want to look into using one of the wait_for_completion*() calls and complete() instead. The advantage of using completions is that they have a well defined, focused purpose which makes it very easy to see the intent of the code, but they also result in more efficient code as all threads can continue execution until the result is actually needed, and both the waiting and the signalling is highly efficient using low level scheduler sleep/wakeup facilities. Completions are built on top of the waitqueue and wakeup infrastructure of the Linux scheduler. The event the threads on the waitqueue are waiting for is reduced to a simple flag in 'struct completion', appropriately called "done". As completions are scheduling related, the code can be found in kernel/sched/completion.c.

    1.2 Usage: There are three main parts to using completions:
    • the initialization of the 'struct completion' synchronization object,
    • the waiting part through a call to one of the variants of wait_for_completion(),
    • the signaling side through a call to complete() or complete_all().
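    The three parts listed above map directly onto a few lines of kernel code. Below is a minimal, hedged sketch of a module that starts a worker thread and blocks until the worker signals a completion; the module and thread names are invented for illustration.

        /* Hedged sketch: init_completion() / wait_for_completion() / complete()
         * wired into a trivial kernel module. Names are illustrative only. */
        #include <linux/module.h>
        #include <linux/kthread.h>
        #include <linux/completion.h>
        #include <linux/delay.h>

        static struct completion setup_done;

        static int worker_fn(void *data)
        {
            msleep(100);              /* stand-in for real initialization work */
            complete(&setup_done);    /* 3. signaling side: wake the waiter    */
            return 0;
        }

        static int __init demo_init(void)
        {
            init_completion(&setup_done);                     /* 1. initialization */
            kthread_run(worker_fn, NULL, "completion-demo");
            wait_for_completion(&setup_done);                 /* 2. wait, sleeping */
            pr_info("completion-demo: worker finished\n");
            return 0;
        }

        static void __exit demo_exit(void)
        {
        }

        module_init(demo_init);
        module_exit(demo_exit);
        MODULE_LICENSE("GPL");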
  • Devicelock® DLP 8.3 User Manual
    DeviceLock® DLP 8.3 User Manual. © 1996-2020 DeviceLock, Inc. All Rights Reserved. Information in this document is subject to change without notice. No part of this document may be reproduced or transmitted in any form or by any means for any purpose other than the purchaser's personal use without the prior written permission of DeviceLock, Inc. Trademarks: DeviceLock and the DeviceLock logo are registered trademarks of DeviceLock, Inc. All other product names, service marks, and trademarks mentioned herein are trademarks of their respective owners. DeviceLock DLP - User Manual, software version 8.3, updated March 2020.

    Contents: About This Manual; Conventions; DeviceLock Overview; General Information; Managed Access Control; DeviceLock Service for Mac; DeviceLock Content Security Server; How Search Server Works; ContentLock and NetworkLock; ContentLock and NetworkLock Licensing; Basic Security Rules; Installing DeviceLock; System Requirements; Deploying DeviceLock Service for Windows; Interactive Installation; Unattended Installation; Installation via Microsoft Systems Management Server; Installation via DeviceLock Management Console; Installation via DeviceLock Enterprise Manager; Installation via Group Policy; Installation via DeviceLock Enterprise Server; Deploying DeviceLock Service for Mac; Interactive Installation; Command Line Utility; Unattended Installation; Installing Management Consoles; Installing DeviceLock Enterprise Server; Installation Steps; Installing and Accessing DeviceLock WebConsole; Prepare for Installation; Install the DeviceLock WebConsole; Access the DeviceLock WebConsole; Installing DeviceLock Content Security Server; Prepare to Install; Start Installation; Perform Configuration and Complete Installation; DeviceLock Consoles and Tools.
  • Advances in Mobile Cloud Computing and Big Data in the 5G Era Studies in Big Data
    Studies in Big Data, Volume 22. Constandinos X. Mavromoustakis, George Mastorakis, and Ciprian Dobre (editors): Advances in Mobile Cloud Computing and Big Data in the 5G Era. Series editor: Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland, e-mail: [email protected]

    About this Series: The series "Studies in Big Data" (SBD) publishes new developments and advances in the various areas of Big Data, quickly and with a high quality. The intent is to cover the theory, research, development, and applications of Big Data, as embedded in the fields of engineering, computer science, physics, economics and life sciences. The books of the series refer to the analysis and understanding of large, complex, and/or distributed data sets generated from recent digital sources coming from sensors or other physical instruments as well as simulations, crowd sourcing, social networks or other internet transactions, such as emails or video click streams, and others. The series contains monographs, lecture notes and edited volumes in Big Data spanning the areas of computational intelligence, including neural networks, evolutionary computation, soft computing, fuzzy systems, as well as artificial intelligence, data mining, modern statistics and operations research, as well as self-organizing systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output. More information about this series at http://www.springer.com/series/11970
  • The Linux Device File-System
    The Linux Device File-System. Richard Gooch, EMC Corporation, [email protected]

    Abstract

    The Device File-System (devfs) provides a powerful new device management mechanism for Linux. Unlike other existing and proposed device management schemes, it is powerful, flexible, scalable and efficient. It is an alternative to conventional disc-based character and block special devices. Kernel device drivers can register devices by name rather than device numbers, and these device entries will appear in the file-system automatically. Devfs provides an immediate benefit to system administrators, as it implements a device naming scheme which is more convenient for large systems (providing a topology-based name-space) and small systems (via a device-class based name-space) alike. Device driver authors can benefit from devfs by

    1 Introduction

    All Unix systems provide access to hardware via device drivers. These drivers need to provide entry points for user-space applications and system tools to access the hardware. Following the "everything is a file" philosophy of Unix, these entry points are exposed in the file name-space, and are called "device special files" or "device nodes". This paper discusses how these device nodes are created and managed in conventional Unix systems and the limitations this scheme imposes. An alternative mechanism is then presented.

    1.1 Device numbers

    Conventional Unix systems have the concept of a "device number". Each instance of a driver and hardware component is assigned a unique device number. Within the kernel, this device number is used to refer to the hardware and driver instance.
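    The device-number scheme the excerpt contrasts devfs against is easy to see from user space: every device node carries a (major, minor) pair that the kernel uses to route an open() to a driver instance. A small hedged sketch, assuming an existing node such as /dev/null, is shown below.

        /* Hedged sketch: inspect the major/minor device numbers behind a device
         * node, the conventional mechanism that devfs set out to replace. */
        #include <stdio.h>
        #include <sys/stat.h>
        #include <sys/sysmacros.h>   /* major(), minor() */

        int main(int argc, char **argv)
        {
            const char *path = (argc > 1) ? argv[1] : "/dev/null";  /* assumed node */
            struct stat st;

            if (stat(path, &st) != 0) { perror("stat"); return 1; }

            if (S_ISCHR(st.st_mode) || S_ISBLK(st.st_mode)) {
                /* st_rdev encodes which driver (major) and which instance (minor). */
                printf("%s: %s device, major %u, minor %u\n", path,
                       S_ISCHR(st.st_mode) ? "character" : "block",
                       major(st.st_rdev), minor(st.st_rdev));
            } else {
                printf("%s is not a device node\n", path);
            }
            return 0;
        }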
  • High Velocity Kernel File Systems with Bento
    High Velocity Kernel File Systems with Bento. Samantha Miller, Kaiyuan Zhang, Mengqi Chen, and Ryan Jennings, University of Washington; Ang Chen, Rice University; Danyang Zhuo, Duke University; Thomas Anderson, University of Washington. https://www.usenix.org/conference/fast21/presentation/miller

    This paper is included in the Proceedings of the 19th USENIX Conference on File and Storage Technologies, February 23–25, 2021. ISBN 978-1-939133-20-5. Open access to the Proceedings of the 19th USENIX Conference on File and Storage Technologies is sponsored by USENIX.

    Abstract

    High development velocity is critical for modern systems. This is especially true for Linux file systems which are seeing increased pressure from new storage devices and new demands on storage systems. However, high velocity Linux kernel development is challenging due to the ease of introducing bugs, the difficulty of testing and debugging, and the lack of support for redeployment without service disruption. Existing approaches to high-velocity development of file systems for

    [...] kernel-level debuggers and kernel testing frameworks makes this worse. The restricted and different kernel programming environment also limits the number of trained developers. Finally, upgrading a kernel module requires either rebooting the machine or restarting the relevant module, either way rendering the machine unavailable during the upgrade. In the cloud setting, this forces kernel upgrades to be batched to meet cloud-level availability goals. Slow development cycles are a particular problem for file systems.
  • On the Defectiveness of SCHED DEADLINE W.R.T. Tardiness and Affinities, and a Partial Fix Stephen Tang Luca Abeni James H.
    On the Defectiveness of SCHED_DEADLINE w.r.t. Tardiness and Affinities, and a Partial Fix. Stephen Tang and James H. Anderson ({sytang,anderson}@cs.unc.edu), University of North Carolina at Chapel Hill, Chapel Hill, USA; Luca Abeni ([email protected]), Scuola Superiore Sant'Anna, Pisa, Italy.

    ABSTRACT

    SCHED_DEADLINE (DL for short) is an Earliest-Deadline-First (EDF) scheduler included in the Linux kernel. A question motivated by DL is how EDF should be implemented in the presence of CPU affinities to maintain optimal bounded tardiness guarantees. Recent works have shown that under arbitrary affinities, DL does not maintain such guarantees. Such works have also shown that repairing DL to maintain these guarantees would likely require an impractical overhaul of the existing code. In this work, we show that for the special case where affinities are semi-partitioned, DL can be modified to maintain tardiness guarantees with minor changes. We also

    [...] create new DL tasks until existing DL tasks reduce their bandwidth consumption. Note that preventing over-utilization is only a necessary condition for guaranteeing that tasks meet deadlines. Instead of guaranteeing that deadlines will be met, AC satisfies two goals [3]. The first goal is to provide performance guarantees. Specifically, AC guarantees to each DL task that its long-run consumption of CPU bandwidth remains within a bounded margin of error from a requested rate. The second goal is to avoid the starvation of lower-priority non-DL workloads. Specifically, AC guarantees that some user-specified amount of CPU bandwidth will remain for the execution of non-DL workloads.
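    The excerpt above refers to DL tasks requesting CPU bandwidth and to admission control (AC) refusing new tasks that would over-utilize the system. For context, a task asks for such a reservation through the sched_setattr() system call; the hedged sketch below shows the usual way to do this from user space (glibc has traditionally shipped no wrapper, so struct sched_attr is declared locally, and the 10 ms / 100 ms figures are arbitrary example values, not from the paper).

        /* Hedged sketch: request a SCHED_DEADLINE reservation of 10 ms runtime
         * per 100 ms period via the raw sched_setattr() syscall. The values and
         * the local struct declaration are illustrative; the call needs the
         * appropriate privileges, and the kernel's admission control may reject it. */
        #define _GNU_SOURCE
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/syscall.h>
        #include <unistd.h>
        #include <linux/sched.h>     /* SCHED_DEADLINE */

        struct sched_attr {
            uint32_t size;
            uint32_t sched_policy;
            uint64_t sched_flags;
            int32_t  sched_nice;
            uint32_t sched_priority;
            uint64_t sched_runtime;   /* budget per period, in nanoseconds */
            uint64_t sched_deadline;  /* relative deadline, in nanoseconds */
            uint64_t sched_period;    /* period, in nanoseconds            */
        };

        int main(void)
        {
            struct sched_attr attr;
            memset(&attr, 0, sizeof(attr));
            attr.size           = sizeof(attr);
            attr.sched_policy   = SCHED_DEADLINE;
            attr.sched_runtime  = 10ULL * 1000 * 1000;    /* 10 ms  */
            attr.sched_deadline = 100ULL * 1000 * 1000;   /* 100 ms */
            attr.sched_period   = 100ULL * 1000 * 1000;   /* 100 ms */

            /* Admission control happens here: the call fails (typically EBUSY)
             * if the requested bandwidth would over-utilize the system. */
            if (syscall(SYS_sched_setattr, 0, &attr, 0) != 0) {
                perror("sched_setattr");
                return 1;
            }
            printf("running under SCHED_DEADLINE\n");
            return 0;
        }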
  • Deviceinstaller User Guide
    Device Installer User Guide. Part Number 900-325, Revision C, 03/18.

    Table of Contents: 1. Overview; 2. Devices: Choose the Network Adapter for Communication; Search for All Devices on the Network; Change Views; Add a Device to the List; View Device Details; Device Lists: Save the Device List; Open the Device List; Print the Device List
  • Oracle® Linux 7 Managing File Systems
    Oracle® Linux 7 Managing File Systems F32760-07 August 2021 Oracle Legal Notices Copyright © 2020, 2021, Oracle and/or its affiliates. This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable: U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs) and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end users are "commercial computer software" or "commercial computer software documentation" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and limitations specified in the license contained in the applicable contract.
  • RTEMS C User Documentation Release 4.11.3 ©Copyright 2016, RTEMS Project (Built 15Th February 2018)
    RTEMS C User Documentation, Release 4.11.3. © Copyright 2016, RTEMS Project (built 15th February 2018).

    Contents, Part I, RTEMS C User's Guide: 1 Preface; 2 Overview (2.1 Introduction; 2.2 Real-time Application Systems; 2.3 Real-time Executive; 2.4 RTEMS Application Architecture; 2.5 RTEMS Internal Architecture; 2.6 User Customization and Extensibility; 2.7 Portability; 2.8 Memory Requirements; 2.9 Audience; 2.10 Conventions; 2.11 Manual Organization); 3 Key Concepts (3.1 Introduction; 3.2 Objects; 3.2.1 Object Names; 3.2.2 Object IDs; 3.2.2.1 Thirty-Two Object ID Format; 3.2.2.2 Sixteen Bit Object ID Format; 3.2.3 Object ID Description; 3.3 Communication and Synchronization; 3.4 Time; 3.5 Memory Management); 4 RTEMS Data Types (4.1 Introduction; 4.2 List of Data Types)