AOSP Mini-Conference

Linaro Welcome

● Main difference between the mini-conference and regular Connect talks: let's be more interactive!
● One additional purpose of the mini-conference: bring together the various groups inside Linaro that work on the AOSP codebase:
  ○ LMG -- probably the most obvious use of AOSP
  ○ LHG -- Android TV
  ○ Potentially LITE -- Brillo
  ○ Kernel, Toolchain, ... -- need to support both regular Linux and AOSP use
  ○ Are there other groups (member engineering teams, maybe) here? What is your use of the AOSP code base?

Filesystem Analysis
Satish Patel <[email protected]>

File System analysis
● Filesystems investigated: ext4, btrfs, f2fs, nilfs, squashfs
● Variants: encryption enabled/disabled, compression off/zlib/lz4
● File system analysis briefing (ongoing changes):
  ○ https://docs.google.com/a/linaro.org/document/d/1jam-PlV9iefnOqujzYWZoY8U9d9GnmPwda3MItxsPsU/edit?usp=sharing
● Challenges:
  ○ Fixed build support for f2fs image generation (core.mk & image size alignment to 4096)
  ○ Fixed sparse raw image generation issue
    ■ Needed for btrfs and nilfs
  ○ Image generation for btrfs, nilfs, squashfs, etc. (raw -> format -> sparse)
  ○ Benchmark porting: bonnie, iozone
  ○ Partition overload scripts and long-run impact scripts

Filesystems - A Brief Introduction

| Feature/FS   | ext4                             | f2fs                       | btrfs                        | nilfs                     | squashfs                           |
|--------------|----------------------------------|----------------------------|------------------------------|---------------------------|------------------------------------|
| Introduction | Most used in Linux-based systems | Flash Friendly File System | B/Better/Butter File System  | New implementation of LFS | Compressed, read-only file system  |
| I-node       | Hashed B-tree                    | Linear                     | B+ tree                      | B-tree                    | -                                  |
| Block size   | Extent                           | Fixed                      | Extent                       | Fixed                     | Fixed                              |
| Type         | Unix-like file structure         | Log file structure         | Copy on write                | Log file structure        | UFS                                |
| Allocation   | Delayed                          | Immediate                  | Delayed                      | Immediate                 | NA                                 |
| Journal      | Ordered, writeback               | NA                         | NA                           | NA                        | NA                                 |
| Used by      | Ubuntu, most mobiles             | Moto series                | SUSE Enterprise              | Ubuntu, NixOS             | Live CDs, Android                  |

Filesystems - A Traditional Layer
(Original slide shows a layered diagram of the storage stack: applications such as WebKit, SQLite, and video/image viewers on top; the logical file system (ext4, f2fs, btrfs, etc.) handling file access, directory operations, file indexing and management, and security operations; the basic file system handling data operations on the physical device and buffering, with no management; and device drivers 1..n driving storage devices 1..n at the bottom.)

Filesystems - Basic Types
● LFS - log file structure
● COW - copy on write
Image courtesy:
http://dboptimizer.com/wp-content/uploads/2013/06/Screen-Shot-2013-06-03-at-10.28.44-AM.png
http://tinou.blogs.com/.a/6a00d83451ce5a69e2016302fe0458970d-500wi

Filesystems - Test Environment
● HiKey (96Boards): http://www.96boards.org/product/hikey/
  ○ 1 GB RAM
  ○ Cortex-A53 octa-core
  ○ eMMC
    ■ Popular on embedded devices
    ■ Cheap & flexible
    ■ Fast read & random seek
    ■ Domains: navigation, eReaders, smartphones, industrial loggers, entertainment devices, etc.
● AOSP + Linaro patch set (branch: r55, kernel 4.4)
● f2fs, ext4, squashfs, btrfs, nilfs
● Benchmarks:
  ○ Vellamo, RL Bench, AndroBench
  ○ Bonnie (ported for Android)
  ○ Iozone (ported for Android)
  ○ Overload and long-run tests - in progress!

Filesystems - Results
● Rankings are based on performance for each benchmark and test
  ○ Average rank for the iozone test (spanning various record lengths)
● A few more points to consider:
  ○ Performance impact as the filesystem ages
  ○ CPU utilization
● O_SYNC (iozone -+r option): requires that any write operation block until all data and all metadata have been written to persistent storage. This ensures file integrity (rather than only data integrity, as with the O_DSYNC flag); see the sketch below.
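To make the O_SYNC / O_DSYNC distinction above concrete, here is a minimal C sketch (not from the slides; the file paths and buffer size are illustrative assumptions) that writes one block through each flag. With O_SYNC the write returns only once both the data and all file metadata are on stable storage; with O_DSYNC only the data, plus the metadata needed to read it back, must be durable. iozone's -+r option exercises the O_SYNC path, which is why the "with sync" numbers in the following slides stress metadata flushing as well as data writes.

```c
/* Minimal sketch of O_SYNC vs O_DSYNC writes.
 * Paths and sizes are hypothetical, for illustration only. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void write_one_block(const char *path, int sync_flag, const char *label)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | sync_flag, 0644);
    if (fd < 0) { perror(label); exit(EXIT_FAILURE); }

    char buf[4096];
    memset(buf, 'x', sizeof(buf));

    /* O_SYNC: blocks until data and metadata are durable.
     * O_DSYNC: blocks until data (and the metadata needed to retrieve it) is durable. */
    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
        perror("write");

    close(fd);
}

int main(void)
{
    write_one_block("/data/local/tmp/osync.bin",  O_SYNC,  "O_SYNC");
    write_one_block("/data/local/tmp/odsync.bin", O_DSYNC, "O_DSYNC");
    return 0;
}
```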
Filesystems - iozone average (full test)
● Write: btrfs (lzo/zlib) wins
● Read: ext4 performance is comparable to btrfs
● Note: nilfs failed to complete the full iozone test

Filesystems - small read and write (64K)
● For small records/files, f2fs wins with the sync option
● For reads, nilfs has better performance on cached reads

Filesystems - 1 MB file test
● ext4 outperforms the others on all read operations
● f2fs has a good score (with the sync flag)

Filesystems - 512 MB, 4 MB
● Write: btrfs (LZO) wins; with the sync flag, zlib wins the race - not sure why
● 4 MB file read: ext4 wins

Filesystems - bonnie results (lower is better)
● btrfs (lzo, zlib) gives good numbers, but...
  ○ at the cost of CPU usage
  ○ the number of kworker threads is higher (coming up next)
● f2fs/ext4 have a fair amount of CPU usage on read/write
● f2fs outperforms on char operations - do we have a use case?

Filesystems - hdparm
● squashfs is better (after btrfs)

Filesystems - speed variation (lower is better)
● btrfs wins on average speed
● But the read/write speed deviation is much lower for f2fs

Filesystems - disk access
● Disk reads are higher for f2fs (less use of buffered I/O)
● nilfs disk reads are lower
● More writes for btrfs (might be due to background write activity for snapshot handling)
● High disk utilization in the case of nilfs
● nilfs: if we do not run GC, the system ran out of disk space after 1000 runs

Filesystems - btrfs low lights
Though btrfs has good performance:
● High CPU utilization: more kernel threads
● For small data (<1 MB), btrfs underperforms f2fs and ext4. Not recommended where small I/O transactions with sync are expected, e.g. frequent DB updates (a sketch of that access pattern follows below)
● btrfs does not force all dirty data to disk on every fsync or O_SYNC operation (risk on power/crash recovery)
● Effect on long-run tests is yet to be measured
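The "small I/O with sync" pattern called out above is essentially what a database commit does: append a small record, then force it to stable storage before continuing. Below is a minimal, self-contained C sketch (not from the slides; the file path, record size, and iteration count are arbitrary assumptions) that times such a loop; running it against partitions formatted with different filesystems is one way to reproduce the f2fs/ext4 versus btrfs gap described here.

```c
/* Minimal sketch: time N small append+fsync "transactions".
 * Path, record size, and count are illustrative assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define RECORD_SIZE 256
#define ITERATIONS  1000

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/data/local/tmp/txn.log";
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    char record[RECORD_SIZE];
    memset(record, 'r', sizeof(record));

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (int i = 0; i < ITERATIONS; i++) {
        if (write(fd, record, sizeof(record)) != (ssize_t)sizeof(record)) {
            perror("write");
            return EXIT_FAILURE;
        }
        /* Force the record to stable storage before the next "commit";
         * this is the pattern that is expensive on copy-on-write filesystems. */
        if (fsync(fd) != 0) {
            perror("fsync");
            return EXIT_FAILURE;
        }
    }

    clock_gettime(CLOCK_MONOTONIC, &end);
    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%d x %d-byte sync transactions in %.3f s (%.1f tx/s)\n",
           ITERATIONS, RECORD_SIZE, secs, ITERATIONS / secs);

    close(fd);
    return 0;
}
```

Pointing the hypothetical path at a mount of each filesystem under test (and optionally switching fsync() to fdatasync()) gives a rough feel for how much of the cost comes from metadata flushing.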
File System analysis - Summary
● All relative rank graphs are available at:
  ○ https://docs.google.com/a/linaro.org/spreadsheets/d/1ctknBBVWUjrIZwS8OQcb5L8gCdCLuzktJgx-K_CMgt0/edit?usp=sharing
● f2fs/ext4 win for:
  ○ Small file access (4K-1MB) + DB access with on-disk data integrity
  ○ Potential use cases: industrial monitoring systems, consumer phones, health monitoring systems
● nilfs outperforms for SQLite operations
  ○ The only catch is that metadata/data get updated later, once written to the log file (a kind of extended version of fdatasync over fsync)
  ○ Can be useful for power-backed systems and continuous logging of small records (up to 4K), but needs a good amount of storage
  ○ It quickly fills up the space if GC is not run in between; on 5 GB of space it ran out of space after 1000 runs of the iozone test. Not recommended for embedded systems.
● squashfs: good buffered I/O reads
  ○ Can be used for read-only partitions (system libraries and read-only databases)

File System analysis - Summary (continued)
● btrfs: large files + large RAM
  ○ LZO - outperforms for block write/read operations (> 4 MB)
  ○ Potential use cases:
    ■ In-flight entertainment systems (mostly movies/songs/images, etc.)
    ■ Portable streaming & recording devices (should be power backed)
  ○ Low lights:
    ■ High CPU utilization (more threads)
    ■ Not recommended where small I/O transactions with sync are expected
    ■ Risk on power-failure recovery (not high, but it sometimes corrupts itself)
● Hybrid use of different filesystems on multiple partitions can improve overall performance, e.g.:
  ○ Large reads/writes (movies, extra downloads) on a btrfs partition
  ○ Small reads/writes (docs, images) on an f2fs/ext4 partition
  ○ Database accesses (insert/update/delete) on an f2fs/nilfs partition
● Note: impact on the filesystem as it ages is yet to be measured

Filesystems - Todo List
● Perform long-run tests (3-4 days, with various operations) and measure the impact
● Partition overload testing - impact of low disk availability
● Encryption impact
● Overhead of overlayfs, etc., if we need to add drivers, HALs, etc. for a specific piece of hardware to /system when otherwise using a common /system with HAL consolidation
● Anything else?

Filesystems - Some Points of Discussion
● Any other filesystems (out-of-tree, perhaps) we should look into?
● Impact of storage technology (devices might start using NVMe)
● Best way to measure filesystem longevity

Thanks! Questions? <[email protected]>

HAL Consolidation
Rob Herring <[email protected]>

HAL Consolidation - one build, many devices
● Goal is one Android build/filesystem per CPU architecture while maintaining configurability for device-specific builds: http://tinyurl.com/zscbbrx
● A directory per feature, for features that are more than just a config variable
● Kconfig-based configuration for features
● Supporting DB410c, HiKey, Nexus 7, QEMU, Raspberry Pi 3
● Tablet/phone or TV targets
● Next platforms or targets to add?
● Possible next config features:
  ○ Anything the next device needs
  ○ Any feature Linaro is working on
  ○ Custom compiler and compiler flags
  ○ Kernel build integration
  ○ malloc selection
  ○ f2fs filesystem

HAL Consolidation - Graphics (Done)
● CI job for Mesa Android builds
● GBM based