Bootloader and Linux Kernel: Development and Importance in Embedded Systems


Bootloader and Linux Kernel: Development and Importance in Embedded Systems
Android Multi-core Embedded Multimedia System Design and Implementation
Chin-Feng Lai (賴槿峰), Assistant Professor, Institute of CSIE, National Ilan University
Sep 22nd, 2012
© 2012 MMN Lab. All Rights Reserved. 2012 Information Software Technology Talent Training Program

Outline
• Bootloader
  • Bootloader introduction
  • Bootloader overview
  • Das U-Boot
  • OMAP boot sequence
• Kernel
  • Kernel introduction
  • Kernel architecture overview
  • Kernel boot steps
  • Kernel configuration
• Lab

Bootloader Introduction
• Role of the Android bootloader: the bootloader sits directly above the hardware and brings up the Linux kernel, which in turn hosts the file system.
  [Figure: software stack: File System / Linux Kernel / Bootloader / Hardware]
• What is a bootloader?
  • The first software program that runs when a computer starts.
  • The name derives from the phrase "to pull oneself up by one's bootstraps".
• Role of a bootloader
  • Initialize the hardware for the programs that follow.
  • Read/write non-volatile storage devices.
  • Provide interfaces for loading and running programs.
  • Support memory dumping and modifying.
• Bootloader challenges
  • DRAM controller
  • Flash and RAM
  • Image complexity
  • Execution context
• Bootloader challenges – DRAM controller
  • Requires detailed knowledge of the DRAM architecture, controller, chips, and the overall hardware design.
  • The bootloader must enable the memory subsystem first so that it has the resources for its own execution environment.
• Bootloader challenges – Flash and RAM
  • The bootloader is stored in non-volatile storage but is usually loaded into RAM for faster execution.
  • The bootloader must create its own operational context and move itself (see the sketch at the end of this introduction).
• Bootloader challenges – Image complexity
  • The bootloader has complete control over how its image is constructed and linked.
  • The startup code must be organized for the specific processor's boot sequence.
  • The linker is used to construct the binary executable image.
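To make the "move itself" challenge concrete, the following is a deliberately simplified, hypothetical C sketch (not taken from the slides; the addresses, image size, and entry convention are invented for illustration): a stage-1 loader copies the stage-2 image from flash into RAM and then jumps to its entry point.

    /* stage1_copy.c: hypothetical sketch only. Addresses, size, and entry
     * convention are illustrative placeholders, not from any real board.  */

    #define FLASH_STAGE2_BASE  0x00020000UL   /* where stage 2 sits in flash (assumed) */
    #define RAM_STAGE2_BASE    0x80008000UL   /* where stage 2 runs in RAM (assumed)   */
    #define STAGE2_SIZE        (128UL * 1024) /* size of the stage-2 image (assumed)   */

    typedef void (*stage2_entry_t)(void);

    void load_and_run_stage2(void)
    {
        const unsigned char *src = (const unsigned char *)FLASH_STAGE2_BASE;
        unsigned char       *dst = (unsigned char *)RAM_STAGE2_BASE;

        /* RAM must already work at this point, which is why the DRAM
         * controller has to be initialized before anything else.      */
        for (unsigned long i = 0; i < STAGE2_SIZE; i++)
            dst[i] = src[i];

        /* "Set the PC to the entry of stage 2": jump and never return. */
        ((stage2_entry_t)RAM_STAGE2_BASE)();
    }

In a real bootloader this copy-and-jump step lives in the processor-specific startup code (compare Start.S in the U-Boot sources described below).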
Bootloader Overview

Bootloader Overview – BIOS
• Boot firmware (can be updated).
• Identifies, tests, and initializes system devices.
• Other, more complex functions:
  • Selecting the boot device
  • System clock
  • ACPI (Advanced Configuration and Power Interface)
  • Tools for overclocking
  • Hardware configuration
  • ...etc.

Bootloader Overview – GNU GRUB
• GRand Unified Bootloader.
• Features:
  • Provides the choice to boot one of multiple operating systems.
  • Can download OS images from a network.
  • Supports diskless systems.
  • Provides a simple, bash-like command-line interface.

Bootloader Overview – Das U-Boot
• Open-source universal bootloader.
• Supports many processor architectures: ARM, MIPS, x86, and more.
• http://sourceforge.net/projects/u-boot
• Easy to port to new architectures.
• Used for interacting with users, updating images, and loading the kernel.

Das U-Boot – source tree
Top-level layout of the Das U-Boot source tree:
  api, api_examples, board, built, common, cpu, disk, doc, drivers, examples, fs,
  include, lib_arm, lib_avr32, lib_blackfin, lib_generic, lib_i386, lib_m68k,
  lib_microblaze, lib_mips, lib_nios, lib_nios2, lib_ppc, lib_sh, lib_sparc,
  libfdt, nand_spl, net, onenand_ipl, post, tools,
  arm_config.mk, avr32_config.mk, blackfin_config.mk, config.mk, i386_config.mk,
  m68k_config.mk, microblaze_config.mk, mips_config.mk, nios_config.mk,
  nios2_config.mk, ppc_config.mk, sh_config.mk, sparc_config.mk, rules.mk,
  Makefile, MAKEALL, mkconfig, CHANGELOG, COPYING, CREDITS, MAINTAINERS, README

Das U-Boot – main directories
  Directory    Description
  api          U-Boot machine/architecture-independent API for external applications
  board        Board-dependent files/directories
  common       Miscellaneous architecture-independent functions
  cpu          CPU-specific files
  disk         Code for disk drive partition handling
  doc          Basic documentation files
  drivers      Device drivers for common peripherals
  include      Header files (.h)
  lib_<arch>   Files generic to the <arch> architecture
  net          Networking support (BOOTP, TFTP, RARP, NFS, and so on)
  nand_spl     Support for NAND flash boot with a stage-0 boot loader

Das U-Boot – porting flow (overview)
1. Get the U-Boot source code.
2. Set up a cross toolchain for ARM.
3. Configure U-Boot by modifying the <boardname>.h header file: define the physical memory map and a memory link map with the required sections (an illustrative header fragment appears at the end of this porting-flow section).
4. Modify the Makefile.
5. Build u-boot-1.X-XXX.bin, load the .bin file onto the target, and execute it.

Das U-Boot – porting flow: sources and toolchain
1. Get the source from ftp://ftp.denx.de/pub/u-boot/
2. Get the toolchain from http://www.codesourcery.com/
3. Uncompress the toolchain and modify $PATH:
  • Host $ tar -xvf arm-none-linux-gnueabi.src.tar
  • Host $ vim ~/.bashrc   (add: export PATH=$PATH:ToolPATH)
  • Host $ source ~/.bashrc

Das U-Boot – porting flow: the include directory
• configs/<boardname>.h
  • Holds the hardware configuration: memory map and peripherals.
• <core>.h, for example arm926ejs.h
  • Contains the processor-specific register definitions.

Das U-Boot – porting flow: the cpu directory
• <core>/cpu.c
  • Contains the CPU-specific code.
• <core>/interrupt.c
  • Contains the functions related to interrupts and timers.
• <core>/Start.S
  • Contains the startup code for the ARM CPU core.
  [Figure: excerpts of <core>/cpu.c, <core>/interrupt.c, and <core>/Start.S]

Das U-Boot – porting flow: the lib_<arch> directory
• board.c
  • Memory map initialization and U-Boot peripheral initialization.
• div0.c
  • Contains the divide-by-zero exception handler.

Das U-Boot – porting flow: the board directory
• <boardname>/<boardname>.c
  • Contains functions such as the board initialization routine and the timer, serial port, and NAND flash initialization.

Das U-Boot – porting flow: the Makefile
• Modify the top-level Makefile to specify the new configuration:

    <boardname>_config : unconfig
        @./mkconfig $(@:_config=) <architecture> <core> <boardname>

• Then build:
  • Host $ make distclean
  • Host $ make <boardname>_config
  • Host $ make
• The result is the image u-boot-1.X-XXX.bin.
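As referenced in the porting-flow overview above, the board configuration header under include/configs/ is where the physical memory map is described. The fragment below is an illustrative sketch in the style of a U-Boot 1.x board configuration; the macro set and every value shown are assumptions for a hypothetical board, not taken from the slides.

    /* include/configs/<boardname>.h: illustrative fragment only; the values
     * and the exact macro set vary by board and by U-Boot version.         */
    #ifndef __MYBOARD_CONFIG_H
    #define __MYBOARD_CONFIG_H

    /* Physical memory map: one bank of SDRAM at 0x80000000 (assumed). */
    #define CONFIG_NR_DRAM_BANKS   1
    #define PHYS_SDRAM_1           0x80000000            /* base of SDRAM bank 0 */
    #define PHYS_SDRAM_1_SIZE      (128 << 20)           /* 128 MB               */

    /* Default load address for images pulled in over TFTP or serial (assumed). */
    #define CFG_LOAD_ADDR          (PHYS_SDRAM_1 + 0x800000)

    /* Console UART speed (assumed). */
    #define CONFIG_BAUDRATE        115200

    #endif /* __MYBOARD_CONFIG_H */

The memory link map mentioned in the same step (where the text, data, and BSS sections are placed) is typically handled by the board's linker script and its TEXT_BASE setting rather than by this header.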
Das U-Boot – common commands
  Command       Function
  boot          boot application image from memory
  bootp         boot image via network using the BOOTP/TFTP protocol
  bootd         boot default, i.e. run 'bootcmd'
  go            start application at address 'addr'
  help (or ?)   print online help
  loadb         load binary file over serial line (Kermit mode)
  printenv      print environment variables
  run           run commands in an environment variable
  saveenv       save environment variables to persistent storage
  setenv        set environment variables
  tftpboot      boot image via network using the TFTP protocol and the ipaddr and serverip environment variables

OMAP Boot Sequence
• Comparison of a personal computer and an embedded system:
  • Personal computer: BIOS → MBR → Bootloader → Kernel → Rootfs → User area
  • Embedded system: Bootloader → Kernel → Rootfs → User area
• Bootloader initialization is divided into two stages.
  • Stage 1
    • Initialize the hardware.
    • Prepare the RAM space for stage 2.
    • Set up the stack.
    • Copy the stage-2 bootloader to RAM.
    • Set the PC to the entry point of stage 2.
  • Stage 2
    • Perform the more elaborate initialization.
    • Copy the kernel image to RAM.
    • Hand execution over to the kernel.
• The OMAP3530 bootloader consists of the X-loader and U-Boot:
  X-loader → U-Boot → Kernel → Rootfs → User area (X-loader and U-Boot together form the bootloader)
  • Path 1: run the bootloader from the SD card.
  • Path 2: run the bootloader from the NAND flash.

OMAP Boot Sequence – X-loader
• A first-level bootstrap program: a tiny, minimal first-stage bootloader.
• The ROM copies the X-loader to internal RAM.
• The X-loader initializes the CPU, copies U-Boot into memory, and hands control over to U-Boot.
• The internal ROM code attempts to boot as follows.
  [Figure: ROM code loader flow]
• SD card boot
  • The ROM looks for an SD card on the first MMC controller.
  • The ROM then looks for the first FAT32 partition within the partition table.
  • The partition is scanned for a signed file called "MLO".
  • MLO is transferred into the internal SRAM and control is passed to it.
• NAND boot
  • The ROM attempts to load the first sector of NAND.
  • If the sector is bad, corrupt, or blank, the ROM tries the next sector (up to 4).
  • The ROM transfers the contents to SRAM and transfers control to it.

OMAP Boot Sequence – U-Boot
• SD card boot (stage 2 for the SD card)
  • Looks for the first FAT32 partition on MMC.
  • Scans the root directory for a file named "u-boot.bin".
• NAND boot (stage 2 for the NAND flash)
  • U-Boot is loaded from the fifth sector.
  • The image is transferred into main memory.

OMAP Boot Sequence – boot device priority
• Devkit8000: hold the BOOT_KEY push button, then apply power.
  Boot priority   Device
  0               NAND flash
  1               SD card
  [Figure: location of the BOOT_KEY push button]

OMAP Boot Sequence – memory map
  [Figure: OMAP3530 memory map]

Kernel Introduction
• Role of the Android kernel: the kernel sits between the hardware (brought up by the bootloader) and the file system and applications above it.
  [Figure: software stack: File System / Linux Kernel / Bootloader / Hardware]
• What is a kernel?
  • The "core" of any computer system.
  • The software that allows users to share the computer's resources.
  • The kernel can be thought of as the main software of the operating system (which may also include graphics management).
• Role of a kernel
  • Provides a consistent interface to the managed devices.
  • Protects the hardware from the users of the computer.
  • Protects important software from users.
  [Figure: protection rings]
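As a small illustration of that protection (a hedged sketch, not from the slides), the following Linux user-space program tries to read a kernel-space address directly. The MMU and the kernel deny the access, and instead of reaching the memory the process receives a SIGSEGV fault.

    /* fault_demo.c: hedged illustration that user-mode code cannot reach
     * kernel memory; the attempt is turned into a fault delivered back to
     * the process. Assumes a Linux host; build: gcc fault_demo.c -o fault_demo */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_segv(int sig)
    {
        (void)sig;
        static const char msg[] = "SIGSEGV: the kernel blocked the access\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);   /* async-signal-safe output */
        _exit(1);
    }

    int main(void)
    {
        signal(SIGSEGV, on_segv);

        /* 0xC0000000 is a typical start of kernel space on 32-bit Linux; from
         * user mode it is not accessible (assumed unmapped in this process).  */
        volatile unsigned long *kaddr = (volatile unsigned long *)0xC0000000UL;

        printf("Reading kernel-space address %p from user mode...\n", (void *)kaddr);
        unsigned long value = *kaddr;                /* faults here */
        printf("Unexpectedly read %lu\n", value);
        return 0;
    }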
Kernel Introduction – kernel mode and user mode
• To avoid applications that constantly crash the system, the processor is designed with two different operating modes:
  • Kernel mode
  • User mode
  [Figure: user mode presents the abstraction; kernel mode holds the implementation detail]
• User mode
  • Can access user-mode memory only.
  • Illegal attempts result in faults/exceptions.
• Kernel mode
  • Can execute I/O instructions.
  • Can access both user- and kernel-mode memory.
  • Provides an instruction to change to user mode.
• How to call kernel mode
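User programs reach kernel mode through system calls: the program asks the kernel to perform a privileged operation on its behalf. The sketch below is a hedged illustration (not from the slides) for a Linux host; it issues the getpid and write system calls both through the normal C library wrappers and explicitly via syscall(2).

    /* syscall_demo.c: hedged sketch of entering kernel mode via system calls.
     * Assumes a Linux host with glibc; build: gcc syscall_demo.c -o syscall_demo */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* Usual path: the C library wrapper traps into the kernel for us. */
        printf("getpid() via libc wrapper:        %d\n", (int)getpid());

        /* Explicit path: issue the same system call number ourselves. */
        printf("getpid() via syscall(SYS_getpid): %ld\n", syscall(SYS_getpid));

        /* write(2) is also a system call: the buffer is handed to the kernel,
         * which performs the privileged I/O on the process's behalf.         */
        static const char msg[] = "write(2): hello from user mode\n";
        syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);
        return 0;
    }

On ARM the mode switch itself is performed by the svc/swi instruction, and on x86 by int 0x80 or the syscall/sysenter instructions; the C library hides this detail from application code.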