Linux User's Guide

Total Pages: 16

File Type: PDF, Size: 1020 KB

Linux® User's Guide
0898004-8.2a
March 2021

Copyright 2021 by Concurrent Real-Time, Inc. All rights reserved. This publication or any part thereof is intended for use with Concurrent Real-Time products by Concurrent Real-Time personnel, customers, and end users. It may not be reproduced in any form without the written permission of the publisher. The information contained in this document is believed to be correct at the time of publication. It is subject to change without notice. Concurrent Real-Time makes no warranties, expressed or implied, concerning the information contained in this document. To report an error or comment on a specific portion of the manual, photocopy the page in question and mark the correction or comment on the copy. Mail the copy (and any additional comments) to Concurrent Real-Time, Inc., 800 NW 33rd Street, Pompano Beach, Florida 33064. Mark the envelope "Attention: Publications Department." This publication may not be reproduced for any other reason in any form without written permission of the publisher.

Concurrent Real-Time, Inc. and its logo are registered trademarks of Concurrent Real-Time, Inc. All other Concurrent Real-Time product names are trademarks of Concurrent Real-Time, while all other product names are trademarks or registered trademarks of their respective owners. Linux® is used pursuant to a sub-license from the Linux Mark Institute. Printed in U.S.A.

Revision History:

Date            Level   Effective With
August 2002     000     RedHawk Linux Release 1.1
September 2002  100     RedHawk Linux Release 1.1
December 2002   200     RedHawk Linux Release 1.2
April 2003      300     RedHawk Linux Release 1.3, 1.4
December 2003   400     RedHawk Linux Release 2.0
March 2004      410     RedHawk Linux Release 2.1
July 2004       420     RedHawk Linux Release 2.2
May 2005        430     RedHawk Linux Release 2.3
March 2006      500     RedHawk Linux Release 4.1
May 2006        510     RedHawk Linux Release 4.1
May 2007        520     RedHawk Linux Release 4.2
April 2008      600     RedHawk Linux Release 5.1
June 2008       610     RedHawk Linux Release 5.1
October 2008    620     RedHawk Linux Release 5.2
December 2009   630     RedHawk Linux Release 5.4
May 2011        640     RedHawk Linux Release 6.0
March 2012      650     RedHawk Linux Release 6.0
September 2012  660     RedHawk Linux Release 6.3
January 2013    670     RedHawk Linux Release 6.3
August 2013     680     RedHawk Linux Release 6.3
May 2014        700     RedHawk Linux Release 6.5
August 2014     710     RedHawk Linux Release 6.5
March 2015      750     RedHawk Linux Release 7.0
March 2016      780     RedHawk Linux Release 7.2
June 2016       800     RedHawk Linux Release 7.2A
June 2017       810     RedHawk Linux Release 7.3
June 2018       820     RedHawk Linux Release 7.5
October 2019    900     RedHawk Linux Release 8.0
November 2020   920     RedHawk Linux Release 8.2

Preface

Scope of Manual

This manual consists of three parts. The information in Part 1 is directed towards real-time users. Part 2 is directed towards system administrators. Part 3 consists of backmatter: appendixes, glossary and index. An overview of the contents of the manual follows.

Structure of Manual

This guide consists of the following sections:

Part 1 - Real-Time User

• Chapter 1, Introduction, provides an introduction to the RedHawk Linux operating system and an overview of the real-time features included.
• Chapter 2, Real-Time Performance, discusses issues involved with achieving real-time performance, including interrupt response, process dispatch latency and deterministic program execution. The shielded CPU model is described.
• Chapter 3, Real-Time Interprocess Communication, discusses procedures for using the POSIX® and System V message-passing and shared memory facilities.
• Chapter 4, Process Scheduling, provides an overview of process scheduling and describes POSIX scheduling policies and priorities.
• Chapter 5, Interprocess Synchronization, describes the interfaces provided by RedHawk Linux for cooperating processes to synchronize access to shared resources. Included are: POSIX counting semaphores, System V semaphores, rescheduling control tools and condition synchronization tools.
• Chapter 6, Programmable Clocks and Timers, provides an overview of some of the RCIM and POSIX timing facilities available under RedHawk Linux.
• Chapter 7, System Clocks and Timers, describes system timekeeping and the per-CPU local timer.
• Chapter 8, File Systems and Disk I/O, explains the xfs journaling file system and procedures for performing direct disk I/O on the RedHawk Linux operating system.
• Chapter 9, Memory Mapping, describes the methods provided by RedHawk Linux for a process to access the contents of another process' address space.
• Chapter 10, Non-Uniform Memory Access (NUMA), describes the NUMA support available on certain systems.

Part 2 - Administrator

• Chapter 11, Configuring and Building the Kernel, provides information on how to configure and build a RedHawk Linux kernel.
• Chapter 12, Kernel Debugging, provides guidelines for saving, restoring and analyzing the kernel memory image using kdump and crash, and basic use of the kdb kernel debugger.
• Chapter 13, PAM Capabilities, describes the PAM authentication capabilities of RedHawk Linux.
• Chapter 14, Device Drivers, describes RedHawk functionality and real-time issues involved with writing device drivers.
• Chapter 15, PCI-to-VME Support, describes RedHawk's support for a PCI-to-VME bridge.
• Chapter 16, Optional PRT Kernels, describes RedHawk's optional PRT series of kernels that provide PREEMPT_RT real-time semantics.

Part 3 - Common Material

• Appendix A, Example Message Queue Programs, contains example programs illustrating the POSIX and System V message queue facilities.
• Appendix B, Kernel Tunables for Real-time Features, contains a listing of the kernel tunables that control unique features in RedHawk Linux and their default values in pre-built kernels.
• Appendix C, Capabilities, lists the capabilities included in RedHawk Linux and the permissions provided by each.
• Appendix D, Migrating 32-bit Code to 64-bit Code, provides information needed to migrate 32-bit code to 64-bit processing on an x86_64 processor.
• Appendix E, Kernel-level Daemons on Shielded CPUs, describes how kernel-level daemons execute on shielded CPUs and provides methods for improving performance.
• Appendix F, Cross Processor Interrupts on Shielded CPUs, describes how cross-processor interrupts execute on shielded CPUs and provides methods for improving performance.
• Appendix G, Serial Console Setup, provides instructions for configuring a serial console.
• Appendix H, Boot Command Line Parameters, discusses the boot parameters unique to RedHawk.
• The Glossary provides definitions for terms used throughout this Guide.
• The Index contains an alphabetical reference to key terms and concepts and the pages where they occur in the text.

Syntax Notation

The following notation is used throughout this manual:

italic: Books, reference cards, and items that the user must specify appear in italic type. Special terms may also appear in italic.
list bold: User input appears in list bold type and must be entered exactly as shown. Names of directories, files, commands, options and man page references also appear in list bold type.

list: Operating system and program output such as prompts, messages and listings of files and programs appears in list type.

[]: Brackets enclose command options and arguments that are optional. You do not type the brackets if you choose to specify these options or arguments.

hypertext links: When viewing this document online, clicking on chapter, section, figure, table and page number references will display the corresponding text. Clicking on Internet URLs provided in blue type will launch your web browser and display the web site. Clicking on publication names and numbers in red type will display the corresponding manual PDF, if accessible.

Related Publications

The following table lists RedHawk Linux documentation. Click on the red entry to display the document PDF (optional product documentation is available for viewing only if the optional product has been installed). These documents are also available by clicking on the "Documents" icon on the desktop and from Concurrent Real-Time's web site at www.concurrent-rt.com.

RedHawk Linux Operating System Documentation (Pub. Number)
  RedHawk Linux Release Notes: 0898003
  RedHawk Linux User's Guide: 0898004
  Real-Time Clock & Interrupt Module (RCIM) User's Guide: 0898007
  RedHawk Linux FAQ: N/A

Optional RedHawk Product Documentation (Pub. Number)
  RedHawk Linux Frequency-Based Scheduler (FBS) User's Guide: 0898005

Contents

Preface

Chapter 1 Introduction
  Overview
  RedHawk Linux Kernels
  System Updates
  Real-Time Features
    Processor Shielding
    Processor Affinity
    User-level Preemption Control
    Fast Block/Wake Services
    RCIM Driver
    Frequency-Based Scheduler
    /proc Modifications
    Kernel Trace Facility
    ptrace Extensions
    Kernel Preemption
    Real-Time Scheduler
    Low Latency Enhancements
    Priority Inheritance
    High Resolution Process Accounting
    Capabilities Support
    Kernel Debuggers
    Kernel Core Dumps/Crash and Live Analysis
    User-level Spin Locks
    usermap and /proc mmap
    Hyper-threading
    XFS Journaling File System
    POSIX Real-Time Extensions
      User Priority Scheduling
      Memory Resident Processes
      Memory Mapping and Data Sharing
      Process Synchronization
      Asynchronous Input/Output
      Synchronized Input/Output
      Real-Time Signal Behavior
      Clocks and Timers
      Message Queues

Chapter 2 Real-Time Performance
  Overview of the Shielded CPU Model
  Overview of Determinism
  Process Dispatch Latency
    Effect of Disabling Interrupts
    Effect of Interrupts
    Effect of Disabling Preemption
    Effect of Open Source Device Drivers
  How Shielding Improves Real-Time Performance
    Shielding From Background Processes
    Shielding From Interrupts
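Chapter 4 above covers POSIX scheduling policies and priorities; as a concrete taste of that material, here is a minimal sketch in C of moving the calling process onto the SCHED_FIFO real-time policy. These are standard POSIX interfaces, not RedHawk-specific ones, and the call normally needs root privilege (or CAP_SYS_NICE on Linux), so the error path matters.

/* Minimal sketch: switch the calling process to SCHED_FIFO. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp;

    /* lowest real-time priority for the policy, queried portably */
    sp.sched_priority = sched_get_priority_min(SCHED_FIFO);

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");   /* likely EPERM without privilege */
        return 1;
    }
    printf("now running under SCHED_FIFO\n");
    return 0;
}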
Recommended publications
  • University of California at Berkeley College of Engineering Department of Electrical Engineering and Computer Science
University of California at Berkeley
College of Engineering
Department of Electrical Engineering and Computer Science

EECS 61C, Fall 2003
Lab 2: Strings and pointers; the GDB debugger
PRELIMINARY VERSION

Goals

To learn to use the gdb debugger to debug string and pointer programs in C.

Reading

Sections 5.1-5.5 in K&R; GDB Reference Card (linked to the class page under "resources"). Optional: Complete GDB documentation (http://www.gnu.org/manual/gdb-5.1.1/gdb.html)

Note: GDB currently only works on the following machines:
• torus.cs.berkeley.edu
• rhombus.cs.berkeley.edu
• pentagon.cs.berkeley.edu

Please ssh into one of these machines before starting the lab.

Basic tasks in GDB

There are two ways to start the debugger:
1. In EMACS, type M-x gdb, then type gdb <filename>
2. Run gdb <filename> from the command line

The following are fundamental operations in gdb. Please make sure you know the gdb commands for the following operations before you proceed.
1. How do you run a program in gdb?
2. How do you pass arguments to a program when using gdb?
3. How do you set a breakpoint in a program?
4. How do you set a breakpoint which only occurs when a set of conditions is true (e.g., when certain variables have a certain value)?
5. How do you execute the next line of C code in the program after a break?
6. If the next line is a function call, you'll execute the call in one step. How do you execute the C code, line by line, inside the function call?
7.
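The excerpt cuts off at question 7, but the questions it does list map onto a handful of core gdb commands. As a hedged illustration (the file name strdemo.c, the function, and the breakpoint line number are all invented for this sketch, and session details vary by gdb version), the comment block atop this small C program walks through one possible session:

/* A minimal sketch; compile with symbols and debug, e.g.:
 *
 *   $ gcc -g -o strdemo strdemo.c     (-g keeps symbols for gdb)
 *   $ gdb strdemo
 *   (gdb) break copy_upper            (Q3: set a breakpoint)
 *   (gdb) break 16 if i == 3          (Q4: conditional breakpoint)
 *   (gdb) run hello                   (Q1/Q2: run, passing arguments)
 *   (gdb) next                        (Q5: next line, stepping over calls)
 *   (gdb) step                        (Q6: next line, stepping into calls)
 */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

static void copy_upper(char *dst, const char *src)
{
    int i;
    for (i = 0; src[i] != '\0'; i++)    /* a line to break on conditionally */
        dst[i] = (char)toupper((unsigned char)src[i]);
    dst[i] = '\0';
}

int main(int argc, char **argv)
{
    char buf[64];

    if (argc < 2 || strlen(argv[1]) >= sizeof buf)
        return 1;                       /* need one argument that fits */
    copy_upper(buf, argv[1]);
    puts(buf);
    return 0;
}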
  • Compiling and Debugging Basics
Compiling and Debugging Basics

Service CoSiNus, IMFT
P. Elyakime, H. Neau, A. Pedrono, A. Stoukov
April 2015

Outline
● Compilers available at IMFT (Fortran, C and C++)
● Good practices
● Debugging: why? Compilation errors and warnings; run-time errors and wrong results; Fortran specificities; C/C++ specificities
● Basic introduction to gdb, valgrind and TotalView

Compilers on Linux platforms
● GNU compilers: gcc, g++, gfortran
● Intel compilers (2 licenses INPT): icc, icpc, ifort
● PGI compiler, Fortran only (2 licenses INPT): pgf77, pgf90
● Wrappers mpich2 for MPI codes: mpicc, mpicxx, mpif90

Installation
● GNU compilers: included in the Linux package (Ubuntu 12.04 LTS, gcc/gfortran version 4.6.3)
● Intel and PGI compilers installed on a centralized server (/PRODCOM); to use them:
  source /PRODCOM/bin/config.sh   # in bash
  source /PRODCOM/bin/config.csh  # in csh/tcsh
● Wrappers mpich2 installed on PRODCOM:
  FORTRAN: mympi intel  # (or pgi or gnu)
  C/C++:   mympi intel  # (or gnu)

Good practices
● Avoid too long source files!
● Use a makefile if you have more than one file to compile
● In Fortran, "implicit none" is mandatory at the beginning of each program, module and subroutine!
● Use the compiler's check options

Why talk about debugging?

Yesterday, my program was running well:
% gfortran myprog.f90
% ./a.out
% vmax= 3.3e-2

And today:
% gfortran myprog.f90
% ./a.out
% Segmentation fault

Yet I have not changed anything... Because black magic is most often not the reason, debugging could be helpful! (If you really think that the cause of your problem is evil, no need to apply to CoSiNus, we are not God!)

Debugging: a methodical process to find and fix flaws in a code.
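The "use the compiler's check options" advice is easy to demonstrate. A small sketch, assuming gcc (the slides' GNU compiler): the uninitialized sum below is the kind of bug that -Wall flags at compile time, typically once optimization is enabled, while an unflagged build may run silently and print garbage.

/* Compile with checks:   $ gcc -Wall -Wextra -O2 -o demo demo.c
 * gcc then warns that "total" may be used uninitialized; a bare
 * "gcc demo.c" can stay silent and the program prints garbage. */
#include <stdio.h>

int main(void)
{
    int total;                  /* bug: never initialized */

    for (int i = 0; i < 10; i++)
        total += i;             /* -Wall/-Wmaybe-uninitialized flags this */

    printf("%d\n", total);
    return 0;
}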
  • Lecture 15 15.1 Paging
CMPSCI 377 Operating Systems, Fall 2009, Lecture 15
Lecturer: Emery Berger. Scribes: Bruno Silva, Jim Partan

15.1 Paging

In recent lectures, we have been discussing virtual memory. The valid addresses in a process' virtual address space correspond to actual data or code somewhere in the system, either in physical memory or on the disk. Since physical memory is fast and is a limited resource, we use the physical memory as a cache for the disk (another way of saying this is that the physical memory is "backed by" the disk, just as the L1 cache is "backed by" the L2 cache). Just as with any cache, we need to specify our policies for when to read a page into physical memory, when to evict a page from physical memory, and when to write a page from physical memory back to the disk.

15.1.1 Reading Pages into Physical Memory

For reading, most operating systems use demand paging. This means that pages are only read from the disk into physical memory when they are needed. In the page table, there is a resident status bit, which says whether or not a valid page resides in physical memory. If the MMU tries to get a physical page number for a valid page which is not resident in physical memory, it issues a page fault to the operating system. The OS then loads that page from disk, and then returns to the MMU to finish the translation. In addition, many operating systems make some use of pre-fetching, which is called pre-paging when used for pages.
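The resident-status logic described above is easy to sketch in C. The following toy (all names such as pte_t and load_page_from_disk are invented; real page tables are hardware-defined structures walked by the MMU) models a valid bit, a resident bit, and the load-on-fault step:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t frame;    /* physical frame number, meaningful only if resident */
    bool     valid;    /* is this a legal virtual page at all? */
    bool     resident; /* the resident-status bit: currently in RAM? */
} pte_t;

/* Stand-in for the OS's disk read: hand out sequential free frames. */
static uint32_t load_page_from_disk(uint32_t vpage)
{
    static uint32_t next_free_frame;
    (void)vpage;                  /* a real OS would actually read the page */
    return next_free_frame++;
}

/* Translate a virtual page to a physical frame, demand-paging style. */
static int translate(pte_t *pt, uint32_t vpage, uint32_t *frame_out)
{
    pte_t *pte = &pt[vpage];

    if (!pte->valid)
        return -1;                /* illegal address: the OS raises SIGSEGV */
    if (!pte->resident) {         /* page fault: load the page, mark it */
        pte->frame    = load_page_from_disk(vpage);
        pte->resident = true;     /* resident, then finish the translation */
    }
    *frame_out = pte->frame;
    return 0;
}

int main(void)
{
    pte_t pt[4] = { {0, true, false}, {0, true, false},
                    {0, false, false}, {0, true, false} };
    uint32_t frame;

    for (uint32_t vp = 0; vp < 4; vp++) {
        if (translate(pt, vp, &frame) == 0)
            printf("vpage %u -> frame %u\n", vp, frame);
        else
            printf("vpage %u -> fault (invalid)\n", vp);
    }
    return 0;
}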
  • NASM for Linux
NASM for Linux (Microprocessors II)

NASM Package
● nasm package available as source or as executables
● Typically /usr/bin/nasm and /usr/bin/ndisasm

Assembly
● Linux requires ELF format for object files (ELF = Executable and Linking Format)
● Typical header size = 330h bytes
● nasm -f elf [-o <output>] <filename>

Linking
● Object files can be linked with gcc:
  gcc [-options] <filename.o> [other_files.o]

Disassembly
● View an executable as 32-bit assembly code:
  ndisasm -e 330h -b 32 a.out | less
  objdump -d a.out | less

gcc Stages

Stages of GNU C compilation:
  prog.c (source code)      -> gcc -E (preprocess) -> prog.i (translation unit)
  prog.i (translation unit) -> gcc -S (compile)    -> prog.s (assembly code)
  prog.s (assembly code)    -> gcc -c (assemble)   -> prog.o (object code)
  prog.o (object code)      -> gcc (link)          -> a.out (executable file)

Example 1: factorial2.c. main sets j = 12 and calls factorial 10,000,000 times; factorial calculates n! by recursion.

#include <math.h>
#include <stdio.h>
main()
{
    int times;
    int i, j = 12;
    for (times = 0; times < 10000000; ++times) {
        i = factorial(j);
    }
    printf("%d\n", i);
}
int factorial(n)
int n;
{
    if (n == 0)
        return 1;
    else
        return n * factorial(n-1);
}

Example 2:

~/gcc$ gcc factorial2.c      (produces executable a.out)
~/gcc$ time a.out
479001600

real  0m9.281s
user  0m8.339s
sys   0m0.008s

Program a.out runs in 8.339 seconds on a 300 MHz Pentium II.

Example 3: compile the program as separate files.

factorial2a.c:
main()
{
    int times;
    int i, j = 12;
    for (times = 0; times < 10000000; ++times) {
        i = factorial(j);
    }
    printf("%d\n", i);
}

factorial2b.c:
#include <math.h>
#include <stdio.h>
int factorial(n)
int n;
{
    if (n == 0)
        return 1;
    else
        return n * factorial(n-1);
}
  • Memory Management
Memory management

Virtual address space
● Each process in a multi-tasking OS runs in its own memory sandbox called the virtual address space.
● In 32-bit mode this is a 4 GB block of memory addresses.
● These virtual addresses are mapped to physical memory by page tables, which are maintained by the operating system kernel and consulted by the processor.
● Each process has its own set of page tables.
● Once virtual addresses are enabled, they apply to all software running in the machine, including the kernel itself.
● Thus a portion of the virtual address space must be reserved for the kernel.

Kernel and user space
● The kernel might not use a full 1 GB of physical memory; it simply has that portion of the address space available to map whatever physical memory it wishes.
● Kernel space is flagged in the page tables as exclusive to privileged code (ring 2 or lower), hence a page fault is triggered if user-mode programs try to touch it.
● In Linux, kernel space is constantly present and maps the same physical memory in all processes.
● Kernel code and data are always addressable, ready to handle interrupts or system calls at any time.
● By contrast, the mapping for the user-mode portion of the address space changes whenever a process switch happens.

Kernel virtual address space
● Kernel address space is the area above CONFIG_PAGE_OFFSET.
● For 32-bit, this is configurable at kernel build time. The kernel can be given a different amount of address space as desired.
● Two kinds of addresses in kernel virtual address space: kernel logical addresses and kernel virtual addresses.

Kernel logical address
● Allocated with kmalloc()
● Holds all the kernel data structures
● Can never be swapped out
● Virtual addresses are a fixed offset from their physical addresses.
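A process can observe its own virtual address space directly. This minimal sketch, assuming a Unix-like system, prints addresses from the code, data, heap and stack regions; on a 32-bit Linux with the classic split they all fall below the kernel's portion at CONFIG_PAGE_OFFSET, which user code can never touch without triggering a page fault.

#include <stdio.h>
#include <stdlib.h>

int global_var;                               /* data segment */

int main(void)
{
    int stack_var;                            /* stack */
    int *heap_var = malloc(sizeof *heap_var); /* heap */

    printf("code : %p\n", (void *)main);
    printf("data : %p\n", (void *)&global_var);
    printf("heap : %p\n", (void *)heap_var);
    printf("stack: %p\n", (void *)&stack_var);

    free(heap_var);
    return 0;
}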
  • SQL Server 2005 on the IBM Eserver Xseries 460 Enterprise Server
IBM Front cover

SQL Server 2005 on the IBM eServer xSeries 460 Enterprise Server

Describes how "scale-up" solutions suit enterprise databases
Explains the XpandOnDemand features of the xSeries 460
Introduces the features of the new SQL Server 2005

Michael Lawson, Yuichiro Tanimoto, David Watts

ibm.com/redbooks

Redpaper. International Technical Support Organization. SQL Server 2005 on the IBM eServer xSeries 460 Enterprise Server, December 2005.

Note: Before using this information and the product it supports, read the information in "Notices".

First Edition (December 2005). This edition applies to Microsoft SQL Server 2005 running on the IBM eServer xSeries 460. © Copyright International Business Machines Corporation 2005. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Notices; Trademarks
Preface; The team that wrote this Redpaper; Become a published author; Comments welcome

Chapter 1. The x460 enterprise server
  1.1 Scale up versus scale out
  1.2 X3 Architecture
    1.2.1 The X3 Architecture servers
    1.2.2 Key features
    1.2.3 IBM XA-64e third-generation chipset
    1.2.4 XceL4v cache
  1.3 XpandOnDemand
  1.4 Memory subsystem
  1.5 Multi-node configurations
    1.5.1 Positioning the x460 and the MXE-460
    1.5.2 Node connectivity
    1.5.3 Partitioning
  1.6 64-bit with EM64T
    1.6.1 What makes a 64-bit processor
    1.6.2 Intel Xeon with EM64T
  • Support for NUMA Hardware in Helenos
Charles University in Prague
Faculty of Mathematics and Physics

MASTER THESIS

Vojtěch Horký
Support for NUMA hardware in HelenOS

Department of Distributed and Dependable Systems
Supervisor: Mgr. Martin Děcký
Study Program: Software Systems
2011

I would like to thank my supervisor, Martin Děcký, for his countless advice and the time spent on guiding me on the thesis. I am grateful that he introduced HelenOS to me and encouraged me to add basic NUMA support to it. I would also like to thank other developers of HelenOS, namely Jakub Jermář and Jiří Svoboda, for their assistance on the mailing list and for creating HelenOS. I would like to thank my family and my closest friends too, for their patience and continuous support.

I declare that I carried out this master thesis independently, and only with the cited sources, literature and other professional sources. I understand that my work relates to the rights and obligations under the Act No. 121/2000 Coll., the Copyright Act, as amended, in particular the fact that the Charles University in Prague has the right to conclude a license agreement on the use of this work as a school work pursuant to Section 60 paragraph 1 of the Copyright Act.

Prague, August 2011
Vojtěch Horký

Title (in Czech): Podpora NUMA hardwaru pro HelenOS
Author: Vojtěch Horký
Department (Institute): Faculty of Mathematics and Physics, Charles University
Supervisor: Mgr. Martin Děcký
Supervisor's e-mail: [email protected]
Abstract (translated from the Czech): The goal of this master thesis is to extend the HelenOS operating system with support for ccNUMA hardware.
  • A Novel Hard-Soft Processor Affinity Scheduling for Multicore Architecture Using Multiagents
European Journal of Scientific Research
ISSN 1450-216X Vol. 55 No. 3 (2011), pp. 419-429
© EuroJournals Publishing, Inc. 2011
http://www.eurojournals.com/ejsr.htm

A Novel Hard-Soft Processor Affinity Scheduling for Multicore Architecture using Multiagents

G. Muneeswari
Research Scholar, Computer Science and Engineering Department, RMK Engineering College, Anna University-Chennai, Tamilnadu, India
E-mail: [email protected]

K. L. Shunmuganathan
Professor & Head, Computer Science and Engineering Department, RMK Engineering College, Anna University-Chennai, Tamilnadu, India
E-mail: [email protected]

Abstract

Multicore architecture, otherwise called SoC, consists of a large number of processors packed together on a single chip and uses hyper-threading technology. This increase in processor cores brings new advancements in simultaneous and parallel computing. Apart from the enormous performance enhancement, this multicore platform adds a lot of challenges to critical task execution on the processor cores, which is a crucial part from the operating system scheduling point of view. In this paper we combine the AMAS theory of multiagent systems with affinity-based processor scheduling, which is best suited for critical task execution on multicore platforms. This hard-soft processor affinity scheduling algorithm promises to minimize the average waiting time of the non-critical tasks in the centralized queue and avoids the context switching of critical tasks. That is, we assign hard affinity for critical tasks and soft affinity for non-critical tasks. The algorithm is applicable for a system that consists of both critical and non-critical tasks in the ready queue. We actually modified and simulated the Linux 2.6.11 kernel process scheduler to incorporate this scheduling concept.
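Stock Linux can approximate the hard/soft affinity distinction with its CPU-affinity API. In the sketch below (function names and CPU numbers are invented, and Linux itself enforces every mask equally rather than distinguishing hard from soft), a critical task is pinned to one core while non-critical tasks get a wider mask the load balancer may migrate within:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* "Hard affinity": bind the calling task to a single CPU, so the
 * scheduler may never migrate it away from that core. */
static int set_hard_affinity(int cpu)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return sched_setaffinity(0, sizeof set, &set);
}

/* "Soft affinity": allow a range of CPUs; the load balancer is then
 * free to move the task among them. */
static int set_soft_affinity(int first_cpu, int last_cpu)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    for (int cpu = first_cpu; cpu <= last_cpu; cpu++)
        CPU_SET(cpu, &set);
    return sched_setaffinity(0, sizeof set, &set);
}

int main(void)
{
    if (set_hard_affinity(0) != 0)      /* critical task: CPU 0 only */
        perror("sched_setaffinity");
    /* ... critical work would run here, never migrated ... */
    if (set_soft_affinity(1, 3) != 0)   /* non-critical: CPUs 1-3 */
        perror("sched_setaffinity");
    return 0;
}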
  • ENCM 335 Fall 2018 Lab 3 for the Week of October 1
ENCM 335 Fall 2018
Lab 3 for the Week of October 1

Steve Norman
Department of Electrical & Computer Engineering, University of Calgary
September 2018

Lab instructions and other documents for ENCM 335 can be found at https://people.ucalgary.ca/~norman/encm335fall2018/

Administrative details

Each student must hand in their own assignment. Later in the course, you may be allowed to work in pairs on some assignments.

Due Dates

The Due Date for this assignment is 3:30pm Friday, October 5. The Late Due Date is 3:30pm Tuesday, October 9 (not Monday the 8th, because that is a holiday). The penalty for handing in an assignment after the Due Date but before the Late Due Date is 3 marks. In other words, X/Y becomes (X-3)/Y if the assignment is late. There will be no credit for assignments turned in after the Late Due Date; they will be returned unmarked.

Marking scheme

A: 4 marks
B: 8 marks
C: unmarked
D: 2 marks
E: 8 marks
F: 2 marks
G: 4 marks
Total: 28 marks

How to package and hand in your assignments

Please see the information in the Lab 1 instructions.

Function interface comments, continued

For Lab 2, you were asked to read a document called "Function Interface Comments".

Figure 1: Sketch of a program with a function to find the average element value of an array of ints. The function can be given only one argument, and the function is supposed to work correctly for whatever number of elements the array has.
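Figure 1 itself is not reproduced in this excerpt. A one-argument average function that works for any array length usually implies a sentinel-terminated array, so the following is only a guess at the figure's shape; the sentinel value, names, and test data are all assumptions of this sketch:

#include <stdio.h>

/* Sentinel marking the end of the array: an assumption made for this
 * sketch, since a single-argument function cannot also receive an
 * element count. */
#define END_MARKER (-1)

/* Average of the elements before the sentinel.  Returns 0.0 for an
 * empty array to avoid dividing by zero. */
double average(const int *a)
{
    long sum = 0;
    int n = 0;

    while (a[n] != END_MARKER) {
        sum += a[n];
        n++;
    }
    return n > 0 ? (double)sum / n : 0.0;
}

int main(void)
{
    int data[] = { 4, 8, 15, 16, 23, 42, END_MARKER };

    printf("average = %.2f\n", average(data));  /* prints 18.00 */
    return 0;
}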
  • Theory of Operating Systems
Exam Review
● booting
● I/O hardware, DMA, I/O software
● device drivers
● virtual memory

Booting
● Hardware is configured to execute a program in read-only memory (ROM) or flash memory: the BIOS (basic I/O system); UEFI is the current equivalent.
● The BIOS knows how to access all the disk drives, chooses one to boot (perhaps with user assistance), loads the first sector (512 bytes) into memory, and starts to execute it (jmp). The first sector often includes a partition table.

I/O hardware and DMA
● Electronics, and sometimes moving parts, e.g. for disks or printers.
● Status registers and control registers are read and set by CPU software; registers can directly control hardware, or be read and set by the device controller.
● The device controller can be instructed to do Direct Memory Access to transfer data to and from the CPU's memory; DMA typically uses physical addresses.

Structure of I/O software
● User programs request I/O: read/write, send/recv, etc.; daemons and servers work autonomously.
● Device-independent software converts the request to a device-dependent operation, and also handles requests from device drivers, e.g. file systems and protocol stacks, e.g. servers in Minix.
● One device driver may manage multiple devices, and handles interrupts.
● Buffer management required!

Device Drivers
● Configure the device or device controller, i.e. must know specifics about the hardware.
● Respond to I/O requests from higher-level software: read, write, ioctl.
● Respond to interrupts, usually by reading status registers, writing to control registers, and transferring data (either via DMA, or by reading and writing data registers).

Memory Management
● Linear array of memory locations.
● Memory is either divided into fixed-sized units (e.g.
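As a toy illustration of the status/control register bullet above: the fragment below sketches a polled memory-mapped write. Every address and bit mask is made up for the example; it compiles anywhere but only means something against real hardware mapped at that address, and a real driver would sleep on an interrupt rather than busy-wait.

#include <stdint.h>

/* Entirely hypothetical device: a memory-mapped UART-like port with
 * one status register and one data register. */
#define DEV_BASE        ((uintptr_t)0x40001000u)      /* made-up MMIO base */
#define STATUS_REG      (*(volatile uint32_t *)(DEV_BASE + 0x0))
#define DATA_REG        (*(volatile uint32_t *)(DEV_BASE + 0x4))
#define STATUS_TX_READY 0x01u                         /* made-up ready bit */

/* Polled write: spin on the status register until the controller
 * reports ready, then write the data register. */
static void dev_putc(uint8_t c)
{
    while ((STATUS_REG & STATUS_TX_READY) == 0)
        ;                                   /* busy-wait on the status bit */
    DATA_REG = c;                           /* write through the data reg */
}

int main(void)
{
    const char *msg = "hi";

    for (const char *p = msg; *p; p++)
        dev_putc((uint8_t)*p);
    return 0;
}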
  • Theory of Operating Systems
Exam Review
● booting
● I/O hardware, DMA, I/O software
● device drivers
● memory (i.e. address space) management
● virtual memory

Booting
● Hardware is configured to execute a program in read-only memory (ROM) or flash memory: the BIOS (basic I/O system); UEFI is the current equivalent.
● The BIOS knows how to access all the disk drives, chooses one to boot (perhaps with user assistance), loads the first sector (512 bytes) into memory, and starts to execute it (jmp). The first sector often includes a partition table.

I/O hardware and DMA
● Electronics, and sometimes (disks, printers) moving parts.
● Status registers and control registers are read and set by CPU software; registers can directly control hardware, or be read and set by the device controller.
● The device controller can be instructed to do Direct Memory Access to transfer data to and from the CPU's memory; DMA typically uses physical addresses.

Structure of I/O software
● User programs request I/O: read/write, send/recv, etc.; daemons and servers work autonomously.
● Device-independent software converts the request to a device-dependent operation, and also handles requests from device drivers, e.g. file systems and protocol stacks, e.g. servers in Minix.
● One device driver may manage multiple devices, and handles interrupts.
● Buffer management required!

Device Drivers
● Configure the device or device controller, i.e. must know specifics about the hardware.
● Respond to I/O requests from higher-level software: read, write, ioctl.
● Respond to interrupts, usually by reading status registers, writing to control registers, and transferring data (either via DMA, or by reading and writing data registers).

Memory Management
● Linear array of memory locations.
● Memory is either divided into fixed-sized units (e.g.
  • Agent Based Load Balancing Scheme Using Affinity Processor Scheduling for Multicore Architectures
WSEAS TRANSACTIONS on COMPUTERS
G. Muneeswari, K. L. Shunmuganathan

Agent Based Load Balancing Scheme using Affinity Processor Scheduling for Multicore Architectures

G. MUNEESWARI
Research Scholar, Department of Computer Science and Engineering, RMK Engineering College, Anna University (Chennai), INDIA
[email protected]  http://www.rmkec.ac.in

Dr. K. L. SHUNMUGANATHAN
Professor & Head, Department of Computer Science and Engineering, RMK Engineering College, Anna University (Chennai), INDIA
[email protected]  http://www.rmkec.ac.in

Abstract: Multicore architecture, otherwise called CMP, has many processors packed together on a single chip and utilizes hyper-threading technology. Adding a large number of processor cores brings massive advancements in the parallel computing paradigm. The enormous performance enhancement in the multicore platform injects a lot of challenges into task allocation and load balancing on the processor cores. Altogether it is a crucial part from the operating system scheduling point of view. To harness this large computing capacity, efficient resource allocation schemes are needed. A multicore scheduler is a resource management component of a multicore operating system that focuses on distributing the load of some highly loaded processor to the lightly loaded ones, such that the overall performance of the system is maximized. We already proposed a hard-soft processor affinity scheduling algorithm that promises to minimize the average waiting time of the non-critical tasks in the centralized queue and avoids the context switching of critical tasks. In this paper we incorporate an agent-based load balancing scheme for the multicore processor using the hard-soft processor affinity scheduling algorithm. Since we use actual round-robin scheduling for non-critical tasks, and due to soft affinity, load balancing is done automatically for non-critical tasks.
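The round-robin dispatch of non-critical tasks that the abstract leans on can be shown with a toy centralized queue. In this sketch the task representation and time quantum are invented, and real load balancing would migrate tasks between per-CPU queues rather than loop over one array:

#include <stdio.h>

#define NTASKS  4
#define QUANTUM 2   /* invented time slice, in "ticks" */

/* Toy non-critical task: just a name and remaining work. */
struct task {
    const char *name;
    int remaining;  /* ticks of work left */
};

int main(void)
{
    struct task q[NTASKS] = {
        { "browser", 5 }, { "logger", 3 }, { "backup", 4 }, { "indexer", 1 },
    };
    int left = NTASKS;

    /* Centralized round-robin queue: each pass gives every task that
     * still has work at most one quantum, so no task waits behind a
     * long-running one -- the property the "average waiting time"
     * claim in the abstract rests on. */
    while (left > 0) {
        for (int i = 0; i < NTASKS; i++) {
            if (q[i].remaining <= 0)
                continue;
            int slice = q[i].remaining < QUANTUM ? q[i].remaining : QUANTUM;
            q[i].remaining -= slice;
            printf("run %-8s for %d tick(s), %d left\n",
                   q[i].name, slice, q[i].remaining);
            if (q[i].remaining == 0)
                left--;
        }
    }
    return 0;
}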