• Programs
• Processes
• Context Switching
• Protected Mode Execution
• Inter-Process Communication
• Threads


Running Dynamic Code

• One basic function of an OS is to execute and manage code dynamically, e.g.:
  – A command issued at a command line terminal
  – An icon double-clicked from the desktop
  – Jobs/tasks run as part of a batch system (MapReduce)
• A process is the basic unit of a program in execution

How to Run a Program?

• When you double-click on an .exe, how does the OS turn the file on disk into a process?
• What information must the .exe file contain in order to run as a program?

Program Formats

• Programs obey specific file formats
  – CP/M and DOS: COM executables (*.com)
  – DOS: MZ executables (*.exe)
    • Named after Mark Zbikowski, a DOS developer
  – Windows: Portable Executable (PE, PE32+) (*.exe)
    • Modified version of the Unix COFF executable format
    • PE files start with an MZ header. Why?
  – Unix/Linux: Executable and Linkable Format (ELF)
  – Mac OS X: Mach object file format (Mach-O)

test.c

    #include <stdio.h>

    int big_big_array[10 * 1024 * 1024];
    char *a_string = "Hello, World!";
    int a_var_with_value = 100;

    int main(void) {
        big_big_array[0] = 100;
        printf("%s\n", a_string);
        a_var_with_value += 20;
        printf("main is : %p\n", &main);
        return 0;
    }

ELF File Format

• ELF Header
  – Contains compatibility info
  – Entry point of the executable code
• Program header table
  – Lists all the segments in the file
  – Used to load and execute the program
• Section header table
  – Used by the linker

ELF Header Format

• Entry point of the executable code: what should EIP be set to initially?

    typedef struct {
        unsigned char e_ident[EI_NIDENT];
        Elf32_Half    e_type;
        Elf32_Half    e_machine;      /* ISA of executable code */
        Elf32_Word    e_version;
        Elf32_Addr    e_entry;        /* Entry point of executable code */
        Elf32_Off     e_phoff;        /* Offset of program headers */
        Elf32_Off     e_shoff;        /* Offset of section headers */
        Elf32_Word    e_flags;
        Elf32_Half    e_ehsize;
        Elf32_Half    e_phentsize;
        Elf32_Half    e_phnum;        /* # of program headers */
        Elf32_Half    e_shentsize;
        Elf32_Half    e_shnum;        /* # of section headers */
        Elf32_Half    e_shstrndx;
    } Elf32_Ehdr;

ELF Header Example

    $ gcc -g -o test test.c
    $ readelf --header test
    ELF Header:
      Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
      Class:                             ELF64
      Data:                              2's complement, little endian
      Version:                           1 (current)
      OS/ABI:                            UNIX - System V
      ABI Version:                       0
      Type:                              EXEC (Executable file)
      Machine:                           Advanced Micro Devices X86-64
      Version:                           0x1
      Entry point address:               0x400460
      Start of program headers:          64 (bytes into file)
      Start of section headers:          5216 (bytes into file)
      Flags:                             0x0
      Size of this header:               64 (bytes)
      Size of program headers:           56 (bytes)
      Number of program headers:         9
      Size of section headers:           64 (bytes)
      Number of section headers:         36
      Section header string table index: 33

Investigating the Entry Point

    int main(void) {
        …
        printf("main is : %p\n", &main);
        return 0;
    }

    $ gcc -g -o test test.c
    $ readelf --headers ./test | grep 'Entry point'
      Entry point address: 0x400460
    $ ./test
    Hello, World!
    main is : 0x400544
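The readelf output above is just a pretty-printed view of the ELF header structure sitting at the start of the file. The following minimal sketch (not from the original slides; the file name read_ehdr.c and output formatting are arbitrary) memory-maps an executable, checks the ELF magic bytes in e_ident, and prints the same entry point and header counts. It assumes a 64-bit ELF on Linux with <elf.h> available and keeps error handling minimal.

    /* read_ehdr.c - minimal sketch: print a few ELF64 header fields.
     * Build: gcc -o read_ehdr read_ehdr.c    Run: ./read_ehdr ./test
     */
    #include <elf.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the whole file read-only; the ELF header sits at offset 0. */
        void *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); return 1; }

        Elf64_Ehdr *ehdr = (Elf64_Ehdr *)base;

        /* e_ident begins with the magic bytes 0x7f 'E' 'L' 'F'. */
        if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG) != 0) {
            fprintf(stderr, "not an ELF file\n");
            return 1;
        }

        printf("Entry point address:       0x%lx\n", (unsigned long)ehdr->e_entry);
        printf("Number of program headers: %u\n", (unsigned)ehdr->e_phnum);
        printf("Number of section headers: %u\n", (unsigned)ehdr->e_shnum);

        munmap(base, st.st_size);
        close(fd);
        return 0;
    }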
Entry point != &main

    $ ./test
    Hello, World!
    main is : 0x400544

    $ readelf --headers ./test | grep 'Entry point'
      Entry point address: 0x400460

• Most compilers insert extra code into compiled programs
• This code typically runs before and after main()

    $ objdump --disassemble -M intel ./test
    …
    0000000000400460 <_start>:
      400460: 31 ed                   xor    ebp,ebp
      400462: 49 89 d1                mov    r9,rdx
      400465: 5e                      pop    rsi
      400466: 48 89 e2                mov    rdx,rsp
      400469: 48 83 e4 f0             and    rsp,0xfffffffffffffff0
      40046d: 50                      push   rax
      40046e: 54                      push   rsp
      40046f: 49 c7 c0 20 06 40 00    mov    r8,0x400620
      400476: 48 c7 c1 90 05 40 00    mov    rcx,0x400590
      40047d: 48 c7 c7 44 05 40 00    mov    rdi,0x400544
      400484: e8 c7 ff ff ff          call   400450 <__libc_start_main@plt>
    …

Sections and Segments

(Diagram: multiple sections grouped into one segment.)

• Sections are the various pieces of code and data that get linked together by the compiler
• Each segment contains one or more sections
  – Each segment contains sections that are related
    • E.g. all code sections
  – Segments are the basic units for the loader

Common Sections

• Sections are the various pieces of code and data that compose a program
• Key sections:
  – .text – Executable code
  – .bss – Global variables initialized to zero
  – .data, .rodata – Initialized data and strings
  – .strtab – Names of functions and variables
  – .symtab – Debug symbols

Section Example

    int big_big_array[10*1024*1024];     /* Empty 10 MB array → .bss */
    char *a_string = "Hello, World!";    /* String variable → .data, string constant → .rodata */
    int a_var_with_value = 100;          /* Initialized global variable → .data */

    int main(void) {                     /* Code → .text */
        big_big_array[0] = 100;
        printf("%s\n", a_string);
        a_var_with_value += 20;
        …
    }

    $ readelf --headers ./test
    …
     Section to Segment mapping:
      Segment Sections...
       00
       01     .interp
       02     .interp .note.ABI-tag .note.gnu.build-id .gnu.hash .dynsym .dynstr .gnu.version
              .gnu.version_r .rela.dyn .rela.plt .init .plt .text .fini .rodata .eh_frame_hdr .eh_frame
       03     .ctors .dtors .jcr .dynamic .got .got.plt .data .bss
       04     .dynamic
       05     .note.ABI-tag .note.gnu.build-id
       06     .eh_frame_hdr
       07
       08     .ctors .dtors .jcr .dynamic .got
    …
    There are 36 section headers, starting at offset 0x1460:

    Section Headers:
      [Nr] Name              Type      Address   Offset    Size      ES Flags Link Info Align
      [ 0]                   NULL      00000000  00000000  00000000  00        0    0    0
      [ 1] .interp           PROGBITS  00400238  00000238  0000001c  00  A     0    0    1
      [ 2] .note.ABI-tag     NOTE      00400254  00000254  00000020  00  A     0    0    4
      [ 3] .note.gnu.build-i NOTE      00400274  00000274  00000024  00  A     0    0    4
      [ 4] .gnu.hash         GNU_HASH  00400298  00000298  0000001c  00  A     5    0    8
      [ 5] .dynsym           DYNSYM    004002b8  000002b8  00000078  18  A     6    1    8
      [ 6] .dynstr           STRTAB    00400330  00000330  00000044  00  A     0    0    1
      [ 7] .gnu.version      VERSYM    00400374  00000374  0000000a  02  A     5    0    2
    …

.text Example Header

    $ readelf --sections ./test
    ...
    Section Headers:
      [Nr] Name    Type      Address   Offset    Size      ES Flags Link Info Align
      …
      [13] .text   PROGBITS  00400460  00000460  00000218  00  AX    0    0    16
      …

• Address – where to load the section in memory
• Offset – where the section's data lives in the file
• Size – how many bytes (in hex) are in the section
• Flags AX – the section holds data for the program (allocated) and is executable

.bss Example Header

    int big_big_array[10*1024*1024];

    $ readelf --sections ./test
    ...
    Section Headers:
      [Nr] Name      Type      Address   Offset    Size      ES Flags Link Info Align
      …
      [25] .bss      NOBITS    00601040  00001034  02800020  00  WA    0    0    32
      [26] .comment  PROGBITS  00000000  00001034  0000002a  01  MS    0    0    1
      …

• Type NOBITS – the section contains no data in the file; notice its on-disk length is 0 (.bss and the following .comment section share the same file offset)
• Size 02800020 – hex(4 * 10 * 1024 * 1024) = 0x2800000, so big_big_array accounts for almost the entire section
• Flags WA – the section is writable and allocated in memory
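To see where the Type, Size, and Flags columns in the listings above come from, the sketch below (a hypothetical helper, not part of the slides; the file name list_sections.c is made up) walks the section header table the way readelf --sections does: it finds the table via e_shoff, resolves section names through the string table indexed by e_shstrndx, and marks NOBITS sections such as .bss that occupy no bytes in the file. It assumes a 64-bit ELF and omits most error handling for brevity.

    /* list_sections.c - minimal sketch: walk the ELF64 section header table.
     * Build: gcc -o list_sections list_sections.c    Run: ./list_sections ./test
     */
    #include <elf.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        char *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); return 1; }

        Elf64_Ehdr *ehdr = (Elf64_Ehdr *)base;
        /* The section header table lives at file offset e_shoff. */
        Elf64_Shdr *shdrs = (Elf64_Shdr *)(base + ehdr->e_shoff);
        /* Section names are offsets into the section-name string table (.shstrtab),
         * whose own section header is indexed by e_shstrndx. */
        const char *shstrtab = base + shdrs[ehdr->e_shstrndx].sh_offset;

        for (int i = 0; i < ehdr->e_shnum; i++) {
            Elf64_Shdr *sh = &shdrs[i];
            /* SHT_NOBITS sections (like .bss) occupy sh_size bytes in memory
             * but no bytes in the file. */
            printf("[%2d] %-20s type=%-2u size=0x%-8lx %s\n",
                   i, shstrtab + sh->sh_name, (unsigned)sh->sh_type,
                   (unsigned long)sh->sh_size,
                   sh->sh_type == SHT_NOBITS ? "(no data in file)" : "");
        }

        munmap(base, st.st_size);
        close(fd);
        return 0;
    }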
Segments

• Each segment contains one or more sections
  – All of the sections in a segment are related, e.g.:
    • All sections contain compiled code
    • Or, all sections contain initialized data
    • Or, all sections contain debug information
    • … etc.
• Segments are used by the loader to:
  – Place data and code in memory
  – Determine memory permissions (read/write/execute)

Segment Header

    typedef struct {
        Elf32_Word p_type;      /* Type of segment */
        Elf32_Off  p_offset;    /* Offset within the ELF file for the segment data */
        Elf32_Addr p_vaddr;     /* Location to load the segment into memory */
        Elf32_Addr p_paddr;
        Elf32_Word p_filesz;    /* Size of the segment data on disk */
        Elf32_Word p_memsz;     /* Size of the segment in memory */
        Elf32_Word p_flags;     /* Flags describing the segment data, e.g. executable, read-only */
        Elf32_Word p_align;
    } Elf32_Phdr;

    $ readelf --segments ./test

    Elf file type is EXEC (Executable file)
    Entry point 0x400460
    There are 9 program headers, starting at offset 64

    Program Headers:
      Type          Offset     VirtAddr   PhysAddr   FileSiz    MemSiz     Flags Align
      PHDR          0x00000040 0x00400040 0x00400040 0x000001f8 0x000001f8 R E   8
      INTERP        0x00000238 0x00400238 0x00400238 0x0000001c 0x0000001c R     1
      LOAD          0x00000000 0x00400000 0x00400000 0x0000077c 0x0000077c R E   200000
      LOAD          0x00000e28 0x00600e28 0x00600e28 0x0000020c 0x02800238 RW    200000
      DYNAMIC       0x00000e50 0x00600e50 0x00600e50 0x00000190 0x00000190 RW    8
      NOTE          0x00000254 0x00400254 0x00400254 0x00000044 0x00000044 R     4
      GNU_EH_FRAME  0x000006a8 0x004006a8 0x004006a8 0x0000002c 0x0000002c R     4
      GNU_STACK     0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 RW    8
      GNU_RELRO     0x00000e28 0x00600e28 0x00600e28 0x000001d8 0x000001d8 R     1

     Section to Segment mapping:
      Segment Sections...
       00
       01     .interp
       02     .interp .note.ABI-tag .note.gnu.build-id .gnu.hash .dynsym .dynstr .gnu.version
              .gnu.version_r .rela.dyn .rela.plt .init .plt .text .fini .rodata .eh_frame_hdr .eh_frame
       03     .ctors .dtors .jcr .dynamic .got .got.plt .data .bss
       04     .dynamic
      …

What About Static Data?

For the test.c program shown earlier, the string constants are stored directly in the executable file:

    $ strings -t d ./test
        568 /lib64/ld-linux-x86-64.so.2
        817 __gmon_start__
        832 libc.so.6
        842 puts
        847 printf
        854 __libc_start_main
        872 GLIBC_2.2.5
       1300 fff.
       1314 =
       1559 l$ L
       1564 t$(L
       1569 |$0H
       1676 Hello, World!
       1690 main is : %p
       1807 ;*3$"

The Program Loader

(Diagram: the ELF file's segments placed into process memory, with .text, .data, .rodata, and an expanded .bss at the bottom, the heap above them, and the stack, pointed to by ESP, at the top.)

• OS functionality that loads programs into memory and creates processes
  – Places segments into memory
    • Expands segments like .bss
  – Loads necessary dynamic libraries
  – Performs relocation
  – Allocates the initial stack frame
  – Sets EIP to the entry point of the executable (e_entry)
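As a rough illustration of the "places segments into memory" and "expands segments like .bss" steps, here is a simplified sketch of the PT_LOAD mapping loop a loader performs. It is not a real loader (it ignores dynamic linking, relocation, ASLR, and stack/auxiliary-vector setup, assumes 4 KB pages, and assumes the executable's fixed load addresses are free in the current address space), but it shows how p_filesz bytes are copied from the file while the remaining p_memsz - p_filesz bytes (the .bss) are simply left zero-filled, and where e_entry finally comes into play. The function and macro names are made up for this example.

    /* load_segments.c - simplified sketch of the PT_LOAD mapping step of a loader.
     * Not a working loader: no dynamic linking, relocation, or stack setup. */
    #include <elf.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Assumes 4 KB pages. */
    #define PAGE_ALIGN_DOWN(x) ((x) & ~0xfffUL)

    static int prot_from_flags(Elf64_Word p_flags)
    {
        /* Translate ELF segment flags (R/W/X) into memory protection bits. */
        return ((p_flags & PF_R) ? PROT_READ  : 0) |
               ((p_flags & PF_W) ? PROT_WRITE : 0) |
               ((p_flags & PF_X) ? PROT_EXEC  : 0);
    }

    /* 'file' points to the whole ELF image, already read into memory.
     * Returns the entry point address, or NULL on failure. */
    void *load_segments(const unsigned char *file)
    {
        const Elf64_Ehdr *ehdr  = (const Elf64_Ehdr *)file;
        const Elf64_Phdr *phdrs = (const Elf64_Phdr *)(file + ehdr->e_phoff);

        for (int i = 0; i < ehdr->e_phnum; i++) {
            const Elf64_Phdr *ph = &phdrs[i];
            if (ph->p_type != PT_LOAD)
                continue;                    /* only loadable segments are mapped */

            /* Reserve zeroed, writable memory covering [p_vaddr, p_vaddr + p_memsz). */
            unsigned long start = PAGE_ALIGN_DOWN(ph->p_vaddr);
            unsigned long len   = ph->p_vaddr + ph->p_memsz - start;
            void *mem = mmap((void *)start, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
            if (mem == MAP_FAILED)
                return NULL;

            /* Copy the on-disk part of the segment (p_filesz bytes)... */
            memcpy((void *)ph->p_vaddr, file + ph->p_offset, ph->p_filesz);
            /* ...and leave the remaining p_memsz - p_filesz bytes zero-filled:
             * this is how .bss gets "expanded" without occupying file space. */

            /* Apply the final permissions (e.g. R+X for the code segment). */
            mprotect((void *)start, len, prot_from_flags(ph->p_flags));
        }

        /* A real loader would now set up the stack and jump here. */
        return (void *)ehdr->e_entry;
    }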