Review: Hardware User/Kernel Boundary

[Figure: three applications with their libraries run in user mode; system calls and page faults cross into the kernel's top half (FS, VM, sockets); below that sit the context-switch code, software interrupts (TCP retransmits, ...), and device interrupts from the disks and NIC]

• Processor can be in one of two modes
  - user mode – application software & libraries
  - kernel (supervisor/privileged) mode – for the OS kernel
• Privileged instructions only available in kernel mode

Kernel execution contexts

• Top half of kernel
  - Kernel code acting on behalf of the current user-level process
  - System call, page fault handler, kernel-only process, etc.
  - This is the only kernel context in which you can sleep (e.g., if you need to wait for more memory, a buffer, etc.)
• Software interrupt – TCP/IP protocol processing
• Device interrupt – network, disk, ...
  - External hardware causes the CPU to jump to an OS entry point
• Timer interrupt (hardclock)
• Context switch code
  - Switches the current process (thread switch for the top half)

Transitions between contexts

• User → top half: syscall, page fault
• User/top half → device/timer interrupt: hardware
• Top half → user/context switch: return
• Top half → context switch: sleep
• Context switch → user/top half

Context switches are expensive

• Modern CPUs are very complex
  - Deep pipelining to maximize use of functional units
  - Dynamically scheduled, speculative & out-of-order execution
• Most context switches flush the pipeline
  - E.g., after a memory fault, can't execute further instructions
• Re-enabling hardware interrupts is expensive
• Kernel must set up its execution context
  - Cannot trust user registers, especially the stack pointer
  - Sets up its own stack, saves user registers
• Switching between address spaces is expensive
  - Invalidates cached virtual memory translations (discussed in a few slides)

Top/bottom half synchronization

• Top half kernel procedures can mask interrupts:

    int x = splhigh();
    /* ... */
    splx(x);

• splhigh disables all interrupts, but there are also splnet, splbio, splsoftnet, ...
  - A hardware interrupt handler sets the mask automatically
• Example: splnet implies splsoftnet
  - An Ethernet packet arrives; the driver code flags that TCP/IP processing is needed
  - Upon return from the handler, the TCP code is invoked at softnet level
• Masking interrupts in hardware can be expensive
  - Optimistic implementation – set a mask flag on splhigh, check an interrupted flag on splx (see the sketch below)
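The optimistic scheme in the last bullet can be made concrete with a small C sketch. This is not from the slides: the flag names and the run_deferred_handlers() helper are invented for illustration, and a real implementation must also close the race between an interrupt's arrival and the flag checks.

    /*
     * Sketch of optimistic interrupt masking: splhigh() only sets a
     * software flag instead of touching the (expensive) hardware mask;
     * an interrupt that arrives while the flag is set is recorded and
     * deferred, and splx() runs the deferred work when the flag clears.
     */
    static volatile int masked;          /* "interrupts are masked" flag      */
    static volatile int intr_pending;    /* an interrupt arrived while masked */

    void run_deferred_handlers(void);    /* assumed: run postponed handler(s) */

    int splhigh(void)
    {
        int old = masked;
        masked = 1;                      /* no hardware register touched */
        return old;
    }

    void splx(int old)
    {
        masked = old;
        if (!masked && intr_pending) {   /* something slipped in meanwhile */
            intr_pending = 0;
            run_deferred_handlers();
        }
    }

    /* Called from the low-level interrupt entry point. */
    void hardware_intr(void)
    {
        if (masked) {
            intr_pending = 1;            /* defer until the matching splx() */
            return;
        }
        run_deferred_handlers();         /* not masked: handle immediately */
    }

The payoff is that the common case, no interrupt during the critical section, costs only a couple of ordinary memory writes.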
Process context

[Figure: the same picture, with per-process state labeled P1, P2, P3 – each process has its own application, library, and top-half kernel context (syscalls, page faults, FS, VM, sockets) above the shared context-switch code, software interrupts (TCP retransmits, ...), and device interrupts (disks, NIC)]

• Kernel gives each program its own context
• Isolates processes from each other
  - So one buggy process cannot crash others

Virtual memory

• Need fault isolation between processes
• Want each process to have its own view of memory
  - Otherwise, it is a pain to allocate large contiguous structures
  - Processes can consume more than the available memory
  - Dormant processes (waiting for an event) still have core images
• Solution: virtual memory
  - Give each program its own address space, i.e., address 0x8000 goes to different physical memory in P1 and P2
  - CPU must be in kernel mode to manipulate mappings
  - Isolation between processes is natural

Paging

• Divide memory up into small pages
• Map virtual pages to physical pages
  - Each process has a separate mapping
• Allow the OS to gain control on certain operations
  - Read-only pages trap to the OS on write
  - Invalid pages trap to the OS on any access
  - The OS can change the mapping and resume the application
• Other features sometimes found:
  - Hardware can set a "dirty" bit
  - Control over caching of a page

Example: Paging on x86

• Page size is 4 KB (in the most generally used mode)
• Page table: 1,024 32-bit translations, covering 4 MB of virtual memory
• Page directory: 1,024 pointers to page tables
• %cr3 – page table base register
• %cr0 – bits enable protection and paging
• INVLPG – tells the hardware a page-table entry was modified

[Figure: a 32-bit linear address splits into a 10-bit directory index (bits 31–22), a 10-bit table index (bits 21–12), and a 12-bit offset (bits 11–0); CR3 (the PDBR) points to the page directory, whose entries point to page tables, giving 1,024 PDEs × 1,024 PTEs = 2^20 pages. A page-table entry for a 4-KB page holds the 20-bit page base address plus flag bits: Present, Read/Write, User/Supervisor, Write-Through, Cache Disabled, Accessed, Dirty, Page Table Attribute Index, Global, and bits available for the system programmer]

(A C sketch of this address split appears after the MIPS example below.)

Example: MIPS

• Hardware has a 64-entry TLB
  - References to addresses not in the TLB trap to the kernel
• Each TLB entry has the following fields: virtual page, PID, page frame, NC, D, V, Global
• The kernel itself is unpaged
  - All of physical memory is contiguously mapped in high VM
  - The kernel uses these pseudo-physical addresses
• The user TLB fault handler is very efficient
  - Two hardware registers are reserved for it
  - The utlb miss handler can itself fault, allowing paged page tables
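To make the 10/10/12 split of the x86 example concrete, here is a small C sketch of the translation the MMU performs for a 4-KB page. It is illustrative only: phys_read32() is a hypothetical helper for reading physical memory, only the Present bit is checked, and returning 0 stands in for raising a page fault.

    #include <stdint.h>

    #define PG_PRESENT 0x1u                 /* Present bit in PDEs and PTEs */

    uint32_t phys_read32(uint32_t paddr);   /* assumed: read 32 bits of physical memory */

    /* Translate a 32-bit linear address; returns 0 to stand in for a page fault. */
    uint32_t translate(uint32_t cr3, uint32_t linear)
    {
        uint32_t dir    = (linear >> 22) & 0x3ff;   /* bits 31-22: page directory index */
        uint32_t table  = (linear >> 12) & 0x3ff;   /* bits 21-12: page table index     */
        uint32_t offset = linear & 0xfff;           /* bits 11-0:  offset within page   */

        uint32_t pde = phys_read32((cr3 & ~0xfffu) + 4 * dir);
        if (!(pde & PG_PRESENT))
            return 0;                               /* page table not present: fault */

        uint32_t pte = phys_read32((pde & ~0xfffu) + 4 * table);
        if (!(pte & PG_PRESENT))
            return 0;                               /* page not present: fault */

        return (pte & ~0xfffu) | offset;            /* 20-bit frame base + offset */
    }

For instance, linear address 0x08048080 decomposes into directory index 0x20, table index 0x48, and offset 0x80.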
Memory and I/O buses

[Figure: the CPU and memory are connected through a crossbar; devices sit on a separate I/O bus (bus bandwidths of 1,880 Mbps and 1,056 Mbps shown)]

• CPU accesses physical memory over a bus
• Devices access memory over the I/O bus with DMA
• Devices can appear to be a region of memory

What is memory?

• SRAM – static RAM
  - Like two NOT gates circularly wired input-to-output
  - 4–6 transistors per bit, actively holds its value
  - Very fast; used to cache slower memory
• DRAM – dynamic RAM
  - A capacitor + gate holds charge to indicate the bit value
  - 1 transistor per bit – extremely dense storage
  - Charge leaks – need a slow comparator to decide if a bit is 1 or 0
  - Must re-write the charge after reading, or periodically refresh
• VRAM – "video RAM"
  - Dual-ported: one agent can write while another reads

Communicating with a device

• Memory-mapped device registers
  - Certain physical addresses correspond to device registers
  - Loads/stores get status or send instructions – they do not touch real memory
• Device memory – the device may have memory the OS can write to directly, on the other side of the I/O bus
• Special I/O instructions
  - Some CPUs (e.g., x86) have special I/O instructions
  - Like load & store, but they assert a special I/O pin on the CPU
  - The OS can allow user-mode access to I/O ports at finer granularity than a page
• DMA – place instructions to the card in main memory
  - Typically then need to "poke" the card by writing to a register

DMA buffers

[Figure: a buffer descriptor list pointing to memory buffers of 100, 1,400, 1,500, 1,500, ..., 1,500 bytes]

• Include a list of buffer locations in main memory
• The card reads the list, then accesses the buffers (with DMA)
  - Allows for scatter/gather I/O

Anatomy of a Network Interface Card

[Figure: the adaptor sits between the network link, reached through its link interface, and the host I/O bus, reached through its bus interface]

• The link interface talks to the wire/fiber/antenna
  - Typically does framing and the link-layer CRC
• FIFOs on the card provide a small amount of buffering
• Bus interface logic moves packets to/from memory or the CPU

Driver architecture

• A device driver provides several entry points to the kernel
  - Reset, output, interrupt, ...
• How should the driver synchronize with the card?
  - It needs to know when transmit buffers free up or packets arrive
• One approach: polling
  - Sent a packet? Loop asking the card when the buffer is free
  - Waiting to receive? Keep asking the card whether it has a packet
• Disadvantages of polling
  - Can't use the CPU for anything else while polling
  - Or schedule a poll in the future and do something else, but then there is high latency to receive a packet

Interrupt-driven devices

• Instead, ask the card to interrupt the CPU on events
  - The interrupt handler runs at high priority
  - It asks the card what happened (transmit buffer free, new packet)
  - This is what most general-purpose OSes do
• Problem: bad for high-throughput scenarios
  - Interrupts are very expensive (context switch)
  - Interrupt handlers have high priority
  - In the worst case, the CPU can spend 100% of its time in the interrupt handler and never make any progress – receive livelock
• Best: an adaptive algorithm that switches between interrupts and polling (a sketch follows below)
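To tie the descriptor list, the doorbell register, and the interrupt-vs-polling discussion together, here is a hypothetical receive path in C. Every name (struct layouts, register offsets, the RX budget, helpers such as deliver_packet() and schedule_poll()) is invented for illustration; no real NIC or driver works exactly like this, but the shape matches the adaptive scheme recommended above.

    #include <stdint.h>

    #define NRXDESC   64
    #define RX_BUDGET 16            /* max packets handled per interrupt */

    struct rx_desc {                /* one entry in the buffer descriptor list */
        uint32_t buf_paddr;         /* physical address of a receive buffer    */
        uint16_t len;               /* filled in by the card via DMA           */
        uint16_t flags;             /* DESC_DONE set by the card when full     */
    };
    #define DESC_DONE 0x1

    struct nic {
        volatile uint32_t *regs;    /* memory-mapped device registers      */
        struct rx_desc *rx_ring;    /* descriptor list, visible to the card */
        unsigned rx_head;           /* next descriptor to examine          */
        int polling;                /* 1 while interrupts are disabled     */
    };

    /* hypothetical register offsets (in 32-bit words) */
    enum { REG_INTR_ENABLE = 0, REG_RX_DOORBELL = 1 };

    void deliver_packet(uint32_t buf_paddr, uint16_t len);  /* hand to protocol code   */
    void schedule_poll(struct nic *nic);                    /* run the drain again soon */

    /* Drain up to 'budget' completed descriptors; return how many we handled. */
    static int rx_drain(struct nic *nic, int budget)
    {
        int handled = 0;
        while (handled < budget) {
            struct rx_desc *d = &nic->rx_ring[nic->rx_head];
            if (!(d->flags & DESC_DONE))
                break;                              /* no more packets for now */
            deliver_packet(d->buf_paddr, d->len);
            d->flags = 0;                           /* give descriptor back to card */
            nic->rx_head = (nic->rx_head + 1) % NRXDESC;
            handled++;
        }
        nic->regs[REG_RX_DOORBELL] = 1;             /* poke card: descriptors recycled */
        return handled;
    }

    /* Interrupt handler: do a bounded amount of work, then either re-enable
     * the card's interrupt or fall back to polling if it is keeping us busy. */
    void nic_intr(struct nic *nic)
    {
        nic->regs[REG_INTR_ENABLE] = 0;             /* mask the card's interrupt */
        if (rx_drain(nic, RX_BUDGET) == RX_BUDGET) {
            nic->polling = 1;                       /* heavy load: keep polling  */
            schedule_poll(nic);
        } else {
            nic->polling = 0;
            nic->regs[REG_INTR_ENABLE] = 1;         /* light load: back to interrupts */
        }
    }

The key design point is the bounded budget: under light load the driver behaves like an ordinary interrupt-driven driver, while under heavy load it keeps the card's interrupt masked and polls from a schedulable context, avoiding receive livelock.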
