Understanding and Improving Device Access Complexity


Understanding and Improving Device Access Complexity
Asim Kadav (with Prof. Michael M. Swift)
University of Wisconsin-Madison

Devices enrich computers
★ Keyboard ★ Flash storage ★ Graphics ★ WiFi ★ Headphones ★ Sound ★ SD card ★ Printer ★ Camera ★ Network ★ Accelerometers ★ Storage ★ GPS ★ Touch display ★ NFC

Huge growth in the number of devices
★ New I/O devices: accelerometers, GPUs, GPS, touch
★ Many buses: USB, PCIe, Thunderbolt
★ Heterogeneous OS support: 10G Ethernet vs. card readers

Device drivers: the OS interface to devices
★ Expose device abstractions and hide device complexity
★ Expose kernel abstractions and hide OS complexity
★ Allow a diverse set of applications and OS services to access a diverse set of devices

Evolution of devices hurts device access
Needed: tools and mechanisms to address increasing device complexity.
★ Requirements: simplicity, low latency, reliability, and cost-effective, efficient device support in the OS
★ Challenges: growth in the number and diversity of devices; devices run in challenging environments; complex firmware and configuration modes; hardware failures (like CMOS configuration issues)

Growth in drivers hurts understanding of drivers
Lines of code in Linux 3.8:
★ memory mgmt: 60,000
★ kernel: 66,000
★ file systems: 760,000
★ device drivers: 6,700,000
Drivers contribute 60% of Linux kernel code, and more than 90% in Windows.
Goal: understand the software complexity and improve driver code.

Last decade: focus on the driver-kernel interface
Device drivers written by 3rd-party developers + the driver-kernel interface = a recipe for disaster.

Re-use lessons from existing driver research

  Improvement        System                            Validation: Drivers / Bus / Classes
  New functionality  Shadow driver migration [OSR 09]      1      1     1
                     RevNIC [Eurosys 10]                   1      1     1
  Reliability        Nooks [SOSP 03]                       6      1     2
                     XFI [OSDI 06]                         2      1     1
                     CuriOS [OSDI 08]                      2      1     2
  Type safety        SafeDrive [OSDI 06]                   6      2     3
                     Singularity [Eurosys 06]              1      1     1
  Specification      Nexus [OSDI 08]                       2      1     2
                     Termite [SOSP 09]                     2      1     2
  Static analysis    Windows SDV [Eurosys 06]             Many   Many  Many
                     Coverity [CACM 10]                   All    All   All
                     Coccinelle [Eurosys 08]              All    All   All

Limited kernel changes + applicability to lots of drivers => real impact.
Large kernel subsystems and validation on only a few device types result in limited adoption of research solutions.
Design goal: a complete solution that limits kernel changes and applies to all drivers.

Goal: Address software and hardware complexity
★ Understand and improve device access in the face of rising hardware and software complexity
1. Reliability against hardware failures
2. Low-latency device availability
3. Better understanding of driver code

My approach
★ Narrow approach: solve specific problems in all drivers (tolerate device failures)
★ Broad approach: take a holistic view of all drivers (understand drivers and potential opportunities)
★ Known approach applied to all drivers: a transactional approach for low-latency recovery
★ Minimize kernel changes and apply to all drivers

Contributions / Outline
★ Tolerate device failures (SOSP '09): the first research consideration of hardware failures in drivers
★ Understand drivers and potential opportunities (ASPLOS '12): the largest study of drivers to understand their behavior and verify research assumptions
★ Transactional approach for low-latency recovery (ASPLOS '13): introduce checkpoint/restore in drivers for low-latency fault tolerance

What happens when devices misbehave?
★ Drivers make it better: provide error detection and recovery
★ Drivers make it worse: assume perfect hardware, or panic when anything bad happens
What assumptions do modern drivers make about hardware?

Current state of OS-hardware interaction
★ Many device drivers assume device perfection. A common Linux network driver (3c59x.c):

    while (ioread16(ioaddr + Wn7_MasterStatus) & 0x8000)
        ;   /* HANG! */
Hardware dependence bug: a device malfunction can crash the system.

Sources of hardware misbehavior
★ Firmware/design bugs
★ Device wear-out, insufficient burn-in
★ Bridging faults
★ Electromagnetic interference, radiation, heat
(Misbehavior can originate anywhere on the path from driver to device: bus, cache, firmware, electrical, mechanical.)

Results of misbehavior
★ Corrupted/stuck-at inputs
★ Timing errors
★ Interrupt storms / missing interrupts
★ Incorrect memory access

The evidence: transient hardware failures caused 8% of all crashes and 9% of all unplanned reboots [1]
★ Systems work fine after reboots
★ Vendors report that the returned device was faultless
★ The existing solution is hand-coded hardened drivers: crashes drop from 8% to 3%
[1] Fault resilient drivers for Longhorn server, May 2004. Microsoft Corp.

How do hardware dependence bugs manifest?
1. Drivers use device data in critical control and data paths:

       printk("%s", msg[inb(regA)]);

2. Drivers do not report device malfunction to the system log:

       if (inb(regA) != 5) {
           return -22;  /* do nothing */
       }

3. Drivers do not detect or recover from device failures:

       if (inb(regA) != 5) {
           panic();
       }

Vendor recommendations for driver developers
(Recommending vendors among Intel, Sun, Microsoft, and Linux; counts as given.)

  Category    Recommendation                 Recommended by
  Validation  Input validation               3 of 4 vendors
              Read once & CRC data           3
              DMA protection                 2
  Timing      Infinite polling               3
              Stuck interrupt                1
              Lost request                   1
              Avoid excess delay in the OS   1
              Unexpected events              2
  Reporting   Report all failures            3
  Recovery    Handle all failures            2
              Cleanup correctly              2
              Do not crash on failure        3
              Wrap I/O memory access         4

Goal: automatically implement as many recommendations as possible in commodity drivers.
Carburizer [SOSP '09]
Goal: tolerate hardware device failures in software through hardware failure detection and recovery.
★ Static analysis component: detect and fix hardware dependence bugs; detect and generate missing error-reporting information
★ Runtime component: detect interrupt failures; provide automatic recovery

Carburizer architecture
★ Compile time: the Carburizer compiler detects bugs in driver source, generates automatic fixes, and emits a hardened driver binary
★ Run time: the Carburizer runtime in the OS kernel provides recovery and an interrupt watchdog, sitting at the kernel interface above the (possibly faulty) hardware

Hardening drivers
★ Goal: remove hardware dependence bugs
★ Find driver code that uses data from the device
★ Ensure the driver performs validity checks
★ Carburizer detects and fixes hardware bugs: unsafe array references, unsafe pointer references, system panic calls, and infinite polling

Finding sensitive code
★ First pass: identify tainted variables that contain data from the device
★ Types of device I/O:
  ★ Port I/O: inb/inw
  ★ Memory-mapped I/O: readl/readw
  ★ DMA buffers
  ★ Data from USB packets
Example (tainted variables: a, b, c, d, e):

    int test() {
        a = readl();
        b = inb();
        c = b;
        d = c + 2;
        return d;
    }
    int set() {
        e = test();
    }

Detecting risky uses of tainted variables
★ Second pass: identify risky uses of tainted variables
★ Example: infinite polling, where the driver waits for the device to enter a particular state
★ Solution: detect loops where all terminating conditions depend on tainted variables
★ Additional analyses account for existing timeouts

Infinite polling
★ Infinite polling of devices can cause system lockups:

    static int amd8111e_read_phy(...)
    {
        ...
        reg_val = readl(mmio + PHY_ACCESS);
        while (reg_val & PHY_CMD_ACTIVE)
            reg_val = readl(mmio + PHY_ACCESS);
        ...
    }
    (AMD 8111e network driver, amd8111e.c)
Hardware data used in an array reference
★ Tainted variables used as array indexes
★ Detect existing range / not-NULL checks

    static void __init attach_pas_card(...)
    {
        if ((pas_model = pas_read(0xFF88)))
        {
            ...
            sprintf(temp, "%s rev %d",
                pas_model_names[(int) pas_model], pas_read(0x2789));
            ...
    }
    (Pro Audio Sound driver, pas2_card.c)

Analysis results over the Linux kernel

  Driver class   Infinite polling   Static array   Dynamic array   Panic calls
  net                 117                 2              21               2
  scsi                298                31              22             121
  sound                64                 1               0               2
  video               174                 0              22              22
  other               381                 9              57              32
  Total               860                43              89             179

★ A lightweight and usable technique to find hardware dependence bugs
★ Analyzed/built 6,300 driver files (2.8 million LOC) in 37 minutes
★ Found 992 hardware dependence bugs in driver code
★ False positive rate: 7.4% (manual sampling of 190 bugs)

Repairing drivers
★ Carburizer automatically generates repair code
★ Inserts failure detection and a recovery-service callout: timeout checks for infinite polling, array bounds checks for unsafe array references, not-NULL checks for unsafe pointer references, and recovery callouts in place of system panic calls

Runtime fault recovery: shadow drivers (Swift [OSDI '04])
★ Carburizer calls a generic recovery service if a check fails
★ Low-cost transparent recovery, based on shadow drivers that tap the driver-kernel interface:
  ★ Records the state of the driver at all times
  ★ Transparently restarts the driver and replays the recorded state on failure
★ No isolation (as in Nooks) required

Carburizer automatically fixes infinite loops

    timeout = rdtscll(start) + (cpu_khz/HZ)*2;
    reg_val = readl(mmio + PHY_ACCESS);
    while (reg_val & PHY_CMD_ACTIVE)
    {
        reg_val = readl(mmio + PHY_ACCESS);
        if (_cur < timeout)
            rdtscll(_cur);
        else
            __recover_driver();    /* timeout code added */
    }
    (AMD 8111e network driver, amd8111e.c. Code simplified for
     presentation purposes.)
Carburizer automatically adds bounds checks

    static void __init attach_pas_card(...)
    {
        if ((pas_model = pas_read(0xFF88)))
    (Array bounds detected and ...)