Implements BIOS emulation support for BHyVe: A BSD Hypervisor

Abstract

Current BHyVe only supports FreeBSD/amd64 as a guest OS. One of the reasons BHyVe cannot support other OSes is its lack of BIOS support. My project is to implement a BIOS emulator on BHyVe to remove this limitation.

1. Background

1.1 History of virtualization on the x86 architecture

There is a famous set of requirements called the "Popek & Goldberg virtualization requirements"[1], which defines a set of conditions sufficient for an architecture to support virtualization efficiently. Efficient virtualization means virtualizing the machine without full CPU emulation, running guest code natively.

Stated simply, for an architecture to be virtualizable, all sensitive instructions must be privileged instructions. A sensitive instruction is an instruction that can interfere with the global state of the system. This means that every sensitive instruction executed in user mode must be trapped by a privileged-mode program. Without this property, a guest OS can affect the host OS system state and cause a system crash.

The x86 architecture did not meet this requirement, because it had non-privileged sensitive instructions. To virtualize the architecture efficiently, hypervisors needed to avoid executing these instructions directly and to replace them with suitable operations. There were several approaches to implementing this.

In the VMware approach, the hypervisor replaces problematic sensitive instructions on the fly while the guest machine is running. This approach is called Binary Translation[2]. It can run most unmodified OSes, but it carries some performance overhead.

In the Xen approach, the hypervisor requires a pre-modified guest OS in which the problematic sensitive instructions have been replaced with dedicated operations called hypercalls. This approach is called Para-virtualization[3]. It has less performance overhead than Binary Translation under some conditions, but it requires a pre-modified guest OS.

Due to the increasing popularity of virtualization on x86 machines, Intel decided to enhance the x86 architecture to make it virtualizable. The feature is called Intel VT-x, or Hardware-Assisted Virtualization, which is the vendor-neutral term. AMD also developed a hardware-assisted virtualization feature for its own CPUs, called AMD-V.

[1] Gerald J. Popek and Robert P. Goldberg. 1974. Formal requirements for virtualizable third generation architectures. Commun. ACM 17, 7 (July 1974), 412-421. DOI=10.1145/361011.361073 http://doi.acm.org/10.1145/361011.361073
[2] Brian Walters. 1999. VMware Virtual Platform. Linux J. 1999, 63es, Article 6 (July 1999).
[3] Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauer, Ian Pratt, and Andrew Warfield. 2003. Xen and the art of virtualization. In Proceedings of the nineteenth ACM symposium on Operating systems principles (SOSP '03). ACM, New York, NY, USA, 164-177. DOI=10.1145/945445.945462 http://doi.acm.org/10.1145/945445.945462

1.2 Detail of Intel VT-x

VT-x provides a new protection model for virtualization, isolated from the existing ring protection. It adds two CPU modes: a hypervisor mode and a guest machine mode. The hypervisor mode is called VMX root mode, and the guest machine mode is called VMX non-root mode (Figure 1).

[Figure 1. VMX root mode and VMX non-root mode; each mode has its own User (Ring 3) and Kernel (Ring 0) privilege levels.]

With VT-x, the hypervisor can run a guest OS in VMX non-root mode without any modification, including its sensitive instructions, and without affecting the host OS system state. When a sensitive instruction is executed in VMX non-root mode, the CPU stops executing in VMX non-root mode and exits to VMX root mode. The instruction is then trapped by the hypervisor, and the hypervisor emulates the instruction the guest tried to execute.

The mode change from VMX root mode to VMX non-root mode is called VMEntry, and the change from VMX non-root mode to VMX root mode is called VMExit (Figure 2).

[Figure 2. VMEntry and VMExit transitions between VMX root mode and VMX non-root mode.]

For example, the IN/OUT instructions cause a VMExit, and the hypervisor emulates the virtual device access. VT-x defines a number of events that can cause a VMExit, and the hypervisor needs to configure whether each VMExit event is enabled or disabled. Some events other than sensitive instructions, which also need to be intercepted by the hypervisor, cause VMExits as well.
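To make the trap-and-emulate flow concrete, here is a minimal sketch of how a VT-x hypervisor might dispatch an I/O-instruction VMExit to its device-emulation code. The structures and helper names (struct vm_exit, decode_inout, handle_vmexit) are hypothetical, not BHyVe's actual interfaces; the exit-reason number and exit-qualification layout follow the Intel SDM, but treat them here as illustrative.

    /*
     * Illustrative sketch of dispatching an I/O-instruction VMExit.
     * struct vm_exit, decode_inout() and handle_vmexit() are hypothetical;
     * real hypervisors such as BHyVe define their own structures.
     */
    #include <stdbool.h>
    #include <stdint.h>

    #define EXIT_REASON_INOUT 30    /* basic exit reason for IN/OUT */

    struct vm_exit {
        uint32_t exit_reason;          /* from the VM-exit information fields */
        uint64_t exit_qualification;   /* describes the exiting instruction */
    };

    struct inout_info {
        uint16_t port;    /* I/O port number */
        uint8_t  bytes;   /* access size: 1, 2 or 4 */
        bool     in;      /* true for IN, false for OUT */
    };

    /* Decode the exit qualification of an I/O-instruction exit (simplified). */
    static void decode_inout(const struct vm_exit *exit, struct inout_info *io)
    {
        uint64_t q = exit->exit_qualification;

        io->bytes = (uint8_t)((q & 0x7) + 1);
        io->in    = (q >> 3) & 0x1;
        io->port  = (uint16_t)(q >> 16);
    }

    /* Called in VMX root mode after a VMExit.  Returns true if handled. */
    bool handle_vmexit(const struct vm_exit *exit)
    {
        switch (exit->exit_reason) {
        case EXIT_REASON_INOUT: {
            struct inout_info io;

            decode_inout(exit, &io);
            /* Forward the access to the virtual device behind io.port,
             * then advance the guest RIP past the IN/OUT instruction. */
            return true;
        }
        default:
            /* Other reasons: CPUID, control-register access, ... */
            return false;
        }
    }

In a real hypervisor, the set of cases this dispatcher must understand corresponds to the list of VMExit reasons below.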
The cause of a VMExit is called the VMExit reason, and reasons are classified by the kind of event. Here is the list of VMExit reasons:

• Exception or NMI
• External interrupt
• Triple fault
• INIT signal received
• SIPI received
• SMI received
• Internal interrupt
• Task switch
• CPUID instruction
• Intel SMX instructions
• Cache operation instructions (INVD, WBINVD)
• TLB operation instructions (INVLPG, INVPCID)
• I/O operation instructions (INB, OUTB, etc.)
• Performance monitoring counter operation instructions (RDTSC)
• SMM related instruction (RSM)
• VT-x instructions (these can be used to implement nested virtualization)
• Accesses to control registers
• Accesses to debug registers
• Accesses to MSRs
• MONITOR/MWAIT instructions
• PAUSE instruction
• Accesses to the Local APIC
• Accesses to GDTR, IDTR, LDTR, TR
• VMX preemption timer
• RDRAND instruction

All configuration data related to VT-x is stored in the VMCS (Virtual Machine Control Structure), an in-memory data structure maintained for each guest machine[4]. Figure 3 shows the VMCS structure.

[4] If a guest system has two or more virtual CPUs, a VMCS is needed for each vCPU.

Figure 3. Structure of the VMCS:
• VMCS revision identifier: the revision number of the VMCS data format.
• VMX-abort indicator: the error code of a VMExit failure.
• VMCS data:
  - Guest-state area: an area for saving/restoring guest registers. Register saving/restoring is performed automatically at VMExit/VMEntry. (Not all registers are covered; some must be saved by the hypervisor manually.) This area also saves some non-register state, such as instruction blocking state.
  - Host-state area: an area for saving/restoring hypervisor registers. Its usage is almost identical to the Guest-state area.
  - VM-execution control fields: fields that control processor behavior in VMX non-root operation. The events that cause VMExits are configured here.
  - VM-exit control fields: fields that control processor behavior on VMExit.
  - VM-entry control fields: fields that control processor behavior on VMEntry. Enabling/disabling 64-bit mode is configured here.
  - VM-exit information fields: the VMExit reason is stored here.

1.3 VT-x enabled hypervisor lifecycle

Hypervisors for VT-x work with the following lifecycle (Figure 4); a code sketch of the resulting run loop follows the list.

1. VT-x enabling
   VT-x features must be enabled before they can be used. To enable them, you set the VMXE bit in the CR4 register and invoke the VMXON instruction.

2. VMCS initialization
   The VMCS is a 4KB-aligned 4KB page. You notify the CPU of the page address by invoking the VMPTRLD instruction, then write the initial configuration values with the VMWRITE instruction. The initial register values also need to be written here; in BHyVe this is done by /usr/sbin/bhyveload.

3. VMEntry to VMX non-root mode
   Entry to VMX non-root mode is performed by invoking the VMLAUNCH or VMRESUME instruction. On the first launch you must use VMLAUNCH; after that you use VMRESUME. Before the entry operation, you need to save the host OS registers and restore the guest OS registers. VT-x only offers minimal automatic save/restore; the remaining registers must be taken care of manually.

4. Run guest machine
   The CPU runs in VMX non-root mode, and the guest machine works natively.

5. VMExit for some exit reason
   When some event causes a VMExit, the CPU returns to VMX root mode. You need to save/restore registers first, then check the VMExit reason.

6. Do emulation for the exit reason
   If the VMExit reason is an event that requires emulation in the hypervisor, perform the emulation (for example, the guest OS wrote data to the HDD). Depending on host OS scheduling, the hypervisor may resume the VM by starting again from step 3, or task-switch to another process (step 7 in Figure 4).

[Figure 4. VT-x enabled hypervisor lifecycle: 1. VT-x enabling -> 2. VMCS initialization -> 3. VMEntry to VMX non-root mode -> 4. Run guest machine -> 5. VMExit for some exit reason -> 6. Do emulation for the exit reason -> back to 3, or 7. Run another process.]
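Putting the six lifecycle steps together, the hypervisor's per-vCPU control flow is essentially a run loop. The sketch below shows its overall shape; vmxon(), vmptrld(), vmwrite(), vmlaunch(), vmresume() and the register save/restore and exit-handling helpers are hypothetical wrappers standing in for the architecture-specific assembly a real hypervisor uses, so this is an outline of the lifecycle rather than working bhyve code.

    /*
     * Sketch of a VT-x hypervisor run loop for one vCPU, following the
     * lifecycle steps 1-6 above.  All extern helpers are hypothetical
     * wrappers around VMX instructions and host/guest register handling.
     */
    #include <stdbool.h>
    #include <stdint.h>

    #define CR4_VMXE (1UL << 13)    /* VMX-enable bit in CR4 */

    extern uint64_t read_cr4(void);
    extern void     write_cr4(uint64_t val);
    extern void     vmxon(uint64_t vmxon_region_pa);   /* step 1 */
    extern void     vmptrld(uint64_t vmcs_pa);         /* step 2 */
    extern void     vmwrite(uint32_t field, uint64_t value);
    extern void     vmlaunch(void);                    /* step 3, first entry */
    extern void     vmresume(void);                    /* step 3, later entries */
    extern void     save_host_regs(void);
    extern void     restore_guest_regs(void);
    extern bool     handle_exit(void);                 /* steps 5 and 6 */

    void run_vcpu(uint64_t vmxon_pa, uint64_t vmcs_pa)
    {
        bool launched = false;

        /* 1. VT-x enabling: set CR4.VMXE, then enter VMX root operation. */
        write_cr4(read_cr4() | CR4_VMXE);
        vmxon(vmxon_pa);

        /* 2. VMCS initialization: make this vCPU's VMCS current, then
         *    fill in the guest-state, host-state and control fields with
         *    vmwrite() (initial guest registers come from the loader,
         *    e.g. /usr/sbin/bhyveload). */
        vmptrld(vmcs_pa);

        for (;;) {
            /* 3. VMEntry: save host registers, restore guest registers,
             *    then VMLAUNCH on the first entry, VMRESUME afterwards. */
            save_host_regs();
            restore_guest_regs();
            if (!launched) {
                vmlaunch();
                launched = true;
            } else {
                vmresume();
            }

            /* 4. The guest has been running natively in VMX non-root mode.
             * 5. Some event caused a VMExit and brought us back here.
             * 6. Emulate whatever caused the exit, or stop if we cannot. */
            if (!handle_exit())
                break;
        }
    }

Note that on real hardware VMLAUNCH and VMRESUME do not simply return: after a VMExit, execution continues at the host RIP recorded in the VMCS host-state area, so the straight-line flow above is a simplification of the assembly-level entry/exit path. The loop may also yield to the host scheduler instead of resuming immediately, which is step 7 in Figure 4.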
To enable it, you need set VMXE bit on VT-x only offers minimum automatic save/ CR4 register, and invoke VMXON restore features, rest of the registers need to instruction. take care manually. 2. VMCS initialization 4. Run guest machine VMCS is 4KB alined 4KB page. CPU runs VMX non root mode, guest You need to notify the page address to CPU machine works natively. by invoking VMPTRLD instruction, then 4 If guest system has two or more virtual CPUs, VMCS needs for each vCPUs. 1. VT-x enabling 2. VMCS initialization 7. Run another 3. VMEntry to VMX process non root mode 6. Do emulation for 4. Run guest the exit reason machine 5. VMExit for some exit reason Figure 4. VT-x enabled hypervisor lifecycle 5. VMExit for some reason • Page 1 of Process B on Guest B -> When some events which causes VMExit, Page 1 of Guest physical memory -> CPU returns to VTX root mode. Page 5 of Host physical You need to save/restore register at first, then check the VMExit reason. But, if you run guest OS natively, CPU will 6. Do emulation for the exit reason translate Page 1 of Process B on Guest B to If VMExit reason was the event which Page 1 of Host physical memory. requires some emulation on hypervisor, Because CPU doesn’t know the paging for perform emulation. (Ex: Guest OS wrote guests are nested. data on HDD Depending Host OS scheduling, it may There is software technique to solve the resume VM by start again from 3, or task problem called shadow paging (Figure 6). switch to another process. Hypervisor creates clone of guest page table, set host physical address on it, traps guest 1.4 Memory Virtualization writing CR3 register and set cloned page table Mordan multi-tasking OSes use paging to to CR3. provide individual memory space for each Then CPU able to know correct mapping of processes. guest memory. To run guest OS program natively, address This technique was used on both Binary translation on paging become problematic translation based VMware, and also early function. implementation of hypervisors for VT-x. For example (Figure 5): But it has big overhead, Intel decided to add You allocate physical page 1- 4 to Guest A, and nested paging support on VT-x from Nehalem 5-8 to GuestB. micro-architecture. Both guests map page 1 of Process A to page 1 of guest physical memory. EPT is the name of nested paging feature Then it should point to: (Figure 7), • Page 1 of Process A on Guest A -> It simply adds Guest physical address to Host Page 1 of Guest physical memory -> physical address translation table. Page 1 of Host physical Now hypervisor doesn’t need to take care guest paging, it become much simpler and faster. Guest A Process A Guest physical memory Page table A 1 1 1 1 2 2 Host physical memory Process B Page table B 3 1 1 3 4 1 2 2 4 2 3 4 Guest B 5 Process A Guest physical memory 6 Page table A 7 1 1 1 1 8 2 2 Process B Page table B 3 4 1 1 3 2 2 4 Figure 5.