Hardware Interfaces: Old PC Hardware Overview — IDE, USB, PCI, Net, CPU

Total Pages: 16

File Type: pdf, Size: 1020 KB

OPERATING SYSTEM STRUCTURES — lecture notes by Meelis Roos

Hardware interfaces
• System buses
• External buses and ports
• Storage device interfaces
• Other buses around computers

Old PC hardware overview
(block diagram; labels: CPU, northbridge, RAM, PCI, southbridge, VGA, net, IDE, USB, ISA bridge, audio, floppy)

PC hardware overview
(block diagram; labels: CPU, RAM, AGP graphics, northbridge, southbridge, PCI bus, audio, SATA, net, USB, SCSI, FireWire, ISA bus/LPC, floppy)

Modern PC hardware overview
(block diagram; labels: CPU ×2, RAM, northbridge, PCI-E graphics, CSA, 10G net, I2C/SMBus, temperature sensors, southbridge, PCI, audio, SATA, net, USB, LPC)

System buses
• Old PC: ISA, EISA, MCA, VLB
• PnP, ISAPnP, other PnP
• Non-PC: VME, SBus, NuBus, Zorro II/III, TURBOChannel, VLB analogues
• Current: PCI family
• PCMCIA / PC Card, ExpressCard, CompactFlash, ...
• SGI: GIO32/GIO64, XIO

PCI (Peripheral Component Interconnect)
• Originally 32-bit, 33 MHz
• 64-bit, 66 MHz PCI
• AGP — Accelerated Graphics Port (PCI + IOMMU)
• 3.3 V and 5 V signalling voltages
• PCI-X (32- or 64-bit; 33, 66, 100, 133, 266 and 533 MHz)
• PCI Express (PCI-E)
  – Fast point-to-point serial interface
  – Up to 32 serial lanes can be used in parallel
  – One lane is 250 MB/s (2.5 or 5 GT/s)
  – PCI-E 2.0 doubled the speed
• CSA — Communication Streaming Architecture
• Hotplug, cPCI
(A device-enumeration sketch follows the External ports notes below.)

Flash memory interfaces
• MTD (Memory Technology Devices) — low-level interfaces to flash memory (usually NAND or NOR)
• (EEPROM,) Flash BIOS
• CompactFlash
• MMC/xD
• SD/SDIO
• ...

External ports
• Serial ports — RS-232, RS-422 (0.15..460 kbps)
• Parallel port — IEEE-1284 (up to 8 Mbps)
• PS/2 — special-purpose serial ports
• Other special-purpose serial ports (keyboard, JTAG, ...)
• Most interfaces are also usable for I/O devices
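As a concrete illustration of how an operating system sees the PCI family described above, the C sketch below walks the PCI functions that a Linux kernel exposes under /sys/bus/pci/devices and prints each one's vendor, device and class IDs. This is a minimal sketch, not part of the slides: it assumes a Linux system with sysfs mounted at /sys, and the helper name read_hex_attr and the fixed-size path buffer are illustrative choices.

/*
 * Minimal sketch (not from the slides): list PCI functions via Linux sysfs.
 * Each directory entry under /sys/bus/pci/devices is one PCI function named
 * by its domain:bus:slot.function address (e.g. 0000:00:1f.2).
 */
#include <stdio.h>
#include <stdlib.h>
#include <dirent.h>

/* Read a small sysfs attribute such as "vendor" or "class" as a hex number. */
static unsigned read_hex_attr(const char *dev, const char *attr)
{
    char path[512];
    unsigned value = 0;
    FILE *f;

    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/%s", dev, attr);
    f = fopen(path, "r");
    if (f != NULL) {
        if (fscanf(f, "%x", &value) != 1)
            value = 0;
        fclose(f);
    }
    return value;
}

int main(void)
{
    DIR *dir = opendir("/sys/bus/pci/devices");
    struct dirent *de;

    if (dir == NULL) {
        perror("opendir");
        return EXIT_FAILURE;
    }
    while ((de = readdir(dir)) != NULL) {
        if (de->d_name[0] == '.')
            continue;                 /* skip "." and ".." */
        printf("%s vendor=0x%04x device=0x%04x class=0x%06x\n",
               de->d_name,
               read_hex_attr(de->d_name, "vendor"),
               read_hex_attr(de->d_name, "device"),
               read_hex_attr(de->d_name, "class"));
    }
    closedir(dir);
    return EXIT_SUCCESS;
}

Compiled with a plain cc invocation, the program should print one line per PCI function, similar in spirit to what the lspci tool reports.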
External buses
• ADB — Apple Desktop Bus (10 kbps)
• USB — Universal Serial Bus
  – 1.5 and 12 Mbps (UHCI, OHCI)
  – 480 Mbps (EHCI, USB 2.0)
  – 4.8 Gbps (XHCI, USB 3.0)
  – Wireless USB (WUSB) — 53..480 Mbps
• FireWire (IEEE-1394)
  – 400 Mbps, 800 Mbps, (1600 Mbps, 3200 Mbps)
  – OHCI

Storage device interfaces
• SCSI — Small Computer System Interface
• IDE (Integrated Drive Electronics) / ATA (AT Attachment)
• ATAPI — ATA Packet Interface
• FibreChannel — physical layer for SCSI, IP etc.; actually a separate network
• Serial ATA
• SAS — Serial Attached SCSI
• (Ethernet)

SCSI
• Physical layer and communication protocol are independent (see the INQUIRY sketch at the end of these notes)
• Parallel cable is the most usual physical layer
• Parallel cables need termination on both ends
• 3 types of parallel cables:
  – SE — Single-Ended
  – Differential
  – LVD/SE — Low Voltage Differential/Single-Ended
• Many devices on the same bus (usually up to 8 or 16)
  – Addressing: (bus, target, LUN)
• Synchronous and asynchronous transfers; disconnect and reconnect
• TCQ — Tagged Command Queuing

Parallel SCSI cable speeds

Name           Speed (MB/s)   Comments
Narrow         5
Fast           10
Fast Wide      20
Ultra          20
Ultra Wide     40
Ultra 2        40             LVD
Ultra 2 Wide   80             LVD
Ultra160       160            LVD
Ultra320       320            LVD

IDE/ATA standards

Name          Max speed   Modes
ATA (ATA-1)   8.3 MB/s    PIO 0-2, SWDMA 0-2, MWDMA 0
ATA-2         16.6 MB/s   PIO 0-4, SWDMA 0-2, MWDMA 0-2
ATA-3         16.6 MB/s   PIO 0-4, SWDMA 0-2, MWDMA 0-2
ATA-4         33 MB/s     + UDMA 0, 1, 2
ATA-5         66 MB/s     + UDMA 3, 4
ATA-6         100 MB/s    + UDMA 5
—             133 MB/s
SATA-1        150 MB/s
SATA-2        300 MB/s    NCQ
SATA-3        600 MB/s    new connectors

• eSATA, Port Multiplier (PMP), power management, ...

Network interfaces
• Prehistory: PIO
• Modern: DMA (alignment!)
• Scatter-gather DMA
• Checksum offloading
• Interrupt mitigation
• Segmentation offloading (TX/RX)
• Full TCP offloading
• Multiqueue
• Multiple virtual devices

Other buses
• I2C, SMBus
• Dallas 1-wire
• SPI (Serial Peripheral Interface)
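The SCSI notes above stress that the SCSI command set is independent of the physical transport. The C sketch below, a minimal illustration rather than anything from the slides, assumes a Linux system with the SCSI generic (sg) driver and a readable device node passed on the command line (e.g. /dev/sda); it sends a standard 6-byte INQUIRY CDB through the SG_IO ioctl and prints the vendor, product and revision strings from the reply. The same code path is used whether the disk sits on parallel SCSI, SAS, USB mass storage or SATA (via SCSI-ATA translation); the 96-byte reply buffer and 5-second timeout are arbitrary illustrative values.

/*
 * Minimal sketch (not from the slides): send a SCSI INQUIRY with the Linux
 * SG_IO ioctl and print the identification strings from the response.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

int main(int argc, char **argv)
{
    unsigned char cdb[6]    = { 0x12, 0, 0, 0, 96, 0 };  /* INQUIRY, 96-byte allocation length */
    unsigned char data[96]  = { 0 };
    unsigned char sense[32] = { 0 };
    struct sg_io_hdr hdr;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY | O_NONBLOCK);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&hdr, 0, sizeof(hdr));
    hdr.interface_id    = 'S';                 /* required magic for the sg driver */
    hdr.dxfer_direction = SG_DXFER_FROM_DEV;   /* data flows device -> host */
    hdr.cmdp            = cdb;
    hdr.cmd_len         = sizeof(cdb);
    hdr.dxferp          = data;
    hdr.dxfer_len       = sizeof(data);
    hdr.sbp             = sense;
    hdr.mx_sb_len       = sizeof(sense);
    hdr.timeout         = 5000;                /* milliseconds */

    if (ioctl(fd, SG_IO, &hdr) < 0) {
        perror("SG_IO");
        close(fd);
        return 1;
    }

    /* Standard INQUIRY data: bytes 8-15 vendor, 16-31 product, 32-35 revision. */
    printf("vendor: %.8s  product: %.16s  rev: %.4s\n",
           (char *)data + 8, (char *)data + 16, (char *)data + 32);

    close(fd);
    return 0;
}

Running it typically requires read access to the device node (often root); in practice, tools such as sg_inq from the sg3_utils package perform the same query.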