Intel® Xeon® E5-2400 Product Family Datasheet

Total Pages: 16

File Type: PDF, Size: 1020 KB

Intel® Xeon® Processor E5-2400 Product Family Datasheet - Volume One

May 2012
Reference Number: 327248-001

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The Intel® Xeon® processor E5-2400 product family, Intel® C600 Series Chipset, and the Intel® Xeon® processor E5-2400 product family-based platform described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to: http://www.intel.com/#/en_US_01

Hyper-Threading Technology requires a computer system with a processor supporting HT Technology and an HT Technology enabled chipset, BIOS and operating system. Performance will vary depending on the specific hardware and software you use. For more information including details on which processors support HT Technology, see http://www.intel.com/products/ht/hyperthreading_more.htm.

Enabling Execute Disable Bit functionality requires a PC with a processor with Execute Disable Bit capability and a supporting operating system. Check with your PC manufacturer on whether your system delivers Execute Disable Bit functionality.

Intel® Virtualization Technology requires a computer system with an enabled Intel® processor, BIOS, virtual machine monitor (VMM) and, for some uses, certain computer system software enabled for it. Functionality, performance or other benefits will vary depending on hardware and software configurations and may require a BIOS update. Software applications may not be compatible with all operating systems. Please check with your application vendor.

Intel® Turbo Boost Technology requires a PC with a processor with Intel Turbo Boost Technology capability. Intel Turbo Boost Technology performance varies depending on hardware, software and overall system configuration. Check with your PC manufacturer on whether your system delivers Intel Turbo Boost Technology.
For more information, see http://www.intel.com/technology/turboboost.

No computer system can provide absolute security under all conditions. Intel® Trusted Execution Technology (Intel® TXT) requires a computer with Intel® Virtualization Technology, an Intel TXT-enabled processor, chipset, BIOS, Authenticated Code Modules and an Intel TXT-compatible measured launch environment (MLE). Intel TXT also requires the system to contain a TPM v1.2. For more information, visit http://www.intel.com/technology/security.

See the Processor Spec Finder at http://ark.intel.com or contact your Intel representative for more information.

64-bit computing on Intel architecture requires a computer system with a processor, chipset, BIOS, operating system, device drivers and applications enabled for Intel® 64 architecture. Performance will vary depending on your hardware and software configurations. Consult with your system vendor for more information.

Δ Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. See http://www.intel.com/products/processor_number for details.

I2C is a two-wire communications bus/protocol developed by Philips. SMBus is a subset of the I2C bus/protocol and was developed by Intel. Implementations of the I2C bus/protocol may require licenses from various entities, including Philips Electronics N.V. and North American Philips Corporation.

Intel, Xeon, Enhanced Intel SpeedStep Technology, Core, and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries.

*Other names and brands may be claimed as the property of others.

Copyright © 2009-2012, Intel Corporation. All rights reserved.

Table of Contents

1 Overview ........ 13
  1.1 Introduction ........ 13
    1.1.1 Processor Feature Details ........ 13
    1.1.2 Supported Technologies ........ 14
  1.2 Interfaces ........ 14
    1.2.1 System Memory Support ........ 14
    1.2.2 PCI Express* ........ 15
    1.2.3 Direct Media Interface Gen 2 (DMI2) ........ 17
    1.2.4 Intel® QuickPath Interconnect (Intel® QPI) ........ 17
    1.2.5 Platform Environment Control Interface (PECI) ........ 18
  1.3 Power Management Support ........ 18
    1.3.1 Processor Package and Core States ........ 18
    1.3.2 System States Support ........ 18
    1.3.3 Memory Controller ........ 18
    1.3.4 PCI Express ........ 19
    1.3.5 Intel QuickPath Interconnect ........ 19
  1.4 Thermal Management Support ........ 19
  1.5 Package Summary ........ 19
  1.6 Terminology ........ 19
  1.7 Related Documents ........ 22
  1.8 State of Data ........ 23
2 Interfaces ........ 25
  2.1 System Memory Interface ........ 25
    2.1.1 System Memory Technology Support ........ 25
    2.1.2 System Memory Timing Support ........ 25
  2.2 PCI Express* Interface ........ 26
    2.2.1 PCI Express* Architecture ........ 26
      2.2.1.1 Transaction Layer ........ 27
      2.2.1.2 Data Link Layer ........ 27
      2.2.1.3 Physical Layer ........ 27
    2.2.2 PCI Express* Configuration Mechanism ........ 27
  2.3 DMI2/PCI Express* Interface ........ 28
    2.3.1 DMI2 Error Flow ........ 28
    2.3.2 Processor/PCH Compatibility Assumptions ........ 28
    2.3.3 DMI2 Link Down ........ 28
  2.4 Intel QuickPath Interconnect ........ 28
  2.5 Platform Environment Control Interface (PECI) ........ 31
    2.5.1 PECI Client Capabilities ........ 31
      2.5.1.1 Thermal Management ........ 32
      2.5.1.2 Platform Manageability
Recommended publications
  • Designing an Ultra Low-Overhead Multithreading Runtime for Nim
    Designing an ultra low-overhead multithreading runtime for Nim. Mamy Ratsimbazafy, Weave, [email protected], https://github.com/mratsim/weave. Hello! I am Mamy Ratsimbazafy. During the day, blockchain/Ethereum 2 developer (in Nim); during the night, deep learning and numerical computing developer (in Nim) and data scientist (in Python). You can contact me at [email protected]. Github: mratsim. Twitter: m_ratsim. Where did this talk come from? Three years ago I started writing a tensor library in Nim; there were two threading APIs at the time, OpenMP and a simple threadpool; one year ago came a complete refactoring of the internals. Agenda: understanding the design space; hardware and software multithreading (definitions and use cases); parallel APIs; sources of overhead and runtime design; a minimum viable runtime plan in a weekend. Understanding the design space: concurrency vs parallelism, latency vs throughput, cooperative vs preemptive, IO vs CPU. Parallelism is not concurrency. Kernel threading models: 1:1 threading, 1 application thread -> 1 hardware thread; N:1 threading, N application threads -> 1 hardware thread; M:N threading, M application threads -> N hardware threads. The same distinctions can be made at the multithreaded-language or multithreading-runtime level. The problem: how to schedule M tasks on N hardware threads? Latency vs throughput: do we want to do all the work in a minimal amount of time (numerical computing, machine learning, ...), or do we want to be fair (client-server, video decoding, ...)? Cooperative vs preemptive: cooperative multithreading ...
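    The scheduling question the excerpt poses ("how to schedule M tasks on N hardware threads?") can be made concrete with a small sketch. The code below is not Weave and is not in Nim; it is a minimal C++ shared-queue pool, assuming only the standard library, that maps M submitted tasks onto N OS worker threads. A runtime like the one described in the talk replaces the single locked queue with per-worker work-stealing deques precisely to avoid the contention and per-task overhead this naive version has.

    ```cpp
    // Minimal sketch (not Weave): run M submitted tasks on N OS worker threads
    // using one shared, mutex-protected queue.
    #include <algorithm>
    #include <condition_variable>
    #include <cstdio>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    class SimplePool {
    public:
        explicit SimplePool(unsigned n) {
            for (unsigned i = 0; i < n; ++i)
                workers_.emplace_back([this] { run(); });
        }
        ~SimplePool() {
            {
                std::lock_guard<std::mutex> lk(m_);
                done_ = true;
            }
            cv_.notify_all();               // workers drain the queue, then exit
            for (auto& w : workers_) w.join();
        }
        void submit(std::function<void()> task) {
            {
                std::lock_guard<std::mutex> lk(m_);
                tasks_.push(std::move(task));
            }
            cv_.notify_one();
        }
    private:
        void run() {
            for (;;) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lk(m_);
                    cv_.wait(lk, [this] { return done_ || !tasks_.empty(); });
                    if (done_ && tasks_.empty()) return;
                    task = std::move(tasks_.front());
                    tasks_.pop();
                }
                task();                     // run the task outside the lock
            }
        }
        std::vector<std::thread> workers_;
        std::queue<std::function<void()>> tasks_;
        std::mutex m_;
        std::condition_variable cv_;
        bool done_ = false;
    };

    int main() {
        // N workers (at least one, in case hardware_concurrency() reports 0).
        SimplePool pool(std::max(1u, std::thread::hardware_concurrency()));
        for (int i = 0; i < 100; ++i)       // M tasks
            pool.submit([i] { std::printf("task %d\n", i); });
    }   // destructor waits for queued tasks, then joins the workers
    ```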
  • Fiber Context as a First-Class Object
    Document number: P0876R6. Date: 2019-06-17. Authors: Oliver Kowalke ([email protected]), Nat Goodspeed ([email protected]). Audience: SG1. fiber_context - fibers without scheduler. Contents: Revision History (1); abstract (2); control transfer mechanism (2); std::fiber_context as a first-class object (3); encapsulating the stack (4); invalidation at resumption (4); problem: avoiding non-const global variables and undefined behaviour (5); solution: avoiding non-const global variables and undefined behaviour (6); inject function into suspended fiber (11); passing data between fibers (12); termination (13); exceptions (16); std::fiber_context as building block for higher-level frameworks (16); interaction with STL algorithms (18); possible implementation strategies (19); fiber switch on architectures with register window (20); how fast is a fiber switch (20); interaction with accelerators (20); multi-threading environment (21); acknowledgments (21); API (22): 33.7 Cooperative User-Mode Threads (22), 33.7.1 General (22), 33.7.2 Empty vs. Non-Empty (22), 33.7.3 Explicit Fiber vs. Implicit Fiber (22), 33.7.4 Implicit Top-Level Function (22), 33.7.5 Header <experimental/fiber_context> synopsis (23), 33.7.6 Class fiber_context (23), 33.7.7 Function unwind_fiber() (27); references (29). Revision History: this document supersedes P0876R5. Changes since P0876R5: std::unwind_exception removed; stack unwinding must be performed by platform facilities. fiber_context::can_resume_from_any_thread() renamed to can_resume_from_this_thread(). fiber_context::valid() renamed to empty() with inverted sense. Material has been added concerning the top-level wrapper logic governing each fiber. The change to unwinding fiber stacks using an anonymous foreign exception not catchable by C++ try/catch blocks is in response to deep discussions in Kona 2019 of the surprisingly numerous problems surfaced by using an ordinary C++ exception for that purpose.
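    To see the control-transfer style the paper describes in running code, the sketch below uses boost::context::fiber, the Boost library type on which the fiber_context design is modeled (assumed: Boost 1.69 or newer, linked with -lboost_context). It illustrates resuming a suspended fiber, returning control to the caller, and passing data through captured state; it is an approximation in Boost terms, not the proposed std::fiber_context itself.

    ```cpp
    // Ping-pong between main and one fiber; data travels through `shared`.
    #include <boost/context/fiber.hpp>
    #include <cstdio>
    #include <utility>

    namespace ctx = boost::context;

    int main() {
        int shared = 0;  // value handed back and forth between main and the fiber

        ctx::fiber f{[&shared](ctx::fiber&& main_fiber) {
            for (int i = 0; i < 3; ++i) {
                shared = i;                                   // produce a value
                main_fiber = std::move(main_fiber).resume();  // suspend, switch to main
            }
            return std::move(main_fiber);  // returning terminates this fiber
        }};

        while (true) {
            f = std::move(f).resume();     // switch into the fiber
            if (!f) break;                 // fiber has finished (empty handle)
            std::printf("main saw %d\n", shared);
        }
    }
    ```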
  • Fiber-Based Architecture for NFV Cloud Databases
    Fiber-based Architecture for NFV Cloud Databases. Vaidas Gasiunas, David Dominguez-Sal, Ralph Acker, Aharon Avitzur, Ilan Bronshtein, Rushan Chen, Eli Ginot, Norbert Martinez-Bazan, Michael Müller, Alexander Nozdrin, Weijie Ou, Nir Pachter, Dima Sivov, Eliezer Levy. Huawei Technologies, Gauss Database Labs, European Research Institute. fvaidas.gasiunas,david.dominguez1,fi[email protected] ABSTRACT: The telco industry is gradually shifting from using monolithic software packages deployed on custom hardware to using modular virtualized software functions deployed on cloudified data centers using commodity hardware. This transformation is referred to as Network Function Virtualization (NFV). The scalability of the databases (DBs) underlying the virtual network functions is the cornerstone for reaping the benefits from the NFV transformation. This paper presents an industrial experience of applying shared-nothing techniques in order to achieve the scalability of a DB in an NFV setup. The special combination of requirements in NFV DBs are not easily met with conventional execution models. ... to a specific workload, as well as to change this allocation on demand. Elasticity is especially attractive to operators who are very conscious about Total Cost of Ownership (TCO). NFV meets databases (DBs) in more than a single aspect [20, 30]. First, the NFV concepts of modular software functions accelerate the separation of function (business logic) and state (data) in the design and implementation of network functions (NFs). This in turn enables independent scalability of the DB component, and underlines its importance in providing elastic state repositories for various NFs. In principle, the NFV-ready DBs that are relevant to our discussion are sophisticated main-memory key-value (KV) stores with stringent high availability (HA) ...
  • A Thread Performance Comparison: Windows NT and Solaris on a Symmetric Multiprocessor
    The following paper was originally published in the Proceedings of the 2nd USENIX Windows NT Symposium, Seattle, Washington, August 3–4, 1998. A Thread Performance Comparison: Windows NT and Solaris on a Symmetric Multiprocessor. Fabian Zabatta and Kevin Ying, Brooklyn College and CUNY Graduate School. For more information about USENIX Association contact: 1. Phone: 510 528-8649 2. FAX: 510 548-5738 3. Email: [email protected] 4. WWW URL: http://www.usenix.org/ A Thread Performance Comparison: Windows NT and Solaris on a Symmetric Multiprocessor. Fabian Zabatta and Kevin Ying, Logic Based Systems Lab, Brooklyn College and CUNY Graduate School, Computer Science Department, 2900 Bedford Avenue, Brooklyn, New York 11210, {fabian, kevin}@sci.brooklyn.cuny.edu. Abstract: Manufacturers now have the capability to build high performance multiprocessor machines with common PC components. This has created a new market of low cost multiprocessor machines. However, these machines are handicapped unless they have an operating system that can take advantage of their underlying architectures. Presented is a comparison of two such operating systems, Windows NT and Solaris. By focusing on their implementation of threads, we show each system's ability to exploit multiprocessors. We report our results and interpretations of several experiments that were used to compare the performance of each system. What emerges is a discussion on the performance impact of each implementation and its significance on various types of applications. [Figure 1: A basic symmetric multiprocessor architecture.] 1.1 Symmetric Multiprocessing: Symmetric multiprocessing (SMP) is the primary ...
  • Concurrent Cilk: Lazy Promotion from Tasks to Threads in C/C++
    Concurrent Cilk: Lazy Promotion from Tasks to Threads in C/C++. Christopher S. Zakian, Timothy A. K. Zakian, Abhishek Kulkarni, Buddhika Chamith, and Ryan R. Newton. Indiana University - Bloomington, fczakian, tzakian, adkulkar, budkahaw, [email protected] Abstract. Library and language support for scheduling non-blocking tasks has greatly improved, as have lightweight (user) threading packages. However, there is a significant gap between the two developments. In previous work, and in today's software packages, lightweight thread creation incurs much larger overheads than tasking libraries, even on tasks that end up never blocking. This limitation can be removed. To that end, we describe an extension to the Intel Cilk Plus runtime system, Concurrent Cilk, where tasks are lazily promoted to threads. Concurrent Cilk removes the overhead of thread creation on threads which end up calling no blocking operations, and is the first system to do so for C/C++ with legacy support (standard calling conventions and stack representations). We demonstrate that Concurrent Cilk adds negligible overhead to existing Cilk programs, while its promoted threads remain more efficient than OS threads in terms of context-switch overhead and blocking communication. Further, it enables development of blocking data structures that create non-fork-join dependence graphs, which can expose more parallelism, and better supports data-driven computations waiting on results from remote devices. 1 Introduction. Both task-parallelism [1, 11, 13, 15] and lightweight threading [20] libraries have become popular for different kinds of applications. The key difference between a task and a thread is that threads may block, for example when performing IO, and then resume again.
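    The task-versus-thread cost gap that motivates lazy promotion can be observed with the plain C++ standard library. The sketch below is an illustration, not code from the paper: it times one OS thread per work item against deferred tasks that never touch thread-creation machinery, and the difference approximates the overhead Concurrent Cilk avoids for tasks that never block.

    ```cpp
    // Compare per-item OS thread creation with plain deferred tasks.
    #include <chrono>
    #include <cstdio>
    #include <future>
    #include <thread>
    #include <vector>

    int main() {
        constexpr int kItems = 1000;
        auto tick = [] { return std::chrono::steady_clock::now(); };

        // Path 1: one OS thread per item (the cost lazy promotion defers).
        auto t0 = tick();
        {
            std::vector<std::thread> threads;
            for (int i = 0; i < kItems; ++i)
                threads.emplace_back([] { /* trivial, non-blocking work */ });
            for (auto& t : threads) t.join();
        }
        auto t1 = tick();

        // Path 2: deferred tasks, no thread creation at all.
        {
            std::vector<std::future<void>> tasks;
            for (int i = 0; i < kItems; ++i)
                tasks.push_back(std::async(std::launch::deferred, [] {}));
            for (auto& f : tasks) f.get();   // each task runs on this thread
        }
        auto t2 = tick();

        using ms = std::chrono::duration<double, std::milli>;
        std::printf("one thread per item: %.2f ms\n", ms(t1 - t0).count());
        std::printf("deferred tasks:      %.2f ms\n", ms(t2 - t1).count());
    }
    ```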
  • The Impact of Process Placement and Oversubscription on Application Performance: a Case Study for Exascale Computing
    Konrad-Zuse-Zentrum für Informationstechnik Berlin, Takustraße 7, D-14195 Berlin-Dahlem, Germany. Florian Wende, Thomas Steinke, Alexander Reinefeld. The Impact of Process Placement and Oversubscription on Application Performance: A Case Study for Exascale Computing. ZIB-Report 15-05 (January 2015). Published by Konrad-Zuse-Zentrum für Informationstechnik Berlin, Takustraße 7, D-14195 Berlin-Dahlem; telephone: 030-84185-0; fax: 030-84185-125; e-mail: [email protected]; URL: http://www.zib.de. ZIB-Report (Print) ISSN 1438-0064; ZIB-Report (Internet) ISSN 2192-7782. The Impact of Process Placement and Oversubscription on Application Performance: A Case Study for Exascale Computing. Florian Wende, Thomas Steinke, Alexander Reinefeld. Abstract: With the growing number of hardware components and the increasing software complexity in the upcoming exascale computers, system failures will become the norm rather than an exception for long-running applications. Fault tolerance can be achieved by the creation of checkpoints during the execution of a parallel program. Checkpoint/Restart (C/R) mechanisms allow for both task migration (even if there were no hardware faults) and restarting of tasks after the occurrence of hardware faults. Affected tasks are then migrated to other nodes, which may result in unfortunate process placement and/or oversubscription of compute resources. In this paper we analyze the impact of unfortunate process placement and oversubscription of compute resources on the performance and scalability of two typical HPC application workloads, CP2K and MOM5. Results are given for a Cray XC30/40 with Aries dragonfly topology. Our results indicate that unfortunate process placement has only little negative impact, while oversubscription substantially degrades the performance.
  • S2600GZ and S2600GL
    Intel® Server Board S2600GZ/GL Technical Product Specification, Revision 2.4, October 2014. Revision History (columns: Date, Revision Number, Modifications): January 2012 1.0 First public release. Deleted section 6.9.2 Fan Profile. Deleted chapter 11 - Environmental Limits Specification. March 2012 1.1 . Added chapter 12 - BIOS Setup Utility. Added Figure 17. Intel® Server Board S2600GZ/GL PCI Layout. Added support for Intel® Xeon® processor E5-2600 v2 product family . Updated memory support tables August 2013 2.0 . Corrected POST code table - E0h – E3h . Updated reference documents table . Corrected Table 39 - pin out for on-board 7-pin SATA connectors. Updated PCIe Gen 3 support verbiage – section 3.2.5 . Added Phase Shedding support verbiage – section 3.2.4.1.1 February 2014 2.1 . Updated BIOS Setup options to include Phase Shedding option, PCIe Gen3 support option. Updated BIOS Setup options to include Extended ATR option, PFloor tuning option, Memory Mapped I/O Size option, PCIe AER Support option, Log Correctable Errors option and System Early POST Timeout option. Corrected Figure 9 and Figure 16 – PCIe Gen1x48GB/s April 2014 2.2 . Updated BIOS Setup options to include Phase Shedding and Memory SPD Override options. Updated Table 37. System Status LED State Definitions – Remove the "Battery Failure" from Description column. June 2014 2.3 . Updated Table 51. System Status LED State Definitions – Remove the "Battery Failure" from Description column. October 2014 2.4 . Add note for Figure 29. Disclaimers: INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS.
  • Computer and Computer Systems I: Introduction
    Welcome to Computer Team! Computer and Computer Systems I: Introduction. Noah Singer, Montgomery Blair High School Computer Team, September 14, 2017. Overview: 1. Introduction; 2. Units; 3. Computer Systems; 4. Computer Parts. Section 1: Introduction. Computer Team is... a place to hang out and eat lunch every Thursday; a cool discussion group for computer science; an award-winning competition programming group; a place for people to learn from each other; open to people of all experience levels. Computer Team isn't... a programming club; a highly competitive environment; a rigorous course that requires commitment. Structure: four units, each with five to ten lectures. Each lecture comes with notes that you can take home! Guided activities to help you understand more difficult/technical concepts. Special presentations and guest lectures on interesting topics. Conventions: vocabulary terms are marked in bold. Definitions are in boxes (along with theorems, etc.). Important statements are highlighted in italics. Formulas are written ...
  • Enabling Technologies for Distributed and Cloud Computing
    Enabling Technologies for Distributed and Cloud Computing. Dr. Sanjay P. Ahuja, Ph.D., Fidelity National Financial Distinguished Professor of CIS, School of Computing, UNF. Technologies for Network-Based Systems: Multi-core CPUs and Multithreading Technologies. CPUs today assume a multi-core architecture with dual, quad, six, or more processing cores. The clock rate increased from 10 MHz for Intel 286 to 4 GHz for Pentium 4 in 30 years. However, the clock rate reached its limit on CMOS chips due to power limitations. Clock speeds cannot continue to increase due to excessive heat generation and current leakage. Multi-core CPUs can handle multiple instruction threads. L1 cache is private to each core, L2 cache is shared, and L3 cache or DRAM is off the chip. Examples of multi-core CPUs include Intel i7, Xeon, AMD Opteron. Each core can also be multithreaded. For example, the Niagara II has 8 cores with each core handling 8 threads, for a total of 64 threads maximum. Hyper-threading (HT) Technology: a feature of certain Intel chips (such as Xeon, i7) that makes one physical core appear as two logical processors. On an operating system level, a single-core CPU with Hyper-Threading technology will be reported as two logical processors, a dual-core CPU with HT is reported as four logical processors, and so on. HT adds a second set of general, control and special registers. The second set of registers allows the CPU to keep the state of both cores, and effortlessly switch between them by switching the register set.
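    The logical-processor counting described in the excerpt is visible from portable code: std::thread::hardware_concurrency() reports logical processors, so a dual-core part with Hyper-Threading enabled typically reports four. A minimal check (an illustration, not taken from the slides):

    ```cpp
    // Print the OS-visible logical processor count; with Hyper-Threading this
    // is typically twice the physical core count. The value is only a hint and
    // may be 0 if it cannot be determined.
    #include <cstdio>
    #include <thread>

    int main() {
        unsigned logical = std::thread::hardware_concurrency();
        if (logical == 0)
            std::puts("logical processor count unavailable on this platform");
        else
            std::printf("OS-visible logical processors: %u\n", logical);
    }
    ```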
  • LGA1150 Socket — Application Guide
    LGA1150 Socket Application Guide September 2013 Order No.: 328999-002 INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS. Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them.
  • CompTIA A+ Complete Study Guide: A+ Essentials (220-601) Exam Objectives
    CompTIA A+ Complete Study Guide, A+ Essentials (220-601) Exam Objectives (objective, chapter). Domain 1.0 Personal Computer Components: 1.1 Identify the fundamental principles of using personal computers (ch. 1); 1.2 Install, configure, optimize and upgrade personal computer components (ch. 2); 1.3 Identify tools, diagnostic procedures and troubleshooting techniques for personal computer components (ch. 2); 1.4 Perform preventative maintenance on personal computer components (ch. 2). Domain 2.0 Laptops and Portable Devices: 2.1 Identify the fundamental principles of using laptops and portable devices (ch. 3); 2.2 Install, configure, optimize and upgrade laptops and portable devices (ch. 3); 2.3 Identify tools, basic diagnostic procedures and troubleshooting techniques for laptops and portable devices (ch. 3); 2.4 Perform preventative maintenance on laptops and portable devices (ch. 3). Domain 3.0 Operating Systems: 3.1 Identify the fundamentals of using operating systems (ch. 4); 3.2 Install, configure, optimize and upgrade operating systems (ch. 5); 3.3 Identify tools, diagnostic procedures and troubleshooting techniques for operating systems (ch. 6); 3.4 Perform preventative maintenance on operating systems (ch. 6). Domain 4.0 Printers and Scanners: 4.1 Identify the fundamental principles of using printers and scanners (ch. 7); 4.2 Identify basic concepts of installing, configuring, optimizing and upgrading printers and scanners (ch. 7); 4.3 Identify tools, basic diagnostic procedures and troubleshooting techniques for printers and scanners (ch. 7). Domain 5.0 Networks: 5.1 Identify the fundamental ...
  • Acceleration of a Geodesic Fiber-Tracking Algorithm for Diffusion Tensor Imaging Using CUDA (MSc thesis, Eindhoven University of Technology)
    Eindhoven University of Technology, master's thesis: Acceleration of a geodesic fiber-tracking algorithm for diffusion tensor imaging using CUDA. van Aart, E. Award date: 2010. Disclaimer: This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required minimum study period may vary in duration. General rights: Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. • Users may download and print one copy of any publication from the public portal for the purpose of private study or research. • You may not further distribute the material or use it for any profit-making activity or commercial gain. Acceleration of a Geodesic Fiber-Tracking Algorithm for Diffusion Tensor Imaging using CUDA. Evert van Aart, June 2010. Graduation supervisor: Dr. Andrei Jalba (Eindhoven University of Technology, Department of Mathematics and Computer Science, Visualization group). Committee members: Dr. Andrei Jalba; Dr. Anna Vilanova (Eindhoven University of Technology, Department of Biomedical Engineering, Biomedical Image Analysis group); Dr. Bart Mesman (Eindhoven University of Technology, Department of Electrical Engineering, Electronic Systems group). Contents: 1 Introduction (3); 1.1 Problem Description ...