The AMD Opteron Northbridge Architecture

Pat Conway
Bill Hughes
Advanced Micro Devices

To increase performance while operating within a fixed power budget, the AMD Opteron processor integrates multiple x86-64 cores with a router and memory controller. AMD's experience with building a wide variety of system topologies using Opteron's HyperTransport-based processor interface has provided useful lessons that expose the challenges to be addressed when designing future system interconnect, memory hierarchy, and I/O to scale with both the number of cores and sockets in future x86-64 CMP architectures.

In 2005, Advanced Micro Devices introduced the industry's first native 64-bit x86 chip multiprocessor (CMP) architecture, combining two independent processor cores on a single silicon die. The dual-core Opteron chip featuring AMD's Direct Connect architecture provided a path for existing Opteron shared-memory multiprocessors to scale up from 4- and 8-way to 8- and 16-way while operating within the same power envelope as the original single-core Opteron processor.1,2 The foundation for AMD's Direct Connect architecture is its innovative Opteron processor northbridge. In this article, we discuss the wide variety of system topologies that use the Direct Connect architecture for glueless multiprocessing, the latency and bandwidth characteristics of these systems, and the importance of topology selection and virtual-channel-buffer allocation to optimizing system throughput. We also describe several extensions of the Opteron northbridge architecture, planned by AMD to provide significant throughput improvements in future products while operating within a fixed power budget. AMD has also launched an initiative to provide industry access to the Direct Connect architecture. The "Torrenza Initiative" sidebar summarizes the project's goals.

The x86 blade server architecture

Figure 1a shows the traditional front-side bus (FSB) architecture of a four-processor (4P) blade, in which several processors share a bus connected to an external memory controller (the northbridge) and an I/O controller (the southbridge). Discrete external memory buffer chips (XMBs) provide expanded memory capacity. The single memory controller can be a major bottleneck, preventing faster CPUs or additional cores from improving performance significantly.

In contrast, Figure 1b illustrates AMD's Direct Connect architecture, which uses industry-standard HyperTransport technology to interconnect the processors.3 HyperTransport interconnect offers scalability, high bandwidth, and low latency. The distributed shared-memory architecture includes four integrated memory controllers, one per chip, giving it a fourfold advantage in memory capacity and bandwidth over the traditional architecture, without requiring the use of costly, power-consuming memory buffers. Thus, the Direct Connect architecture reduces FSB bottlenecks.

Figure 1. Evolution of x86 blade server architecture: traditional front-side bus architecture (a) and AMD's Direct Connect architecture (b). MCP: multichip package; Mem.: memory controller.
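To see why the single controller dominates, a back-of-envelope model helps. The sketch below compares per-core memory bandwidth under the two organizations; the 6.4-GB/s per-controller figure and the 4-socket, dual-core configuration are illustrative assumptions, not values from the article.

```c
/* Back-of-envelope model of the FSB bottleneck vs. Direct Connect.
 * All figures are illustrative assumptions: one memory controller
 * sustaining ~6.4 GB/s, four sockets with two cores each. */
#include <stdio.h>

int main(void) {
    const double gb_per_controller = 6.4;   /* assumed GB/s per controller */
    const int sockets = 4, cores_per_socket = 2;
    const int cores = sockets * cores_per_socket;

    /* Traditional FSB: every core shares one external controller. */
    double fsb_per_core = gb_per_controller / cores;

    /* Direct Connect: one integrated controller per socket. */
    double dc_aggregate = gb_per_controller * sockets;
    double dc_per_core  = dc_aggregate / cores;

    printf("FSB:            %.1f GB/s aggregate, %.2f GB/s per core\n",
           gb_per_controller, fsb_per_core);
    printf("Direct Connect: %.1f GB/s aggregate, %.2f GB/s per core\n",
           dc_aggregate, dc_per_core);
    /* The 4x aggregate ratio matches the fourfold advantage cited above. */
    return 0;
}
```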
Torrenza Initiative

AMD's Torrenza is a multiyear initiative to create an innovation platform by opening access to the AMD64 Direct Connect architecture to enhance acceleration and coprocessing in homogeneous and heterogeneous systems.

Torrenza is designed to create an opportunity for a global innovation community to develop and deploy application-specific coprocessors to work alongside AMD processors in multisocket systems. Its goal is to help accelerate industry innovation and drive new technology, which can then become mainstream. It gives users, original equipment manufacturers, and independent software vendors the ability to leverage billions in third-party investments. As the industry's first open, customer-centered x86 innovation platform, Torrenza capitalizes on the Direct Connect architecture and HyperTransport technology advances of the AMD64 platform. Figure A shows the Torrenza platform, illustrating how custom-designed accelerators, say for the processing of Extensible Markup Language (XML) documents or for service-oriented architecture (SOA) applications, can be tightly coupled with Opteron processors.

The Torrenza Initiative includes the following elements:

• Innovation Socket. In September 2006 AMD announced it would license the AMD64 processor socket and design specifications to OEMs to allow collaboration on specifications so that they can take full advantage of the x86 architecture. Cray, Fujitsu, Siemens, IBM, and Sun have publicly stated their support and are designing products for the Innovation Socket.

• Coprocessor enablement. Leveraging the strengths of HyperTransport, AMD is working with various partners to create an extensive partner ecosystem of tools, services, and software to implement coprocessors in silicon. HyperTransport is the only open, standards-based, extensible system bus.

• Direct Connect platform enablement. AMD is encouraging standards bodies and operating system suppliers to support accelerators and coprocessors directly connected to the processor. To help drive innovation across the industry, AMD is opening access to HyperTransport.

Figure A. Torrenza platform.
Northbridge microarchitecture

In the Opteron processor, the northbridge consists of all the logic outside the processor core. Figure 2 shows an Opteron processor with a simplified view of the northbridge microarchitecture, including system request interface (SRI) and host bridge, crossbar, memory controller, DRAM controller, and HyperTransport ports.

Figure 2. Opteron 800 series processor architecture.

The northbridge is a custom design that runs at the same frequency as the processor core. The command flow starts in the processor core with a memory access that misses in the L2 cache, such as an instruction fetch. The SRI contains the system address map, which maps memory ranges to nodes. If the memory access is to local memory, an address map lookup in the SRI sends it to the on-chip memory controller; if the memory access is off-chip, a routing table lookup routes it to a HyperTransport port.

The northbridge crossbar has five ports: SRI, memory controller, and three HyperTransport ports. The processing of command packet headers and data packets is logically separated: there is a command crossbar dedicated to routing command packets, which are 4 or 8 bytes in size, and a data crossbar for routing the data payload associated with commands, which can be 4 or 64 bytes in size.

Figure 3 depicts the northbridge command flow. The command crossbar routes coherent HyperTransport commands. It can deliver an 8-byte HyperTransport packet header at a rate of one per clock (one every 333 ps with a 3-GHz CPU). Each input port has a pool of command-size buffers, which are divided between four virtual channels (VCs): Request, Posted request, Probe, and Response. A static allocation of command buffers occurs at each of the five crossbar input ports. (The next section of this article discusses how buffers should be allocated across different virtual channels to optimize system throughput.)

The data crossbar, shown in Figure 4, supports cut-through routing of data packets. The cache line size is 64 bytes, and all buffers are sized in multiples of 64 bytes to optimize the transfer of cache-line-size data packets. Data packets traverse on-chip data paths in 8 clock cycles. Transfers to different output ports are time multiplexed clock by clock to support high concurrency; for example, two concurrent transfers from CPU and memory controller input ports to different output ports are possible. The …

… market segment. The SCI protocol supports a single shared-address space for an arbitrary number of nodes in a distributed shared-memory …
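As a quick consistency check on the rates quoted above: at 3 GHz, one clock period is 1/(3 × 10⁹ Hz) ≈ 333 ps, so delivering one 8-byte header per clock amounts to 8 B × 3 GHz = 24 GB/s of command-header bandwidth per crossbar input port. By the same arithmetic, if a 64-byte data packet crosses the on-chip data path in 8 clocks, that path is effectively 8 bytes wide, again 24 GB/s per path (an inference from these figures rather than a stated specification).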
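The SRI's two-step decision (an address-map lookup to find the home node, then either the local memory controller or a routing-table lookup to pick a HyperTransport port) can be sketched as follows. This is a minimal illustration, not AMD's implementation; the address ranges, routing-table contents, and unmapped-address fallback are all assumptions.

```c
/* Minimal sketch of the SRI command-routing decision described above.
 * Table contents and the eight-node limit are illustrative assumptions. */
#include <stdint.h>

#define MAX_NODES 8

typedef struct { uint64_t base, limit; int node; } AddrRange;

static const AddrRange addr_map[] = {      /* system address map (assumed) */
    { 0x0000000000, 0x03FFFFFFFF, 0 },     /* node 0: first 16 GB */
    { 0x0400000000, 0x07FFFFFFFF, 1 },     /* node 1: next 16 GB  */
};

static const int ht_route[MAX_NODES] = {   /* home node -> HT port (assumed) */
    -1, 0, 0, 1, 1, 2, 2, 2
};

enum Dest { DEST_MEM_CTRL, DEST_HT_PORT };

/* Decide where the northbridge on `this_node` sends a request to `addr`. */
static enum Dest route_request(int this_node, uint64_t addr, int *ht_port) {
    for (unsigned i = 0; i < sizeof addr_map / sizeof addr_map[0]; i++) {
        if (addr >= addr_map[i].base && addr <= addr_map[i].limit) {
            if (addr_map[i].node == this_node)
                return DEST_MEM_CTRL;        /* local: on-chip controller */
            *ht_port = ht_route[addr_map[i].node];
            return DEST_HT_PORT;             /* remote: HyperTransport port */
        }
    }
    *ht_port = -1;                           /* unmapped: fallback assumed */
    return DEST_HT_PORT;
}
```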
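The static buffer allocation at each input port can likewise be modeled as fixed per-VC credit limits. In the sketch below, the pool size and the per-VC split are placeholders; the article defers the question of choosing a good split to the next section.

```c
/* Sketch of static virtual-channel buffer allocation at one crossbar
 * input port. The pool size (16) and per-VC split are placeholder values. */
#include <stdbool.h>

enum VC { VC_REQUEST, VC_POSTED, VC_PROBE, VC_RESPONSE, VC_COUNT };

typedef struct {
    int limit[VC_COUNT];   /* fixed at initialization: static allocation */
    int in_use[VC_COUNT];  /* buffers currently holding commands */
} PortBuffers;

static void port_init(PortBuffers *p) {
    static const int split[VC_COUNT] = { 6, 2, 4, 4 }; /* assumed split of 16 */
    for (int vc = 0; vc < VC_COUNT; vc++) {
        p->limit[vc] = split[vc];
        p->in_use[vc] = 0;
    }
}

/* A command enters the port only if its own VC has a free buffer; VCs
 * never borrow from each other, which keeps the channels independent
 * but makes the split itself the throughput knob. */
static bool try_accept(PortBuffers *p, enum VC vc) {
    if (p->in_use[vc] >= p->limit[vc])
        return false;      /* back-pressure applies to this VC only */
    p->in_use[vc]++;
    return true;
}

static void release(PortBuffers *p, enum VC vc) {
    p->in_use[vc]--;       /* credit returned when the command departs */
}
```

Keeping the channels from borrowing buffers is what lets one stalled channel (say, Response) avoid blocking another (say, Probe), at the cost of making the chosen split a first-order tuning decision.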