Exploiting the AltiVec Unit for Commercial Applications

Daniel Citron
IBM Haifa Labs, Haifa University Campus, Haifa 31905, Israel
[email protected]

Hiroshi Inoue, Takao Moriyama, Motohiro Kawahito, Hideaki Komatsu, Toshio Nakatani
IBM Tokyo Research Laboratory, 1623-14 Shimotsuruma, Yamato-shi, Kanagawa-ken 242-8502, Japan
{inouehrs, moriyama, jl25131, komatsu, nakatani}@jp.ibm.com

Abstract

The introduction of the PowerPC 970 JS20 blade server opens opportunities for vectorizing commercial applications using the integrated AltiVec unit. We examined the vectorization of applications from diverse fields such as XML parsing, UTF-8 encoding, life sciences, string manipulation, and sorting. We obtained performance speedups (over optimized scalar code) for string comparisons (2-3), XML delimiter lookup (1.5-5), and UTF-8 conversion (2-4).

The focus of this paper is on the process rather than on the results. Vectorizing commercial applications differs vastly from vectorizing graphics and image processing applications. In addition to the results achieved, we describe the pitfalls encountered, the advantages and disadvantages of the AltiVec unit, and what is missing in its current implementation.

Sorting presents an interesting example. Vectorizing the quicksort algorithm was not successful due to low parallelism and misaligned data accesses. Vectorization of the combsort algorithm was very successful, with speedups of 5.0, until the data spilled from the L2 cache. Combining both approaches, by first partitioning the input with quicksort and then continuing with combsort, yielded speedups of over 2.0.

This research led to several patent disclosures, many algorithmic enhancements, and an insight into the correct integration of software with the AltiVec unit. The wealth of information collected during this study is being conveyed to the auto-vectorization teams of the relevant compilers.

1 Introduction

The AltiVec [9] unit is the Single Instruction Multiple Data (SIMD) unit of the PowerPC architecture. It was designed jointly by IBM, Motorola, and Apple (it is also known as VMX or the Velocity Engine). Historically, such vector units have been used for applications that manipulate large amounts of small data items [12, 5], and they have been integrated into PowerPC processors since 1999 [7]. On desktop computers that sported such processors, graphics and image processing are the classic examples of AltiVec use.

However, the relative weakness of the host processors precluded the exploitation of the AltiVec unit for commercial applications. The IBM PowerPC 970FX [17] is the first processor to combine 64-bit computing, large caches, high instruction-level parallelism (ILP), and the AltiVec unit. This paper describes the process of vectorizing a collection of such applications on commercial servers manufactured by IBM.

Commercial applications differ from media applications in several key features that hamper straightforward, and in some cases efficient, vectorization. The object of this paper is to show how these obstacles were approached, overcome (or not), and what should be done to enhance the AltiVec unit in particular, and the vectorization process in general. The main issues that must be addressed are:

Data Layout: Data elements in commercial applications are usually heterogeneous; the data to be vectorized is embedded in structures and must be extracted first (see the sketch after this list). This is in contrast to media data, which is usually homogeneous and streamed to the execution unit. Another problem is alignment: AltiVec loads and stores data only at 16-byte boundaries.

Element Size: In media applications, data sizes of 8 and 16 bits are abundant, which allows high levels of parallelism. In commercial applications, 32- and even 64-bit values are commonplace, limiting the degree of parallelism that vectorization can achieve.

I/O Bound: Many commercial applications are I/O bound, and the benefits of vectorization are not clear. Expending effort on vectorizing non-critical code is not cost effective.

Correctness: Many media applications may produce lossy results; commercial applications cannot allow this. Furthermore, commercial applications are full of tests and checks that hamper effective vectorization.

Consistency: Vectorization may change the order in which elements are processed. This can introduce small inconsistencies in the results, which are unacceptable for commercial applications.

Application Analysis: Many commercial applications are complex and composed of many modules. Finding the bottlenecks that can benefit from vectorization is difficult. Even finding the right developer to approach is no simple task in enterprise applications.
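To make the Data Layout point concrete, the following sketch gathers a 32-bit field that is interleaved with other structure members into a 16-byte-aligned staging buffer and only then processes it four elements at a time with AltiVec intrinsics. The record type and field names are invented for illustration (the paper defines none), so treat this as an assumption-laden sketch rather than code from the study; the scalar gather step is exactly the overhead that homogeneous, streamed media data never incurs.

    #include <altivec.h>

    /* Hypothetical record type, used only to illustrate heterogeneous data:
     * the 32-bit field we want to vectorize is interleaved with other members. */
    struct order {
        int   id;
        int   quantity;
        float price;
        char  status;
        char  pad[3];
    };

    void sum_quantities(const struct order *orders, int n, int *total)
    {
        /* 16-byte-aligned staging buffer: four 32-bit lanes per vector. */
        int gathered[4] __attribute__((aligned(16)));
        vector signed int acc = vec_splat_s32(0);

        int i;
        for (i = 0; i + 4 <= n; i += 4) {
            /* Scalar gather step: extract the field from each structure. */
            gathered[0] = orders[i + 0].quantity;
            gathered[1] = orders[i + 1].quantity;
            gathered[2] = orders[i + 2].quantity;
            gathered[3] = orders[i + 3].quantity;
            acc = vec_add(acc, vec_ld(0, gathered));   /* 4 additions per instruction */
        }

        /* Reduce the four lanes and handle the scalar tail. */
        int lanes[4] __attribute__((aligned(16)));
        vec_st(acc, 0, lanes);
        *total = lanes[0] + lanes[1] + lanes[2] + lanes[3];
        for (; i < n; i++)
            *total += orders[i].quantity;
    }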
While the successes and failures are of interest (and are described in Section 2), we believe that the insights obtained from the experience are of far greater importance. Section 3 lists the advantages and disadvantages of the AltiVec unit, what can be done to improve it, and the pitfalls to avoid when vectorizing a complex commercial application. The rest of this section gives a brief overview of the AltiVec unit and the evaluation methods used in this paper.

1.1 Vector Processing on the PowerPC Architecture

The AltiVec unit contains 32 128-bit registers. Operations can be applied to 8-, 16-, and 32-bit signed and unsigned integer values, single-precision (32-bit) floating-point (FP) values, and 16/32-bit pixel values. Thus, each instruction performs 16, 8, or 4 operations. The instructions are divided into five groups that are executed by different functional units. Figure 1 shows a block-level diagram of the AltiVec unit on the PowerPC 970FX. Although there are four distinct functional units, only two instructions can be issued per cycle, and one of them must go to the permute unit. All units are fully pipelined. The units, instruction groups, and latencies are shown in Table 1.

Figure 1. The AltiVec unit on the PowerPC 970FX (excerpted from [17])

Table 1. Latencies of AltiVec instructions on the PowerPC 970FX [1]

    Unit     Instructions     Latency
    VSIU     ALU              2
    VCIU     mul, sum, max    5
    VFPU     FP               8
    VPERM    permute          2
    LSU*     load, store      4

    * Load/Store instructions are handled by the processor's Load/Store Unit (LSU).

Memory accesses are performed in 16-byte quantities that must be aligned on 16-byte boundaries. This disadvantage is overcome by a versatile permute instruction that can reorder misaligned data effectively. A complete description of the instruction set is available in [9].
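As a concrete illustration of how the permute instruction compensates for the alignment restriction, here is a minimal sketch (an illustration, not the paper's code) of the standard vec_lvsl/vec_perm idiom: two aligned loads bracket an arbitrary address, and a permute control vector derived from the low four bits of that address selects the sixteen bytes that actually begin there.

    #include <altivec.h>

    /* Sketch only: load 16 bytes starting at an arbitrary, possibly unaligned
     * address. vec_ld ignores the low four address bits, so two aligned loads
     * bracket the requested bytes and vec_perm stitches them together. */
    vector unsigned char load_unaligned(const unsigned char *ptr)
    {
        vector unsigned char lo   = vec_ld(0, ptr);    /* aligned block containing ptr        */
        vector unsigned char hi   = vec_ld(15, ptr);   /* aligned block containing ptr + 15   */
        vector unsigned char ctrl = vec_lvsl(0, ptr);  /* permute control from ptr's low bits */
        return vec_perm(lo, hi, ctrl);
    }

A production version must also guard against reading past the end of a buffer when the second aligned block lies beyond the data.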
1.2 Evaluation Infrastructure

Vectorization was performed manually on C source code using AltiVec intrinsics [1]. Compiler-generated auto-vectorization in GCC [14] and XLC [19] could not vectorize the analyzed applications. Indeed, one of the major goals of the project is to transfer the accumulated wealth of data to the auto-vectorization teams of the GCC, XLC, and J9/TR Java JIT compilers. (Some preliminary results are available in [11] and [15].)

The PowerPC 970FX is currently used in two major systems: IBM's JS20 blade servers and Apple's G5 computers. These systems run various versions of Linux and Mac OS and offer many variants of the GCC, XLC, and other compilers. Experimentation was performed using many of the aforementioned options; however, for the sake of uniformity we report results using the following configuration:

Computer: IBM JS20 BladeCenter with two 2.2 GHz PowerPC 970FX processors

Operating System: SuSE Linux SLES9, 2.6.5-7.97-pseries64 kernel

Vectorization Method: Manual vectorization using AltiVec intrinsics

Compiler: GCC 4.0.1 with -O3 -maltivec -mabi=altivec -mcpu=970 -mtune=970 -mpowerpc64 -funroll-loops -falign-functions=32

Measurements: Short functions are called 1,000,000 times with varying alignments (the lower address bits range from 0 to 127). In addition, they are called via pointers in order to equalize the overhead of standard library calls.

Metrics: Speedup compared to optimized scalar code is shown for all graphs. For short functions, CPU time is measured; for full applications, elapsed time is measured.

The choice of applications was influenced by availability, interest to parties in IBM, and diversity. The specific vectorization points in each application were determined by run-time profile analysis, source code examination, and prior knowledge.

2.1 String Operations

The ubiquitous string (zero-terminated array of characters) operations exemplify the advantages and disadvantages of the AltiVec unit. For example, the strlen function requires finding the first occurrence of a character with a value of zero. A single two-cycle instruction determines whether a vector of sixteen characters contains a zero. However, the prologue and epilogue computations are relatively complex: the data first has to be aligned on 16-byte boundaries, and obtaining the exact location of the zero within the vector is not supported by a single instruction.

Complicating matters even more is the fact that many strings are shorter than sixteen bytes, and this is not known a priori. Thus, any vector version has to be competitive even for short strings. Figure 2 shows that when using the GCC compiler, nice speedups (relative to GCC's standard scalar implementation) are obtained for string lengths as short as 10 characters. For longer strings, the speedups improve and surpass 2.0. However, when the XLC compiler is used, the speedup (relative to XLC's standard scalar implementation) disappears. This discrepancy is due to the superior scalar implementation of XLC; the use of 64-bit logic enables 'vectorization' using scalar instructions. This stresses yet another important point: vectorization must be compared with the best scalar effort.

Figure 2. Speedup over the scalar strlen for GCC 4.0 and XLC 7.0, as a function of string length in bytes.
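The following sketch shows how the strlen test described above can be expressed with AltiVec intrinsics. It is an illustration under the stated assumptions (a scalar prologue up to the next 16-byte boundary, one vector comparison per 16-byte block, and a scalar epilogue to locate the exact zero byte); it is not the implementation measured in Figure 2.

    #include <altivec.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Sketch of a vectorized strlen: one vector compare decides whether a
     * 16-byte block contains the terminating zero byte. */
    size_t vec_strlen_sketch(const char *s)
    {
        const char *p = s;

        /* Prologue: advance byte by byte until p is 16-byte aligned. */
        while (((uintptr_t)p & 15) != 0) {
            if (*p == '\0')
                return (size_t)(p - s);
            p++;
        }

        const vector unsigned char zero = vec_splat_u8(0);

        /* Main loop: vec_any_eq reports whether any of the 16 bytes is zero. */
        for (;;) {
            vector unsigned char block = vec_ld(0, (const unsigned char *)p);
            if (vec_any_eq(block, zero))
                break;                  /* the terminator is in this block */
            p += 16;
        }

        /* Epilogue: AltiVec has no single instruction that returns the position
         * of the zero within the vector, so finish with scalar code. */
        while (*p != '\0')
            p++;
        return (size_t)(p - s);
    }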