Representation Range Needs for 16-Bit Neural Network Training


Valentina Popescu (Cerebras Systems), Abhinav Venigalla (Cerebras Systems), Di Wu (University of Wisconsin–Madison), Robert Schreiber (Cerebras Systems)
Work done in part while at Cerebras Systems.
arXiv:2103.15940v2 [cs.AI] 6 Apr 2021

Abstract—Deep learning has grown rapidly thanks to its state-of-the-art performance across a wide range of real-world applications. While neural networks have been trained using IEEE-754 binary32 arithmetic, the rapid growth of computational demands in deep learning has boosted interest in faster, low-precision training. Mixed-precision training that combines IEEE-754 binary16 with IEEE-754 binary32 has been tried, and other 16-bit formats, for example Google's bfloat16, have become popular.

In floating-point arithmetic there is a tradeoff between precision and representation range as the number of exponent bits changes; denormal numbers extend the representation range. This raises the questions of how much exponent range is needed, of whether there is a format between binary16 (5 exponent bits) and bfloat16 (8 exponent bits) that works better than either of them, and of whether or not denormals are necessary.

In the current paper we study the need for denormal numbers for mixed-precision training, and we propose a 1/6/9 format, i.e., 6-bit exponent and 9-bit explicit mantissa, that offers a better range-precision tradeoff. We show that 1/6/9 mixed-precision training is able to speed up training on hardware that incurs a performance slowdown on denormal operations, or eliminates the need for denormal numbers altogether. And, for a number of fully connected and convolutional neural networks in computer vision and natural language processing, 1/6/9 achieves numerical parity with standard mixed-precision training.

Index Terms—floating-point, deep learning, denormal numbers, neural network training

I. INTRODUCTION

Larger and more capable neural networks demand ever more computation to be trained. Industry has responded with specialized processors that can speed up machine arithmetic. Most deep learning training is done using GPUs and/or CPUs, which support IEEE-754 binary32 (single-precision) and, for GPUs, binary16 (half-precision).

Arithmetic and load/store instructions with 16-bit numbers can be significantly faster than arithmetic with longer 32-bit numbers. Since 32-bit is expensive for large workloads, industry and the research community have tried mixed-precision methods that perform some of the work in 16-bit and the rest in 32-bit. Training neural networks using 16-bit formats is challenging because the magnitudes, and the range of magnitudes, of the network tensors vary greatly from layer to layer and can sometimes shift as training progresses.

In this effort, binary16 has not proved to be an unqualified success, mainly because its 5 exponent bits offer a narrow range of representable values, even when denormalized floating-point numbers (or denormals — see a formal definition in Section III), which add three orders of magnitude to the representation range, are used. Therefore, different allocations of the 15 bits beyond the sign bit, allowing greater representation range, have been tried. Most notably, Google introduced the bfloat16 format, with 8 exponent bits, which until recently was only available on TPU clouds but is now also included in Nvidia's latest architecture, Ampere. Experiments have shown that both binary16 and bfloat16 can work, either out of the box or in conjunction with techniques like loss scaling that help control the range of values in software.

Our understanding of these issues needs to be deepened. To our knowledge, no study of the width of the range of values encountered during training has been performed, and it has not been determined whether denormal numbers are necessary. To fill this gap, we study here the distribution of tensor elements during training on public models like ResNet and BERT. The data show that the representation range offered by binary16 is fully utilized, with a high percentage of the values falling in the denormal range. Unfortunately, on many machines performance suffers with every operation involving denormals, due to special encoding and handling of that range [1], [2]. This raises the question of whether denormals are needed, or are encountered much less often, when more than 5 exponent bits are used. Indeed, bfloat16, with its large representation range, does not support and does not appear to require denormal numbers; but it loses an order of magnitude in numerical accuracy vis-à-vis binary16. Thus, we consider an alternative 1/6/9 format, i.e., one having 6 exponent bits and 9 explicit mantissa bits, both with the denormal range and without it. We demonstrate that this format dramatically decreases the frequency of denormals, which will improve performance on some machines.

In Section II we discuss 16-bit formats that have previously been used in neural network training. In Section III we recall basic notions of floating-point arithmetic and define the notation we use. Section IV presents our testing methodology and details the training settings for each tested network (Section IV-B). The results we gathered are detailed in Section V and, finally, we conclude in Section VI.
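To make the range/precision tradeoff discussed in this introduction concrete, the sketch below computes the largest normal, smallest normal, and smallest positive denormal magnitudes implied by the 1/5/10 (binary16), 1/8/7 (bfloat16), and 1/6/9 bit splits, assuming an IEEE-754-style biased exponent with the top exponent value reserved; the exact encoding chosen here for 1/6/9 is an illustrative assumption, not a definition taken from this paper.

```python
# Sketch: dynamic range implied by a 1/e/m bit split, assuming an
# IEEE-754-style encoding (bias = 2**(e-1) - 1, top exponent value reserved).
def format_range(e_bits, m_bits):
    bias = 2 ** (e_bits - 1) - 1
    e_max = (2 ** e_bits - 2) - bias           # largest non-reserved exponent
    e_min = 1 - bias                           # smallest normal exponent
    max_normal = (2 - 2.0 ** -m_bits) * 2.0 ** e_max
    min_normal = 2.0 ** e_min
    min_denormal = 2.0 ** (e_min - m_bits)     # smallest positive denormal
    return max_normal, min_normal, min_denormal

for name, e, m in [("binary16 (1/5/10)", 5, 10),
                   ("bfloat16 (1/8/7)", 8, 7),
                   ("1/6/9", 6, 9)]:
    hi, lo_n, lo_d = format_range(e, m)
    print(f"{name:18s} max~{hi:.3e}  min normal~{lo_n:.3e}  min denormal~{lo_d:.3e}")
```

Under this assumed encoding, binary16 tops out near 6.6 × 10^4 and reaches below about 6 × 10^−5 only through denormals (which extend its range by roughly three orders of magnitude, as noted above), while a 1/6/9 split covers about 10^−9 to 4 × 10^9 with normal numbers alone.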
II. RELATED WORK

There have been several 16-bit floating-point (FP) formats proposed for mixed-precision training of neural networks, including half-precision [3]–[5], Google's bfloat16 [6], [7], IBM's DLFloat [8] and Intel's Flexpoint [9], each with a dedicated configuration of exponent and mantissa bits.

Mixed-precision training using half-precision was first explored by Baidu and Nvidia in [3]. Using half-precision for all training data reduces final accuracy, because the format fails to cover a wide enough representation range, i.e., small values in half-precision might fall into the denormal range or even become zero. To mitigate this issue, the authors of [3] proposed a mixed-precision training scheme. Three techniques were introduced to maintain SOTA accuracy (comparable to single-precision):

• "master weights", i.e., a single-precision copy of the weights, is maintained in memory and updated with the products of the unscaled half-precision weight gradients and the learning rate. This ensures that small gradients for each minibatch are not discarded as rounding error during weight updates;

• during the forward and backward passes, matrix multiplications take half-precision inputs and compute reductions using single-precision addition (using FMACS instructions as described in Section III). To that end, activations and gradients are kept in half-precision, halving the required storage and bandwidth, while weights are downcast on the fly;

• "loss scaling", which empirically rescales the loss before backpropagation in order to prevent small activation gradients that would otherwise underflow in half-precision. Weight gradients must then be unscaled by the same amount before performing weight updates.

Loss scale factors were initially chosen empirically, on a model-by-model basis. Automated methods such as dynamic loss scaling (DLS) [4], [5] and adaptive loss scaling [10] were later developed; they allow the scale to be computed according to the gradient distribution. Using the above techniques has become the standard workflow for mixed-precision training. It is reported to be 2 to 11× faster than single-precision training [3]–[5], while also achieving SOTA accuracy on widely used models.
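The following is a minimal NumPy sketch of one optimization step using the techniques just described: a single-precision master copy of the weights, a binary16 forward/backward pass, and a static loss scale. The toy least-squares model, the mixed_precision_step helper, and the fixed LOSS_SCALE value are illustrative assumptions, not the exact procedure of [3].

```python
import numpy as np

# Minimal sketch of a mixed-precision update with an FP32 "master" copy of the
# weights and static loss scaling. Toy model and scale factor are illustrative.
LOSS_SCALE = 1024.0   # hypothetical fixed loss-scale factor

def mixed_precision_step(master_w32, x32, y32, lr):
    w16 = master_w32.astype(np.float16)                  # downcast weights on the fly
    x16, y16 = x32.astype(np.float16), y32.astype(np.float16)
    err16 = x16 @ w16 - y16                              # forward pass in binary16
    # Backward pass in binary16: gradient of (LOSS_SCALE/N) * 0.5*||err||^2 w.r.t. w.
    grads16 = x16.T @ (np.float16(LOSS_SCALE) * err16 / np.float16(len(y16)))
    grads32 = grads16.astype(np.float32) / LOSS_SCALE    # unscale in single precision
    master_w32 -= lr * grads32                           # update the FP32 master weights
    return master_w32

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 8)).astype(np.float32)
w_true = rng.standard_normal(8).astype(np.float32)
y = x @ w_true
w = np.zeros(8, dtype=np.float32)
for _ in range(300):
    w = mixed_precision_step(w, x, y, lr=0.2)
print(np.abs(w - w_true).max())   # residual error is limited by binary16 rounding
```

Dynamic loss scaling [4], [5] replaces the fixed factor with one that is raised or lowered during training based on the observed gradients.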
Another mixed-precision format available in specialized hardware is Google's bfloat16 [6]. It was first reported as a technique to accelerate large-batch training on TPUs (Tensor Processing Units), where bfloat16 operands are combined by single-precision operations whose results are then rounded to values that are also representable in bfloat16. The results suggest that bfloat16 requires no denormal support and no loss scaling, and therefore decreases complexity in the context of mixed-precision training. Recently, Nvidia also made it available in its latest architecture, Ampere [14].

IBM's DLFloat [8] is a preliminary study of the 1/6/9 format that we also examine. DLFloat was tested on convolutional neural networks, LSTMs [15] and transformers [16] of small sizes, implying the potential for minimal or even no accuracy loss. The studied format did not support a denormal range. The authors briefly discuss the factors contributing to the hardware savings, without providing statistics on how they actually influence the training results.

Flexpoint [9] is a blocked floating-point format proposed by Intel. It differs from the previously mentioned formats in that it uses 16 bits for the mantissa and shares a 5-bit exponent across each layer. Flexpoint was proposed together with an exponent management algorithm, called Autoflex, that aims to predict tensor exponents based on statistics gathered from previous iterations. The format allows un-normalized values to be stored, which rules out a denormal range. For this reason we do not study it further in this work.

In all of these studies, single-precision training accuracy was achieved. But there is no mention of whether denormals, which significantly impact hardware area and compute speed, are encountered; nor is it tested whether or not they must be supported to achieve that accuracy.

III. FLOATING-POINT ARITHMETIC

A floating-point (FP) number, formally defined in Definition 3.1, is most often used in computing as an approximation of a real number. Given a fixed bit width for representation, it offers a trade-off between representation range and numerical accuracy.

Definition 3.1: A real number X is approximated in a machine by a nearby floating-point number

x = M · 2^(E−p+1),   (1)

where:
• E is the exponent, a signed integer with Emin ≤ E ≤ Emax;
• M · 2^(−p+1) is the significand (sometimes improperly referred to as the mantissa), where M is also a signed integer represented on p bits.
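As a worked illustration of Definition 3.1 and of the normal/denormal distinction used throughout the paper, the sketch below decodes raw 16-bit patterns under an assumed sign/exponent/mantissa layout with an IEEE-754-style bias. The binary16 layout matches the standard; the 1/6/9 layout shown is only an assumption about how such a format could be encoded, not a specification from this paper.

```python
def decode(bits, e_bits, m_bits):
    """Decode a 16-bit pattern (sign | e_bits exponent | m_bits mantissa),
    assuming an IEEE-754-style biased encoding with denormals."""
    sign = -1.0 if (bits >> (e_bits + m_bits)) & 1 else 1.0
    exp_field = (bits >> m_bits) & ((1 << e_bits) - 1)
    frac = bits & ((1 << m_bits) - 1)
    bias = (1 << (e_bits - 1)) - 1
    if exp_field == 0:                          # denormal: no implicit leading 1
        return sign * frac * 2.0 ** (1 - bias - m_bits)
    # normal: implicit leading 1, exponent E = exp_field - bias
    return sign * (1 + frac * 2.0 ** -m_bits) * 2.0 ** (exp_field - bias)

# binary16 (1/5/10): 0x3C00 encodes 1.0; 0x0001 is the smallest denormal, 2**-24
print(decode(0x3C00, 5, 10), decode(0x0001, 5, 10))
# assumed 1/6/9 layout: the same decoding with a wider exponent field
print(decode(0x0001, 6, 9))   # 2**-39 under this assumption
```

Under this assumed encoding, the smallest positive denormal of the 1/6/9 format is 2^−39, compared with 2^−24 for binary16.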