LifeJacket: Verifying Precise Floating-Point Optimizations in LLVM


Andres Nötzli and Fraser Brown
Stanford University

arXiv:1603.09290v1 [cs.PL] 30 Mar 2016

Abstract

Optimizing floating-point arithmetic is vital because it is ubiquitous, costly, and used in compute-heavy workloads. Implementing precise optimizations correctly, however, is difficult, since developers must account for all the esoteric properties of floating-point arithmetic to ensure that their transformations do not alter the output of a program. Manual reasoning is error prone and stifles incorporation of new optimizations.

We present an approach to automate reasoning about floating-point optimizations using satisfiability modulo theories (SMT) solvers. We implement the approach in LifeJacket, a system for automatically verifying precise floating-point optimizations for the LLVM assembly language. We have used LifeJacket to verify 43 LLVM optimizations and to discover eight incorrect ones, including three previously unreported problems. LifeJacket is an open source extension of the Alive system for optimization verification.

    Name: PR26746
    %a = fsub -0.0, %x
    %r = fsub +0.0, %a
      =>
    %r = %x

Figure 1. Incorrect transformation involving floating-point instructions in LLVM 3.7.1.

1. Introduction

In this paper, we present LifeJacket, a system for automatically verifying floating-point optimizations. Floating-point arithmetic is ubiquitous (modern hardware architectures natively support it and programming languages treat it as a canonical representation of real numbers), but writing correct floating-point programs is difficult. Optimizing these programs is even more difficult. Unfortunately, despite hardware support, floating-point computations are still expensive, so avoiding optimization is undesirable.

Reasoning about floating-point optimizations and programs is difficult because of floating-point arithmetic's unintuitive semantics. Floating-point arithmetic is inherently imprecise and lossy, and programmers must account for rounding, signed zeroes, special values, and non-associativity [7]. Before standardization, a wide range of incompatible floating-point hardware with varying support for range, precision, and rounding existed. These implementations were not only incompatible but also had undesirable properties, such as numbers that were not equal to zero for comparisons but were treated as zeros for multiplication and division [13]. The IEEE 754-1985 standard and its stricter IEEE 754-2008 successor were carefully designed to avoid many of these pitfalls and were designed for (contrary to popular opinion, perhaps) non-expert users. Despite these advances, program correctness and reproducibility still rest on a fragile interplay between developers, programming languages, compilers, and hardware implementations.

Compiler optimizations that alter the semantics of programs, even in subtle ways, can confuse users, make problems hard to debug, and cause cascading issues. IEEE 754-2008 acknowledges this by recommending that language standards and implementations provide means to generate reproducible results for programs, independent of optimizations. In practice, many transformations that are valid for real numbers change the precision of floating-point expressions. As a result, compilers optimizing floating-point programs face the dilemma of choosing between speed and reproducibility. They often address this dilemma by dividing floating-point optimizations into two groups, precise and imprecise optimizations, where imprecise optimizations are optional (e.g. the -ffast-math flag in clang). While precise optimizations always produce the same result, imprecise ones produce reasonable results on common inputs (e.g. not for special values) but are arbitrarily bad in the general case. To implement precise optimizations, developers have to reason about all edge cases of floating-point arithmetic, making it challenging to avoid bugs.

To illustrate the challenge of developing floating-point optimizations, Figure 1 shows an example of an invalid transformation implemented in LLVM 3.7.1. We discuss the specification language in more detail in Section 3.2 but, at a high level, the transformation simplifies +0.0 − (−0.0 − x) to x, an optimization that is correct in the realm of real numbers. Because floating-point numbers distinguish between negative and positive zero, however, the optimization is not valid if x = −0.0: the original code returns +0.0 and the optimized code returns −0.0. While the zero's sign may be insignificant for many applications, the unexpected sign change may cause a ripple effect. For example, the reciprocal of zero is defined as 1/+0.0 = +∞ and 1/−0.0 = −∞.

Since reasoning manually about floating-point operations and optimizations is difficult, we argue that automated reasoning can help ensure correct optimizations. The goal of LifeJacket is to allow LLVM developers to automatically verify precise floating-point optimizations. Our work focuses on precise optimizations because they are both more amenable to verification and arguably harder to get right. LifeJacket builds on Alive [9], a tool for verifying LLVM optimizations, extending it with floating-point support.

Our contributions are as follows:

• We describe the background for verifying precise floating-point optimizations in LLVM and propose an approach using SMT solvers.

• We implemented the approach in LifeJacket, an open source fork of Alive that adds support for floating-point types, floating-point instructions, floating-point predicates, and certain fast-math flags.

• We validated the approach by verifying 43 optimizations. LifeJacket finds 8 incorrect optimizations, including three previously unreported problems in LLVM 3.7.1.

In addition to the core contributions, our work also led to the discovery of two issues in Z3 [6], the SMT solver used by LifeJacket, related to floating-point support.

2. Related Work

Alive is a system that verifies LLVM peephole optimizations. LifeJacket is a fork of this project that extends it with support for floating-point arithmetic. We are not the only ones interested in verifying floating-point optimizations; close to the submission deadline, we found that one of the Alive authors had independently begun a reimplementation of Alive that seems to include support for floating-point arithmetic.

Our work intersects with the areas of compiler correctness, optimization correctness, and analysing floating-point expressions. Research on compiler correctness has addressed floating-point and floating-point optimizations. CompCert, a formally-verified compiler, supports IEEE 754-2008 floating-point types and implements two floating-point optimizations [3]. In CompCert, developers use Coq to prove optimizations correct, while LifeJacket proves optimization correctness automatically.

Regarding optimization correctness, researchers have explored both the consequences of existing optimizations and techniques for generating new optimizations. Recent work has discussed consequences of unexpected optimizations [14]. In terms of new optimizations, STOKE [12] is a stochastic optimizer that supports floating-point arithmetic and verifies instances of floating-point optimizations with random testing. Souper [1] discovers new LLVM …

3. Background

Alive already verifies some InstCombine optimizations, but it does not support optimizations involving floating-point arithmetic. Instead of building LifeJacket from scratch, we extend Alive with the machinery to verify floating-point optimizations. To give the necessary context for discussing our implementation in Section 4, we describe LLVM's floating-point types and instructions and give a brief overview of Alive.

3.1 Floating-point arithmetic in LLVM

In the following, we discuss LLVM's semantics of floating-point types and instructions. The information is largely based on the LLVM Language Reference Manual for LLVM 3.7.1 [2] and the IEEE 754-2008 standard. For completeness, we note that the language reference does not explicitly state that LLVM floating-point arithmetic is based on IEEE 754. However, the language reference refers to the IEEE standard multiple times, and LLVM's floating-point software implementation APFloat is explicitly based on the standard.

Floating-point types. LLVM defines six different floating-point types with bit-widths ranging from 16 bit to 128 bit. Floating-point values are stored in the IEEE binary interchange format, which encodes them in three parts: the sign s, the exponent e, and the significand t. The value of a normal floating-point number is given by (−1)^s × (1 + 2^(1−p) × t) × 2^(e−bias), where p is the number of bits of precision in the significand, bias = 2^(w−1) − 1, and w is the number of bits in the exponent. The range of the exponents for normal floating-point numbers is [1, 2^w − 2]. Exponents outside of this range are used to encode special values: subnormal numbers, Not-a-Number values (NaNs), and infinities.

Floating-point zeros are signed, meaning that −0.0 and +0.0 are distinct. While most operations ignore the sign of a zero, the sign has an observable effect in some situations: a division by zero (generally) returns +∞ or −∞ depending on the zero's sign, for example. As a consequence, x = y does not imply 1/x = 1/y. If x = 0 and y = −0, then x = y is true, since floating-point 0 = −0. On the other hand, 1/x = 1/y is false, since 1/0 = +∞ ≠ −∞ = 1/−0.

Infinities (±∞) are used to represent an overflow or a division by zero. They are encoded with e = 2^w − 1 and t = 0.
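The value formula for normal numbers can be checked mechanically. The sketch below (Python, not code from the paper) extracts s, e, and t from a binary64 value with the standard struct module and recomputes (−1)^s × (1 + 2^(1−p) × t) × 2^(e−bias); the names decode_normal, W, P, and BIAS are our own.

```python
import math
import struct

# binary64 parameters: w = 11 exponent bits, p = 53 bits of precision
W, P = 11, 53
BIAS = 2 ** (W - 1) - 1  # 1023

def decode_normal(x):
    """Recompute a normal binary64 value from its sign, exponent,
    and significand fields, following the formula in the text."""
    b = struct.unpack("<Q", struct.pack("<d", x))[0]
    s = b >> 63                          # sign bit
    e = (b >> (P - 1)) & ((1 << W) - 1)  # biased exponent
    t = b & ((1 << (P - 1)) - 1)         # trailing significand
    assert 1 <= e <= 2 ** W - 2, "normal numbers only"
    return (-1) ** s * (1 + 2 ** (1 - P) * t) * 2.0 ** (e - BIAS)
```

Every intermediate product here is exact in double precision, so the recomputed value matches the input bit for bit for any normal number.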
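The observable sign of zero described above is easy to demonstrate; the snippet below is an illustration of IEEE behavior, not code from LifeJacket. Note one caveat: Python raises ZeroDivisionError for float division by zero instead of returning ±∞ as LLVM's fdiv does, so math.copysign stands in for the reciprocal test.

```python
import math

x, y = 0.0, -0.0
equal = (x == y)                # IEEE comparison: +0.0 == -0.0 holds
sign_x = math.copysign(1.0, x)  # +1.0
sign_y = math.copysign(1.0, y)  # -1.0: the sign is still observable
# LLVM's fdiv of 1.0 by these zeros would return +inf and -inf
# respectively, which is exactly why x == y does not imply 1/x == 1/y.
```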
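The special-value encodings (all-ones exponent for infinities and NaNs, all-zero fields for zeros) can likewise be confirmed by inspecting bit patterns. This is a sketch under the binary64 layout; fields and E_MAX are illustrative names, not an API from the paper.

```python
import math
import struct

def fields(x):
    """Return the (sign, exponent, significand) fields of a binary64."""
    b = struct.unpack("<Q", struct.pack("<d", x))[0]
    return b >> 63, (b >> 52) & 0x7FF, b & ((1 << 52) - 1)

E_MAX = 2 ** 11 - 1  # the reserved all-ones exponent, 2^w - 1

inf_enc = fields(math.inf)           # infinity: e = E_MAX, t = 0
nan_enc = fields(math.nan)           # NaN: e = E_MAX, t != 0
zeros = (fields(0.0), fields(-0.0))  # zeros: e = 0, t = 0, signs differ
```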
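Finally, because binary16 has only 2^16 bit patterns, the invalid transformation of Figure 1 can even be checked by exhaustive enumeration at that width. This brute-force search is a stand-in for illustration only (the paper's actual method encodes the transformation for the Z3 SMT solver); struct's "e" format gives IEEE binary16 semantics, and a single double-precision subtraction rounded back to binary16 is correctly rounded here because both operands are exact in double.

```python
import math
import struct

def f16(b):
    """Interpret a 16-bit pattern as an IEEE binary16 value."""
    return struct.unpack("<e", struct.pack("<H", b))[0]

def bits16(x):
    """Round a Python float to binary16 and return its bit pattern."""
    return struct.unpack("<H", struct.pack("<e", x))[0]

counterexamples = []
for b in range(1 << 16):
    x = f16(b)
    if math.isnan(x):
        continue  # ignore NaN payload differences
    a = -0.0 - x        # %a = fsub -0.0, %x
    original = 0.0 - a  # %r = fsub +0.0, %a
    optimized = x       # the rewritten code: %r = %x
    if bits16(original) != bits16(optimized):
        counterexamples.append(b)

# Exactly one input disagrees: x = -0.0 (bit pattern 0x8000), where the
# original computes +0.0 but the rewrite returns -0.0.
```

This reproduces, by enumeration, the same counterexample the text derives by hand for PR26746.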