IBM Research Report


IBM Research Report H-0140, October 10, 2002. Computer Science.

FPgen – A Deep-Knowledge Test-Generator for Floating Point Verification

Laurent Fournier, Sigal Asaf
IBM Research Division, Haifa Research Laboratory, Haifa 31905, Israel

Research Division: Almaden - Austin - Beijing - Haifa - India - T. J. Watson - Tokyo - Zurich

LIMITED DISTRIBUTION NOTICE: This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties). Copies may be requested from IBM T. J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598 USA (email: [email protected]). Some reports are available on the internet at http://domino.watson.ibm.com/library/CyberDig.nsf/home .

FPgen – A Deep-Knowledge Test-Generator for Floating Point Verification
Laurent Fournier and Sigal Asaf
IBM Research Laboratory in Haifa
[email protected], [email protected]

Abstract

This paper describes FPgen, a test-program generator for datapath verification of floating-point units in microprocessors. FPgen is a convenient and powerful platform for generating interesting data combinations for floating-point instruction operands. Its main purpose is to provide a means to fulfill floating-point test plans, which typically include a myriad of tasks that stem from both the architecture and the micro-architecture. FPgen focuses on the IEEE standard definitions and therefore supports architectures that comply with these definitions.

1. Introduction

Traditionally, achieving IEEE compliance for floating-point hardware in microprocessors has been a challenging task. Many escape bugs, including the infamous Pentium bug [16], belong to the floating-point unit and reveal that the verification process in this area is still far from optimal. The growing demand for performance, quality, and faster time-to-market makes the verification work increasingly difficult.

Problems in the floating-point unit implementation have many sources. These problems range from data operations on individual instructions to the correct handling of sequences of instructions in which back-to-back events challenge superscalar implementations. The complexity stems both from the richness of the specification (architecture) and from the peculiarities of the implementation (micro-architecture).

Verification has traditionally been handled by simulation of test programs [3,5]. Recently, the area of formal methods has evolved significantly, especially for the floating-point datapath [9,10,11,13]; it is nevertheless still far from providing a comprehensive answer to the problem.

In most environments, the simulation of test cases is still a cornerstone of the verification process. This paper presents FPgen, a new test generator developed in IBM, targeted toward the functional verification of the datapath of floating-point units in microprocessors. FPgen is designed to yield a quasi-optimal framework for the generation of test cases in this area.

When dealing with floating-point verification by simulation, there is an enormous, practically unbounded number of different calculation cases that need to be tested. In practice, simulation can cover only a very small portion of the existing space. The rationale behind verification by simulation is that one acquires confidence in the design correctness by running a set of test cases that exercise a sufficiently large number of different cases, which in some sense are assumed to be a representative sample of the entire space. It is inferred that the correct handling of the tested cases is a testimony to the design correctness of all the cases. The difficult question is: "How should such a representative set of test cases be built?" Since both the architecture specification and the micro-architecture implementation yield a myriad of special cases, pure (uniform) random generation of test cases would be largely inefficient. As described below, FPgen offers full control to bias test generation so that it reflects the underlying distribution of interesting cases.

How does one know that a certain set of tests is sufficient? This question is related to the notion of coverage, which defines the comprehensiveness of the set relative to the verification target [6,7,8]. Usually, coverage models, which are sets of related tasks, are defined, and the set of tests should fulfill all the defined tasks. For example, a common - albeit far from trivial to fulfill - coverage model is one that requires enumeration of all major IEEE floating-point types, simultaneously, for all operands of all floating-point instructions (the "All IEEE Types" model). For a given instruction with three operands, this potentially yields a thousand (10^3) cases that must be covered, assuming 10 major floating-point types (+/- NaN, +/- Infinity, +/- Zero, +/- Denorm, +/- Norm). Table 1 illustrates this model with a few common floating-point instructions. This model can be further refined by including additional floating-point types, such as minimum and maximum Denorm, etc. Obviously, not all cases are possible (e.g., the addition of two positive denorm numbers cannot reach infinity), so the actual number of cases is in fact lower than the size of the full Cartesian product.

    Instruction   Op1            Op2            Output
    Fadd          +/- NaN        +/- NaN        +/- NaN
    Fsub          +/- Infinity   +/- Infinity   +/- Infinity
    Fmul          +/- 0          +/- 0          +/- 0
    Fdiv          +/- Denorm     +/- Denorm     +/- Denorm
    Fsqrt         +/- Norm       +/- Norm       +/- Norm
    etc.

    Table 1: The "All IEEE Types" Coverage Model
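To make the combinatorics concrete, the following sketch enumerates the Cartesian product behind the "All IEEE Types" model. This illustrates the model only, not FPgen's generator; the function and type names are invented for the example:

    from itertools import product

    # The 10 major IEEE floating-point types of Table 1:
    # five kinds, each with both signs.
    KINDS = ["NaN", "Infinity", "Zero", "Denorm", "Norm"]
    TYPES = [sign + kind for kind in KINDS for sign in ("+", "-")]

    def all_ieee_types_tasks(instruction, num_operands=3):
        """Enumerate every type combination for an instruction's operands.

        For three operands (two inputs and one output) this yields
        10**3 = 1000 candidate coverage tasks; infeasible combinations
        (e.g., +Denorm plus +Denorm giving +Infinity) must be pruned later.
        """
        for combo in product(TYPES, repeat=num_operands):
            yield (instruction, *combo)

    tasks = list(all_ieee_types_tasks("fadd"))
    print(len(tasks))   # 1000
    print(tasks[0])     # ('fadd', '+NaN', '+NaN', '+NaN')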
A coverage model, or the set of all coverage models, is often an attempt to partition the set of all the calculation cases in such a way that the probability distribution is uniform over all subsets. As explained below, FPgen provides coverage by generation; that is, it takes the request of a coverage model (such as the one in Table 1) as input and outputs a set of tests that cover all the achievable tasks of the model. Checking which task a given test hits amounts to classifying its values into the types of Table 1; a sketch of such a classifier follows below.

The verification process is commonly defined and monitored through a verification plan, which leads to multiple verification tasks. FPgen's main purpose is to provide a convenient platform for performing all these tasks, both qualitatively and quantitatively. On the one hand, it should enable the most complex verification requirements, involving intricate sets of constraints, to be fulfilled. On the other hand, it should enable the generation of a practically unlimited number of different cases for each given set of constraints. FPgen provides these two properties in a context of randomness. This is meaningful since bug locations are mostly unpredictable. Thus, FPgen provides a comprehensive, quasi-optimal simulation-based solution for datapath verification of floating-point implementations. While its scope is not limited to a certain architecture or design, its first target is the IEEE Standard for Binary Floating Point Arithmetic [1], and architectures that comply with this standard.
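A minimal sketch of that classification for IEEE doubles, in Python; the function name is an assumption, and the 10 types are those of Table 1:

    import math
    import sys

    def ieee_type(x: float) -> str:
        """Classify a double into one of the 10 major IEEE types."""
        # copysign reads the sign bit, which works even for NaN and -0.0.
        sign = "-" if math.copysign(1.0, x) < 0 else "+"
        if math.isnan(x):
            return sign + "NaN"
        if math.isinf(x):
            return sign + "Infinity"
        if x == 0.0:
            return sign + "Zero"
        if abs(x) < sys.float_info.min:   # below the smallest positive normal
            return sign + "Denorm"
        return sign + "Norm"

    print(ieee_type(-0.0))      # -Zero
    print(ieee_type(5e-324))    # +Denorm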
FPgen provides a convenient platform for biasing and generating operand data for floating-point instructions. Simply put, a bias (or constraint) on operand data is a set of values to which the operand data is constrained. Resolving biases on input operands is usually relatively straightforward, even though uniformity among all the solutions is sometimes extremely hard to obtain. In contrast, resolving constraints on the data of both the intermediate result(s) and the output of instructions adds a layer of complexity, since it involves instruction semantics (a naive illustration of this asymmetry appears in the sketch at the end of this section). FPgen's scope, however, goes beyond the single-instruction domain and includes the generation of sequences of instructions, possibly driven by the definition of coverage models.

FPgen belongs to a family of Deep-Knowledge Test Generators. These generators, developed in IBM, focus on specific areas not sufficiently stimulated by other means. This family includes test generators for micro-architecture flow and for Memory Management (Address Translation) Unit verification.

Other sources of tests widely used in the industry are the IEEE test suites. These test suites are an important quality threshold and assist in reaching confidence in the design correctness; they provide good coverage of IEEE corner cases. However, the bug-free running of these suites is necessary but far from sufficient, since their scope is confined to the IEEE standard, and the implementation itself, with its host of specific cases, is not targeted.

The second technology, formal verification (FV), has evolved significantly during the last five to ten years. While FV has traditionally been focused more on the verification of control paths, it has lately started to target floating-point datapath verification as well. Verification of control paths [20,21] is improving, but still collides with the BDD size-explosion problem relatively fast as the control block size increases. Two new approaches, both relying on theorem-proving technology [18], have emerged for datapaths. The first approach translates the datapath circuit into the theorem prover's language; the proof of the circuit's correctness is done entirely within the theorem prover, by proving a succession of manually proposed and deduced lemmas (e.g., the work in [9,10] with ACL2). Because of the circuit's complexity and the theorem prover's limitations, parts of the proof are performed manually [9,10]. The second approach combines traditional FV tools with the theorem prover: the circuit is split into blocks, each block is verified by traditional FV tools, and the theorem prover then combines the blocks into one unit [11]. Both approaches require a significant amount of manual work by experts in mathematics, floating-point, formal methods, and FP micro-architecture.
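The input/output asymmetry mentioned above can be caricatured with a deliberately naive generator: an input bias is just a set to sample from, whereas a result-type constraint is satisfied here only by guess-and-check, because it depends on what the instruction computes. This brute-force loop is purely illustrative - FPgen resolves such constraints analytically - and all names in it are invented:

    import random
    import struct
    import sys

    def random_double() -> float:
        """Draw a double uniformly over 64-bit patterns (not over values)."""
        bits = random.getrandbits(64)
        return struct.unpack("<d", struct.pack("<Q", bits))[0]

    def multiply_to_positive_denorm(tries=1_000_000):
        """Guess-and-check until a*b is a positive denormal result.

        An output constraint drags in the semantics of the multiply
        itself; an input bias would simply be a set to sample from.
        """
        for _ in range(tries):
            a, b = random_double(), random_double()
            r = a * b
            if 0.0 < r < sys.float_info.min:   # positive denormal product
                return a, b, r
        return None   # constraint not hit within the budget

    print(multiply_to_positive_denorm())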
Recommended publications
  • wdv-notes No. 329, as of 29 Dec 1994 (2nd ed.): Intel Pentium – Business Must Learn from the Debacle
    wdv-notes No. 329, as of 29 Dec 1994 (2nd ed.). Intel Pentium – Business Must Learn from the Debacle. Wiss.Datenverarbeitung © 1994–1995, edited by Karl-Heinz Dittberner, Freie Universität Berlin.

    The Canadian monthly The Computer Post (Winnipeg) published in its January 1995 issue an excellent first summary, spread across several articles, of the debacle surrounding the defect in the Pentium microprocessor [1–2] from computer giant Intel. These articles are reprinted in the original below and in wdv-notes No. 330. Thanks go to Sylvia Douglas of The Computer Post, 301 – 68 Higgins Avenue, Winnipeg, Manitoba, Canada, Email: SDouglas@post.mb.ca. The articles may be redistributed provided the following ground rules are observed: "Permission is hereby granted to copy this article electronically or in any other form, provided it is reproduced without alteration, and you credit it to The Computer Post."

    From the editorial, The Computer Post, January 1995, "The Way it Will Be": This month you'll find we're reporting a lot of background information on Intel's ... Intel's top-of-the-line Pentium™ microprocessor chip has turned out to have a slight flaw in its character: when dividing certain rare pairs of floating-point numbers, it gives the wrong answer. For anyone who owns a Pentium-based computer, or was thinking about buying one, or is just feeling curious, here are ... This particular error in the Pentium was in the floating-point divide unit. Intel management was concerned enough about it that they pulled together a special team to assess the implications. In the words of Andrew Grove, Intel's CEO, posting later to the Internet [3]: "We were ..."
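    The widely circulated quick check for this flaw divides one of the known "bad pairs" and multiplies back; a sketch, using the canonical published operand pair 4195835/3145727:

        # On a correct FPU the residue is exactly 0.0; on a flawed Pentium
        # the quotient 4195835/3145727 comes back slightly wrong (the FDIV
        # lookup-table bug) and the residue is about 256.
        x, y = 4195835.0, 3145727.0
        residue = x - (x / y) * y
        print(residue)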
  • Detecting and Removing Malicious Hardware Automatically
    Appears in Proceedings of the 31st IEEE Symposium on Security & Privacy (Oakland), May 2010. Overcoming an Untrusted Computing Base: Detecting and Removing Malicious Hardware Automatically. Matthew Hicks, Murph Finnicum, Samuel T. King (University of Illinois at Urbana-Champaign); Milo M. K. Martin, Jonathan M. Smith (University of Pennsylvania).

    Abstract: The computer systems security arms race between attackers and defenders has largely taken place in the domain of software systems, but as hardware complexity and design processes have evolved, novel and potent hardware-based security threats are now possible. This paper presents a hybrid hardware/software approach to defending against malicious hardware. We propose BlueChip, a defensive strategy that has both a design-time component and a runtime component. During the design verification phase, BlueChip invokes a new technique, unused circuit identification (UCI), to identify suspicious circuitry - those circuits not used or otherwise activated by any of the design verification tests.

    Hardware-level vulnerabilities would likely require physically replacing the compromised hardware components. A hardware recall similar to Intel's Pentium FDIV bug (which cost 500 million dollars to recall five million chips) has been estimated to cost many billions of dollars today [7]. Furthermore, the skill required to replace hardware and the rise of deeply embedded systems ensure that vulnerable systems will remain in active use after the discovery of the vulnerability. Second, hardware is the lowest layer in the computer system, providing malicious hardware with control over the software running above. This low-level control enables sophisticated and stealthy attacks aimed at evading software-based defenses. Such an attack might use a special, or unlikely, event to trigger deeply buried malicious logic which was inserted ...
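    A toy rendition of the UCI idea: treating each verification run as a trace of sampled signal values, a pair of signals that never took different values points to intervening logic the test suite never exercised. Real UCI analyzes dependent (source, sink) pairs in the netlist rather than all pairs, and the names and trace format here are invented:

        from itertools import combinations

        def unused_circuit_pairs(traces):
            """Flag signal pairs whose values never differed across all tests.

            `traces` maps signal name -> list of sampled values, one per cycle.
            """
            suspicious = []
            for a, b in combinations(sorted(traces), 2):
                if all(x == y for x, y in zip(traces[a], traces[b])):
                    suspicious.append((a, b))
            return suspicious

        # Toy trace: `trigger_out` shadows `data_in` in every test, so the
        # logic between them was never exercised -- what an attacker wants.
        traces = {
            "data_in":     [0, 1, 1, 0, 1],
            "trigger_out": [0, 1, 1, 0, 1],
            "alu_out":     [0, 1, 0, 0, 1],
        }
        print(unused_circuit_pairs(traces))  # [('data_in', 'trigger_out')]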
  • Can the Computer Be Wrong?
    Can the computer be wrong? Wojciech Myszka, Department of Mechanics, Materials and Biomedical Engineering, January 2021.

    Outline: 1. Data; 2. Human (operator); 3. Hardware (inevitable); 4. Hardware; 5. Manufacturer fault; 6. Software; 7. Bug-free software; 8. List of software bugs.

    Data (absolute error): I was talking about this in one of the previous lectures. One of the main sources of errors is the data. Knowing the range of each value allows us to predict the range of the result. This can be difficult and tricky. However, in most cases we assume that the data values are correct and believe in computer calculations.

    Human (operator): In general, it is difficult to take into account human errors. These include: not understanding the problem solved by the program, errors in preparing input data, wrong answers to the computer's prompts, ...

    Hardware (inevitable) - number of bits: This is a quite different source of error. Most of today's computers have 32- or 64-bit processors. What does this mean for the biggest integer value? With 32 bits, the range is -2^31 to 2^31 - 1 (-2,147,483,648 to 2,147,483,647), roughly two billion; with 64 bits, it is -2^63 to 2^63 - 1 (-9,223,372,036,854,775,808 to 9,223,372,036,854,775,807), roughly nine quintillion.
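    The quoted ranges follow directly from two's-complement encoding; a few lines reproduce them (a sketch):

        # Two's-complement range of an n-bit signed integer:
        # -2**(n-1) .. 2**(n-1) - 1.
        for n in (32, 64):
            lo, hi = -2**(n - 1), 2**(n - 1) - 1
            print(f"{n}-bit: {lo:,} .. {hi:,}")
        # 32-bit: -2,147,483,648 .. 2,147,483,647
        # 64-bit: -9,223,372,036,854,775,808 .. 9,223,372,036,854,775,807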
  • A Hybrid-Parallel Architecture for Applications in Bioinformatics
    A Hybrid-parallel Architecture for Applications in Bioinformatics. M.Sc. Jan Christian Kässens. Dissertation submitted in 2017 for the academic degree Doktor der Ingenieurwissenschaften (Dr.-Ing.) at the Faculty of Engineering, Christian-Albrechts-Universität zu Kiel. Kiel Computer Science Series (KCSS) 2017/4, dated 2017-11-08. URN:NBN urn:nbn:de:gbv:8:1-zs-00000335-a3. ISSN 2193-6781 (print version), ISSN 2194-6639 (electronic version). Electronic version, updates, errata available via https://www.informatik.uni-kiel.de/kcss. The author can be contacted via [email protected]. Published by the Department of Computer Science, Kiel University, Computer Engineering Group.

    Please cite as: Jan Christian Kässens. A Hybrid-parallel Architecture for Applications in Bioinformatics. Number 2017/4 in Kiel Computer Science Series. Department of Computer Science, 2017. Dissertation, Faculty of Engineering, Kiel University.

    @book{Kaessens17, author = {Jan Christian K\"assens}, title = {A Hybrid-parallel Architecture for Applications in Bioinformatics}, publisher = {Department of Computer Science, CAU Kiel}, year = {2017}, number = {2017/4}, doi = {10.21941/kcss/2017/4}, series = {Kiel Computer Science Series}, note = {Dissertation, Faculty of Engineering, Kiel University.} }

    © 2017 by Jan Christian Kässens

    About this Series: The Kiel Computer Science Series (KCSS) covers dissertations, habilitation theses, lecture notes, textbooks, surveys, collections, handbooks, etc. written at the Department of Computer Science at Kiel University. It was initiated in 2011 to support authors in the dissemination of their work in electronic and printed form, without restricting their rights to their work. The series provides a unified appearance and aims at high-quality typography. The KCSS is an open access series; all series titles are electronically available free of charge at the department's website.
  • P5 (Microarchitecture)
    P5 (microarchitecture). The Intel P5 Pentium family. Produced: 1993 to 1999. Common manufacturer(s): Intel. Max. CPU clock rate: 60 MHz to 300 MHz. FSB speeds: 50 MHz to 66 MHz. Min. feature size: 0.8 µm to 0.25 µm. Instruction set: x86. Socket(s): Socket 4, Socket 5, Socket 7. Core name(s): P5, P54C, P54CS, P55C, Tillamook.

    The original Pentium microprocessor was introduced on March 22, 1993. Its microarchitecture, deemed P5, was Intel's fifth-generation and first superscalar x86 microarchitecture. As a direct extension of the 80486 architecture, it included dual integer pipelines, a faster FPU, a wider data bus, separate code and data caches, and features for further reduced address-calculation latency. In 1996, the Pentium with MMX Technology (often simply referred to as Pentium MMX) was introduced with the same basic microarchitecture, complemented with an MMX instruction set, larger caches, and some other enhancements. The P5 Pentium's competitors included the Motorola 68060 and the PowerPC 601, as well as the SPARC, MIPS, and Alpha microprocessor families, most of which also used a superscalar in-order dual instruction pipeline configuration at some time. Intel's Larrabee multicore architecture project uses a processor core derived from a P5 core (P54C), augmented by multithreading, 64-bit instructions, and a 16-wide vector processing unit [3]. Intel's low-powered Bonnell microarchitecture [4], employed in Atom processor cores, also uses an in-order dual pipeline similar to P5.

    Development: The P5 microarchitecture was designed by the same Santa Clara team which designed the 386 and 486. Design work started in 1989; the team decided to use a superscalar architecture, with on-chip cache, floating-point, and branch prediction.
  • APA Newsletters: Newsletter on Philosophy and Computers
    APA Newsletters: Newsletter on Philosophy and Computers, Volume 11, Number 1, Fall 2011. From the Editor, Peter Boltuc. From the Chair, Dan Kolak. Special Session: David L. Anderson, "Special Session on 'Machine Consciousness'". Featured article: Jaakko Hintikka, "Logic as a Theory of Computability". Articles: Darren Abramson and Lee Pike, "When Formal Systems Kill: Computer Ethics and Formal Methods"; Hector Zenil, "An Algorithmic Approach to Information and Meaning: A Formal Framework for a Philosophical Discussion"; Pentti O A Haikonen, "Too Much Unity: A Reply to Shanahan". Philosophy and Online Education: Ron Barnette, "Reflecting Back Twenty Years"; Frank McCluskey, "Reflections from Teaching Philosophy Online"; Terry Weldin-Frisch, "A Comparison of Four Distance Education Models"; Kristen Zbikowski, "An Invitation for Reflection: Teaching Philosophy Online"; Thomas Urban, "Distance Learning and Philosophy: The Term-Length Challenge"; Federico Gobbo, "The Heritage of Gaetano Aurelio Lanzarone". © 2011 by The American Philosophical Association, ISSN 2155-9708.

    From the Editor (Peter Boltuc, University of Illinois at Springfield): Terry Weldin-Frisch, in his informative paper, compares the experiences he had with online learning in philosophy, first as a graduate student, reaching a Ph.D. entirely online, and later as a faculty member, at four different educational institutions in the UK and the US. Kristen Zbikowski presents a spirited defense of teaching philosophy online based on her experiences as an online student and then a faculty member also teaching online courses. Thomas Urban raises a specific problem of the length of viable online philosophy courses. I used to share the general enthusiasm about web-only ...
  • On Improving Cybersecurity Through Memory Isolation Using Systems Management Mode
    On Improving Cybersecurity Through Memory Isolation Using Systems Management Mode. A thesis submitted for the degree of Doctor of Philosophy. James Andrew Sutherland, School of Design and Informatics, University of Abertay Dundee, August 2018.

    Abstract: This thesis describes research into security mechanisms for protecting sensitive areas of memory from tampering or intrusion using the facilities of Systems Management Mode. The essence and challenge of modern computer security is to isolate or contain data and applications in a variety of ways, while still allowing sharing where desirable. If Alice and Bob share a computer, Alice should not be able to access Bob's passwords or other data; Alice's web browser should not be able to be tricked into sending email, and viewing a social networking web page in that browser should not allow that page to interact with her online banking service. The aim of this work is to explore techniques for such isolation and how they can be used usefully on standard PCs. This work focuses on the creation of a small dedicated area to perform cryptographic operations, isolated from the rest of the system. This is a sufficiently useful facility that many modern devices such as smartphones incorporate dedicated hardware for this purpose, but other approaches have advantages which are discussed. As a case study, this research included the creation of a secure web server whose encryption key is protected using this approach, such that even an intruder with full Administrator-level access cannot extract the key. A proof-of-concept backdoor which captures and exfiltrates encryption keys using a modified processor was also demonstrated.
  • Harnessing Simulation Acceleration to Solve the Digital Design Verification Challenge
    Harnessing Simulation Acceleration to Solve the Digital Design Verification Challenge, by Debapriya Chatterjee. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Science and Engineering) in The University of Michigan, 2013. Doctoral Committee: Associate Professor Valeria M. Bertacco, Chair; Professor Todd M. Austin; Professor Igor L. Markov; Assistant Professor Zhengya Zhang. © Debapriya Chatterjee, All Rights Reserved, 2013. To my parents.

    Acknowledgments: I would like to thank my advisor Professor Valeria Bertacco, who introduced me to research. Her willingness to let me explore new ideas has been a cherished freedom, while our discussions on research directions have enabled me to pursue a fixed direction in an interesting and uncharted research territory. Moreover, my writing and presentation skills have grown significantly as a result of her mentoring and her careful attention. I am also grateful for my committee members. I am appreciative of Professor Igor Markov's contributions: often pointing me to relevant research articles, as well as our friendly interactions in the department and various conferences. Professor Todd Austin has been a consistent source of extremely valuable feedback on various matters throughout my studies. I am grateful to Professor Zhengya Zhang for his kind feedback. Early in my graduate school career, I was very fortunate to work with Ilya Wagner and Andrew DeOrio - they instilled the fundamentals of the role of a graduate student into me. I am especially grateful to Joseph Greathouse, Andrea Pellegrini and Biruk Mammo for the innumerable illuminating discussions we have had in the office, often extending into late hours.
  • Information Assurance MELANI
    Federal IT Steering Unit FITSU, Federal Intelligence Service FIS, Reporting and Analysis Centre for Information Assurance MELANI. https://www.melani.admin.ch/

    Information Assurance: Situation in Switzerland and Internationally. Semi-annual report 2018/I (January – June), 8 November 2018.

    Contents: 1. Overview / Content. 2. Editorial. 3. Key topic: vulnerabilities in hardware (3.1 Spectre and Meltdown; 3.2 Why this design error?; 3.3 Possible solutions; 3.4 Possible developments). 4. Situation in Switzerland (4.1 Espionage: Spiez Laboratory name misused as sender of "Olympic Destroyer"; 4.2 Industrial control systems ...)
  • A Fault Tolerant Approach to Microprocessor Design
    Appears in Dependable Systems and Networks (DSN), July 2001. A Fault Tolerant Approach to Microprocessor Design. Chris Weaver, Todd Austin, Advanced Computer Architecture Laboratory, University of Michigan. {chriswea, taustin}@eecs.umich.edu

    Abstract: We propose a fault-tolerant approach to reliable microprocessor design. Our approach, based on the use of an on-line checker component in the processor pipeline, provides significant resistance to core processor design errors and operational faults such as supply-voltage noise and energetic particle strikes. We show through cycle-accurate simulation and timing analysis of a physical checker design that our approach preserves system performance while keeping area overheads and power demands low. Furthermore, analyses suggest that the checker is a fairly simple state machine that can be formally verified, scaled in performance, and reused. Further simulation analyses show virtually no performance impact when our simple checker design is coupled with a high-performance microprocessor model.

    1.1.1 Design faults. Design faults are the result of human error, either in the design or specification of a system component, that renders the part unable to correctly respond to certain inputs. The typical approach used to detect these bugs is simulation-based verification: a model of the processor being designed executes a series of tests, and the model's results are compared to expected results. Unfortunately, design errors sometimes slip through this testing process due to the immense size of the test space. To minimize the probability of undetected errors, designers employ various techniques to improve the quality of verification, including co-simulation [4], coverage analysis, random test generation [5], and model-driven test generation [6]. Another popular technique, formal verification, uses equality checking to compare a design under test with the specification ...
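    The mechanism the abstract describes can be sketched in a few lines: a simple checker recomputes each result the complex core is about to commit, and the checker's value wins on mismatch. This is only a schematic of the DIVA-style idea with an invented three-opcode ISA, not the paper's hardware design:

        def checked_commit(opcode, a, b, core_result):
            """Recompute the core's result with a simple, verifiable model;
            on mismatch the checker's value is committed and the core would
            be flushed and restarted."""
            golden = {
                "add": lambda: (a + b) & 0xFFFFFFFF,
                "sub": lambda: (a - b) & 0xFFFFFFFF,
                "mul": lambda: (a * b) & 0xFFFFFFFF,
            }[opcode]()
            if golden != core_result:   # design error or soft fault caught
                return golden, "mismatch: checker result committed, core flushed"
            return core_result, "ok"

        print(checked_commit("add", 7, 5, 12))   # (12, 'ok')
        print(checked_commit("add", 7, 5, 13))   # (12, 'mismatch: ...')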
  • The History of Intel Processors: The Pentium P5 and Pentium Models
    The history of Intel processors: Pentium P5. ₥@ʁ€₭ ‽ud3£k0, Urządzenia Techniki Komputerowej.

    Contents: The Pentium name; Intel's marketing; Characteristics of the Intel Pentium P5; Architecture of the Intel Pentium; Inside the Intel Pentium A80501 66 SX950; Solutions used in the Pentium; Pentium models; the Pentium FDIV bug; Pentium MMX; Characteristics of the Intel Pentium MMX; Architecture of the Intel Pentium MMX; Solutions used in the Pentium MMX; Inside the Intel Pentium MMX; Pentium MMX models; Tillamook; MMX instructions; "It's All About the Pentiums"; the pentNT tool.

    The Pentium name: The Pentium processor was initially to be called the 80586 or i586. However, Intel could not register bare digits as a trademark, so it chose the name "Pentium". The name derives from the Greek word for "five" (πέντε, 'pente'), denoting the fifth generation of processors, plus the Latin ending -ium. Early programs written at the time, and their documentation, nevertheless used the term "i586". The Pentium name was a marketing device symbolizing a new level of quality, and it created a distinct, recognizable computer brand; the marketing firm Lexington Branding was hired to popularize the new processor name. It was a departure from naming a computer after a bare number. David Placek of Lexington Branding explains why the change was necessary: a salesman tells you that this computer has a 286 processor and that one a 386; you ask, "And what does that mean?", and he answers, "It's faster" - but beyond that, the number carries no other information. Intel's processor could handle graphics and video well enough to become the foundation of home computers. It was a new level of quality, and a distinct, good-sounding name was essential.

    Intel's marketing: The company wanted to stand out in a market crowded with makers of cheaper equivalents. Intel's goal was to raise brand awareness among customers and distributors of computer hardware.
  • On Hardware and Hardware Models for Embedded Real-Time Systems
    On Hardware and Hardware Models for Embedded Real-Time Systems. Jakob Engblom, Dept. of Information Technology, Uppsala University, P.O. Box 325, SE-751 05 Uppsala, Sweden. [email protected] / http://www.docs.uu.se/~jakob

    Abstract: When developing embedded real-time systems, designers and programmers rely on various forms of scheduling and timing analysis. At some point, all such analyses must account for the hardware used to obtain execution-time information, and if the analysis method does not accurately reflect the hardware characteristics, the result is likely to be a bad analysis and potentially a bad system. The choice of hardware, specifically the microprocessor or microcontroller to use, has a profound influence on the analyzability of a system. The timing behavior of a complex CPU core is very hard to understand (even without a cached memory system).

    ... effects if deadlines are missed or some other timing-related bugs manifest themselves. When building an embedded real-time system, the choice of hardware platform is very important to create an analyzable and predictable system. Also, the quality of the models of the hardware used in software tools is very important to the correctness of timing analysis and the integrity of the system. In this paper, we discuss some of the aspects of how to build hardware models that are correct vis-à-vis the hardware, and how to select hardware that allows real-time systems to be constructed in a reliable fashion. The purpose of this paper is to inspire some discussion regarding how real-time systems are designed and ...