UNITED STATES PATENT AND TRADEMARK OFFICE

BEFORE THE PATENT TRIAL AND APPEAL BOARD

ARM LTD. and ARM, INC., Petitioners,

v.

COMPLEX MEMORY, LLC, Patent Owner

IPR2019-00053
Patent 5,890,195

PATENT OWNER PRELIMINARY RESPONSE TO PETITION PURSUANT TO 37 C.F.R. § 42.107(a)

Table of Contents

I. INTRODUCTION
II. THE ’195 PATENT
   A. Overview of the ’195 Patent
   B. Prosecution History of the ’195 Patent
   C. Claim Construction for the ’195 Patent
III. THERE IS NO REASONABLE LIKELIHOOD THAT AT LEAST ONE OF THE CHALLENGED CLAIMS IS UNPATENTABLE
   A. GROUND 1 – Fukuda and Lin
      1. Brief Overview of Fukuda
      2. Brief Overview of Lin
      3. Analysis of Claim 6
         a. The combination of Fukuda and Lin fails to render obvious storing addresses in latches (a first storage type) for address lookup and storing data in registers (a second storage type)
         b. Petitioners’ Assertions Regarding “Latches” and “Registers” are Technically and Legally Flawed
      4. Ground 1 fails
   B. GROUND 2 – Fukuda, Lin, and Matsuda
   C. GROUND 3 – Smith and Horowitz
      1. Brief Overview of Smith
      2. Brief Overview of Horowitz
      3. Analysis of Claim 6
         a. The combination of Smith and Horowitz fails to render obvious storing addresses in latches (a first storage type) for address lookup and storing data in registers (a second storage type)
      4. Ground 3 fails
   D. GROUND 4 – Smith, Horowitz, and Matsuda
IV. PETITIONERS FAIL TO IDENTIFY ALL REAL PARTIES-IN-INTEREST
V. CONCLUSION

Table of Exhibits

Exhibits to Petition

Ex. 1001: U.S. Patent No. 5,890,195 (“the ’195 patent”)
Ex. 1002: File History for the ’195 patent
Ex. 1003: Declaration of Michael Shamos
Ex. 1004: Curriculum Vitae of Michael Shamos
Ex. 1005: U.S. Patent No. 5,619,676 to Fukuda et al. (“Fukuda”)
Ex. 1006: Alan Smith, Cache Memories, Computing Surveys, Vol. 14, No. 3, pp. 473-530 (Sep. 1982) (“Smith”)
Ex. 1007: U.S. Patent No. 5,423,019 to Lin et al. (“Lin”)
Ex. 1008: Excerpt (pp. 523-24, 542, 677) from: Paul Horowitz and Winfield Hill, The Art of Electronics, Second Edition, Cambridge University Press (1989) (“Horowitz”)
Ex. 1009: U.S. Patent No. 5,257,220 to Shin et al. (“Shin”)
Ex. 1010: U.S. Patent No. 5,509,132 to Matsuda et al. (“Matsuda”)
Ex. 1011: 4004 Single Chip 4-bit P-Channel Microprocessor, Intel Corporation, March 1987 (“4004 Datasheet”)
Ex. 1012: Texas Instruments Inc. v. Complex Memory LLC, IPR2018-00823, EX1012: Exhibit A of Plaintiff Complex Memory LLC’s Infringement Contentions pursuant to L.R. 3-1 for Complex Memory, LLC v. Texas Instruments, Inc. et al., Case No. 2:17-cv-699 (“Infringement Contentions”)
Ex. 1013: Texas Instruments Inc. v. Complex Memory LLC, IPR2018-00823, EX1013: Complaint for Patent Infringement for Complex Memory, LLC v. Texas Instruments, Inc. et al., Case No. 2:17-cv-699 (“Complaint”)
Ex. 1014: Merriam-Webster Collegiate Dictionary, 10th Ed., Merriam-Webster, Incorporated, 1993, p. 404 (“Merriam-Webster Dictionary”)
Ex. 1015: CLIPPER 32-Bit Microprocessor: Introduction to the CLIPPER Architecture, Fairchild (Mar. 1986) (“CLIPPER”)
Ex. 1016: Declaration of Rachel J. Watters regarding Smith (“Watters”)
Ex. 1017: Declaration of Dr. Sylvia Hall-Ellis regarding Horowitz (“Hall-Ellis”)
Ex. 1018: Texas Instruments Inc. v. Complex Memory LLC, IPR2018-00823, Paper No. 12 (PTAB Aug. 3, 2018)

Exhibits for POPR

Ex. 2001: Declaration of Steve Novak
Ex. 2002: Appendix A: Curriculum Vitae of Steve Novak
Ex. 2003: ARM Governance and Financial Report 2015, retrieved on February 5, 2019 from https://www.arm.com/company/investors/-/media/arm-com/company/Legacy%20Financial%20PDFs/ARMGFReport2015.pdf?la=en
Ex. 2004: “STM32 32-bit Arm Cortex MCUs,” retrieved on February 5, 2019 from www.st.com/en/microcontrollers/stm32-32-bit-arm-cortex-mcus.html
Ex. 2005: “ARM and Broadcom Extend Relationship With ARMv7 and ARMv8 Architecture Licenses,” retrieved on February 5, 2019 from www.arm.com/about/newsroom/arm-and-broadcom-extend-relationship-with-armv7-and-armv8-architecture-licenses.php
Ex. 2006: “Motorola’s new X8 ARM chip: The cornerstone of Google’s always-on Android vision,” retrieved on February 5, 2019 from www.extremetech.com/computing/162139-motorolas-new-x8-arm-chip-the-cornerstone-of-googles-always-on-android-vision
Ex. 2007: Complaint from Complex Memory, LLC v. Texas Instruments, Inc. et al., Case No. 2:17-cv-00699 (E.D. Tex. October 13, 2017)

I. INTRODUCTION

Patent Owner (“PO”) Complex Memory, LLC submits this Preliminary Response to the Petition for Inter Partes Review (“’195 Pet.”) filed by Petitioners ARM LTD. and ARM, INC. Petitioners challenge Claims 6, 7, and 8. Claim 6 is in independent format, and claims 7 and 8 each depend from claim 6.

The Board should dismiss the Petition in its entirety at least because, as PO shows below, (1) a dispositive claim element is entirely missing from the combination asserted in each of the Grounds of the Petition, and (2) Petitioners have failed to name at least one real party-in-interest.

II. THE ’195 PATENT

A. Overview of the ’195 Patent

U.S. Patent No. 5,890,195 (“the ’195 Patent”) is titled “DRAM with Integral SRAM Comprising a Plurality of Sets of Address Latches Each Associated with One of a Plurality of SRAM.” The ’195 Patent issued March 30, 1999 from United States Patent Application No. 08/855,944 and is a continuation-in-part of United States Patent No. 5,835,932, filed March 13, 1997.

Petitioners challenge Claims 6, 7, and 8. Claim 6 is in independent format. Claim 6, annotated as referred to herein, reads:

6. [Preamble] A method of accessing blocks of data in a memory having a plurality registers and a memory array, comprising the steps of:
[6a] receiving an address through an address port;
[6b] comparing the received address with addresses previously stored in each of a plurality of latches;
[6c] when a match occurs between the received address and a matching address stored in a one of the latches performing the substep of accessing a register corresponding to the latches storing the matching address through a data port;
[6d and 6e] when a match does not occur between the received address and an address stored in one of the latches, performing the substeps of: exchanging data between a location in the memory array addressed by the received address and a selected one of the registers; and storing the received address in one of the latches corresponding to the selected register;
[6f] modifying the received address to generate a modified address;
[6g] exchanging data between a location in the memory array addressed by the modified address and a second selected one of the registers; and
[6h] storing the modified address in of one of the latches corresponding to the second selected register.

’195 Patent at 19:8-33.

The ’195 Patent describes a system that includes registers to store data that was previously read from a memory (e.g., a DRAM cell array 402). ’195 Patent at 3:57-4:4, 8:51-55, 9:26-36, 10:10-12, and 11:66-12:22. The registers provide faster access to cached data as compared to retrieving the data from the memory. ’195 Patent at 3:57-4:4. When data is read from the memory, the data is stored in the registers, and the address of the data is stored in address latches (e.g., last row read (LRR) latches 502 or LRR latches 701) that are associated with the particular registers that store the data.
In addition to reading the requested data, storing that data in the registers, and storing its address in the latches, the system also caches non-requested data from the memory. ’195 Patent at 10:54-11:18. In an example, the system modifies the address of the requested data and uses the modified address to access the memory. Id. The non-requested data read from the memory is stored in the registers. Id. The modified address is stored in latches associated with the non-requested data. Id.

When a subsequent request for data is received, the address of the newly requested data is compared to the addresses in the latches. An address match indicates that the newly requested data is currently stored in the registers and can be accessed from the registers with reduced latency as compared to the memory. Because data is typically accessed within temporally or spatially adjacent areas in the memory, there is a substantial probability that an address match will occur. ’195 Patent at 11:31-43.

B. Prosecution History of the ’195 Patent

United States Patent Application No. 08/855,944 (“the ’944 Application”), which eventually issued as the ’195 Patent, was filed on May 14, 1997 as a continuation-in-part of United States Patent No. 5,835,932, filed March 13, 1997. The ’944 Application received a first-action notice of allowance on December 3, 1998, without any rejections based on prior art.

C. Claim Construction for the ’195 Patent

Petitioners state that the challenged claims of the ’195 Patent should be interpreted under the Phillips standard.
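For illustration only, the method recited in claim 6 (compare the received address against the latched addresses; on a match, access the corresponding register; on a miss, exchange data with the memory array, latch the address, then prefetch at a modified address into a second register) can be sketched in Python. This is a hypothetical model, not the patent's implementation: the class name, the number of latch/register pairs, the round-robin replacement policy, and the address-modification rule (incrementing to the next row) are all illustrative assumptions.

```python
NUM_SETS = 4  # number of latch/register pairs (illustrative assumption)

class LatchRegisterCache:
    """Hypothetical model of the access method of claim 6 of the '195 Patent."""

    def __init__(self, memory):
        self.memory = memory                # models the DRAM cell array
        self.latches = [None] * NUM_SETS    # address latches (first storage type)
        self.registers = [None] * NUM_SETS  # data registers (second storage type)
        self.next_victim = 0                # round-robin replacement (assumption)

    def _select(self):
        # Select a latch/register pair to reuse; the policy is an assumption,
        # not a detail taken from the patent.
        i = self.next_victim
        self.next_victim = (self.next_victim + 1) % NUM_SETS
        return i

    def access(self, address):
        # [6a, 6b] receive an address and compare it with the latched addresses
        for i, latched in enumerate(self.latches):
            if latched == address:
                # [6c] on a match, access the corresponding register
                return self.registers[i]
        # [6d, 6e] on a miss, exchange data between the memory array and a
        # selected register, and latch the received address
        i = self._select()
        self.registers[i] = self.memory[address]
        self.latches[i] = address
        data = self.registers[i]
        # [6f] modify the received address (next row is an assumed modification)
        modified = address + 1
        # [6g, 6h] exchange the non-requested data into a second selected
        # register and latch the modified address
        j = self._select()
        self.registers[j] = self.memory[modified]
        self.latches[j] = modified
        return data
```

A subsequent request for the modified address then hits in the latches and is served from a register rather than the memory array, which is the reduced-latency behavior the overview above describes.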