Dissertation

Total Pages: 16

File Type: PDF, Size: 1020 KB

Dissertation

An Agile and Rapidly Reconfigurable Test Bed for Hardware-Based Security Features

by Daniel Smith Beard

Master of Science, Computer Information Systems, Florida Institute of Technology, 2009
Bachelor of Science, Engineering (Electrical Option), University of South Florida, 1980

A dissertation submitted to the College of Engineering and Computer Science at Florida Institute of Technology in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science.

Melbourne, Florida
December, 2019

© Copyright 2019 Daniel Smith Beard. All Rights Reserved. The author grants permission to make single copies.

We the undersigned committee hereby approve the attached dissertation, "An Agile and Rapidly Reconfigurable Test Bed for Hardware-Based Security Features" by Daniel Smith Beard.

Marco Carvalho, Ph.D., Professor and Dean, College of Engineering and Science (Committee Chair)
Stephen K. Cusick, J.D., Associate Professor, College of Aeronautics (Outside Committee Member)
William H. Allen, Ph.D., Associate Professor, Computer Engineering and Sciences (Committee Member)
Heather Crawford, Ph.D., Assistant Professor, Computer Engineering and Sciences (Committee Member)
Philip J. Bernhard, Ph.D., Associate Professor and Department Head, Computer Engineering and Sciences

ABSTRACT

Title: An Agile and Rapidly Reconfigurable Test Bed for Hardware-Based Security Features
Author: Daniel Smith Beard
Major Advisor: Marco Carvalho, Ph.D.

Current general-purpose computing hardware, and the software that runs on it, have evolved over more than a half century from large mainframe systems in corporate, military, and research use to interconnected commodity devices more common than wrist watches. Computational power, storage capacity, and communication capabilities have increased in wonderful and staggering ways; however, when we read about the latest vulnerability or data breach it seems that cybersecurity is stuck somewhere between 1983, when Matthew Broderick first heard a synthesized voice ask "Shall we play a game?" [93], and 1988, when the Morris worm hit the Internet [116]. Multics [82] and Scomp [54] had a shot at establishing secure computing, but functionality, cost, and ease of use have largely trumped security so far. For the present, as Jaeger said, "... security features fail to protect the system in a myriad of ways." [77] This study and research effort briefly surveys the roots of secure computing and the present-day vulnerabilities that contribute to insecurity, and presents technological changes that could help stem this tide. We have gleaned a collection of demonstrated security features that could be hardware-based, and therefore hardware-enforced, but would require no adaptation of existing legacy applications beyond recompiling already-existing high-level source code. In this effort we demonstrate a prototype CPU with hardware-based security features that is amenable to FPGA or ASIC implementation, and provide a hardware testbed based on DARPA's Cyber Grand Challenge cybersecurity "experimentation ecosystem" [39]. This will answer the question of whether hardware-based security features can produce a significant security improvement in unadapted legacy C/C++ code, and provide a testbed for further evaluation and testing of hardware-based features.

Table of Contents

Abstract
List of Figures
List of Tables
Acknowledgments
Dedication

1 Introduction
2 Foundations
  2.1 Foci in Security
  2.2 Security Defined
  2.3 Problem Statement
    2.3.1 Narrowing the Focus - Security in Hardware
  2.4 Legacy Secure (Trusted) Systems
    2.4.1 Multics
    2.4.2 Honeywell Scomp
    2.4.3 Drawbacks
  2.5 A Modern Trusted Computer System Effort - CHERI
    2.5.1 Object-Capability Security Overview
    2.5.2 Object-Capability Hardware Enhancements
    2.5.3 Memory Protection in the Object-Capability Model
    2.5.4 CHERI Object-Capability Example
    2.5.5 Hardware-Software Integration
    2.5.6 Relevance to the Secure Processor
  2.6 Common Vulnerability Patterns for Modern Computers
    2.6.1 Definition of Terms
    2.6.2 From Attack to Intrusion
  2.7 Stack Based Buffer Overflows
    2.7.1 Stack Basics
      2.7.1.1 Stack Operations - Physical View
      2.7.1.2 Stack Operations - Computer Memory Representation
      2.7.1.3 Stack Width and Growth Direction
      2.7.1.4 Stack Operation in Procedure Calls - CALL, RET vs. PUSH, POP
      2.7.1.5 Stack Use for Parameters and Variables
    2.7.2 Stack Overflow Details
    2.7.3 Co-mingled Control and Data on a Common Stack
    2.7.4 'Reverse' Stack Growth
    2.7.5 Stack Based Buffer Overflow Protection Techniques
      2.7.5.1 Stack Execution Prevention
      2.7.5.2 Stack Canaries
      2.7.5.3 Return Address Protection or Repair
      2.7.5.4 Reverse Stack
  2.8 Non-Stack Buffer Overflows
  2.9 Return- and Jump-Oriented Programming
    2.9.1 Gadgets
    2.9.2 Return-Oriented Programming Details
    2.9.3 Jump-Oriented Programming Details
    2.9.4 Control Flow Protection
  2.10 Code Injection
  2.11 Memory Protection
  2.12 Address Space Layout Randomization
  2.13 Harvard Architecture
  2.14 Instruction Set Architecture
  2.15 Instruction Set Randomization
  2.16 Hardware-enhanced Authentication
    2.16.1 Random Number Sources
      2.16.1.1 Physical Uncloneable Functions
  2.17 Current State of the Art Summary
3 Secure Host CPU
  3.1 Introduction
  3.2 Secure Host CPU Design Features
    3.2.1 High Level Architecture
    3.2.2 Memory Architecture
    3.2.3 Register Architecture
    3.2.4 Stack Architecture
      3.2.4.1 Reverse Stack Growth
      3.2.4.2 Dual Stack
    3.2.5 Instruction Set Architecture
      3.2.5.1 LAND Group
    3.2.6 Instruction Set Randomization
  3.3 Field Programmable Gate Arrays
    3.3.1 Example Logic Functions in FPGAs
    3.3.2 FPGA Manufacture and Function Implementation
    3.3.3 Hardware Description Language
    3.3.4 Possible Alternatives to FPGAs
  3.4 Exception Handling
  3.5 Application Summary
4 Secure Host CPU Implementation
  4.1 Early FPGA Prototype
    4.1.1 C99 Emulator
  4.2 Review and Introduction
  4.3 CPU High Level Architecture
    4.3.1 Secure Host CPU Emulator
    4.3.2 Data Types in the C99 Emulator
  4.4 Memory Architecture
  4.5 Register Architecture
    4.5.1 Register Implementation
      4.5.1.1 Register Identifiers
    4.5.2 Flags Register (eflags)
  4.6 Stack Architecture
    4.6.1 Reverse Stacks
    4.6.2 Dual Stacks
  4.7 Instruction Pointer Management
  4.8 Instruction Set Architecture
    4.8.1 Instruction Word Overview
    4.8.2 Instruction Word Architecture
    4.8.3 Opclass
    4.8.4 Transfer Width
    4.8.5 Arguments, Operands, and Operand Types
    4.8.6 Operands 1 and 2 Differences
    4.8.7 Instruction Transfer and Argument Sizes
    4.8.8 Relocation Flag
    4.8.9 Instruction Word Binary Implementation
  4.9 Other Security Features of the ISA
    4.9.1 Jump/Land Flow Control Instructions
    4.9.2 Other Flow Control Instructions
    4.9.3 Instruction Set Density
    4.9.4 Instruction Set Randomization
      4.9.4.1 ISR Keys
5 CPU Testbed and Evaluation
  5.1 Linux Host
  5.2 Secure Host Tool Chain
    5.2.1 Secure Host CPU Compiler
      5.2.1.1 IR to Assembly Register Allocation
    5.2.2 Assembler
      5.2.2.1 Assembler Dictionaries
      5.2.2.2 Assembler Field Codes
    5.2.3 Relocating Loader
    5.2.4 Console Monitor/Debugger
  5.3 OS Support for the Secure Host CPU
  5.4 Performance Tuning of the Emulator
  5.5 Proof of Concept Demonstration
  5.6 Demonstration Results
    5.6.1 Invalid Instructions
    5.6.2 ROP and JOP Gadget Reduction
    5.6.3 Control Flow Protection
  5.7 Proof of Concept Demonstration Summary
6 DARPA CGC and DECREE
  6.1 DARPA Cyber Grand Challenge
  6.2 DARPA DECREE
    6.2.1 DECREE OS Syscalls
    6.2.2 DECREE Syscall Interface
7 Future Work and Concluding Remarks
  7.1 Architecture Retrospectives
    7.1.1 x86 Patterning
    7.1.2 Concurrent Registers
    7.1.3 Additional Registers
    7.1.4 Instruction Set Architecture Changes
  7.2 Testbed Enhancements ...
Recommended publications
  • SIMD Extensions
    SIMD Extensions
    A compilation of Wikipedia articles covering SIMD and its x86 extensions: SIMD, MMX (instruction set), 3DNow!, Streaming SIMD Extensions, SSE2, SSE3, SSSE3, SSE4, SSE5, Advanced Vector Extensions, the CVT16 instruction set, and the XOP instruction set.
    Flynn's taxonomy:
                       Single instruction   Multiple instruction
      Single data      SISD                 MISD
      Multiple data    SIMD                 MIMD
    Single instruction, multiple data (SIMD) is a class of parallel computers in Flynn's taxonomy. It describes computers with multiple processing elements that perform the same operation on multiple data simultaneously; thus, such machines exploit data-level parallelism.
    History: The first use of SIMD instructions was in vector supercomputers of the early 1970s, such as the CDC Star-100 and the Texas Instruments ASC, which could operate on a vector of data with a single instruction. Vector processing was especially popularized by Cray in the 1970s and 1980s. Vector-processing architectures are now considered separate from SIMD machines, based on the fact that vector machines processed the vectors one word at a time through pipelined processors (though still based on a single instruction), whereas modern SIMD machines process all elements of the vector simultaneously.[1] The first era of modern SIMD machines was characterized by massively parallel processing-style supercomputers such as the Thinking Machines CM-1 and CM-2. These machines had many limited-functionality processors that would work in parallel.
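    The single-instruction-on-multiple-data idea described above can be shown concretely with x86 intrinsics. The following minimal C sketch is my own illustration (not part of the compilation above); it assumes a compiler and CPU with SSE2 support and adds four 32-bit integers with one vector instruction.

    /* Minimal SIMD illustration: one SSE2 add processes four 32-bit lanes at once.
     * Assumes an x86 target with SSE2 enabled (e.g., gcc -msse2 simd_demo.c). */
    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stdio.h>

    int main(void) {
        int a[4]   = {1, 2, 3, 4};
        int b[4]   = {10, 20, 30, 40};
        int sum[4];

        __m128i va = _mm_loadu_si128((const __m128i *)a);  /* load 4 ints */
        __m128i vb = _mm_loadu_si128((const __m128i *)b);
        __m128i vs = _mm_add_epi32(va, vb);                /* one instruction, 4 adds */
        _mm_storeu_si128((__m128i *)sum, vs);

        for (int i = 0; i < 4; i++)
            printf("%d ", sum[i]);   /* prints: 11 22 33 44 */
        printf("\n");
        return 0;
    }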
  • Intermediate X86 Part 2
    Intermediate x86 Part 2
    Xeno Kovah, 2010 (xkovah at gmail). All materials are licensed under a Creative Commons "Share Alike" license: http://creativecommons.org/licenses/by-sa/3.0/
    Paging
    • Previously we discussed how segmentation translates a logical address (segment selector + offset) into a 32-bit linear address.
    • When paging is disabled, linear addresses map 1:1 to physical addresses.
    • When paging is enabled, a linear address must be translated to determine the physical address it corresponds to.
    • It's called "paging" because physical memory is divided into fixed-size chunks called pages.
    • The analogy is to books in a library: when you need to find some information, first you go to the library, where you look up the relevant book; then you select the book and look up a specific page; from there you maybe look for a specific sentence ... or "word"? ;) The internets ruined the analogy!
    Notes and Terminology
    • All of the figures or references to the manual in this section refer to the Nov 2008 manual (available in the class materials), because I think the manuals from 2008 and earlier are organized and presented much more clearly than the 2009 and later manuals.
    • When I refer to a "frame" it means a page-sized chunk of physical memory.
    • When paging is enabled, a "linear address" is the same thing as a "virtual memory address" or "virtual address".
    The terrifying truth revealed! And now you know ... the rest of the story. (Nah, it's not so bad. Good day. :))
    Virtual Memory
    • When paging is enabled, the 32-bit linear address space can be mapped to a physical address space of less than 32 bits.
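    As a companion to the paging bullets above, here is an illustrative C sketch (mine, not from the slides) of the classic two-level 32-bit translation: a linear address splits into a 10-bit page-directory index, a 10-bit page-table index, and a 12-bit offset. The directory and table here are hypothetical in-program arrays standing in for the in-memory structures a real MMU walks via CR3.

    /* Illustrative sketch of a 32-bit x86 two-level page walk (simulation only). */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_PRESENT 0x1u

    static uint32_t page_directory[1024]; /* indexed by bits 31:22 of the linear address */
    static uint32_t page_table[1024];     /* indexed by bits 21:12 (toy: one shared table) */

    static uint32_t translate(uint32_t linear) {
        uint32_t dir_idx = (linear >> 22) & 0x3FFu;   /* top 10 bits    */
        uint32_t tbl_idx = (linear >> 12) & 0x3FFu;   /* middle 10 bits */
        uint32_t offset  = linear & 0xFFFu;           /* low 12 bits    */

        uint32_t pde = page_directory[dir_idx];
        if (!(pde & PAGE_PRESENT)) { printf("page fault (PDE)\n"); return 0; }

        uint32_t pte = page_table[tbl_idx];           /* real HW reads the table the PDE points to */
        if (!(pte & PAGE_PRESENT)) { printf("page fault (PTE)\n"); return 0; }

        return (pte & 0xFFFFF000u) | offset;          /* frame base + page offset */
    }

    int main(void) {
        uint32_t linear = 0x00401234u;                  /* dir 0x001, table 0x001, offset 0x234 */
        page_directory[0x001] = PAGE_PRESENT;           /* mark the directory entry present */
        page_table[0x001] = 0x0009A000u | PAGE_PRESENT; /* map the page to frame 0x0009A000 */
        printf("linear 0x%08X -> physical 0x%08X\n", linear, translate(linear));
        return 0;
    }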
  • Multiprocessing Contents
    Multiprocessing
    Contents:
    1 Multiprocessing
      1.1 Pre-history
      1.2 Key topics
        1.2.1 Processor symmetry
        1.2.2 Instruction and data streams
        1.2.3 Processor coupling
        1.2.4 Multiprocessor Communication Architecture
      1.3 Flynn's taxonomy
        1.3.1 SISD multiprocessing
        1.3.2 SIMD multiprocessing
        1.3.3 MISD multiprocessing
        1.3.4 MIMD multiprocessing
      1.4 See also
      1.5 References
    2 Computer multitasking
      2.1 Multiprogramming
      2.2 Cooperative multitasking
      2.3 Preemptive multitasking
      2.4 Real time
      2.5 Multithreading
      2.6 Memory protection
      2.7 Memory swapping
      2.8 Programming
      2.9 See also
      2.10 References
  • X86 Memory Protection and Translation
    x86 Memory Protection and Translation
    COMP 790: OS Implementation, Don Porter (2/5/20). The course's logical diagram places this lecture at the hardware layer, alongside binary formats, memory allocators, threads, system calls, RCU, file system, networking, sync, memory management, device drivers, scheduler, interrupts, disk, net, and consistency. Today's lecture focuses on the hardware ABI.
    Lecture Goal
    • Understand the hardware tools available on a modern x86 processor for manipulating and protecting memory
    • Lab 2: You will program this hardware
    • Apologies: the material can be a bit dry, but important; plus, the slides will be a good reference
    • But, cool tech tricks: How does thread-local storage (TLS) work? An actual (and tough) Microsoft interview question
    Undergrad Review
    • What is: virtual memory? segmentation? paging?
    Memory Mapping
    • A program expects (*x) to always be at address 0x1000 (int *x = 0x1000;). Process 1 and Process 2 each see virtual address 0x1000, yet there is only one physical address 0x1000, so each process's virtual memory is mapped separately onto physical memory.
    Two System Goals
    1) Provide an abstraction of contiguous, isolated virtual memory to a program
    2) Prevent illegal operations: prevent access to other application or OS memory; detect failures early (e.g., segfault on address 0); more recently, prevent exploits that try to execute program data
    Outline / x86 Processor Modes
    • x86 ...
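    The thread-local storage teaser in the lecture goals can be demonstrated with a few lines of C. This sketch is my own (not from the slides); it uses the GCC/Clang __thread storage class, which on x86 is typically implemented by pointing a segment register (fs or gs) at each thread's TLS block so that each thread sees its own copy of the variable.

    /* Thread-local storage illustration: each thread gets its own copy of counter.
     * Build with: gcc -pthread tls_demo.c */
    #include <pthread.h>
    #include <stdio.h>

    static __thread int counter = 0;   /* one instance per thread, not shared */

    static void *worker(void *arg) {
        int id = *(int *)arg;
        for (int i = 0; i < 5; i++)
            counter++;                 /* touches only this thread's copy */
        printf("thread %d: counter = %d\n", id, counter);  /* always prints 5 */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;
        pthread_create(&t1, NULL, worker, &id1);
        pthread_create(&t2, NULL, worker, &id2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("main: counter = %d\n", counter);  /* main's copy is still 0 */
        return 0;
    }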
  • The Technology Behind Crusoe™ Processors
    The Technology Behind Crusoe™ Processors: Low-power x86-Compatible Processors Implemented with Code Morphing™ Software
    Alexander Klaiber, Transmeta Corporation, January 2000
    Property of: Transmeta Corporation, 3940 Freedom Circle, Santa Clara, CA 95054 USA, (408) 919-3000, http://www.transmeta.com
    The information contained in this document is provided solely for use in connection with Transmeta products, and Transmeta reserves all rights in and to such information and the products discussed herein. This document should not be construed as transferring or granting a license to any intellectual property rights, whether express, implied, arising through estoppel or otherwise. Except as may be agreed in writing by Transmeta, all Transmeta products are provided "as is" and without a warranty of any kind, and Transmeta hereby disclaims all warranties, express or implied, relating to Transmeta's products, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose and non-infringement of third party intellectual property. Transmeta products may contain design defects or errors which may cause the products to deviate from published specifications, and Transmeta documents may contain inaccurate information. Transmeta makes no representations or warranties with respect to the accuracy or completeness of the information contained in this document, and Transmeta reserves the right to change product descriptions and product specifications at any time, without notice. Transmeta products have not been designed, tested, or manufactured for use in any application where failure, malfunction, or inaccuracy carries a risk of death, bodily injury, or damage to tangible property, including, but not limited to, use in factory control systems, medical devices or facilities, nuclear facilities, aircraft, watercraft or automobile navigation or communication, emergency systems, or other applications with a similar degree of potential hazard.
  • Operating Systems & Virtualisation Security Knowledge Area
    Operating Systems & Virtualisation Security Knowledge Area, Issue 1.0
    Herbert Bos, Vrije Universiteit Amsterdam. EDITOR: Andrew Martin, Oxford University. REVIEWERS: Chris Dalton, Hewlett Packard; David Lie, University of Toronto; Gernot Heiser, University of New South Wales; Mathias Payer, École Polytechnique Fédérale de Lausanne.
    The Cyber Security Body Of Knowledge, www.cybok.org
    COPYRIGHT © Crown Copyright, The National Cyber Security Centre 2019. This information is licensed under the Open Government Licence v3.0. To view this licence, visit: http://www.nationalarchives.gov.uk/doc/open-government-licence/ When you use this information under the Open Government Licence, you should include the following attribution: CyBOK © Crown Copyright, The National Cyber Security Centre 2018, licensed under the Open Government Licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/. The CyBOK project would like to understand how the CyBOK is being used and its uptake. The project would like organisations using, or intending to use, CyBOK for the purposes of education, training, course development, professional development etc. to contact it at contact@cybok.org to let the project know how they are using CyBOK.
    Issue 1.0 is a stable public release of the Operating Systems & Virtualisation Security Knowledge Area. However, it should be noted that a fully-collated CyBOK document which includes all of the Knowledge Areas is anticipated to be released by the end of July 2019. This will likely include updated page layout and formatting of the individual Knowledge Areas.
    INTRODUCTION: In this Knowledge Area, we introduce the principles, primitives and practices for ensuring security at the operating system and hypervisor levels.
  • Chapter 3 Protected-Mode Memory Management
    CHAPTER 3: PROTECTED-MODE MEMORY MANAGEMENT
    This chapter describes the Intel 64 and IA-32 architecture's protected-mode memory management facilities, including the physical memory requirements, segmentation mechanism, and paging mechanism. See also: Chapter 5, "Protection" (for a description of the processor's protection mechanism) and Chapter 20, "8086 Emulation" (for a description of memory addressing protection in real-address and virtual-8086 modes).
    3.1 MEMORY MANAGEMENT OVERVIEW
    The memory management facilities of the IA-32 architecture are divided into two parts: segmentation and paging. Segmentation provides a mechanism of isolating individual code, data, and stack modules so that multiple programs (or tasks) can run on the same processor without interfering with one another. Paging provides a mechanism for implementing a conventional demand-paged, virtual-memory system where sections of a program's execution environment are mapped into physical memory as needed. Paging can also be used to provide isolation between multiple tasks. When operating in protected mode, some form of segmentation must be used. There is no mode bit to disable segmentation. The use of paging, however, is optional. These two mechanisms (segmentation and paging) can be configured to support simple single-program (or single-task) systems, multitasking systems, or multiple-processor systems that use shared memory. As shown in Figure 3-1, segmentation provides a mechanism for dividing the processor's addressable memory space (called the linear address space) into smaller protected address spaces called segments. Segments can be used to hold the code, data, and stack for a program or to hold system data structures (such as a TSS or LDT).
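    To make the segmentation half of this overview concrete, the following simplified C sketch (my own, not from the manual) shows how a logical address's offset is checked against a segment limit and added to the segment base to form a linear address. A real IA-32 descriptor also encodes type, privilege level, and granularity, which are omitted here.

    /* Simplified segmentation: logical (segment, offset) -> 32-bit linear address. */
    #include <stdint.h>
    #include <stdio.h>

    struct segment_descriptor {
        uint32_t base;    /* where the segment starts in the linear address space */
        uint32_t limit;   /* size of the segment in bytes, minus one */
    };

    static int logical_to_linear(const struct segment_descriptor *seg,
                                 uint32_t offset, uint32_t *linear) {
        if (offset > seg->limit)
            return -1;                     /* outside the segment: #GP fault on real hardware */
        *linear = seg->base + offset;      /* IA-32 arithmetic wraps modulo 2^32 */
        return 0;
    }

    int main(void) {
        struct segment_descriptor data_seg = { .base = 0x00100000u, .limit = 0x0000FFFFu };
        uint32_t linear;
        if (logical_to_linear(&data_seg, 0x1234u, &linear) == 0)
            printf("offset 0x1234 -> linear 0x%08X\n", linear);   /* 0x00101234 */
        if (logical_to_linear(&data_seg, 0x20000u, &linear) != 0)
            printf("offset 0x20000 -> protection fault (beyond limit)\n");
        return 0;
    }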
  • X86 Memory Protection and Translation
    x86 Memory Protection and Translation
    Don Porter, CSE 506
    Lecture Goal
    • Understand the hardware tools available on a modern x86 processor for manipulating and protecting memory
    • Lab 2: You will program this hardware
    • Apologies: material can be a bit dry, but important; plus, the slides will be a good reference
    • But, cool tech tricks: How does thread-local storage (TLS) work? An actual (and tough) Microsoft interview question
    Undergrad Review
    • What is: virtual memory? segmentation? paging?
    Two System Goals
    1) Provide an abstraction of contiguous, isolated virtual memory to a program
    2) Prevent illegal operations: prevent access to other application or OS memory; detect failures early (e.g., segfault on address 0); more recently, prevent exploits that try to execute program data
    Outline
    • x86 processor modes, x86 segmentation, x86 page tables, software vs. hardware mechanisms, advanced features, interesting applications/problems
    x86 Processor Modes
    • Real mode - walks and talks like a really old x86 chip: state at boot, 20-bit address space, direct physical memory access, segmentation available (no paging)
    • Protected mode - standard 32-bit x86 mode: segmentation and paging, privilege levels (separate user and kernel)
    • Long mode - 64-bit mode (aka amd64, x86_64, etc.): very similar to 32-bit protected mode, but bigger; restricts segmentation use; garbage-collects deprecated instructions; chips can still run in protected mode with old instructions
    Translation Overview
    • Virtual address 0xdeadbeef -> (segmentation) -> linear address 0x0eadbeef -> (paging) -> physical address 0x6eadbeef; protected/long mode only
    • Segmentation cannot be disabled! But it can be a no-op (aka flat mode)
    x86 Segmentation
    • A segment has: a base address (linear address), a length, and a type (code, data, etc.)
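    The translation-overview example above (0xdeadbeef to 0x0eadbeef to 0x6eadbeef) can be reproduced with a small C sketch of the two-stage pipeline. The segment base and the single page mapping below are hypothetical values chosen only so the arithmetic matches the slide's example addresses.

    /* Two-stage x86-style address translation using the slide's example addresses.
     * The segment base (0x30000000) and the page mapping are made-up values chosen
     * so that 0xdeadbeef -> 0x0eadbeef -> 0x6eadbeef. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t segmentation(uint32_t virt, uint32_t seg_base) {
        return virt + seg_base;              /* wraps modulo 2^32, as on IA-32 */
    }

    static uint32_t paging(uint32_t linear) {
        uint32_t vpn    = linear >> 12;      /* virtual page number */
        uint32_t offset = linear & 0xFFFu;   /* offset within the 4 KiB page */
        /* Pretend the page tables map virtual page 0x0eadb to physical frame 0x6eadb. */
        uint32_t pfn = (vpn == 0x0eadbu) ? 0x6eadbu : vpn;
        return (pfn << 12) | offset;
    }

    int main(void) {
        uint32_t va  = 0xdeadbeefu;
        uint32_t lin = segmentation(va, 0x30000000u);  /* -> 0x0eadbeef */
        uint32_t pa  = paging(lin);                    /* -> 0x6eadbeef */
        printf("virtual 0x%08X -> linear 0x%08X -> physical 0x%08X\n", va, lin, pa);
        return 0;
    }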
  • Crusoe Processor Model TM3120
    Crusoe™ Processor Model TM3120
    Features:
    • VLIW processor and x86 Code Morphing™ software provide an x86-compatible mobile platform solution
    • Processor core operates at 333, 366, and 400 MHz
    • Integrated 64K-byte instruction cache and 32K-byte data cache
    • Integrated northbridge core logic features facilitate compact system designs
    • SDR SDRAM memory controller with 66-133 MHz, 3.3V interface
    • PCI bus controller (PCI 2.1 compliant) with 33 MHz, 3.3V interface
    • Advanced power management features and very-low power operation extend mobile battery life
    • Full System Management Mode (SMM) support
    • Compact 474-pin ceramic BGA package
    The Transmeta Crusoe Processor is a very-low power, high-speed microprocessor based on an advanced VLIW core architecture. When used in conjunction with Transmeta's x86 Code Morphing software, the Crusoe Processor provides x86-compatible software execution using dynamic binary code translation, without requiring code recompilation. In addition to the VLIW core, the processor incorporates a 64K-byte instruction cache, 32K-byte data cache, 64-bit SDR SDRAM memory controller, and 32-bit PCI controller. These additional functional units, which are typically part of the core system logic that surrounds the microprocessor, allow the Crusoe Processor to provide a highly integrated and cost-effective platform solution for the x86 mobile market. The processor core operates from a 1.5V supply, resulting in very low power consumption, even at high operating frequencies. Crusoe processor power consumption during typical operation is as low as 15 milliwatts. Transmeta, Crusoe, and Code Morphing are trademarks of Transmeta Corporation. (1/18/00, Transmeta Corporation)
    1.0 Architecture: The Crusoe Processor incorporates integer and floating point execution units, instruction and data caches, a memory management unit, and multimedia instructions.
  • Computer Architectures an Overview
    Computer Architectures: An Overview
    A compilation of Wikipedia articles: Microarchitecture, x86, PowerPC, IBM POWER, MIPS architecture, SPARC, ARM architecture, DEC Alpha, AlphaStation, AlphaServer, Very long instruction word, Instruction-level parallelism, and Explicitly parallel instruction computing.
    Microarchitecture: In computer engineering, microarchitecture (sometimes abbreviated to µarch or uarch), also called computer organization, is the way a given instruction set architecture (ISA) is implemented on a processor. A given ISA may be implemented with different microarchitectures.[1] Implementations might vary due to different goals of a given design or due to shifts in technology.[2] Computer architecture is the combination of microarchitecture and instruction set design.
    Relation to instruction set architecture: The ISA is roughly the same as the programming model of a processor as seen by an assembly language programmer or compiler writer. The ISA includes the execution model, processor registers, and address and data formats, among other things. The microarchitecture includes the constituent parts of the processor and how these interconnect and interoperate to implement the ISA. [Figure: the Intel Core microarchitecture.] The microarchitecture of a machine is usually represented as (more or less detailed) diagrams that describe the interconnections of the various microarchitectural elements of the machine, which may be everything from single gates and registers to complete arithmetic logic units (ALUs) and even larger elements.
  • Architecture of VIA Isaiah (NANO)
    Architecture of VIA Isaiah (NANO)
    Jan Davidek, dav053, 2008/2009
    1. Introduction to the VIA Nano™ Processor
    The last few years have seen significant changes within the microprocessor industry, and indeed the entire IT landscape. Much of this change has been driven by three factors: the increasing focus of both business and consumer on energy efficiency, the rise of mobile computing, and the growing performance requirements of computing devices in a fast-expanding multimedia environment. In the microprocessor space, the traditional race for ever faster processing speeds has given way to one that factors in the energy used to achieve those speeds. Performance per watt is the new metric by which quality is measured, with all the major players endeavoring to increase the performance capabilities of their products while reducing the amount of energy that they require. Based on the recently announced VIA Isaiah Architecture, the new VIA Nano™ processor is a next-generation x86 processor that sets the standard in power efficiency for tomorrow's immersive internet experience. With advanced power and thermal management features helping to make it the world's most energy efficient x86 processor architecture, the VIA Nano processor also boasts ultra-modern functionality, high-performance computation and media processing, and enhanced VIA PadLock™ hardware security features. Augmenting the VIA C7® family of processors, the VIA Nano processor's pin compatibility extends the VIA processor platform portfolio, enabling OEMs to offer a wider range of products for different market segments, and furnishing them with the ability to upgrade device performance without incurring the time and cost expense associated with system redesign.
  • The Technology Behind Crusoe™ Processors: Low-Power X86-Compatible Processors Implemented with Code-Morphing™ Software”
    "The Technology Behind Crusoe™ Processors: Low-Power x86-Compatible Processors Implemented with Code-Morphing™ Software", Alexander Klaiber, Transmeta Corporation, January 2000. Presented by nick black <[email protected]> for cs8803dc, 2010-04-15. Watch this space for valuable addenda -- BIG MONEY! BIG PRIZES! YOU LOVE IT!!
    Motivation
    ● Commercial processor built around binary translation.
    ● Anyone remember the M680x0 emulator for PowerPC Macs? (*) How about PRISM's Epicode + Mica, or VEST/AEST on Alpha? (**)
    ● One of two major GP-VLIW implementations. (Yes, I absolutely am discounting Multiflow Computer's 125 sales.)
    ● Integrated design of architecture and translator.
    ● Interesting design space: an attempt to reduce the power and size of PC2001/x86; not targeted at the embedded space, where cost is a main motivator!
    "A Microprogrammed Implementation of an Architecture Simulation Language" (1977)
    (*) Tom Hormby's IBM, Apple, RISC, and the Roots of the PowerPC and Steven Levy's Insanely Great. (**) Paul Bolotoff's Alpha: The History in Facts and Comments.
    Anti-Motivation
    (*) 1917-10-23, paraphrased from John Reed's Ten Days That Shook the World (1919). Pull over; that table's too fat (woop woop).
    Sources: Transmeta product datasheets, UIUC CS433 "Processor Presentation Series" notes for Transmeta Crusoe, sandpile.org IA-32 Implementation Guides for Crusoe/Efficeon.
    Initial reactions, pre-paper:
    ● Anyone can run an x86 translator/emulator.
    ● Why wouldn't Intel just build this instead? P6 was doing hardware CISC-to-RISC (CRISC) in 1995 ...though dissipating