FAST HASHING IN CUDA

Neville Walo
Department of Computer Science, ETH Zürich
Zürich, Switzerland

ABSTRACT

Hash functions, such as SHA-256 [1], are extensively used in cryptographic applications. However, SHA-256 cannot be parallelized due to sequential dependencies. Using the Sarkar-Schellenberg composition principle [2] in combination with SHA-256 gives rise to PARSHA-256 [3], a parallel collision resistant hash function. We present efficient implementations for both SHA-256 and PARSHA-256 in CUDA. Our results demonstrate that for large messages PARSHA-256 can significantly outperform SHA-256.

1. INTRODUCTION

Hash functions are one of the most important operations in cryptographic applications, such as digital signature algorithms, keyed-hash message authentication codes, encryption and the generation of random numbers. Furthermore, hashing is also used in many data structures and applications, such as hash tables and checksums for comparing files. The development and implementation of fast hash functions is therefore of great importance for many areas.

With the recent rise of cryptocurrencies, hashing as fast as possible has become more important than ever. Many cryptocurrencies are based on the proof-of-work principle [4], in which one party (the prover) proves to others (the verifiers) that a certain amount of computational effort has been expended for some purpose. For example, in the Bitcoin protocol [5], users have to find a nonce such that the SHA-256 hash of the nonce within the current block is smaller than the current target of the network. Since only the first miner who finds a nonce that fulfills the target receives a reward, it is important to try out many SHA-256 hashes as fast as possible. Today mostly ASICs (application-specific integrated circuits) are used to mine Bitcoins, as ASICs work more efficiently and compute more hashes per second than traditional hardware.

While the original SHA-256 implementation, as proposed in the Secure Hash Standard (SHS) [1] by the National Institute of Standards and Technology (NIST), does not allow for much parallelization due to sequential dependencies, it is possible to use the compression function of SHA-256 along with the Sarkar-Schellenberg composition principle [2] to create a parallel collision resistant hash function called PARSHA-256 [3].

In this work, we try to accelerate hashing in CUDA [6]. We have divided this project into two sub-projects. The first one is the Bitcoin scenario, with the goal of calculating many independent SHA-256 computations in parallel. The second case is PARSHA-256, where the goal is to implement the proposed algorithm efficiently in CUDA.

Related work. To our knowledge there is no comparable implementation of PARSHA-256 which runs on a GPU. There exists only the implementation of the original paper, which uses multithreading [3]. On the other hand, there are countless implementations of Bitcoin miners in CUDA [7, 8], as this was the most prominent way to mine Bitcoins before ASICs were introduced.

Our contribution is an implementation of SHA-256 and PARSHA-256 in CUDA.
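To make the first sub-project concrete, the following CUDA sketch illustrates the brute-force search described above: every thread hashes one candidate nonce and reports a nonce whose digest falls below the target. The kernel and helper names (search_nonce, below_target, sha256) are ours, the sha256 device function is only declared rather than implemented, and real Bitcoin miners hash the 80-byte block header twice with SHA-256 and use a different byte order for the comparison; those details are omitted here.

#include <cstdint>

// Sketch only: sha256() is assumed to be a __device__ function that hashes
// `len` bytes into a 32-byte digest; its implementation is not shown here.
__device__ void sha256(const uint8_t* data, int len, uint8_t digest[32]);

// Returns true if digest < target, reading both as 256-bit big-endian numbers.
__device__ bool below_target(const uint8_t digest[32], const uint8_t target[32]) {
    for (int i = 0; i < 32; ++i) {
        if (digest[i] < target[i]) return true;
        if (digest[i] > target[i]) return false;
    }
    return false;
}

// Each thread tests one nonce of the range [base_nonce, base_nonce + gridDim.x * blockDim.x).
__global__ void search_nonce(const uint8_t* header76,   // block header without the 4-byte nonce
                             const uint8_t* target,     // 32-byte target
                             uint32_t base_nonce,
                             uint32_t* found_nonce)     // written when a valid nonce is found
{
    uint32_t nonce = base_nonce + blockIdx.x * blockDim.x + threadIdx.x;

    uint8_t candidate[80];
    for (int i = 0; i < 76; ++i) candidate[i] = header76[i];
    candidate[76] = nonce & 0xff;            // append the 32-bit nonce (byte order simplified)
    candidate[77] = (nonce >> 8) & 0xff;
    candidate[78] = (nonce >> 16) & 0xff;
    candidate[79] = (nonce >> 24) & 0xff;

    uint8_t digest[32];
    sha256(candidate, 80, digest);           // single SHA-256, as in the simplified scenario above

    if (below_target(digest, target))
        atomicExch(found_nonce, nonce);      // report any satisfying nonce
}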
2. BACKGROUND

This section gives an overview of how PARSHA-256 and SHA-256 work, mainly taken from the original sources [1, 3]. The focus is on the technical implementation (how the algorithm works), not on the theoretical properties (why the algorithm is secure). A word size of 32 bits is assumed.

2.1. SHA-256 [1]

The SHA-256 algorithm can be described in two stages: preprocessing and hash computation. Preprocessing involves padding a message, parsing the padded message into 512-bit blocks, and setting initialization values to be used in the hash computation. The hash computation generates a message schedule of 64 words from the padded message and uses that schedule, along with functions, constants, and operations, to iteratively generate a series of hash values. The final hash value generated by the hash computation is called a message digest and its length is 256 bits.

2.1.1. Preprocessing

Preprocessing consists of three steps: padding the message, parsing the message into message blocks, and setting the initial hash value.

Padding. The purpose of padding the input message is to ensure that the padded message is a multiple of 512 bits, since SHA-256 assumes a block size of 512 bits. Suppose that the length of the message, M, is ℓ bits. The bit "1" is appended to the end of the message, followed by k zero bits, where k is the smallest non-negative solution to the equation ℓ + 1 + k ≡ 448 mod 512. Then the 64-bit block that is equal to the number ℓ expressed in binary representation is appended to the message. This results in a message that can be divided into 512-bit blocks.

Parsing the Message. The message and its padding are parsed into N 512-bit blocks, M^(1), M^(2), ..., M^(N). Since the 512 bits of an input block may be expressed as sixteen 32-bit words, the first 32 bits of message block i are denoted M_0^(i).

Initial Hash Value. Before hash computation begins, the initial eight hash values, H_0^(0) up to H_7^(0), must be set. These can be taken from the official document [1].

2.1.2. Hash Computation

SHA-256 uses eight 32-bit working variables and six logical functions (Ch(x, y, z), Maj(x, y, z), σ_0(x), σ_1(x), Σ_0(x), Σ_1(x)), where each function operates on 32-bit words, which are represented as x, y, and z. The result of each function is a new 32-bit word. The exact specification of these functions can be found in the official document [1].

Each message block is processed in order, using the following steps:

1. The message schedule {W_t} of the current block is prepared using the following approach:

   W_t = M_t^(i)                                              for 0 ≤ t ≤ 15
   W_t = σ_1(W_{t−2}) + W_{t−7} + σ_0(W_{t−15}) + W_{t−16}    for 16 ≤ t ≤ 63

2. The eight working variables are initialized with the (i−1)st hash values. This means that block i receives as working variables the message digest of block i−1, while the first block receives the initial hash values as working variables.

3. In 64 rounds, the working variables are permuted using the above functions, the message schedule and predefined constants. The exact specification can be found in the official document [1].

4. To compute the ith intermediate hash value, the eight working variables are added to the (i−1)th hash value.

After repeating steps one through four for every block, the resulting 256-bit message digest of the message, M, is the hash value of the final block.
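As a concrete illustration of step 1, the following CUDA device function expands one 512-bit block into the 64-word message schedule. The rotation and shift amounts in σ_0 and σ_1 are those specified in the official document [1]; the function names and the word-array layout are ours, and all additions are modulo 2^32, which the uint32_t type provides automatically.

#include <cstdint>

// Right rotation of a 32-bit word.
__device__ __forceinline__ uint32_t rotr(uint32_t x, int n) {
    return (x >> n) | (x << (32 - n));
}

// Lower-case sigma functions of SHA-256, as specified in [1].
__device__ __forceinline__ uint32_t sigma0(uint32_t x) {
    return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3);
}
__device__ __forceinline__ uint32_t sigma1(uint32_t x) {
    return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10);
}

// Expands one 512-bit block (16 words) into the 64-word message schedule W.
__device__ void message_schedule(const uint32_t block[16], uint32_t W[64]) {
    for (int t = 0; t < 16; ++t)
        W[t] = block[t];                      // W_t = M_t^(i)            for 0 <= t <= 15
    for (int t = 16; t < 64; ++t)             // W_t = sigma1(W_{t-2}) + W_{t-7}
        W[t] = sigma1(W[t - 2]) + W[t - 7]    //     + sigma0(W_{t-15}) + W_{t-16}
             + sigma0(W[t - 15]) + W[t - 16];
}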
2.2. PARSHA-256 [3]

PARSHA-256 uses the compression function of SHA-256 along with the Sarkar-Schellenberg [2] principle to create a parallelizable collision resistant hash function. The overall approach does not look that different from SHA-256, but there is one major change. While the task graph of SHA-256 is linear, because the previous block has to be processed before the next one can be processed, in PARSHA-256 the processors are arranged in a binary tree (similar to a reduction), which allows working on more than one block at a time.

2.2.1. Compression Function

Let h() be the compression function. In the case of SHA-256, the input to h() consists of 24 32-bit words (768 bits) and the output consists of 8 32-bit words (256 bits). In the rest of the paper we set n = 768 and m = 256.

2.2.2. Processor Tree

PARSHA-256 uses a binary tree of processors of height T; note that T is an argument to the hash function and a different T can produce a different result. There are 2^T processors in the tree and the children of processor P_i are P_{2i} and P_{2i+1}, see Fig. 1. The arcs denote the data flow and go from the children to the parent.

Fig. 1: Processor tree with T = 3. Source: [3]

The behaviour of any processor P_i with input y is described as follows:

   P_i(y) = h(y)   if |y| = n
            y      otherwise                                   (1)

2.2.3. Formatting the Message

Similar to SHA-256, the incoming message has to be padded and divided into blocks. While for SHA-256 this procedure was relatively simple, as the message only has to be a multiple of 512 bits, which is also the block size, in PARSHA-256 it is more complicated.

In PARSHA-256 the message undergoes two kinds of padding. The first kind of padding, called end-padding, appends zeros to the end of the message to bring the length of the padded message into a certain form. The second kind of padding is called IV-padding. The Initialization Vector (IV) ensures that no invocation of h() gets only message bits as input. Using an IV is relatively simple in the Merkle-Damgård composition scheme (used by SHA-256).

3. In round (p + 2) all non-leaf processors get an (n − 2m)-bit substring from the input message and concatenate this with the two m-bit messages z_{2i}, z_{2i+1} from their children from the previous round. The leaf processors do not receive any new input.

4. In round (p + 3) only P_0 and P_1 get an (n − 2m)-bit substring from the input message and concatenate this with the two m-bit messages z_{2i}, z_{2i+1} from their children from the previous round. The other processors ...
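To illustrate Eq. (1) and the data flow of Fig. 1, the following host-side sketch shows the behaviour of a single processor and of one non-leaf step as described in item 3 above: the (n − 2m)-bit message substring is concatenated with the children's outputs z_{2i} and z_{2i+1} and the result is compressed with h(). The function names, buffer layout and concatenation order are assumptions for illustration; the mapping of the 2^T processors onto CUDA threads is not specified here.

#include <cstdint>
#include <cstring>

constexpr int N_WORDS = 24;   // n = 768 bits = 24 32-bit words (input size of h)
constexpr int M_WORDS = 8;    // m = 256 bits =  8 32-bit words (output size of h)

// Compression function of SHA-256: 24 words in, 8 words out.
// Assumed to exist elsewhere; not shown here.
void sha256_compress(const uint32_t in[N_WORDS], uint32_t out[M_WORDS]);

// Eq. (1): P_i(y) = h(y) if |y| = n, and P_i(y) = y otherwise.
// `y_words` is the length of y in 32-bit words; the length of the result is returned.
int processor(const uint32_t* y, int y_words, uint32_t* z) {
    if (y_words == N_WORDS) {
        sha256_compress(y, z);                           // full n-bit input: apply h()
        return M_WORDS;
    }
    std::memcpy(z, y, y_words * sizeof(uint32_t));       // otherwise: pass y through unchanged
    return y_words;
}

// One non-leaf step as in item 3: processor P_i concatenates an (n - 2m)-bit
// message substring with z_{2i} and z_{2i+1} from its children, then applies Eq. (1).
void parent_round(const uint32_t* msg_substring,         // (n - 2m) bits = 8 words
                  const uint32_t z_left[M_WORDS],        // z_{2i}
                  const uint32_t z_right[M_WORDS],       // z_{2i+1}
                  uint32_t z_out[M_WORDS]) {
    uint32_t y[N_WORDS];
    std::memcpy(y, msg_substring, (N_WORDS - 2 * M_WORDS) * sizeof(uint32_t));
    std::memcpy(y + (N_WORDS - 2 * M_WORDS), z_left,  M_WORDS * sizeof(uint32_t));
    std::memcpy(y + (N_WORDS - M_WORDS),     z_right, M_WORDS * sizeof(uint32_t));
    processor(y, N_WORDS, z_out);                        // |y| = n, so h() is applied
}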