Intel(R) Software Guard Extensions Developer Reference for Linux* OS
Total Pages: 16
File Type: PDF, Size: 1020 KB

Recommended publications
Fencing Cyberspace: Drawing Borders in a Virtual World, Maureen A. O'Rourke
University of Minnesota Law School Scholarship Repository, Minnesota Law Review, 1998. Follow this and additional works at: https://scholarship.law.umn.edu/mlr. Part of the Law Commons. Recommended Citation: O'Rourke, Maureen A., "Fencing Cyberspace: Drawing Borders in a Virtual World" (1998). Minnesota Law Review. 1923. https://scholarship.law.umn.edu/mlr/1923. This Article is brought to you for free and open access by the University of Minnesota Law School. It has been accepted for inclusion in the Minnesota Law Review collection by an authorized administrator of the Scholarship Repository. For more information, please contact [email protected].

Fencing Cyberspace: Drawing Borders in a Virtual World, Maureen A. O'Rourke*. Contents: Introduction; I. An Introduction to the Internet and the World Wide Web; A. Origins of the Internet; B. Development of the World Wide Web; C. Emergence of the Internet as a Commercial Marketplace; 1. Development of the Marketplace; 2. Web Business Models; a. Advertising-Based Models; b. Subscription-Based Models ...
Intel® IA-64 Architecture Software Developer's Manual
Intel® IA-64 Architecture Software Developer's Manual, Volume 1: IA-64 Application Architecture. Revision 1.1, July 2000. Document Number: 245317-002.

THIS DOCUMENT IS PROVIDED "AS IS" WITH NO WARRANTIES WHATSOEVER, INCLUDING ANY WARRANTY OF MERCHANTABILITY, NONINFRINGEMENT, FITNESS FOR ANY PARTICULAR PURPOSE, OR ANY WARRANTY OTHERWISE ARISING OUT OF ANY PROPOSAL, SPECIFICATION OR SAMPLE.

Information in this document is provided in connection with Intel products. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted by this document. Except as provided in Intel's Terms and Conditions of Sale for such products, Intel assumes no liability whatsoever, and Intel disclaims any express or implied warranty, relating to sale and/or use of Intel products including liability or warranties relating to fitness for a particular purpose, merchantability, or infringement of any patent, copyright or other intellectual property right. Intel products are not intended for use in medical, life saving, or life sustaining applications. Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them.

Intel® IA-64 processors may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting Intel's website at http://developer.intel.com/design/litcentr.
Survey of Methodologies, Approaches, and Challenges in Parallel Programming Using High-Performance Computing Systems
Hindawi Scientific Programming, Volume 2020, Article ID 4176794, 19 pages. https://doi.org/10.1155/2020/4176794

Review Article: Survey of Methodologies, Approaches, and Challenges in Parallel Programming Using High-Performance Computing Systems. Paweł Czarnul,1 Jerzy Proficz,2 and Krzysztof Drypczewski2. 1Dept. of Computer Architecture, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Gdańsk, Poland. 2Centre of Informatics–Tricity Academic Supercomputer & Network (CI TASK), Gdansk University of Technology, Gdańsk, Poland. Correspondence should be addressed to Paweł Czarnul; [email protected]. Received 11 October 2019; Accepted 30 December 2019; Published 29 January 2020. Guest Editor: Pedro Valero-Lara.

Copyright © 2020 Paweł Czarnul et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper provides a review of contemporary methodologies and APIs for parallel programming, with representative technologies selected in terms of target system type (shared memory, distributed, and hybrid), communication patterns (one-sided and two-sided), and programming abstraction level. We analyze representatives in terms of many aspects including programming model, languages, supported platforms, license, optimization goals, ease of programming, debugging, deployment, portability, level of parallelism, constructs enabling parallelism and synchronization, features introduced in recent versions indicating trends, support for hybridity in parallel execution, and disadvantages. Such detailed analysis has led us to the identification of trends in high-performance computing and of the challenges to be addressed in the near future. It can help to shape future versions of programming standards, select technologies best matching programmers' needs, and avoid potential difficulties while using high-performance computing systems.
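To make the shared-memory end of that spectrum concrete, here is a minimal OpenMP sketch in C (not taken from the survey; the array size and contents are arbitrary) showing a parallel loop with a reduction, one of the basic constructs such APIs provide for parallelism and synchronization:

```c
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 0.5 * i;

    /* Each thread sums a chunk of the array; the reduction clause
       combines the per-thread partial sums without explicit locking. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
    return 0;
}
```

Built with `gcc -fopenmp`, the same source runs serially if OpenMP support is disabled, which is one reason directive-based APIs score well on ease of programming in surveys like this one.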
SIMD Extensions
SIMD Extensions PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information. PDF generated at: Sat, 12 May 2012 17:14:46 UTC Contents Articles SIMD 1 MMX (instruction set) 6 3DNow! 8 Streaming SIMD Extensions 12 SSE2 16 SSE3 18 SSSE3 20 SSE4 22 SSE5 26 Advanced Vector Extensions 28 CVT16 instruction set 31 XOP instruction set 31 References Article Sources and Contributors 33 Image Sources, Licenses and Contributors 34 Article Licenses License 35 SIMD 1 SIMD Single instruction Multiple instruction Single data SISD MISD Multiple data SIMD MIMD Single instruction, multiple data (SIMD), is a class of parallel computers in Flynn's taxonomy. It describes computers with multiple processing elements that perform the same operation on multiple data simultaneously. Thus, such machines exploit data level parallelism. History The first use of SIMD instructions was in vector supercomputers of the early 1970s such as the CDC Star-100 and the Texas Instruments ASC, which could operate on a vector of data with a single instruction. Vector processing was especially popularized by Cray in the 1970s and 1980s. Vector-processing architectures are now considered separate from SIMD machines, based on the fact that vector machines processed the vectors one word at a time through pipelined processors (though still based on a single instruction), whereas modern SIMD machines process all elements of the vector simultaneously.[1] The first era of modern SIMD machines was characterized by massively parallel processing-style supercomputers such as the Thinking Machines CM-1 and CM-2. These machines had many limited-functionality processors that would work in parallel. -
The Microarchitecture of the Pentium 4 Processor
The Microarchitecture of the Pentium 4 Processor Glenn Hinton, Desktop Platforms Group, Intel Corp. Dave Sager, Desktop Platforms Group, Intel Corp. Mike Upton, Desktop Platforms Group, Intel Corp. Darrell Boggs, Desktop Platforms Group, Intel Corp. Doug Carmean, Desktop Platforms Group, Intel Corp. Alan Kyker, Desktop Platforms Group, Intel Corp. Patrice Roussel, Desktop Platforms Group, Intel Corp. Index words: Pentium® 4 processor, NetBurst™ microarchitecture, Trace Cache, double-pumped ALU, deep pipelining provides an in-depth examination of the features and ABSTRACT functions of the Intel NetBurst microarchitecture. This paper describes the Intel® NetBurst™ ® The Pentium 4 processor is designed to deliver microarchitecture of Intel’s new flagship Pentium 4 performance across applications where end users can truly processor. This microarchitecture is the basis of a new appreciate and experience its performance. For example, family of processors from Intel starting with the Pentium it allows a much better user experience in areas such as 4 processor. The Pentium 4 processor provides a Internet audio and streaming video, image processing, substantial performance gain for many key application video content creation, speech recognition, 3D areas where the end user can truly appreciate the applications and games, multi-media, and multi-tasking difference. user environments. The Pentium 4 processor enables real- In this paper we describe the main features and functions time MPEG2 video encoding and near real-time MPEG4 of the NetBurst microarchitecture. We present the front- encoding, allowing efficient video editing and video end of the machine, including its new form of instruction conferencing. It delivers world-class performance on 3D cache called the Execution Trace Cache. -
SAP Solutions on Vmware Vsphere Guidelines Summary and Best Practices
SAP On VMware Best Practices. Version 1.1, December 2015. © 2015 VMware, Inc. All rights reserved.

This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at http://www.vmware.com/download/patents.html. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. VMware, Inc., 3401 Hillview Ave, Palo Alto, CA 94304, www.vmware.com.

Contents: 1. Overview; 2. Introduction (2.1 Support; 2.2 SAP; 2.3 vCloud Suite; 2.4 Deploying SAP with vCloud Suite); 3. Virtualization Overview ...
Oracle Solaris and Oracle SPARC Systems—Integrated and Optimized for Mission Critical Computing
An Oracle White Paper, September 2010. Oracle Solaris and Oracle SPARC Servers—Integrated and Optimized for Mission Critical Computing.

Contents: Executive Overview; Introduction—Oracle Datacenter Integration; Overview (The Oracle Solaris Ecosystem; SPARC Processors); Architected for Reliability (Oracle Solaris Predictive Self Healing; Highly Reliable Memory Subsystems; Oracle Solaris ZFS for Reliable Data; Reliable Networking; Oracle Solaris Cluster); Scalable Performance (World Record Performance; Sun FlashFire Storage; Network Performance ...)
Cluster Suite Overview
Red Hat Enterprise Linux 4: Cluster Suite Overview. Red Hat Cluster Suite for Red Hat Enterprise Linux, Edition 1.0. Last Updated: 2020-03-08. Landmann, [email protected].

Legal Notice. Copyright © 2009 Red Hat, Inc. This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
6Th Gen Intel® Core™ Processors
6th Generation Intel® Processor Family Specification Update. Supporting the Intel® Pentium® Processor Family based on the U-Processor. Supporting the 6th Generation Intel® Core™ Processor Family based on the Y-Processor. September 2015, Version 1.0. Order Number: 332994-001EN.

Preface. You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps. Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or visit www.intel.com/design/literature.htm.
Don't Sit on the Fence: A Static Analysis Approach to Automatic Fence Insertion (arXiv:1312.1411v2 [cs.LO], 9 Jun 2014)
Don't sit on the fence: A static analysis approach to automatic fence insertion. Jade Alglave,1 Daniel Kroening,2 Vincent Nimal,2 and Daniel Poetzl2. 1University College London, UK; 2University of Oxford, UK.

Abstract. Modern architectures rely on memory fences to prevent undesired weakenings of memory consistency. As the fences' semantics may be subtle, the automation of their placement is highly desirable. But precise methods for restoring consistency do not scale to deployed systems code. We choose to trade some precision for genuine scalability: our technique is suitable for large code bases. We implement it in our new musketeer tool, and detail experiments on more than 350 executables of packages found in Debian Linux 7.1, e.g. memcached (about 10000 LoC).

1 Introduction. Concurrent programs are hard to design and implement, especially when running on multiprocessor architectures. Multiprocessors implement weak memory models, which feature e.g. instruction reordering, store buffering (both appearing on x86), or store atomicity relaxation (a particularity of Power and ARM). Hence, multiprocessors allow more behaviours than Lamport's Sequential Consistency (SC) [Lam79], a theoretical model where the execution of a program corresponds to an interleaving of the different threads. This has a dramatic effect on programmers, most of whom learned to program with SC. Fortunately, architectures provide special fence (or barrier) instructions to prevent certain behaviours. Yet both the questions of where and how to insert fences are contentious, as fences are architecture-specific and expensive. Attempts at automatically placing fences include Visual Studio 2013, which offers an option to guarantee acquire/release semantics (we study the performance impact of this policy in Sec. ...
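A minimal C11 sketch (not from the paper) of the store-buffering behaviour mentioned above, with full fences inserted so that the non-SC outcome cannot occur:

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

/* Classic store-buffering (SB) litmus test. Without the fences, x86's
   store buffers allow both threads to read 0, an outcome forbidden
   under sequential consistency. */
atomic_int x = 0, y = 0;
int r0, r1;

void *thread0(void *arg) {
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);   /* full fence (mfence on x86) */
    r0 = atomic_load_explicit(&y, memory_order_relaxed);
    return NULL;
}

void *thread1(void *arg) {
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);
    r1 = atomic_load_explicit(&x, memory_order_relaxed);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, thread0, NULL);
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    /* With the fences in place, r0 == 0 && r1 == 0 can no longer occur. */
    printf("r0=%d r1=%d\n", r0, r1);
    return 0;
}
```

Deciding which of these fences are actually needed, and omitting the rest, is exactly the placement problem the paper automates.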
High Performance Virtual Machine Recovery in the Cloud
High Performance Virtual Machine Recovery in the Cloud Valentina Salapura1 and Richard Harper2 1IBM T. J. Watson Research Center, 1101 Kitchawan Rd, NY, Yorktown Heights, U.S.A. 2IBM T. J. Watson Research Center, Research Triangle Park, NC, U.S.A. Keywords: Cloud Computing, High Availability, Virtualization, Automation, Enterprise Class. Abstract: In this paper, we outline and illustrate concepts that are essential to achieve fast, highly scalable virtual machine planning and failover at the Virtual Machine (VM) level in a data center containing a large number of servers, VMs, and disks. To illustrate the concepts a solution is implemented and analyzed for IBM’s Cloud Managed Services enterprise cloud. The solution enables at-failover-time planning, and keeps the recovery time within tight service level agreement (SLA) allowed time budgets via parallelization of recovery activities. The initial serial failover time was reduced for an order of magnitude due to parallel VM restart, and to parallel VM restart combined with parallel storage device remapping. 1 INTRODUCTION originally designed for smaller managed environments, and do not scale well as the system Cloud computing is being rapidly adopted across the size and complexity increases. Detecting failures, IT industry as a platform for increasingly more determining appropriate failover targets, re-mapping demanding workloads, both traditional and a new storage to those failover targets, and restarting the generation of mobile, social and analytics virtual workload have to be carefully designed and applications. In the cloud, customers are being led parallelized in order to meet the service level to expect levels of availability that until recently agreement (SLA) for large systems. -
Floating-Point on X86-64
Floating-Point on x86-64

Sixteen registers: %xmm0 through %xmm15
• float or double arguments in %xmm0 – %xmm7
• float or double result in %xmm0
• %xmm8 – %xmm15 are temporaries (caller-saved)
Two operand sizes:
• single-precision = 32 bits = float
• double-precision = 64 bits = double

Arithmetic Instructions
addsx source, dest / subsx source, dest / mulsx source, dest / divsx source, dest, where x is either s or d.
Add doubles: addsd %xmm0, %xmm1
Multiply floats: mulss %xmm0, %xmm1

Conversion
cvtsx2sx source, dest and cvttsx2sx source, dest, where x is either s, d, or i. With i, add an extra extension for l or q.
Convert a long to a double: cvtsi2sdq %rdi, %xmm0
Convert a float to an int: cvttss2sil %xmm0, %eax

Example Floating-Point Compilation
double scale(double a, int b) { return b * a; }
compiles to:
    cvtsi2sdl %edi, %xmm1
    mulsd %xmm1, %xmm0
    ret

SIMD Instructions
addpx source, dest / subpx source, dest / mulpx source, dest / divpx source, dest
Combine pairs of doubles or floats ... because registers are actually 128 bits wide.
Add two pairs of doubles: addpd %xmm0, %xmm1
Multiply four pairs of floats: mulps %xmm0, %xmm1

Auto-Vectorization
void mult_all(double a[4], double b[4]) {
    a[0] = a[0] * b[0];
    a[1] = a[1] * b[1];
    a[2] = a[2] * b[2];
    a[3] = a[3] * b[3];
}
• What if a and b are aliases?
• What if a or b is not 16-byte aligned?

Auto-Vectorization
void mult_all(double * __restrict__ ai, double * __restrict__ bi) {
    double *a = __builtin_assume_aligned(ai, 16);
    double *b = __builtin_assume_aligned(bi, 16);
    a[0] = a[0] * b[0];
    a[1] = a[1] * b[1];
    a[2] = a[2] * b[2];
    a[3] = a[3] * b[3];
}
With gcc -O3, this compiles to:
    movapd 16(%rdi), %xmm0
    movapd (%rdi), %xmm1
    mulpd 16(%rsi), %xmm0
    mulpd (%rsi), %xmm1
    movapd %xmm0, 16(%rdi)
    movapd %xmm1, (%rdi)
    ret

History: Floating-Point Support in x86
8086
• No floating-point hardware
• Software can implement IEEE arithmetic by manipulating bits, but that's slow
8087 (a.k.a.