Energy Efficient Enabling Technologies for Semantic Video Processing on Mobile Devices

Total Pages: 16

File Type: PDF, Size: 1020 KB

Energy Efficient Enabling Technologies for Semantic Video Processing on Mobile Devices

by Daniel Larkin, B.Eng.

Submitted in partial fulfilment of the requirements for the Degree of Doctor of Philosophy

Dublin City University, School of Electronic Engineering
Supervisor: Prof. Noel E. O'Connor
September, 2008

I hereby certify that this material, which I now submit for assessment on the programme of study leading to the award of Doctor of Philosophy is entirely my own work, that I have exercised reasonable care to ensure that the work is original, and does not to the best of my knowledge breach any law of copyright, and has not been taken from the work of others save and to the extent that such work has been cited and acknowledged within the text of my work.

Signed: Daniel Larkin (Candidate) ID: Date:

Table of Contents

List of Figures  vii
List of Tables  xi
Abstract  xii
List of Acronyms  xiv
List of Peer-Reviewed Publications  xvii
Acknowledgements  xix

1 Introduction  1
1.1 The Emergence of Mobile Multimedia  2
1.2 Grand Challenges Facing Next Generation Mobile Multimedia  3
1.2.1 Semantic Multimedia Processing  3
1.2.2 Multimedia Content Delivery to Mobile Devices  5
1.2.3 Multimedia Processing on Resource Constrained Mobile Devices  6
1.3 Research Motivation and Work Programme  7
1.3.1 Novel Face Detection and associated hardware acceleration  7
1.3.2 Hardware Acceleration of Motion Estimation  8
1.4 Research Objectives  9
1.5 Thesis Structure  9
1.6 Summary  10

2 Technical Background  11
2.1 Digital Video Compression  12
2.1.1 A Generic Video Compression System  15
2.1.2 Semantic Video Object based Compression  28
2.1.3 Image & Video Compression Standards  32
2.2 Semantic Video Object Segmentation  37
2.3 Thesis Contributions in the Context of Video Object Processing  38
2.4 Implementation Options  41
2.4.1 Summary of Implementation Options  43
2.5 Energy Efficient Design Principles  44
2.5.1 Low Power design techniques  47
2.5.2 Summary of Low Power Design  51
2.6 Conclusions  52

3 Face Detection: A Review of Popular Approaches  53
3.1 Face Detection State of the Art Review  55
3.1.1 Feature Extraction based face detection approaches  55
3.1.2 Appearance based face detection approaches  59
3.1.3 Prior art in hardware acceleration of face detection  69
3.2 Discussion: Suitable Algorithms for a Mobile Device  73
3.3 Fundamentals of Evolutionary Artificial Neural Networks  77
3.3.1 Artificial Neural Networks  78
3.3.2 Genetic Algorithms  82
3.3.3 Evolutionary Artificial Neural Networks  83
3.3.4 Neuro Evolution of Augmenting Topologies  88
3.4 Conclusions  90

4 A Novel Face Detection Training Algorithm  92
4.1 Training data preparation  92
4.2 NEAT based face detection training  95
4.3 Training Runs and Parameter Selection  102
4.3.1 Non-Seeded Topology Training  106
4.3.2 Seeded Topology Training  110
4.4 Conclusions on Training  120
4.5 Future Work  121
4.5.1 Automatic Input/Feature Selection  128
4.5.2 Using Alternative Low-Level Features  128
4.5.3 Detection of Side Profile and Rotated Faces  130
4.6 Summary of Contributions  130

5 Software Implementation of Trained Face Detection EANN  134
5.1 Normal Mode Face Detection Operation  134
5.1.1 Merging detection results  136
5.2 Profiling of Normal Mode Face Detection Operation  138
5.2.1 Software Power & Energy Consumption  140
5.3 Software Implementation & Algorithmic Optimisations  141
5.4 Results  145
5.4.1 Evaluation on the BioID face database  147
5.4.2 Failure Analysis on the CMU/MIT face database  150
5.5 Future Work  150
5.6 Summary of Contributions  154

6 EANN Hardware Accelerator  155
6.1 State of the Art Hardware EANN Review  158
6.1.1 Efficient Multiply Accumulation  159
6.1.2 Activation Function Generator  159
6.1.3 Number System and Precision Requirement Considerations  160
6.2 Proposed Hardware Architecture  162
6.3 Implementation of Proposed Architecture  165
6.3.1 Activation Function Generator  166
6.4 Results  172
6.4.1 Activation Function Results  173
6.4.2 Hardware Implementation Results  174
6.4.3 Benchmarking of Proposed Hardware Architecture against Prior Art  178
6.5 Future Work  181
6.6 Summary of Contributions  182

7 Binary Motion Estimation Hardware Accelerator  184
7.1 State of the Art Review  185
7.1.1 Binary Motion Estimation  190
7.2 Proposed Binary Motion Estimation Architecture  193
7.2.1 Reformulation of the Binary SAD Metric  194 (see the sketch after this preview)
7.2.2 4xPE Block Matching Architecture  200
7.3 Results  208
7.4 Future Work  209
7.4.1 16xPE Block-Matching Architecture  210
7.4.2 Possible PE improvements  210
7.5 Summary of Contributions  212

8 Conclusions & Future Work  214
8.1 Motivation for Research – A Summary  214
8.2 Summary of Thesis contributions  216
8.3 A Vision for the Future  218

Bibliography  220

List of Figures

1.1 Video scene decomposed into constituent semantic objects  3
1.2 Example of multiple applications leveraging a hardware ANN accelerator  8
2.1 An example of a digital still image and video sequence  13
2.2 Examples of YCbCr sampling modes  13
2.3 A Generic Video Encoder and Decoder  17
2.4 Temporal Redundancy in Video Sequences  19
2.5 Motion estimation taxonomy  20
2.6 Block sizes used in H.264 motion estimation  21
2.7 Probability of DCT coefficients being nonzero  25
2.8 Zig-zag scanning of quantised DCT coefficients  27
2.9 MPEG-4 Video Objects  29
2.10 MPEG-4 Binary Shape Encoder  31
2.11 Motion Vector difference for Shape  33
2.12 Contributions of this Thesis in context of Video object-based functionality  39
2.13 Dynamic power loss in a CMOS Inverter  45
3.1 Basic taxonomy of face detection algorithms  56
3.2 Block diagram of a generic feature extraction based face detection algorithm  56
3.3 Algorithmic options for appearance based face detection approaches  59
3.4 Rowley's ANN based face detection algorithm  61
3.5 Haar-like Features used in Viola & Jones Cascaded Adaboost algorithm  66
3.6 Additional Haar features for rotation-invariant face detection  67
3.7 System Architecture of Face Detection Algorithm proposed by Nguyen et al.  71
3.8 Face Detection Hardware Architecture Proposed by Yang et al.  73
3.9 Basic Terminology associated with Artificial Neural Networks  79
3.10 Selection of popular activation functions  81
3.11 Examples of genetic algorithm binary crossover techniques  84
3.12 Taxonomy of EANN algorithms  85
3.13 Issues associated with topology genome representation & the crossover operator  87
3.14 Example of a NEAT genome and resultant phenotype  87
3.15 Use of innovation numbers in the mutation and crossover operators in NEAT  89
4.1 Preprocessing of FERET face training dataset  93
4.2 Example scenery image devoid of faces which was used during training  94
4.3 Starting topology seeded with localised features  95
4.4 Proposed NEAT-based face detection training  99
4.5 Simple example of over-training  100
4.6 Proposed Evolutionary Training Strategy  105
4.7 Non-seeded topology training run – iteration 1  111
4.8 Non-seeded topology training run – iteration 2  112
4.9 Non-seeded topology training run – iteration 3  113
4.10 Non-seeded topology training run – iteration 4  114
4.11 Non-seeded topology training run – iteration 5  115
4.12 Non-seeded topology training run – iteration 6  116
4.13 Seeded topology training run – iteration 1  122
4.14 Seeded topology training run – iteration 2  123
4.15 Seeded topology training run – iteration 3  124
4.16 Seeded topology training run – iteration 4  125
4.17 Seeded topology training run – iteration 5  126
4.18 Seeded topology training run – iteration 6  127
4.19 DCT Energy exploration  131
4.20 Energy versus the number of SA-DCT coefficients of each of the face samples  132
5.1 Example image pyramid  135
5.2 Software Profiling of Proposed Face Detection Algorithm  139
5.3 Power Profiling of Multiply Accumulate Operations on an ARM-920T processor  141
5.4 Power Profiling of a Sigmoid Activation Function on an ARM-920T processor  142
5.5 Efficient calculation of block skin totals  143
5.6 Effect of changing the face detection threshold  146
5.7 Representative results from the BioID face evaluation dataset  148
5.8 ROC graph generated from proposed algorithm when using the BioID dataset  149
5.9 Representative results from the CMU face evaluation dataset  151
5.10 ROC graph generated from proposed algorithm when using the CMU dataset  153
6.1 NEAT multimedia hardware accelerator  157
6.2 Neuron connection types  162
6.3 NEAT phenotype to hardware memory mapping  163
6.4 Simplified block diagram of the proposed EANN hardware accelerator datapath  164
6.5 Simplified block diagram of the...
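The preview breaks off before the technical chapters, but the binary SAD metric named in Section 7.2.1 lends itself to a compact illustration: once frames are binarised to one bit per pixel, the per-pixel absolute difference collapses to an XOR, and a block's SAD becomes a population count. The C sketch below is a generic rendering of that idea under assumed block size, data packing, and function names; it is not the hardware reformulation the thesis proposes.

```c
#include <stdint.h>
#include <stdio.h>

/* Generic illustration of a binary SAD: for 1 bit/pixel (binarised) frames,
 * |a - b| per pixel reduces to a XOR b, so a block's SAD is the population
 * count of the XOR of the packed rows. Block size, packing and names are
 * assumptions for this sketch, not the Chapter 7 reformulation. */

static unsigned popcount16(uint16_t x) {
    unsigned n = 0;
    while (x) { x &= (uint16_t)(x - 1); n++; }   /* clear the lowest set bit */
    return n;
}

/* SAD between a 16x16 current block and a 16x16 reference candidate,
 * each stored as 16 words of 16 packed binary pixels. Result is 0..256. */
static unsigned binary_sad_16x16(const uint16_t cur[16], const uint16_t ref[16]) {
    unsigned sad = 0;
    for (int r = 0; r < 16; r++)
        sad += popcount16((uint16_t)(cur[r] ^ ref[r])); /* mismatching pixels in row r */
    return sad;
}

int main(void) {
    uint16_t cur[16], ref[16];
    for (int r = 0; r < 16; r++) {
        cur[r] = 0x00FFu;   /* left half background, right half object */
        ref[r] = 0x01FEu;   /* same pattern shifted by one pixel */
    }
    /* A full-search binary motion estimator would evaluate this once per
     * candidate motion vector and keep the minimum-SAD candidate. */
    printf("binary SAD = %u\n", binary_sad_16x16(cur, ref));
    return 0;
}
```

A block-matching processing-element (PE) array of the kind listed in Sections 7.2.2 and 7.4.1 essentially parallelises this XOR/popcount inner loop across search candidates; how the thesis actually restructures the metric for hardware is only in the full text.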
Recommended publications
  • WebCore: Architectural Support for Mobile Web Browsing
WebCore: Architectural Support for Mobile Web Browsing
Yuhao Zhu, Vijay Janapa Reddi
Department of Electrical and Computer Engineering, The University of Texas at Austin
[email protected], [email protected]

Abstract
The Web browser is undoubtedly the single most important application in the mobile ecosystem. An average user spends 72 minutes each day using the mobile Web browser. Web browser internal engines (e.g., WebKit) are also growing in importance because they provide a common substrate for developing various mobile Web applications. In a user-driven, interactive, and latency-sensitive environment, the browser's performance is crucial. However, the battery-constrained nature of mobile devices limits the performance that we can deliver for mobile Web browsing. As traditional general-purpose techniques to improve performance and energy efficiency fall short, we must employ domain-specific knowledge while still maintaining general-purpose flexibility. In this paper, we first perform design-space exploration to identify appropriate general-purpose architectures that uniquely fit the characteristics of a popular Web browsing engine.

[Fig. 1: Mobile Web browser share study conducted by our industry research partner on their employees' devices [2]; (a) time distribution of window focus, (b) time distribution of CPU processing. Similar observations were reported by NVIDIA on Tegra-based mobile handsets [3,4].]

...network limited. However, this trend is changing. With about 10X improvement in round-trip time from 3G to LTE, network latency is no longer the only performance bottleneck [51]. Prior work has shown that over the past decade, network
  • EVA: An Efficient Vision Architecture for Mobile Systems
EVA: An Efficient Vision Architecture for Mobile Systems
Jason Clemons, Andrea Pellegrini, Silvio Savarese, and Todd Austin
Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan 48109
{jclemons, apellegrini, silvio, [email protected]

Abstract
The capabilities of mobile devices have been increasing at a momentous rate. As better processors have merged with capable cameras in mobile systems, the number of computer vision applications has grown rapidly. However, the computational and energy constraints of mobile devices have forced computer vision application developers to sacrifice accuracy for the sake of meeting timing demands. To increase the computational performance of mobile systems we present EVA. EVA is an application-specific heterogeneous multicore having a mix of computationally powerful cores with energy efficient cores. Each core of EVA has computation and memory architectural enhancements tailored to the application traits of vision codes. Using a computer vision benchmarking suite, we evaluate the efficiency and performance of a wide range of EVA designs. We show that EVA can provide speedups of over 9x that of an embedded processor while reducing energy demands by as much as 3x.

[Figure 1: Computer Vision Example. The figure shows a sock monkey where a computer vision application has recognized its face. The algorithm would utilize features such as corners and use their geometric relationship to accomplish this.]

...Watts over 250 mm2 of silicon, typical mobile processors are limited to a few Watts with typically 5 mm2 of silicon [4] [22]. To meet the limited computation capability of mobile processors, computer vision application developers reluctantly sacrifice image resolution, computational precision or application capabilities for lower quality versions of vision algorithms.

Categories and Subject Descriptors: C.1.4 [Parallel Architec...
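The caption's mention of corner features and their geometric relationships can be made concrete with a small sketch. The C code below computes a Harris-style corner response on a synthetic grayscale image; it is an illustrative stand-in, not code from the EVA paper or its benchmark suite, and the image size, 3x3 window, constant k, and threshold are all assumptions chosen for clarity.

```c
#include <stdint.h>
#include <stdio.h>

#define W 64
#define H 64

/* Harris-style corner response R = det(M) - k*trace(M)^2, where M is the
 * gradient structure matrix [Sxx Sxy; Sxy Syy] summed over a 3x3 window. */
static float harris_response(const uint8_t img[H][W], int y, int x) {
    float sxx = 0.f, syy = 0.f, sxy = 0.f;
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            int yy = y + dy, xx = x + dx;
            /* central-difference image gradients */
            float ix = (img[yy][xx + 1] - img[yy][xx - 1]) * 0.5f;
            float iy = (img[yy + 1][xx] - img[yy - 1][xx]) * 0.5f;
            sxx += ix * ix;
            syy += iy * iy;
            sxy += ix * iy;
        }
    }
    const float k = 0.04f;                 /* commonly used Harris constant */
    float det = sxx * syy - sxy * sxy;
    float trace = sxx + syy;
    return det - k * trace * trace;
}

int main(void) {
    static uint8_t img[H][W];              /* synthetic test pattern:      */
    for (int y = 0; y < H; y++)            /* bright square on black whose */
        for (int x = 0; x < W; x++)        /* four corners score highly    */
            img[y][x] = (y >= 16 && y < 48 && x >= 16 && x < 48) ? 255 : 0;

    /* Report strong responses; a recogniser of the kind the caption describes
     * would then match the geometric layout of such keypoints against a model. */
    for (int y = 2; y < H - 2; y++)
        for (int x = 2; x < W - 2; x++)
            if (harris_response(img, y, x) > 1e6f)
                printf("corner candidate at (%d,%d)\n", x, y);
    return 0;
}
```

Harris is used here only because it is a compact, well-known corner detector; the detectors and matchers actually exercised in EVA's benchmark suite may differ.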
  • 6th Generation Intel® Core™ Processors Based on the Mobile U-Processor for IoT Solutions (Intel® Core™ i7-6600U, i5-6300U, and i3-6100U Processors)
PLATFORM BRIEF: 6th Generation Intel® Core™ Mobile Processor Family, Internet of Things

6th Generation Intel® Core™ Processors Based on the Mobile U-Processor for IoT Solutions (Intel® Core™ i7-6600U, i5-6300U, and i3-6100U Processors)

Harness the Performance, Features, and Edge-to-Cloud Scalability to Build Tomorrow's IoT Solutions Today

Product Overview
Intel is proud to announce its 6th generation Intel® Core™ processor family featuring ultra low-power, 64-bit, multicore processors built on the latest 14 nm technology. Designed for small form-factor applications, this multichip package (MCP) integrates a low-power CPU and platform controller hub (PCH) onto a common package substrate. The 6th generation Intel Core processor family offers dramatically higher CPU and graphics performance, a broad range of power and features scaling the entire Intel product line, and new, advanced features that boost edge-to-cloud Internet of Things (IoT) designs in a wide variety of markets. These processors run at 15W thermal design power (TDP) and are ideal for small, energy-efficient, form-factor designs, including digital signage, point-of-sale...

Stunning Visual Performance
The 6th generation Intel Core processors utilize the new Gen9 graphics engine, which improves graphic performance by up to 34 percent.1 The improvements are demonstrated through faster 3-D graphics performance and rendering applications at low power. Video playback is also faster and smoother thanks to the new multiplane overlay capability. The new generation offers up to three independent audio streams and displays, Ultra HD 4K support, and workload consolidation for lower BOM costs and energy output. Users will also enjoy enhanced high-density streaming applications and optimized 4K videoconferencing with accelerated 4K hardware media codecs HEVC (8-bit), VP8, VP9, and VDENC encoding, decoding, and transcoding.
  • PART I ITEM 1. BUSINESS Industry We Are
PART I
ITEM 1. BUSINESS
Industry

We are the world's largest semiconductor chip maker, based on revenue. We develop advanced integrated digital technology products, primarily integrated circuits, for industries such as computing and communications. Integrated circuits are semiconductor chips etched with interconnected electronic switches. We also develop platforms, which we define as integrated suites of digital computing technologies that are designed and configured to work together to provide an optimized user computing solution compared to ingredients that are used separately. Our goal is to be the preeminent provider of semiconductor chips and platforms for the worldwide digital economy. We offer products at various levels of integration, allowing our customers flexibility to create advanced computing and communications systems and products.

We were incorporated in California in 1968 and reincorporated in Delaware in 1989. Our Internet address is www.intel.com. On this web site, we publish voluntary reports, which we update annually, outlining our performance with respect to corporate responsibility, including environmental, health, and safety compliance. On our Investor Relations web site, located at www.intc.com, we post the following filings as soon as reasonably practicable after they are electronically filed with, or furnished to, the U.S. Securities and Exchange Commission (SEC): our annual, quarterly, and current reports on Forms 10-K, 10-Q, and 8-K; our proxy statements; and any amendments to those reports or statements. All such filings are available on our Investor Relations web site free of charge. The SEC also maintains a web site (www.sec.gov) that contains reports, proxy and information statements, and other information regarding issuers that file electronically with the SEC.
  • NVIDIA Tegra 4 Family CPU Architecture 4-PLUS-1 Quad Core
Whitepaper: NVIDIA Tegra 4 Family CPU Architecture, 4-PLUS-1 Quad core

Table of Contents
Introduction  3
NVIDIA Tegra 4 Family of Mobile Processors  3
Benchmarking CPU Performance  4
Tegra 4 Family CPUs Architected for High Performance and Power Efficiency  6
Wider Issue Execution Units for Higher Throughput  6
Better Memory Level Parallelism from a Larger Instruction Window for Out-of-Order Execution  7
Fast Load-To-Use Logic allows larger L1 Data Cache  8
Enhanced branch prediction for higher efficiency  10
Advanced Prefetcher for higher MLP and lower latency  10
Large Unified L2 Cache  ...
  • Architectural Support for JavaScript Type-Checking on Mobile Processors
Checked Load: Architectural Support for JavaScript Type-Checking on Mobile Processors
Owen Anderson, Emily Fortuna, Luis Ceze, Susan Eggers
Computer Science and Engineering, University of Washington
http://sampa.cs.washington.edu

Abstract
Dynamic languages such as JavaScript are the de-facto standard for web applications. However, generating efficient code for dynamically-typed languages is a challenge, because it requires frequent dynamic type checks. Our analysis has shown that some programs spend upwards of 20% of dynamic instructions doing type checks, and 12.9% on average.
In this paper we propose Checked Load, a low-complexity architectural extension that replaces software-based, dynamic type checking. Checked Load is comprised of four new ISA instructions that provide flexible and automatic type checks for memory operations, and whose implementation requires minimal hardware changes. We also propose hardware support for dynamic type prediction to reduce the cost of failed type checks.

...applications, including Google Maps, Twitter, and Facebook would not be feasible without both high-throughput and low-latency JavaScript virtual machines on the client. At the same time, innovations in mobile device programmability have opened up embedded targets to the same class of programmers. Today's smart mobile devices are expected to provide a developer API that is usable by normal application developers, as opposed to the specialized embedded developers of the past. One such platform, HP/Palm's WebOS [17], uses JavaScript as its primary application development language. Others encourage JavaScript-heavy web applications in addition to their native development environments, as a means of providing feature-rich, portable applications with minimal development costs. Because of their power and space constraints, embedded...
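The software cost that Checked Load targets can be seen in the kind of guard a JIT emits before every type-dependent memory access. The C sketch below is a hedged, generic illustration of such a software tag check, not the paper's ISA or any engine's actual code; the tag layout, macro names, and slow-path hook are assumptions made for the example.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical tagged-value layout for illustration only: the low 3 bits of
 * each 64-bit word carry a type tag, as many JS engines do in software. The
 * paper's actual ISA encoding and tag scheme are not reproduced here. */
#define TAG_BITS   3
#define TAG_MASK   ((1u << TAG_BITS) - 1)
#define TAG_INT    0x1
#define TAG_OBJECT 0x2

typedef uint64_t jsval;

static inline int has_tag(jsval v, unsigned tag) {
    return (v & TAG_MASK) == tag;
}

/* Software path: every type-dependent load pays an explicit mask, compare,
 * and branch to a slow path before the payload can be used. */
static int64_t load_int_field(const jsval *slot, void (*slow_path)(void)) {
    jsval v = *slot;
    if (!has_tag(v, TAG_INT)) {      /* the dynamic type check */
        slow_path();                 /* deoptimise / handle unexpected type */
        return 0;
    }
    return (int64_t)(v >> TAG_BITS); /* untag the integer payload */
}

static void on_type_miss(void) { puts("type check failed: taking slow path"); }

int main(void) {
    jsval slot = ((jsval)42 << TAG_BITS) | TAG_INT;  /* boxed integer 42 */
    printf("%lld\n", (long long)load_int_field(&slot, on_type_miss));
    return 0;
}
```

Folding the tag compare into the memory operation itself, as the paper's checked-load instructions do, removes this explicit mask/compare/branch sequence from the common case, which is where the reported 12.9% average instruction overhead comes from.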
  • Intel® Core™2 Duo Mobile Processor for Intel® Centrino® Duo Mobile Processor Technology
Intel® Core™2 Duo Mobile Processor for Intel® Centrino® Duo Mobile Processor Technology
Datasheet
September 2007
Document Number: 314078-004

Legal Lines and Disclaimers
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
  • Design and Implementation of Low Power 5 Stage Pipelined 32 Bits MIPS Processor Using 28nm Technology
International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN: 2278-3075, Volume-8 Issue-4S2, March, 20

Design and implementation of low power 5 stage Pipelined 32 bits MIPS Processor using 28nm Technology
V.Prasanth, V.Sailaja, P.Sunitha, B.Vasantha Lakshmi

Abstract— MIPS is a simple streamlined highly scalable RISC architecture is most used in android base devices and best suited for portable mobile devices. This Paper presents a design of 5 stage pipelined 32 bit MIPS processor on a 28nm Technology. The processor is designed using Harvard architecture. The most important feature of pipelining is performance and speed of the processor, this results in increase of device power. To reduce dynamic power using RTL clock gating inside FPGA device we presented a novel approach in this paper. Design functionality in terms of area power and speed is analyzed using kintex 7 platform board.

Keywords: RISC, MIPS, Clock Gating, Dynamic Power, ...

...processor. The operations used by MIPS processor in instruction set which are generally used to access memory in MIPS processor are load and Store and other operations which are remaining are performed on register to register basis [2] this results in more clear instruction set design where it allows execution of one instruction-per cycle rate. The pipelining uses parallelism at instruction level to execute multiple instructions simultaneously using a single processor [2]. The major disadvantage with MIPS processor design is dynamic power consumption results due to clock power and switching-activity. Clock Gating is a method which employed in the design to reduce power consumption [17] by reducing switching activity of non active blocks.
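The clock-gating argument in this excerpt rests on the standard first-order model of dynamic power in CMOS logic; the relation below is that textbook model, not an equation taken from the paper itself.

```latex
% First-order CMOS dynamic (switching) power
P_{\mathrm{dyn}} \;=\; \alpha \, C_{L} \, V_{dd}^{2} \, f_{\mathrm{clk}}
```

Here alpha is the switching-activity factor, C_L the switched capacitance, V_dd the supply voltage, and f_clk the clock frequency. Clock gating attacks the activity term: a block whose clock is gated stops toggling, so its effective alpha (and its share of clock-tree switching) drops towards zero, which is why the authors target clock power and the switching activity of inactive blocks rather than supply voltage or operating frequency.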
  • 5 Microprocessors
BaseTech / Mike Meyers' CompTIA A+ Guide to Managing and Troubleshooting PCs / Mike Meyers / 380-8 / Chapter 5

5 Microprocessors

"MEGAHERTZ: This is a really, really big hertz." —DAVE BARRY

In this chapter, you will learn how to:
■ Identify the core components of a CPU
■ Describe the relationship of CPUs and memory
■ Explain the varieties of modern CPUs
■ Install and upgrade CPUs

For all practical purposes, the terms microprocessor and central processing unit (CPU) mean the same thing: it's that big chip inside your computer that many people often describe as the brain of the system. You know that CPU makers name their microprocessors in a fashion similar to the automobile industry: CPU names get a make and a model, such as Intel Core i7 or AMD Phenom II X4. But what's happening inside the CPU to make it able to do the amazing things asked of it every time you step up to the keyboard?

Historical/Conceptual

■ CPU Core Components

Although the computer might seem to act quite intelligently, comparing the CPU to a human brain hugely overstates its capabilities. A CPU functions more like a very powerful calculator than like a brain—but, oh, what a calculator! Today's CPUs add, subtract, multiply, divide, and move billions of numbers per second.
  • Introducing Intel's First-Ever Core™ i9 Mobile Processor
Product Brief: Mobile 8th Gen Intel® Core™ Processors

INTRODUCING INTEL'S FIRST-EVER CORE™ i9 MOBILE PROCESSOR. THE PERFORMANCE POWERHOUSE FOR WHAT THE MOST DEMANDING MOBILE ENTHUSIASTS NEED TODAY AND FOR WHAT COMES NEXT.

Be ready for amazing experiences in gaming, VR, content creation, and entertainment wherever your computing takes you with the highest-performance mobile 8th Generation Intel® Core™ processor family. This latest addition to the 8th generation processor family extends all the capabilities users have come to love in our mobile H series platforms with advanced innovations that deliver exciting new features to immerse you in incredible experiences on a variety of form factors.

REDEFINE ENTHUSIAST MOBILE PC PERFORMANCE
ULTIMATE MOBILE PLATFORM PERFORMANCE

The newest 8th Generation Intel Core processors redefine enthusiast mobile PC performance now with up to six cores and 12 MB of cache memory for more processing power—that's two more cores than the previous generation Intel Core processor family—Intel® Turbo Boost 2.0 technology and new Intel® Thermal Velocity Boost to opportunistically and automatically increase core frequency whenever processor temperature and turbo budget allows. Intel® Hyper-Threading Technology delivers multitasking support in the latest generation of Intel Core processors. For the enthusiast, the fully-unlocked 8th Generation Intel® Core™ i9-8950HK processor provides the opportunity to tweak the platform performance to its fullest potential and enjoy hardcore mobile gaming and VR experiences.

THE NEW MOBILE 8TH GENERATION INTEL® CORE™ PROCESSOR FAMILY DELIVERS:
• An impressive portfolio of standard and unlocked systems for a broad range of usages and performance levels.
  • Intel® Core™ i7-900/i7-800/i7-700 Mobile Processor
Intel® Core™ i7-900 Mobile Processor Extreme Edition Series, Intel Core i7-800 and i7-700 Mobile Processor Series
Datasheet, Volume One (this is volume 1 of 2; refer to document 320766 for Volume 2)
September 2009
Document Number: 320765-001

Legal Lines and Disclaimers
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications.
  • Android in the Cloud on Arm Native Servers
Android in the cloud on Arm native servers
December 2020

Executive summary
All the chatter about Arm servers and the role of Android in the cloud in recent years generates questions about what these things could mean for a largely mobile based platform like Arm/Android. This whitepaper examines the practical and commercial use cases that are driving a new paradigm linking the primary mobile and embedded computing platform in the world today with its new cousin in cloud infrastructure: Arm servers + virtualisation + native Android execution. While detailing the use cases driving a mobile platform migration to cloud infrastructure, this whitepaper also showcases the solution that Ampere, Canonical, and NETINT Technologies have partnered to produce addressing this migration. The solution is an example of how a service provider or developer can take advantage of Arm native computing in a cloud context to bring together an ecosystem of over 3 million primarily mobile apps integrated with benefits from a cloud enabled infrastructure. A case study around cloud gaming fills in the practical real-world considerations of this new paradigm. Cloud gaming really showcases the advantages of all the solution components the partners have integrated to make a scalable, efficient cloud resource available to a new class of cloud-native applications catering to the billions of users with Arm based devices worldwide. The whitepaper concludes with thoughts and predictions about the evolution of this new paradigm as hardware and software components mature in the coming years.

A new paradigm
The Arm architecture has dominated the mobile processor market with its unrivaled ability to maximise power-efficiency.