Survey and Benchmarking of Machine Learning Accelerators

16 pages · PDF · 1020 KB

Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, and Jeremy Kepner
MIT Lincoln Laboratory Supercomputing Center, Lexington, MA, USA
{reuther, pmichaleas, michael.jones, vijayg, sid, kepner}@ll.mit.edu

arXiv:1908.11348v1 [cs.PF] 29 Aug 2019

This material is based upon work supported by the Assistant Secretary of Defense for Research and Engineering under Air Force Contract No. FA8721-05-C-0002 and/or FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Assistant Secretary of Defense for Research and Engineering.

Abstract—Advances in multicore processors and accelerators have opened the floodgates to greater exploration and application of machine learning techniques to a variety of applications. These advances, along with breakdowns of several trends including Moore's Law, have prompted an explosion of processors and accelerators that promise even greater computational and machine learning capabilities. These processors and accelerators are coming in many forms, from CPUs and GPUs to ASICs, FPGAs, and dataflow accelerators. This paper surveys the current state of these processors and accelerators that have been publicly announced with performance and power consumption numbers. The performance and power values are plotted on a scatter graph, and a number of dimensions and observations from the trends on this plot are discussed and analyzed. For instance, there are interesting trends in the plot regarding power consumption, numerical precision, and inference versus training. We then select and benchmark two commercially available low size, weight, and power (SWaP) accelerators, as these processors are the most interesting for embedded and mobile machine learning inference applications that are most applicable to the DoD and other SWaP-constrained users. We determine how they actually perform with real-world images and neural network models, compare those results to the reported performance and power consumption values, and evaluate them against an Intel CPU that is used in some embedded applications.

Index Terms—Machine learning, GPU, TPU, dataflow, accelerator, embedded inference

I. INTRODUCTION

Artificial Intelligence (AI) and machine learning (ML) have the opportunity to revolutionize the way many industries, militaries, and other organizations address the challenges of evolving events, data deluge, and rapid courses of action. Innovations in computations, data sets, and algorithms have driven many advances for machine learning and its application to many different areas.
AI solutions involve a number of different pieces that must work together in order to provide capabilities that can be used by decision makers, warfighters, and analysts; Figure 1 depicts these important pieces that are needed when developing an end-to-end AI solution. While certain components may not be as visible to end users as others, our experience has shown that each of these interrelated components plays a major role in the success or failure of an AI system.

Fig. 1. Canonical AI architecture consists of sensors, data conditioning, algorithms, modern computing, robust AI, human-machine teaming, and users (missions). Each step is critical in developing end-to-end AI applications and systems.

On the left side of Figure 1, structured and unstructured data sources provide different views of entities and/or phenomenology. These raw data products are fed into a data conditioning step in which they are fused, aggregated, structured, accumulated, and converted to information. The information generated by the data conditioning step feeds into a host of supervised and unsupervised algorithms such as neural networks, which extract patterns, predict new events, fill in missing data, or look for similarities across datasets, thereby converting the input information to actionable knowledge. This actionable knowledge is then passed to human beings for decision-making processes in the human-machine teaming phase, which provides the users with useful and relevant insight, turning knowledge into actionable intelligence or insight.

Underlying all of these phases is a bedrock of modern computing systems comprised of one or more heterogeneous computing elements. For example, sensor processing may occur on low-power embedded computers, while algorithms may be computed in very large data centers. With regard to performance advances in these computing elements, Moore's law trends have ended [1], as have a number of related laws and trends including Dennard scaling (power density), clock frequency, core counts, instructions per clock cycle, and instructions per Joule (Koomey's law) [2]. Many of the technologies, tricks, and techniques of processor chip designers that extended these trends have been exhausted. However, all is not lost yet; advancements and innovations are still progressing. In fact, there has been a Cambrian explosion of computing technologies and architectures in recent years. Specialization of circuits for certain functionalities is being exploited, whereby often-used operational kernels, methods, or functions are accelerated with specialized circuit blocks and chips. These accelerators are designed with a different balance between performance and functional flexibility. One area in which we are seeing an explosion of accelerators is ML processors and accelerators [3]. Understanding the relative benefits of these technologies is of particular importance to applying AI to domains under significant constraints such as size, weight, and power, both in embedded applications and in data centers.

But before we get to the survey of ML processors and accelerators, we must cover several topics that are important for understanding several dimensions of evaluation in the survey. We must discuss the types of neural networks for which these ML accelerators are being designed; the distinction between neural network training and inference; and the numerical precision with which the neural networks are being used for training and inference:

• Types of Neural Networks – AI and machine learning encompass a wide set of statistics-based technologies, as one can see in the taxonomy detailed in the algorithm section (Section 3) of this MIT Lincoln Laboratory technical report [4]. Even among neural networks, there are a growing number of neural network patterns [5]. This paper will focus on processors that are geared toward deep neural networks (DNNs) and convolutional neural networks (CNNs). Overall, the emphasis of computational capability for machine learning is on DNNs and CNNs because they are quite computationally intensive [6], with the fully connected and convolutional layers being the most computationally intense. Conversely, pooling, dropout, softmax, and recurrent/skip connection layers are not computationally intensive, since these types of layers stipulate datapaths for weight and data operands.

• Neural Network Training versus Inference – Neural network training uses libraries of input data to converge model weight parameters by applying the labeled input […]

• Numerical Precision – […] including integer representations, have been shown to be reasonably effective for inference [7], [8]. However, it has also generally been established that very limited numerical precisions like int4, int2, and int1 do not adequately represent model weight parameters and significantly affect model output predictions (see the sketch following this list).
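To make the precision trade-off concrete, the following minimal NumPy sketch (ours, not the paper's) applies symmetric per-tensor quantization, one common scheme, to a Gaussian stand-in for trained model weights and reports the reconstruction error at each bit width; the weight distribution and the scaling rule used for the binary case are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for trained model weights (assumed distribution, for illustration).
    weights = rng.normal(0.0, 0.5, size=10_000).astype(np.float32)

    def quantize_dequantize(x, bits):
        # Symmetric uniform quantization to a signed `bits`-bit grid,
        # then back to float so the representation error can be measured.
        if bits == 1:
            # Binary case: keep only the sign, scaled by the mean magnitude.
            return np.sign(x) * np.mean(np.abs(x))
        qmax = 2 ** (bits - 1) - 1            # 127 for int8, 7 for int4, 1 for int2
        scale = np.max(np.abs(x)) / qmax      # one scale shared by the whole tensor
        return np.clip(np.round(x / scale), -qmax, qmax) * scale

    for bits in (8, 4, 2, 1):
        mse = float(np.mean((weights - quantize_dequantize(weights, bits)) ** 2))
        print(f"int{bits}: mean squared reconstruction error = {mse:.6f}")

Run as-is, the error grows modestly from int8 to int4 and then sharply at int2 and int1, consistent with the observation above that very limited precisions cannot adequately represent model weight parameters.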
The survey in the next section of this paper focuses on the computational throughput of the processors and accelerators, along with the power that is consumed to achieve that performance. Other factors include the memory bandwidth to load and update model parameters and data; the memory capacity for model weight parameters and input data, both close to the arithmetic units and in the global memory of the processors and accelerators; and the arithmetic intensity [9] of the neural network models being processed by the processor or accelerator (a worked sketch appears below). These factors are involved in managing model parameters and input data flows within the model; hence, they also influence the trade-offs between chip bandwidth capabilities, data flow flexibility, and the configuration and amount of computational capability. These factors, however, are beyond the scope of this paper, and they will be addressed in future phases of this research.

II. SURVEY OF PROCESSORS

Many recent advances in AI can be at least partly credited to advances in computing hardware [10], [11]. In particular, modern computing advances have been able to realize many computationally heavy machine-learning algorithms such as neural networks. While machine-learning algorithms such as neural networks have had a rich theoretic history [12], recent advances in computing have made the application of such algorithms a reality by providing the computational power needed to train and process massive quantities of data. Although the computing landscape of the past decade has been rich with numerous innovations, more embedded and mobile applications that require low size, weight, and power (SWaP) systems will need capabilities that are beyond those delivered by the traditional architectures of central processing units (CPUs) and graphics processing units (GPUs). For example, in commercial applications, it is common to off-load data conditioning and algorithms to non-SWaP-constrained platforms such as high-performance […]
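As a back-of-envelope illustration of the arithmetic intensity [9] mentioned at the end of Section I, the sketch below (ours, not the paper's) counts the floating-point operations and the idealized off-chip traffic of a single 3x3 convolutional layer; the layer shape, the fp16 operand size, and the assumption that each tensor crosses the memory interface exactly once are all illustrative.

    # Arithmetic intensity (FLOPs per byte of off-chip traffic) for one
    # 3x3 convolution layer, assuming each tensor is moved on/off chip once.
    H = W = 224              # output height and width (same padding assumed)
    c_in = c_out = 64        # input and output channel counts
    k = 3                    # convolution kernel size
    bytes_per_value = 2      # fp16 operands

    flops = 2 * H * W * c_out * c_in * k * k        # one multiply + one add per MAC
    weight_bytes = c_out * c_in * k * k * bytes_per_value
    input_bytes = H * W * c_in * bytes_per_value
    output_bytes = H * W * c_out * bytes_per_value
    bytes_moved = weight_bytes + input_bytes + output_bytes

    print(f"{flops / 1e9:.2f} GFLOPs over {bytes_moved / 1e6:.1f} MB moved")
    print(f"arithmetic intensity: {flops / bytes_moved:.0f} FLOPs/byte")

Comparing this ratio against a processor's peak-throughput-to-memory-bandwidth ratio indicates whether such a layer would be compute-bound or bandwidth-bound on that device, which is one reason these factors interact with the throughput and power numbers that the survey reports.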