A Many-Core Architecture for In-Memory Data Processing


MICRO 2017 Submission #XXX – Confidential Draft – Do NOT Distribute!!

ABSTRACT

We live in an information age, with data and analytics guiding a large portion of our daily decisions. Data is being generated at a tremendous pace from connected cars, connected homes and connected workplaces, and extracting useful knowledge from this data is quickly becoming an impractical task. Single-threaded performance has become saturated in the last decade, and there is a growing need for custom solutions to keep pace with these workloads in a scalable and efficient manner.

A big portion of the power in analytics workloads goes into bringing data to the processing cores, and we aim to optimize that. We present the Database Processing Unit or DPU, a shared memory many-core that is specifically designed for in-memory analytics workloads. The DPU contains a unique Data Movement System (DMS), which provides hardware acceleration for data movement and preprocessing operations. The DPU also provides acceleration for core to core communication via a unique hardware RPC mechanism called the Atomic Transaction Engine or ATE. Comparison of a fabricated DPU chip with a variety of state of the art x86 applications shows a performance/Watt advantage of 3× to 16×.

1. INTRODUCTION

The end of Dennard scaling and dark silicon [9] have saturated the performance of x86 processors, with most new generations devoting larger areas of the die to the integrated GPU [15]. With more companies gathering data for use in analytics and machine learning, the volume and velocity of data is rapidly rising, and massive computation and memory bandwidth is needed to process and analyze this data in a scalable and efficient manner. The slowdown in CPU scaling means that the only alternative for traditional server farms is scale out, which is expensive in terms of area and power due to additional uncore overheads.

Custom architectures have traditionally faced resistance in datacenters due to their high programming and bootstrap costs; however, with Moore's law slowing down, most big vendors are considering application specific hardware as a way to get higher performance at lower power budgets. Intel recently announced their foray into a hybrid FPGA solution [11], Microsoft is exploring FPGAs in their datacenters [21], Google is building custom accelerators for machine learning [16], and all major cloud vendors offer GPUs as part of their offerings.

GPUs are quickly becoming an attractive choice in datacenters for their compute capabilities and corresponding power efficiency [2][1][14][13]. However, they are hard to program due to their SIMT programming model and intolerance to control flow divergence. Their dependence on high bandwidth graphics memory to sustain the large number of on-die cores severely constrains their memory capacity. A single GPU with 300+ GB/s of memory bandwidth still sits on a PCIe 3.0 link, reducing its data load and data movement capabilities, which are essential for high-ingest streaming workloads as well as SQL queries involving large-to-large joins.

We analyzed the performance of complex analytics queries on large data structures, and identified several key areas that can improve efficiency. Firstly, most analytics queries need lots of joins and group-bys. Secondly, these queries need to be broken down into simple streaming primitives, which can be efficiently partitioned amongst cores. Thirdly, this level of scaleout also means that core to core communication must be fast and power efficient. We present the architecture and runtime capabilities of the Database Processing Unit (DPU), designed for database query processing and complex analytics. The DPU is designed to scale to thousands of nodes, terabytes of memory capacity and terabytes/sec of memory bandwidth at rack scale, focusing on a precise compute to memory bandwidth ratio while minimizing power.

The DPU is a shared memory many-core; the hardware reduces power over a commodity Xeon processor by sacrificing features such as out-of-order, superscalar execution, SIMD units, large multi-tier caches, paging and cache coherence, and instead provides acceleration for common data movement operations. A DPU consists of 32 identical dbCores, which are simple dual issue in-order cores with a simple (and low power) cache and a programmer managed scratchpad (Data Memory or DMEM). For an equal power envelope, DPUs provide higher memory capacity as well as higher aggregate memory bandwidth compared to GPUs, making them a better choice for big data workloads. DPUs achieve this efficiency by recognizing data movement and inter-core communication as key enablers of complex analytics applications, and provide hardware acceleration for these functions.

The DPU programming model is based on scalability and reaching peak memory bandwidth, as each DPU is designed with a balanced compute to memory bandwidth ratio of 2 cycles of compute per byte of data transferred from DRAM. A DPU relies on a programmable DMA engine called the Data Movement System (DMS) that allows it to efficiently stream data from memory. The DMS allows for efficient line speed processing of in-memory data, by performing data movement and partitioning between DRAM and a dbCore's scratchpad asynchronously, leaving the dbCore to perform useful work. The DMS can perform many common stream operations like read, write, gather, scatter and hashes at peak bandwidth, allowing for efficient overlap of compute and data transfer. This also means that dbCores are less reliant on caches, allowing us to dramatically simplify the cache and reduce power consumption. The DMS also communicates with the dbCores via a unique Wait For Event (WFE) mechanism, which allows for interrupt-free notification of data movement, further reducing power and improving performance.

Unlike conventional DMA block engines, the DMS is independently programmable by each dbCore, through the use of specialized instructions called descriptors that are populated and stored in the dbCore's scratchpad. Chaining descriptors together to form a pipeline allows the DMS to be used in a variety of ways, as sketched below.
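As an illustration of this programming style, the following sketch populates one descriptor in a dbCore's DMEM, hands it to the DMS, and blocks on the WFE mechanism while the transfer streams in the background. The paper does not publish the descriptor layout, so every name here (dms_desc_t, dms_submit, wfe_wait, and the opcode and event constants) is a hypothetical stand-in rather than the hardware's actual interface.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical descriptor layout, built in the dbCore's DMEM scratchpad.
 * The real hardware format is not published; fields are illustrative. */
typedef struct dms_desc {
    uint32_t opcode;            /* e.g. read, write, gather, scatter, hash */
    uint64_t src_addr;          /* source address in DRAM                  */
    void    *dst_dmem;          /* destination buffer in DMEM              */
    uint32_t num_rows;          /* rows streamed by this descriptor        */
    uint32_t row_width;         /* bytes per row                           */
    struct dms_desc *next;      /* chaining descriptors forms a pipeline   */
} dms_desc_t;

enum { DMS_OP_READ = 1, DMS_EVT_DONE = 7 };   /* assumed constants */

extern void dms_submit(dms_desc_t *chain);    /* assumed: hand a chain to the DMS */
extern void wfe_wait(uint32_t event_id);      /* assumed: low-power Wait For Event */

/* Stream one chunk of a 4-byte-wide column from DRAM into DMEM. The DMS
 * runs asynchronously after dms_submit(), so the dbCore could do useful
 * work here before blocking on the interrupt-free WFE notification. */
void stream_chunk(uint64_t col_base, uint8_t *dmem_buf, uint32_t rows)
{
    dms_desc_t d = {
        .opcode    = DMS_OP_READ,
        .src_addr  = col_base,
        .dst_dmem  = dmem_buf,
        .num_rows  = rows,
        .row_width = 4,
        .next      = NULL,      /* a second descriptor here would chain */
    };
    dms_submit(&d);
    wfe_wait(DMS_EVT_DONE);
}
```

In a real kernel, two such descriptors would double-buffer DMEM so that the dbCore processes one chunk while the DMS fills the next.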
Figure 1: Block diagram of a DPU

For example, efficient access to large data structures requires hardware support for partitioning operations [24]. The DMS allows for efficient data partitioning across any/all cores, either based on a range of values, or based on a radix, or on the basis of a CRC32 hash across any number of keys. The results of these partitioning operations are placed in the respective dbCore's scratchpad for further processing; a software analogy of the three modes follows below.
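The sketch below routes each 64-bit key to one of the 32 dbCores by value range, by radix bits, or by a CRC32 hash. On the DPU this routing runs in the DMS hardware at line speed; the helper functions and the zlib-style CRC-32 polynomial here are illustrative assumptions, not the chip's exact variant.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_DBCORES 32

/* Software stand-in for the CRC32 the DMS computes in hardware; the
 * reflected polynomial 0xEDB88320 is an assumption. */
static uint32_t crc32_key(uint64_t key)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (int i = 0; i < 8; i++) {            /* feed all 8 key bytes */
        crc ^= (uint32_t)(key >> (8 * i)) & 0xFFu;
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
    }
    return ~crc;
}

/* Range partitioning: consecutive key ranges map to consecutive cores. */
static int part_range(uint64_t key, uint64_t min, uint64_t max)
{
    uint64_t span = (max - min) / NUM_DBCORES + 1;   /* keys per core */
    return (int)((key - min) / span);
}

/* Radix partitioning: the low log2(32) = 5 bits of the key pick the core. */
static int part_radix(uint64_t key)
{
    return (int)(key & (NUM_DBCORES - 1));
}

/* Hash partitioning: a CRC32 over the key picks the core. */
static int part_hash(uint64_t key)
{
    return (int)(crc32_key(key) % NUM_DBCORES);
}

int main(void)
{
    uint64_t key = 123456789;
    printf("range core: %d\n", part_range(key, 0, 1000000000));
    printf("radix core: %d\n", part_radix(key));
    printf("hash  core: %d\n", part_hash(key));
    return 0;
}
```

Range partitioning preserves sort order across cores, radix is the cheapest to compute, and the CRC32 hash spreads skewed key distributions most evenly, which is why the DMS supports all three.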
x86 processors focus on programmability and single thread performance, GPUs focus on compute bandwidth, and DPUs focus on programmability and aggregate memory bandwidth/Watt at scale.

The DPU is envisioned as part of a larger system, where thousands of DPUs are connected via an Infiniband tree at rack scale, providing > 40,000 dbCores (at 32 dbCores per DPU, upwards of 1,250 DPUs), > 10 TB/s of aggregate memory bandwidth and > 1 TB/s of network bandwidth at 20 kW of power. Achieving rack scale gains requires fulfillment of two objectives: improved algorithms for scalability across these tens of thousands of cores, and improved performance/Watt on a single DPU, which translates into improved performance/Watt of a rack. We taped out and manufactured a DPU using a 40 nm process for benchmarking purposes, and focus our evaluation on this single DPU for this work. Our primary contributions are:

• We present the Data Processing Unit (DPU), a many-core architecture for in-memory data processing focused on highly parallel processing of large amounts of data while minimizing power consumption.

• We introduce the Data Movement System (DMS), a DMA engine which provides efficient data movement, partitioning and pre-processing capabilities. The DMS has been uniquely designed to minimize the amount of software needed to control it in critical loops.

• We present an Atomic Transaction Engine (ATE), a mechanism that implements synchronous and asynchronous Remote Procedure Calls from one core to another, some of which are executed in hardware, allowing fast execution of functions without interrupting the remote core.

• We design and fabricate the DPU, and evaluate its performance across a large variety of workloads, achieving 3× to 15× improvement in performance/Watt over a single socket Xeon processor.

• We showcase unique hardware software co-design aspects of our approach towards DPU application performance across a variety of workloads.

Section 2 describes the energy efficient architecture of the DPU and its core components. Section 3 briefly describes the runtime used to simplify application development on the DPU. We then evaluate the performance of our hardware using a few microbenchmarks in Section 4. Finally, we look at the performance and energy efficiency of the DPU across a variety of real world applications in Section 5.

2. DPU ARCHITECTURE

The DPU is a System-On-a-Chip (SoC) that defines a basic processing unit at the edge of a network topology. The DPU is designed with power efficiency and maximizing data parallelism as its principal constraints. Figure 1 shows the block diagram of a DPU. The DPU is designed in a hierarchical manner: 8 identical dbCores form a macro, and the DPU contains 4 such identical macros. DbCores in a macro share an L2 cache (Figure 2), and each macro can be power gated independently of other macros. The primary computation in a DPU is carried out on the 32 identical dbCores, which can communicate via an ATE interface. Each dbCore reads/writes to main memory (DRAM) using the DMS, or via a 2-level simplified cache hierarchy. Each DPU has a dual core ARM A9 processor on board as well, which manages the Infiniband networking stack and provides a high bandwidth interface.

Figure 2: Block diagram of a Macro

Memory operations. In general, all loads read the value from an address in memory and either sign- or zero-extend the value to 64 bits as the value is written into the register file.
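As a brief C analogy for those load semantics, the snippet below widens one byte from memory to 64 bits both ways: a signed load replicates the sign bit, while an unsigned load fills the upper bits with zeros (plain host C; the dbCore's load mnemonics are not given in this excerpt).

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint8_t byte_in_memory = 0xF0;   /* one byte at some memory address */

    /* Signed load: bit 7 is replicated through the upper 56 bits. */
    int64_t  signed_load   = (int64_t)(int8_t)byte_in_memory;

    /* Unsigned load: the upper 56 bits are filled with zeros. */
    uint64_t unsigned_load = (uint64_t)byte_in_memory;

    printf("sign-extended: %" PRId64 " (0x%016" PRIX64 ")\n",
           signed_load, (uint64_t)signed_load);   /* -16, 0xFFFFFFFFFFFFFFF0 */
    printf("zero-extended: %" PRIu64 " (0x%016" PRIX64 ")\n",
           unsigned_load, unsigned_load);         /* 240, 0x00000000000000F0 */
    return 0;
}
```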