US 2015/0160872 A1 — CHEN — (43) Pub. Date: Jun. 11, 2015

Total pages: 16
File type: PDF, size: 1020 KB

(19) United States
(12) Patent Application Publication — CHEN — (10) Pub. No.: US 2015/0160872 A1 — (43) Pub. Date: Jun. 11, 2015

(54) OPERATION METHOD OF DISTRIBUTED MEMORY DISK CLUSTER STORAGE SYSTEM
(71) Applicant: HSUN-YUAN CHEN, TAIPEI CITY 116 (TW)
(72) Inventor: HSUN-YUAN CHEN, TAIPEI CITY 116 (TW)
(21) Appl. No.: 14/562,892
(22) Filed: Dec. 8, 2014
(30) Foreign Application Priority Data: Dec. 9, 2013 (TW) 102145155

Publication Classification
(51) Int. Cl.: G06F 3/06 (2006.01)
(52) U.S. Cl.: CPC G06F 3/0619 (2013.01); G06F 3/0689 (2013.01); G06F 3/0664 (2013.01)

(57) ABSTRACT
The present invention relates to an operation method of a distributed memory disk cluster storage system. A distributed memory storage system is adopted, thereby satisfying four desired expansions, which are the expansion of network bandwidth, the expansion of hard disk capacity, the expansion of IOPS speed, and the expansion of memory I/O transmitting speed. Meanwhile, the system can be cross-region operated, across data centers and the WAN, so the user's requirements can be collected through the local memory disk cluster for being provided with the corresponding services, and the capacity of the memory disk cluster can also be gradually expanded for further providing cross-region or cross-country data service.

[FIG. 1, Sheet 1 of 2: user ends connect through iSCSI / FC / FCoE / NAS over LAN IP and SAN to a virtualized scheme connection; computer units (10) host APPs and chunk memory disks (11, 13) behind a virtualized distributed switch (20), 1/8/10/16/40/56/100 GbE physical switches, and a router providing an SSL VPN or VPN connection to the WAN IP.]
[FIG. 2, Sheet 2 of 2: second schematic view of the operation method.]

… and other upward compatible types having different formats: Ceph, GlusterFS, SphereFS, Taobao File System, ZFS, SDFS, MooseFS, AdvFS, Be file system (BFS), Btrfs, Coda, CrossDOS, disk file system (DFS), Episode, EFS, exFAT, ext, FAT, global file system (GFS), hierarchical file system (HFS), HFS Plus, high performance file system, IBM general parallel file system, JFS, Macintosh file system, MINIX, NetWare file system, NILFS, Novell storage service, NTFS, QFS, QNX4FS, ReiserFS (Reiser4), SpadFS, UBIFS, Unix file system, Veritas file system (VxFS), VFAT, write anywhere file layout (WAFL), XFS, Xsan, ZFS, CHFS, FFS2, F2FS, JFFS, JFFS2, LogFS, NVFS, YAFFS, UBIFS, DCE/DFS, MFS, CXFS, GFS2, Google file system, OCFS, OCFS2, QFS, Xsan, AFS, OpenAFS, AFP, MS-DFS, GPFS, Lustre, NCP, NFS, POHMELFS, Hadoop, HAMMER, SMB (CIFS), cramfs, FUSE, SquashFS, UMSDOS, UnionFS, configfs, devfs, procfs, specfs, sysfs, tmpfs, WinFS, EncFS, EFS, ZFS, RAW, ASM, LVM, SFS, MPFS or MGFS.

[0030] According to the operation method of the distributed memory disk cluster storage system provided by the present invention, the distributed memory storage system can satisfy four desired expansions, which are the expansion of network bandwidth, the expansion of hard disk capacity, the expansion of IOPS speed, and the expansion of memory I/O transmitting speed. Meanwhile, the system can be cross-region operated, across data centers and the WAN, so the user's requirements can be collected through the local memory disk cluster for being provided with the corresponding services, and the capacity of the memory disk cluster can also be gradually expanded for further providing cross-region or cross-country data service.

[0031] With the increased quantity of the storage devices, increasing one server would have the network bandwidth and the disk capacity correspondingly accumulated, thereby forming a resource pool; the distributed memory disk cluster storage is served like a physical hard disk, so the whole operation would not be affected due to one of the physical mainframes failing, and the chunk memory disk holding the copy could copy the stored data to a new chunk memory disk, so a fundamental data backup is maintained. Meanwhile, the continuous data protector (CDP) is also adopted for providing a novel service of data backup and recovery, thus the disadvantages of the tape backup often failing and the backup only being performed once a day are improved.

[0032] In addition, the data generated through the copy can be sent from different chunk memory disks, thereby achieving many-to-one data transmission; when the user amount increases, only increasing the quantity of the chunk memory disks can achieve many-to-many transmission. So the disadvantages of multiple RAID hard disks crashing causing the whole data to be lost, of the limited quantity of network interfaces of a storage device and the network speed causing excessive data to be jammed and delayed in transmission, of the expansion of LUNs, and of the data center being unable to be cross-region operated can be solved. The present invention adopts the memory being served as a disk: each file or each virtual machine can be stored in the memory in a file format, the highest I/O speed of the memory bus can be directly utilized, the data can be transmitted between the CPU and the memory, and the highest I/O number, distance and speed can be provided. Accordingly, the present invention is novel and more practical in use compared to the prior art.

BRIEF DESCRIPTION OF THE DRAWINGS
[0033] The present invention will be apparent to those skilled in the art by reading the following detailed description of a preferred embodiment thereof, with reference to the attached drawings, in which:
[0034] FIG. 1 is a schematic view illustrating the operation method of the distributed memory disk cluster storage system according to one embodiment provided by the present invention; and
[0035] FIG. 2 is another schematic view illustrating the operation method of the distributed memory disk cluster storage system according to one embodiment provided by the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0036] The following descriptions are of exemplary embodiments only, and are not intended to limit the scope, applicability or configuration of the invention in any way. Rather, the following description provides a convenient illustration for implementing exemplary embodiments of the invention. Various changes to the described embodiments may be made in the function and arrangement of the elements described without departing from the scope of the invention as set forth in the appended claims.
[0037] Referring to FIG. 1, the present invention provides an operation method of a distributed memory disk cluster storage system, wherein one preferred embodiment for illustrating the operation method of the distributed memory disk cluster storage system is as follows:
[0038] The installation of a distributed memory storage equipment includes a plurality of computer units (10) for assembling a cluster scheme (1) so as to form a cluster memory disk; wherein the computer unit (10) is installed with a CPU, at least a memory, at least a hard disk, at least a network card, a mother board, an I/O interface card, at least a connection cable and a housing.
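Paragraph [0038]'s idea of many computer units pooling their RAM into one cluster memory disk can be pictured with a small sketch. This is a minimal illustration only, not the patent's implementation; the names (MemoryNode, ClusterMemoryDisk, BLOCK_SIZE) and the block size are made up for the example.

```python
# Minimal sketch of the pooling idea in [0038]: several computer units each
# contribute a slice of RAM, and the cluster presents the slices as one
# logical "cluster memory disk" addressed by logical block number.
BLOCK_SIZE = 4096  # bytes per logical block (assumed value)

class MemoryNode:
    """One computer unit (10) contributing RAM as storage."""
    def __init__(self, name: str, capacity_blocks: int):
        self.name = name
        self.blocks = [bytes(BLOCK_SIZE)] * capacity_blocks

class ClusterMemoryDisk:
    """Concatenates the nodes' RAM into a single block address space."""
    def __init__(self, nodes):
        self.nodes = nodes

    def _locate(self, lba: int):
        # Walk the nodes until the logical block address falls inside one.
        for node in self.nodes:
            if lba < len(node.blocks):
                return node, lba
            lba -= len(node.blocks)
        raise IndexError("logical block address out of range")

    def write_block(self, lba: int, data: bytes):
        node, idx = self._locate(lba)
        node.blocks[idx] = data.ljust(BLOCK_SIZE, b"\0")[:BLOCK_SIZE]

    def read_block(self, lba: int) -> bytes:
        node, idx = self._locate(lba)
        return node.blocks[idx]

# Adding a node simply grows the address space, mirroring the claim in [0031]
# that adding a server accumulates bandwidth and capacity into a resource pool.
disk = ClusterMemoryDisk([MemoryNode("unit-1", 1024), MemoryNode("unit-2", 1024)])
disk.write_block(1500, b"hello")                 # lands on unit-2
assert disk.read_block(1500).startswith(b"hello")
```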
[0039] The computer unit (10) is installed with a system virtual machine platform operation system, so the computer unit (10) is formed with a plurality of virtual machines, and the computer unit (10) is used for setting the required machine memory resource capacity; the operation system is used for setting the memory capacity occupying manner, or a program software is utilized for planning the memory to a hard disk device for forming a chunk memory disk (11), which is the same as the tracks of a hard disk.
[0040] As such, a file is enabled to be divided into one or plural data, and the file size can be in the MB range or bigger; one or plural copies are evenly distributed in the chunk memory disks (11), so the data is actually stored in a memory module, and a memory bus with multiple channels is utilized for parallel accessing of the memory module, thereby allowing the capacity of the memory module to be planned for being used as a hard disk, wherein the access of the memory module supports all the file formats of the operation system, and a distributed storage scheme is utilized for allowing the data to be copied into one or more copies. With the above-mentioned method, the data center can still be operated even if a machine is broken and/or the data center is damaged.
[0041] Each copied data can be encrypted through mixing 1–4096 bit AES and RSA for being stored in the memory; when the data is desired to be accessed, the data is transmitted between the memory and the CPU, thereby minimizing the I/O accessing times and distance. The virtual machine is formed as a file format for being stored in the memory module, and the memory capacity planned for the virtual memory is also in the same sector.
[0042] When the operation system …

… and the data access speed and the data liability can also be increased, and the above-mentioned can be gradually increased according to the user's desire.
[0048] When the cluster schemes (1) are formed, each of …
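Paragraphs [0039]–[0041] describe splitting a file into pieces, spreading replicas evenly over the chunk memory disks, and optionally encrypting each copy. Below is a minimal sketch of that flow; the chunk size, replica count and placement rule are assumptions for the sketch (the excerpt does not fix them), and the AES/RSA mixing of [0041] is only marked as a comment, not implemented.

```python
# Sketch of the chunk-and-replicate flow in [0039]-[0041]: a file is split
# into fixed-size pieces, and each piece is stored on several distinct chunk
# memory disks so the loss of one physical mainframe does not lose data.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024   # assumed chunk size
REPLICAS = 2                   # assumed replica count

def place_chunks(data: bytes, disks: list) -> list:
    """Split data into chunks and store each chunk on REPLICAS distinct disks.

    Returns (chunk_index, chunk_id) pairs so the file can be reassembled.
    """
    placed = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        # Each replica could first be encrypted here (the patent mixes AES and
        # RSA); the sketch stores the chunk as-is.
        chunk_id = hashlib.sha256(chunk).hexdigest()
        start = int(chunk_id, 16) % len(disks)      # pseudo-even placement
        for r in range(REPLICAS):
            disks[(start + r) % len(disks)][chunk_id] = chunk
        placed.append((i // CHUNK_SIZE, chunk_id))
    return placed

def read_chunk(chunk_id: str, disks: list) -> bytes:
    """Any surviving replica satisfies the read (many-to-one transmission)."""
    for disk in disks:
        if chunk_id in disk:
            return disk[chunk_id]
    raise KeyError(chunk_id)

disks = [dict() for _ in range(4)]                 # four chunk memory disks (11)
table = place_chunks(b"x" * (10 * 1024 * 1024), disks)
disks[0].clear()                                   # one mainframe fails
assert all(read_chunk(cid, disks) for _, cid in table)  # data still readable
```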
Recommended publications
  • GNU/Linux Systems Administration, Module 4: Local Administration (Administració de Sistemes GNU/Linux, Mòdul 4: Administració local)
Local Administration — Josep Jorba Esteve — PID_00238577
Permission is granted to copy, distribute and modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation, with no invariant sections and no front-cover or back-cover texts. The terms of the license can be consulted at http://www.gnu.org/licenses/fdl-1.3.html.
Contents: Introduction; 1. Basic tools for the administrator; 1.1. Graphical tools and command lines; 1.2. Standards documents; 1.3. Online system documentation; 1.4. Package management tools; 1.4.1. TGZ packages; 1.4.2. Fedora/Red Hat: RPM packages; 1.4.3. Debian: DEB packages; 1.4.4. New packaging formats: Snap and Flatpak; 1.5. Generic administration tools; 1.6. Other tools …
  • Storage Administration Guide — SUSE Linux Enterprise Server 12 SP4
Storage Administration Guide, SUSE Linux Enterprise Server 12 SP4. Provides information about how to manage storage devices on a SUSE Linux Enterprise Server. Publication Date: September 24, 2021. SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA. https://documentation.suse.com
Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled "GNU Free Documentation License". For SUSE trademarks, see https://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.
Contents: About This Guide; 1 Available Documentation; 2 Giving Feedback; 3 Documentation Conventions; 4 Product Life Cycle and Support; Support Statement for SUSE Linux Enterprise Server; Technology Previews; Part I: File Systems and Mounting; 1 Overview …
  • Early Experiences with Storage Area Networks and CXFS John Lynch
Early Experiences with Storage Area Networks and CXFS. John Lynch, Aerojet, 6304 Spine Road, Boulder CO 80516.
Abstract: This paper looks at the design, integration and application issues involved in deploying an early access, very large, and highly available storage area network. Covered are topics from filesystem failover, issues regarding numbers of nodes in a cluster, and using leading edge solutions to solve complex issues in a real-time data processing network.
1 Introduction. Aerojet designed and installed a highly available, large scale Storage Area Network over spring of 2000. This system, due to its size and diversity, is known to be one of a kind and is currently not offered by SGI, but would serve as a prototype system. The project's goal was to evaluate Fibre Channel and SAN technology for its benefits and applicability in a second-generation, real-time data processing network. SAN technology seemed to be the technology of the future to replace the traditional SCSI solution. The approach was to conduct an evaluation of SAN technology as a …
SAN technology can be categorized in two distinct approaches. Both approaches use the storage area network to provide access to multiple storage devices at the same time by one or multiple hosts. The difference is how the storage devices are accessed. The most common approach allows the hosts to access the storage devices across the storage area network, but filesystems are not shared. This allows either a single host to stripe data across a greater number of storage controllers, or to share storage controllers among several systems. This essentially breaks up a large storage system into smaller distinct pieces, but allows for the cost-sharing of the most expensive component, the storage controller.
  • Study of File System Evolution
Study of File System Evolution. Swaminathan Sundararaman, Sriram Subramanian, Department of Computer Science, University of Wisconsin. {swami, srirams}@cs.wisc.edu
Abstract: File systems have traditionally been a major area of research and development. This is evident from the existence of over 50 file systems of varying popularity in the current version of the Linux kernel. They represent a complex subsystem of the kernel, with each file system employing different strategies for tackling various issues. Although there are many file systems in Linux, there has been no prior work (to the best of our knowledge) on understanding how file systems evolve. We believe that such information would be useful to the file system community, allowing developers to learn from previous experiences.
This paper looks at six file systems (Ext2, Ext3, Ext4, JFS, ReiserFS, and XFS) from a historical perspective (between kernel versions 1.0 to 2.6) to get an insight on …
File systems are typically developed and maintained by several programmers across the globe. At any point in time, for a file system, there are three to six active developers and ten to fifteen patch contributors, but a single maintainer. These people communicate through individual file system mailing lists [14, 16, 18], submitting proposals for new features and enhancements, reporting bugs, and submitting and reviewing patches for known bugs. The problem with the open source development approach is that all communication is buried in the mailing list archives and isn't easily accessible to others. As a result, when new file systems are developed they do not leverage past experience and could end up re-inventing the wheel. To make things worse, people could typically end up making the same mistakes as were made in other file systems.
  • CXFS™ Client-Only Guide for SGI® InfiniteStorage
CXFS™ Client-Only Guide for SGI® InfiniteStorage, 007–4507–016.
COPYRIGHT © 2002–2008 SGI. All rights reserved; provided portions may be copyright in third parties, as indicated elsewhere herein. No permission is granted to copy, distribute, or create derivative works from the contents of this electronic documentation in any manner, in whole or in part, without the prior written permission of SGI.
LIMITED RIGHTS LEGEND The software described in this document is "commercial computer software" provided with restricted rights (except as to included open/free source) as specified in the FAR 52.227-19 and/or the DFAR 227.7202, or successive sections. Use beyond license provisions is a violation of worldwide intellectual property laws, treaties and conventions. This document is provided with limited rights as defined in 52.227-14.
TRADEMARKS AND ATTRIBUTIONS SGI, Altix, the SGI cube and the SGI logo are registered trademarks and CXFS, FailSafe, IRIS FailSafe, SGI ProPack, and Trusted IRIX are trademarks of SGI in the United States and/or other countries worldwide. Active Directory, Microsoft, Windows, and Windows NT are registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. AIX and IBM are registered trademarks of IBM Corporation. Brocade and Silkworm are trademarks of Brocade Communication Systems, Inc. AMD, AMD Athlon, AMD Duron, and AMD Opteron are trademarks of Advanced Micro Devices, Inc. Apple, Mac, Mac OS, Power Mac, and Xserve are registered trademarks of Apple Computer, Inc. Disk Manager is a registered trademark of ONTRACK Data International, Inc. Engenio, LSI Logic, and SANshare are trademarks or registered trademarks of LSI Corporation.
  • ECE 598 – Advanced Operating Systems Lecture 19
ECE 598 – Advanced Operating Systems, Lecture 19. Vince Weaver, http://web.eece.maine.edu/~vweaver, [email protected], 7 April 2016.
Announcements: Homework #7 was due; Homework #8 will be posted.
Why use FAT over ext2? FAT is simpler and easy to code; FAT is supported on all major OSes; ext2 is faster, with more robust filenames and permissions.
btrfs (B-tree fs, similar to a binary tree but with pages full of leaves): overwrite filesystem (overwrite on modify) vs. CoW; copy on write — when writing to a file, old data is not overwritten, so crash recovery is better, and the old data is eventually garbage collected; data in extents; forest of trees: sub-volumes, extent-allocation, checksum tree, chunk device, reloc; on-line defragmentation; on-line volume growth; built-in RAID; transparent compression; snapshots; checksums on data and meta-data; de-duplication; cloning — can make an exact snapshot of a file, copy-on-write, different than a link: different inodes but the same blocks.
Embedded: designed to be small, simple, read-only? romfs — 32 byte header (magic, size, checksum, name); repeating files (pointer to next [0 if none], info, size, checksum, file name, file data); cramfs.
ZFS: advanced FS from Sun/Oracle, similar in idea to btrfs; indirect still, not extent based?
ReFS: Resilient FS, Microsoft's answer to btrfs and ZFS.
Networked File Systems: allow a centralized file server to export a filesystem to multiple clients; provide file-level access, not just raw blocks (NBD); clustered filesystems also exist, where multiple servers work in conjunction.
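The copy-on-write bullets above (old data is never overwritten, so snapshots are cheap and crash recovery is easier) can be illustrated with a toy block store. This is a conceptual sketch only, not how btrfs's B-trees are actually implemented:

```python
# Toy copy-on-write store: a "file" is a mapping from block number to a slot
# in an append-only block list. Writing never overwrites a slot, so snapshots
# (copies of the mapping) keep seeing the old data - conceptually what lets
# btrfs take cheap snapshots and recover after a crash.
class CowStore:
    def __init__(self):
        self.blocks = []          # append-only storage
        self.mapping = {}         # live file: block number -> slot index

    def write(self, blockno: int, data: bytes):
        self.blocks.append(data)              # old slot is left untouched
        self.mapping[blockno] = len(self.blocks) - 1

    def read(self, blockno: int, mapping=None) -> bytes:
        view = mapping if mapping is not None else self.mapping
        return self.blocks[view[blockno]]

    def snapshot(self) -> dict:
        return dict(self.mapping)             # just copy the (small) mapping

store = CowStore()
store.write(0, b"v1")
snap = store.snapshot()       # like a snapshot/clone: shares existing blocks
store.write(0, b"v2")         # new block allocated; snapshot unaffected
assert store.read(0) == b"v2"
assert store.read(0, snap) == b"v1"
```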
  • USB Composite Gadget Using CONFIG-FS on DRA7xx Devices
Application Report SPRACB5, September 2017. USB Composite Gadget Using CONFIG-FS on DRA7xx Devices. RaviB.
ABSTRACT: This application note explains how to create a USB composite gadget, network control model (NCM) and abstract control model (ACM), from the user space using Linux® CONFIG-FS on the DRA7xx platform.
Contents: 1 Introduction; 2 USB Composite Gadget Using CONFIG-FS; 3 Creating Composite Gadget From User Space; 4 References.
List of Figures: 1 Block Diagram of USB Composite Gadget; 2 Selection of CONFIGFS Through menuconfig; 3 Select USB Configuration Through menuconfig; 4 Composite Gadget Configuration Items as Files and Directories; 5 VID, PID, and Manufacturer String Configuration; 6 Kernel Logs Show Enumeration of USB Composite Gadget by Host; 7 Ping …
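The "from user space" part of the abstract comes down to creating directories and writing small attribute files under the USB gadget configfs tree. The sketch below shows that generic Linux flow in Python; the mount point, the VID/PID values, the function instance names and the UDC name are placeholders that depend on the board, and the exact DRA7xx steps in the application note may differ:

```python
# Sketch of building an NCM + ACM composite gadget through configfs.
# Assumes configfs is mounted at /sys/kernel/config and that the kernel
# provides libcomposite with the NCM and ACM functions. The vendor/product
# IDs and the UDC name below are placeholders, not values from the
# application note. Run as root on the target.
from pathlib import Path
import os

GADGET = Path("/sys/kernel/config/usb_gadget/g1")

def write(rel: str, value: str):
    (GADGET / rel).write_text(value)

def build_gadget(udc_name: str):
    # Gadget identity.
    (GADGET / "strings/0x409").mkdir(parents=True, exist_ok=True)
    write("idVendor", "0x1d6b")            # placeholder VID (Linux Foundation)
    write("idProduct", "0x0104")           # placeholder PID (composite gadget)
    write("strings/0x409/manufacturer", "Example Manufacturer")
    write("strings/0x409/product", "NCM+ACM Composite Gadget")

    # One configuration holding both functions.
    (GADGET / "configs/c.1/strings/0x409").mkdir(parents=True, exist_ok=True)
    write("configs/c.1/strings/0x409/configuration", "NCM + ACM")

    # Create the functions and link them into the configuration.
    for func in ("ncm.usb0", "acm.gs0"):
        (GADGET / "functions" / func).mkdir(parents=True, exist_ok=True)
        link = GADGET / "configs/c.1" / func
        if not link.exists():
            os.symlink(GADGET / "functions" / func, link)

    # Binding the gadget to a UDC makes it enumerate on the host.
    write("UDC", udc_name)

# build_gadget("48890000.usb-otg")  # UDC name is board-specific; see /sys/class/udc
```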
  • Load Management and Demand Response in Small and Medium Data Centers
Thiago Lara Vasques. Load Management and Demand Response in Small and Medium Data Centers. PhD Thesis in Sustainable Energy Systems, supervised by Professor Pedro Manuel Soares Moura, submitted to the Department of Mechanical Engineering, Faculty of Sciences and Technology of the University of Coimbra, May 2018.
Load Management and Demand Response in Small and Medium Data Centers, by Thiago Lara Vasques. PhD Thesis in Sustainable Energy Systems in the framework of the Energy for Sustainability Initiative of the University of Coimbra and MIT Portugal Program, submitted to the Department of Mechanical Engineering, Faculty of Sciences and Technology of the University of Coimbra. Thesis Supervisor: Professor Pedro Manuel Soares Moura, Department of Electrical and Computers Engineering, University of Coimbra. May 2018.
This thesis has been developed under the Energy for Sustainability Initiative of the University of Coimbra and has been supported by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brazil).
ACKNOWLEDGEMENTS First and foremost, I would like to thank God for coming to the conclusion of this work with health, courage, perseverance and, above all, with a miraculous amount of love that surrounds and graces me. The work of this thesis also has the direct and indirect contribution of many people, whom I feel honored to thank. I would like to express my gratitude to my supervisor, Professor Pedro Manuel Soares Moura, for his generosity in opening the doors of the University of Coimbra by giving me the possibility of his orientation when I was a stranger, and subsequently for his teachings, guidance and support in difficult times. You, Professor, inspire me with your humbleness given the knowledge you possess.
  • Silicon Graphics, Inc. Scalable Filesystems XFS & CXFS
Silicon Graphics, Inc. Scalable Filesystems: XFS & CXFS. Presented by Yingping Lu, January 31, 2007.
Outline: XFS Overview; XFS Architecture; XFS Fundamental Data Structures – extent list, B+tree, inode; XFS Filesystem On-Disk Layout; XFS Directory Structure; CXFS: shared file system.
XFS: A World-Class File System. Scalable: full 64 bit support; dynamic allocation of metadata space; scalable structures and algorithms. Fast: fast metadata speeds; high bandwidths; high transaction rates. Reliable: field proven; log/journal.
Scalable — Full 64 bit support: large filesystems – 18,446,744,073,709,551,615 bytes = 2^64 – 1 = about 18 million TB (exabytes); large files – 9,223,372,036,854,775,807 bytes = 2^63 – 1 = about 9 million TB (exabytes). Dynamic allocation of metadata space: inode size configurable, inode space allocated dynamically; unlimited number of files (constrained by storage space). Scalable structures and algorithms (B-trees): performance is not an issue with large numbers of files and directories.
Fast — Fast metadata speeds: B-trees everywhere (nearly all lists of metadata information) – directory contents, metadata free lists, extent lists within a file. High bandwidths (storage: RM6700): 7.32 GB/s on one filesystem (32p Origin2000, 897 FC disks); >4 GB/s in one file (same Origin, 704 FC disks); large extents (4 KB to 4 GB); request parallelism (multiple AGs); delayed allocation, read ahead/write behind. High transaction rates: 92,423 IOPS (storage: TP9700).
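As a quick sanity check of the capacity figures on the "Scalable" slide (a back-of-the-envelope calculation, not part of the original presentation):

```python
# The slide's maximum filesystem and maximum file sizes are just 2^64 - 1 and
# 2^63 - 1 bytes expressed in decimal terabytes / exabytes.
max_fs_bytes   = 2**64 - 1      # 18,446,744,073,709,551,615
max_file_bytes = 2**63 - 1      #  9,223,372,036,854,775,807

TB = 10**12
EB = 10**18

print(f"filesystem: {max_fs_bytes / TB:,.0f} TB  (~{max_fs_bytes / EB:.1f} EB)")
print(f"file:       {max_file_bytes / TB:,.0f} TB  (~{max_file_bytes / EB:.1f} EB)")
# -> roughly 18.4 million TB (18.4 EB) and 9.2 million TB (9.2 EB),
#    matching the slide's "18 million TB" and "9 million TB" figures.
```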
  • Hardware-Driven Evolution in Storage Software by Zev Weiss A
Hardware-Driven Evolution in Storage Software, by Zev Weiss. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Sciences) at the UNIVERSITY OF WISCONSIN–MADISON, 2018. Date of final oral examination: June 8, 2018.
The dissertation is approved by the following members of the Final Oral Committee: Andrea C. Arpaci-Dusseau, Professor, Computer Sciences; Remzi H. Arpaci-Dusseau, Professor, Computer Sciences; Michael M. Swift, Professor, Computer Sciences; Karthikeyan Sankaralingam, Professor, Computer Sciences; Johannes Wallmann, Associate Professor, Mead Witter School of Music.
© Copyright by Zev Weiss 2018. All Rights Reserved.
To my parents, for their endless support, and my cousin Charlie, one of the kindest people I've ever known.
Acknowledgments: I have taken what might be politely called a "scenic route" of sorts through grad school. While Ph.D. students more focused on a rapid graduation turnaround time might find this regrettable, I am glad to have done so, in part because it has afforded me the opportunities to meet and work with so many excellent people along the way. I owe debts of gratitude to a large cast of characters: To my advisors, Andrea and Remzi Arpaci-Dusseau. It is one of the most common pieces of wisdom imparted on incoming grad students that one's relationship with one's advisor (or advisors) is perhaps the single most important factor in whether these years of your life will be pleasant or unpleasant, and I feel exceptionally fortunate to have ended up with the advisors that I've had.
  • CXFS™ Administration Guide for SGI® InfiniteStorage
CXFS™ Administration Guide for SGI® InfiniteStorage, 007–4016–025.
CONTRIBUTORS Written by Lori Johnson. Illustrated by Chrystie Danzer. Engineering contributions to the book by Vladmir Apostolov, Rich Altmaier, Neil Bannister, François Barbou des Places, Ken Beck, Felix Blyakher, Laurie Costello, Mark Cruciani, Rupak Das, Alex Elder, Dave Ellis, Brian Gaffey, Philippe Gregoire, Gary Hagensen, Ryan Hankins, George Hyman, Dean Jansa, Erik Jacobson, John Keller, Dennis Kender, Bob Kierski, Chris Kirby, Ted Kline, Dan Knappe, Kent Koeninger, Linda Lait, Bob LaPreze, Jinglei Li, Yingping Lu, Steve Lord, Aaron Mantel, Troy McCorkell, LaNet Merrill, Terry Merth, Jim Nead, Nate Pearlstein, Bryce Petty, Dave Pulido, Alain Renaud, John Relph, Elaine Robinson, Dean Roehrich, Eric Sandeen, Yui Sakazume, Wesley Smith, Kerm Steffenhagen, Paddy Sreenivasan, Roger Strassburg, Andy Tran, Rebecca Underwood, Connie Woodward, Michelle Webster, Geoffrey Wehrman, Sammy Wilborn.
COPYRIGHT © 1999–2007 SGI. All rights reserved; provided portions may be copyright in third parties, as indicated elsewhere herein. No permission is granted to copy, distribute, or create derivative works from the contents of this electronic documentation in any manner, in whole or in part, without the prior written permission of SGI.
LIMITED RIGHTS LEGEND The software described in this document is "commercial computer software" provided with restricted rights (except as to included open/free source) as specified in the FAR 52.227-19 and/or the DFAR 227.7202, or successive sections. Use beyond …
  • A Modern Primer on Processing in Memory
A Modern Primer on Processing in Memory. Onur Mutlu (ETH Zürich, Carnegie Mellon University), Saugata Ghose (Carnegie Mellon University, University of Illinois at Urbana-Champaign), Juan Gómez-Luna (ETH Zürich), Rachata Ausavarungnirun (King Mongkut's University of Technology North Bangkok). SAFARI Research Group.
Abstract: Modern computing systems are overwhelmingly designed to move data to computation. This design choice goes directly against at least three key trends in computing that cause performance, scalability and energy bottlenecks: (1) data access is a key bottleneck as many important applications are increasingly data-intensive, and memory bandwidth and energy do not scale well, (2) energy consumption is a key limiter in almost all computing platforms, especially server and mobile systems, (3) data movement, especially off-chip to on-chip, is very expensive in terms of bandwidth, energy and latency, much more so than computation. These trends are especially severely felt in the data-intensive server and energy-constrained mobile systems of today. At the same time, conventional memory technology is facing many technology scaling challenges in terms of reliability, energy, and performance. As a result, memory system architects are open to organizing memory in different ways and making it more intelligent, at the expense of higher cost. The emergence of 3D-stacked memory plus logic, the adoption of error correcting codes inside the latest DRAM chips, the proliferation of different main memory standards and chips specialized for different purposes (e.g., graphics, low-power, high bandwidth, low latency), and the necessity of designing new solutions to serious reliability and security issues, such as the RowHammer phenomenon, are evidence of this trend.