Interconnection Architectures for Petabyte-Scale High-Performance Storage Systems

Andy D. Hospodor, Senior Member, IEEE, and Ethan L. Miller
Storage Systems Research Center, University of California, Santa Cruz
[email protected], [email protected]

This paper appeared in the 21st IEEE / 12th NASA Goddard Conference on Mass Storage Systems and Technologies (MSST 2004), College Park, MD, April 2004.

Abstract

As demand for storage bandwidth and capacity grows, designers have proposed the construction of petabyte-scale storage systems. Rather than relying upon a few very large storage arrays, these petabyte-scale systems have thousands of individual disks working together to provide aggregate storage system bandwidth exceeding 100 GB/s. However, providing this bandwidth to storage system clients becomes difficult due to limits in network technology. This paper discusses different interconnection topologies for large disk-based systems, drawing on previous experience from the parallel computing community. By choosing the right network, storage system designers can eliminate the need for expensive high-bandwidth communication links and provide a highly redundant network resilient against single node failures. We analyze several different topology choices and explore the tradeoffs between cost and performance. Using simulations, we uncover potential pitfalls, such as the placement of connections between the storage system network and its clients, that may arise when designing such a large system.

1. Introduction

Modern high-performance computing systems require storage systems capable of storing petabytes of data (a petabyte, PB, is 2^50 bytes) and delivering that data to thousands of computing elements at aggregate speeds exceeding 100 GB/s. Just as high-performance computers have shifted from a few very powerful computation elements to networks of thousands of commodity-type computing elements, storage systems must make the transition from relatively few high-performance storage engines to thousands of networked commodity-type storage devices.

The first part of this shift is already occurring at the storage device level. Today, many storage subsystems utilize low-cost commodity storage in the form of 3.5" hard disk drives. The heads, media, and electronics of these devices are often identical to storage used in desktop computers. The only remaining differentiator between desktop and server storage is the interface. At present, Fibre Channel and SCSI remain the choice of large, high-end storage systems while the AT attachment (ATA) interface remains the choice of desktop storage. However, the introduction of the Serial ATA interface provides nearly equivalent performance at a greatly reduced cost to attach storage.

Most current designs for such petabyte-scale systems rely upon relatively large individual storage systems that must be connected by very high-speed networks in order to provide the required transfer bandwidths to each storage element. We have developed alternatives to this design technique using 1 Gb/s network speeds and small (4–12 port) switching elements to connect individual object-based storage devices, usually single disks. By including a small-scale switch on each drive, we develop a design that is more scalable and less expensive than using larger storage elements because we can use cheaper networks and switches. Moreover, our design is more resistant to failures: if a single switch or node fails, data can simply flow around it. Since failure of a single switch typically makes data from at least one storage element unavailable, maintaining less data per storage element makes the overall storage system more resilient.

In this paper, we present alternative interconnection network designs for a petabyte-scale storage system built from individual nodes, each consisting of a disk drive and a 4–12 port gigabit network switch. We explore alternative network topologies, focusing on overall performance and resistance to individual switch failures. We are less concerned with disk failures; disks will fail regardless of interconnection network topology, and there is other research on redundancy schemes for massive-scale storage systems [16].
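As a rough sanity check on this approach (a back-of-envelope sketch, not an analysis from the paper; the 4096-drive count is borrowed from Section 2.3 below), spreading the 100 GB/s aggregate target over thousands of drives requires only a few tens of MB/s from each drive, comfortably within a single 1 Gb/s link:

```python
# Back-of-envelope check (illustrative, not from the paper): how much bandwidth
# must each drive deliver if ~4096 drives together supply 100 GB/s?

AGGREGATE_BW = 100 * 10**9          # target aggregate bandwidth, bytes/s
NUM_DRIVES = 4096                   # drive count used in Section 2.3
LINK_BW = 10**9 / 8                 # one 1 Gb/s link in bytes/s (~125 MB/s)

per_drive = AGGREGATE_BW / NUM_DRIVES
print(f"per-drive bandwidth needed: {per_drive / 1e6:.1f} MB/s")  # ~24.4 MB/s
print(f"fraction of a 1 Gb/s link:  {per_drive / LINK_BW:.0%}")   # ~20%
```

This is the intuition behind building the interconnect from commodity gigabit links and small per-drive switches rather than a few very fast links.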
2. Background

There are many existing techniques that provide high-bandwidth file service, including RAID, storage area networks, and network-attached storage. However, these techniques cannot provide 100 GB/s on their own, and each has limitations that manifest in a petabyte-scale storage system. Rather, network topologies originally developed for massively parallel computers are better suited to constructing massive storage systems.

2.1. Existing Storage Architectures

RAID (Redundant Array of Independent Disks) [2] protects a disk array against failure of an individual drive. However, a RAID system is limited to the aggregate performance of the underlying array. RAID arrays are typically limited to about 16 disks; larger arrays begin to suffer from reliability problems and issues of internal bandwidth. Systems such as Swift [9] have proposed the use of RAID on very large disk arrays by computing parity across subsections of disks, thus allowing the construction of larger arrays. However, such systems still suffer from a basic problem: connections between the disks in the array and to the outside world are limited by the speed of the interconnection network.
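For readers unfamiliar with the parity mechanism these systems rely on, the following minimal sketch (illustrative Python, not code from any RAID controller or from Swift) shows how XOR parity computed across a group of blocks allows any single lost block to be rebuilt from the survivors:

```python
# Minimal illustration of XOR parity as used conceptually by RAID and
# Swift-style striping: parity over a group of equal-sized data blocks
# allows any one missing block to be reconstructed from the rest.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]      # three data blocks in one parity group
parity = xor_blocks(data)               # parity block, stored on a separate drive

# Simulate losing block 1: XOR of the surviving data blocks and the parity
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]             # the missing block is reconstructed
```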
Storage area networks (SANs) aggregate many devices together at the block level. Storage systems are connected together via a network, typically Fibre Channel, through high-performance switches. This arrangement allows servers to share a pool of storage, and can enable the use of hundreds of disks in a single system. However, the primary use of SANs is to decouple servers from storage devices, not to provide high bandwidth. While SANs are appropriate for high I/O rate systems, they cannot provide high bandwidth without an appropriate interconnection network topology.

Network-attached storage (NAS) [6] is similar to SAN-based storage in that both designs have pools of storage connected to servers via networks. In NAS, however, individual devices present storage at the file level rather than the block level. This means that individual devices are responsible for managing their own data layout; in SANs, data layout is managed by the servers. While most existing network-attached storage is implemented in the form of CIFS- or NFS-style file systems, object-based storage [15] is fast becoming a good choice for large-scale storage systems [16]. Current object-based file systems such as Lustre [13] use relatively large storage nodes, each implemented as a standard network file server with dozens of disks. As a result, they must use relatively high-speed interconnections to provide the necessary aggregate bandwidth. In contrast, tightly coupling switches and individual disks can provide the same high bandwidth with much less expensive, lower-speed networks and switches.

2.2. Parallel Processing

Interconnection networks for computing elements have long been the subject of parallel computing research. No clear winner has emerged; rather, there are many different interconnection topologies, each with its own advantages and disadvantages, as discussed in Section 3. Traditional multiprocessors such as the Intel Touchstone Delta [14] and the CM-5 [10] segregated I/O nodes from computing nodes, typically placing I/O nodes at the edge of the parallel computer. File systems developed for these configurations were capable of high performance [3, 12] using a relatively small number of RAID-based arrays, eliminating the need for more complex interconnection networks in the storage system.

There have been a few systems that suggested embedding storage in a multiprocessor network. In RAMA [11], every multiprocessor compute node had its own disk and switch. RAMA, however, did not consider storage systems on the scale necessary for today's systems, and did not consider the wide range of topologies discussed in this paper.

Fortunately, storage systems place different, and somewhat less stringent, demands on the interconnection network than parallel processors. Computing nodes typically communicate using small, low-latency messages, but storage access involves large transfers and relatively high latency. As a result, parallel computers require custom network hardware, while storage interconnection networks, because of their tolerance for higher latency, can exploit commodity technologies such as gigabit Ethernet.

2.3. Issues with Large Storage Scaling

A petabyte-scale storage system must meet many demands: it must provide high bandwidth at reasonable latency, it must be both continuously available and reliable, it must not lose data, and its performance must scale as its capacity increases. Existing large-scale storage systems have some of these characteristics, but not all of them. For example, most existing storage systems are scaled by replacing the entire storage system in a "forklift upgrade." This approach is unacceptable in a system containing petabytes of data because the system is simply too large. While there are techniques for dynamically adding storage capacity to an existing system [7], the inability of such systems to scale in performance remains an issue.

One petabyte of storage capacity requires about 4096 (2^12) storage devices of 250 GB each; disks with this capacity are appearing in the consumer desktop market in early 2004.
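The device count follows from the paper's binary definitions: 1 PB is 2^50 bytes and a "250 GB" drive holds roughly 2^38 bytes, so about 2^12 = 4096 drives are needed. A quick check of the arithmetic (illustrative only):

```python
# Rough check of the device count: 1 PB = 2**50 bytes (the paper's definition),
# and a "250 GB" drive is close to 2**38 bytes (256 GiB) under binary prefixes.

PETABYTE = 2**50
DRIVE_CAPACITY = 2**38                 # about 274 GB in decimal units

print(PETABYTE / DRIVE_CAPACITY)       # 4096.0, i.e. 2**12 devices

# With strictly decimal 250 GB drives the count comes out a little higher:
print(2**50 / (250 * 10**9))           # ~4503.6 drives
```

Either way, the system requires on the order of four thousand drives, which frames the interconnection problem addressed in the rest of the paper.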
[Figure 1. Disk throughput as a function of transfer size, comparing a 10000 RPM FibreChannel disk with a 7200 RPM ATA disk; throughput (MB/s) is plotted against request size (KB).]

A typical high-performance disk, the Seagate Cheetah 10K, has a 200 MB/s Fibre Channel interface and spins at 10000 RPM, with a 5.3 ms average seek time and a sustained transfer rate of 44 MB/s. In a typical transaction-processing environment, the Cheetah would service a 4 KB request in about 8.3 ms, for a maximum of 120 I/Os per second, or about 0.5 MB/s from each drive.

[Figure: (a) Disks connected to a single server. This configuration is susceptible to loss of availability if a single switch or server fails.]
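The 8.3 ms service time above is, presumably, the average seek time plus the average rotational latency (half a revolution at 10000 RPM is 3.0 ms), with the 4 KB transfer time at 44 MB/s being negligible. A short sketch of that arithmetic:

```python
# Where the ~8.3 ms, ~120 IOPS, and ~0.5 MB/s figures come from (illustrative):
SEEK_MS = 5.3                            # average seek time of the Cheetah 10K
ROTATIONAL_MS = 0.5 * 60_000 / 10_000    # half a revolution at 10000 RPM = 3.0 ms
# Transfer time for 4 KB at 44 MB/s is ~0.09 ms and is ignored here.

service_ms = SEEK_MS + ROTATIONAL_MS     # ~8.3 ms per random 4 KB request
iops = 1000 / service_ms                 # ~120 requests per second
mb_per_s = iops * 4 / 1000               # 4 KB per request -> ~0.48 MB/s per drive
print(f"{service_ms:.1f} ms, {iops:.0f} IOPS, {mb_per_s:.2f} MB/s")
```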