IT Brand Pulse Brand Leader Report
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Sponsorship EXHIBITOR OPTIONS
SPRING 2012 Sponsor/Exhibitor Prospectus: Information Infrastructure for a Customer-Focused, Always-On World

APRIL 2-5, 2012 | OMNI DALLAS HOTEL | DALLAS, TX
OCTOBER 16-19, 2012 | SANTA CLARA CONVENTION CENTER | SANTA CLARA, CA

Participate in the world's leading conference on storage, data center infrastructure and business continuity for:
• IT Management
• Storage Architects
• IT Infrastructure Professionals
• Business Continuity Planning Experts
• Data Management Specialists
• Network Professionals

For sponsorship opportunities, call Ann Harris at 508-820-8667 or [email protected] | www.snwusa.com

PLAN NOW TO PARTICIPATE
• Spring 2012 U.S. Conference: SNW, April 2-5, 2012, Omni Dallas Hotel, Dallas, TX
• Fall 2012 U.S. Conference: SNW, October 16-19, 2012, Hyatt Regency/Santa Clara Convention Center, Santa Clara, CA

Today's IT managers and practitioners need ever-more cost-efficient, reliable and robust solutions to manage increasing stores of data residing in numerous locations, including the cloud. That's where SNW comes in. With more than 150 speakers in 130 sessions, our conference brings together end user case studies, industry experts, SNIA technical tutorials and more for four days of intensive education across the most important IT infrastructure disciplines. A strategic partnership between Computerworld and the Storage Networking Industry Association (SNIA), SNW is known across the industry for its compelling agenda filled with practical advice, hands-on learning and insights into emerging trends, technologies, products and solutions.

SNW Technology Tracks:
• Backup & Archiving
• Cloud Computing
• Data Center
• Data Management
• Data Security
• Professional Development
• Solid State Storage
• Storage Management

A key component of the SNW experience is the Expo, where attendees meet vendor companies showcasing cutting-edge …
PC Gamers Win Big
CASE STUDY: Intel® Solid-State Drives | Performance, Storage and the Customer Experience

PC Gamers Win Big: Intel® Solid-State Drives (SSDs) deliver the ultimate gaming experience, providing dramatic visual and runtime improvements.

Intel® Solid-State Drives (SSDs) represent a revolutionary breakthrough, delivering a giant leap in storage performance. Designed to satisfy the most demanding gamers, media creators, and technology enthusiasts, Intel SSDs bring a high level of performance and reliability to notebook and desktop PC storage. Faster load times and improved graphics performance, such as increased detail in textures, higher resolution geometry, smoother animation, and more characters on the screen, make for a better gaming experience. Developers are now taking advantage of these features in their new game designs.

SCREAMING LOAD TIMES AND SMOOTH GRAPHICS
With no moving parts, high reliability, and a longer life span than traditional hard drives, Intel SSDs dramatically improve the computer gaming experience. Load times are substantially faster: when compared with Western Digital VelociRaptor* 10K hard disk drives (HDDs), gamers experienced up to 78 percent load time improvements using Intel SSDs. Graphics are smooth and uninterrupted, even at the highest graphics settings. To see the performance difference in a head-to-head video comparing the Intel® X25-M SATA SSD with a 10,000 RPM HDD, go to www.intelssdgaming.com. When comparing frame-to-frame coherency with the Western Digital VelociRaptor 10K HDD, the Intel X25-M responds with zero hitching, while the WD VelociRaptor shows hitching seven percent of the time. This means gamers experience smoother visual transitions with Intel SSDs.
Nimbus Data E-Class Flash Memory Platform Data Sheet
Datasheet: E-Class Flash Memory Platform. Unmatched solid state storage scalability, fault-tolerance, and efficiency for next-gen datacenters.

Next-generation Intelligent Solid State Storage Platform
The E-Class Flash Memory System is the industry's first fully-redundant multiprotocol solid state storage system. Scalable to 500 TB, with power consumption as low as 5 watts per TB and rackspace density as high as 20 TB per U, the E-Class outperforms and costs less to operate than conventional 15K rpm disk arrays. With no single point of failure, the Nimbus E-Class is ideal for applications such as enterprise-wide server virtualization, database clusters, Web infrastructure, VDI, and high-performance computing.

The E-Class consists of a pair of redundant controllers and up to 24 solid state storage enclosures. Each controller supports four active-active IO modules, including 10 Gigabit Ethernet, Fibre Channel, and Infiniband. Nimbus software automatically detects controller and path failures, providing non-disruptive failover as well as online software updates and online capacity expansion. With RAID protection and hot-swappable flash, power, and cooling modules, components can be easily replaced without downtime.

Highlights
• 100% flash memory (solid state storage)
• Up to 50x faster than conventional disk arrays
• Consumes up to 80% less power
• Unified multi-protocol SAN and NAS platform
• No single point of failure for high availability
• Ethernet, Fibre Channel, and/or Infiniband ports
• Comprehensive data management software

Advantages
• Performance: up to 800,000 4K IOps
• Throughput: up to 8,000 MBps
• Latency: as low as 0.2 ms
• Efficiency: as low as 5 W per TB
• Scalability: from 10 TB to 500 TB
• Density: up to 10 TB per rack U
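The IOPS and latency figures in a datasheet like this are linked by Little's Law: sustaining a given IOPS rate at a given average latency requires a proportional number of IOs in flight. A minimal sketch of that arithmetic, using the excerpt's "up to"/"as low as" peak numbers as inputs (treating both peaks as simultaneously achievable is an assumption, not a vendor claim):

```c
#include <stdio.h>

/* Little's Law for storage queues: outstanding IOs = IOPS x latency.
 * Inputs are the datasheet's peak figures; assuming they hold at the
 * same time is our simplification, not something the vendor states. */
int main(void)
{
    double iops = 800000.0;    /* up to 800,000 4K IOps   */
    double latency_s = 0.0002; /* as low as 0.2 ms per IO */
    double outstanding = iops * latency_s;
    printf("queue depth needed: %.0f outstanding IOs\n", outstanding);
    return 0;
}
```

The result (about 160 outstanding IOs) illustrates why peak IOPS numbers are only reachable with deep queues spread across many hosts or threads.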
SPDK FTL & Marvell OCSSD for Noisy Neighbor Problem
SPDK FTL & Marvell OCSSD for Noisy Neighbor Problem
David Recker, Marketing VP, Circuit Blvd., Inc., April 2019

Who We Are
• Industry: enterprise/cloud database and storage
• Founded: 2017, Sunnyvale, CA, U.S.A.
• Mission: we develop next-gen database/storage systems, leveraging expertise in memory semiconductors, solid-state storage systems, and operating systems
• Open source contributions: Linux LightNVM, OCSSD 2.0 specification, OpenSSD FPGA platform, SPDK (since SPDK v17.10), RocksDB

OCSSD with SPDK FTL
• We have been evaluating SPDK FTL on Marvell's SSD SoC platform since January 2019.
• SPDK FTL (Flash Translation Layer): "The Flash Translation Layer library provides block device access on top of non-block SSDs implementing Open Channel interface. It handles the logical to physical address mapping, responds to the asynchronous media management events, and manages the defragmentation process."*
• We measured various performance metrics of the initial prototype and demonstrate how SPDK OCSSDs can solve the noisy neighbor problem in multi-tenant environments.
• We share experimental data based on our current implementation (both SPDK FTL and Marvell's controller are being continuously improved).
• Demo: SPDK-driven OCSSD comparison, isolation vs. non-isolation (demo table outside; please feel free to drop by for further questions).

* SPDK FTL definition: https://spdk.io/doc/ftl.html

Hardware Setup
• SuperMicro X11DPG
• 2 x Xeon Scalable Gold 6126, 2.6 GHz (12 cores), hyperthreading disabled
• 8 x 32 GB DIMM, 2666 MT/s
• 2 x OCSSD 2.0 (Marvell 88SS1098 controller), one PCIe Gen3 x4 slot to each CPU package
• nvme id-ns: LBADS=12 (4 KiB), MS=0
• ocssd geometry: 8 grp (3), 8 pu (3), 1478 chk (11), 6144 lbk (13), where () is the field's bit length in the LBA format (LBAF); ws_opt=24 (96 KiB)
• 3D TLC NAND: write unit 96 KiB (one-shot program), read unit 32 KiB

© 2019 Circuit Blvd., Inc.
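The geometry bullet above describes how an Open-Channel LBA is split into group, parallel-unit, chunk, and logical-block fields (3 + 3 + 11 + 13 bits on this device). A minimal C sketch of that packing, assuming the conventional OCSSD 2.0 layout with the logical block in the low bits; real drives report the exact field offsets in their geometry log page, so the shift positions here are assumptions:

```c
#include <stdio.h>
#include <stdint.h>

/* Field widths taken from the geometry above: 8 groups -> 3 bits,
 * 8 parallel units -> 3 bits, 1478 chunks -> 11 bits, 6144 logical
 * blocks -> 13 bits. Offsets assume lbk occupies the low bits. */
#define LBK_BITS 13
#define CHK_BITS 11
#define PU_BITS   3
#define GRP_BITS  3

static uint64_t ocssd_pack_lba(uint64_t grp, uint64_t pu,
                               uint64_t chk, uint64_t lbk)
{
    return (grp << (PU_BITS + CHK_BITS + LBK_BITS)) |
           (pu  << (CHK_BITS + LBK_BITS)) |
           (chk << LBK_BITS) |
           lbk;
}

int main(void)
{
    /* Address logical block 100 of chunk 42 on parallel unit 5, group 2. */
    uint64_t lba = ocssd_pack_lba(2, 5, 42, 100);
    printf("packed LBA = 0x%llx\n", (unsigned long long)lba);
    return 0;
}
```

Exposing these fields to the host is what lets an FTL place tenants on disjoint parallel units, which is the mechanism behind the isolation demo described above.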
Information Hiding at SSD NAND Flash Memory Physical Layer
SECURWARE 2014: The Eighth International Conference on Emerging Security Information, Systems and Technologies

DeadDrop-in-a-Flash: Information Hiding at SSD NAND Flash Memory Physical Layer

Avinash Srinivasan and Jie Wu (Temple University, Computer and Information Sciences, Philadelphia, USA; email: [avinash, jiewu]@temple.edu) and Panneer Santhalingam and Jeffrey Zamanski (George Mason University, Volgenau School of Engineering, Fairfax, USA; email: [psanthal, jzamansk]@gmu.edu)

Abstract: The research presented in this paper, to the best of our knowledge, is the first attempt at information hiding (IH) at the physical layer of a Solid State Drive (SSD) NAND flash memory. SSDs, like HDDs, require a mapping between the Logical Block Addressing (LBA) and physical media. However, the mapping on SSDs is significantly more complex and is handled by the Flash Translation Layer (FTL). The FTL is implemented via proprietary firmware and serves both to protect the NAND chips from physical access and to mediate the data exchange between the logical and the physical disk. On the other hand, the […] of logical blocks to physical flash memory is controlled by the FTL on SSDs, this cannot be used for IH on SSDs. Our proposed solution is 100% filesystem- and OS-independent, providing a lot of flexibility in implementation. Note that throughout this paper, the term "physical layer" refers to the physical NAND flash memory of the SSD, and readers should not confuse it with the Open Systems Interconnection (OSI) model physical layer.

Traditional HDDs, since their advent more than 50 years ago, have had the least advancement among storage hardware, excluding their storage densities.
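The abstract's core premise, that the FTL silently remaps logical blocks on every write, is easy to see in miniature. Below is a hypothetical sketch (not the paper's technique, which works below the FTL) of why an LBA gives no stable handle on a physical flash location, assuming a toy single-level mapping table and a naive bump allocator:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy FTL: every write to an LBA is redirected to a fresh physical
 * page (out-of-place update), leaving the old page stale until
 * garbage collection. Sizes and the allocator are illustrative. */
#define NUM_LBAS 1024u

static uint32_t l2p[NUM_LBAS]; /* logical -> physical page map */
static uint32_t next_free = 0; /* naive bump allocator */

static void ftl_write(uint32_t lba)
{
    l2p[lba] = next_free++;    /* previous physical page becomes stale */
}

int main(void)
{
    ftl_write(7);
    printf("LBA 7 -> physical page %u\n", l2p[7]);
    ftl_write(7);              /* rewrite: same LBA, new physical page */
    printf("LBA 7 -> physical page %u\n", l2p[7]);
    return 0;
}
```

Because the logical address keeps moving across physical pages, hiding data by logical address does not pin it to any NAND location, which is why the paper targets the physical layer directly.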
Flash Memory Summit Pocket Guide 2017
2017 FLASH MEMORY SUMMIT POCKET GUIDE
AUGUST 8-10, SANTA CLARA CONVENTION CENTER
AUGUST 7, PRE-CONFERENCE TUTORIALS

Contents: Highlights (p. 4) • Exhibitors (p. 6) • Exhibit Hall Floor Plan (p. 8) • Keynote Presentations (p. 11)

2017 Sponsors

Executive Premier Sponsors: Samsung, SK Hynix, Toshiba America Electronic Components

Premier Sponsors: Hewlett Packard Enterprise, Intel, Marvell Semiconductor, Micron Technology, Microsemi, NetApp, Seagate Technology, Silicon Motion Technology, Western Digital

Platinum Sponsors: Crossbar, E8 Storage, Everspin Technologies, Innodisk, Lite-On Storage, NGD Systems, Nimbus Data, Silicon Storage Technology, SiliconGo Microelectronics, Smart IOPS, SMART Modular Technologies, Symbolic IO, Viking Technology

Emerald Sponsors: Advantest, Amphenol, Dera Storage, Diablo Technologies

Gold Sponsors: Mobiveil, SANBlaze Technology, SD Association

Bronze Sponsors: AccelStor, ADATA Technology, Apeiron Data Systems, ATP Electronics, Broadcom, Brocade Communications Systems, Cadence Design Systems, Calypso Systems, CEA LETI, Celestica, CNEX Labs, Epostar Electronics, Excelero, FADU, Fibre Channel Industry Assoc., Foremay, Hagiwara Solutions, IBM, JEDEC, Kroll Ontrack, Lam Research, Maxio, Mentor Graphics, Newisys, NVMdurance, NVXL Technology, One Stop Systems, Radian Memory Systems, Sage Microelectronic, SATA-IO, SCSI Trade Association, SNIA-SSSI, Swissbit, Synopsys, Tegile, Teledyne LeCroy, Teradyne, Transcend Information, UFSA, ULINK Technology, UNH-IOL, UniTest, VARTA Microbattery, VIA Technologies, Virtium, Xilinx

Participating Organizations: Chosen Voice, Gen-Z Consortium, Circuit Cellar, Connetics USA, Hyperstone …
SPC Benchmark 1 (SPC-1™) and SPC Benchmark 1/Energy™ Extension (SPC-1/E™) Official Specification
SPC BENCHMARK 1 (SPC-1™) / SPC Benchmark 1/Energy™ Extension (SPC-1/E™) Official Specification
Revision 3.8, effective 28 October 2018 (superseding Revision 3.7, effective 22 July 2018)

"The First Industry Standard Storage Benchmark"

Storage Performance Council (SPC), PO Box 3504, Redwood City, CA 94064-3504; phone (650) 556-9384; www.storageperformance.org. Copyright © 2001-2018 Storage Performance Council.

SPC membership as of 24 August 2018: Accelstor Ltd.; Amazon, Inc.; Austin Automation Center, Department of Veteran Affairs; Cybernetics; Datacore Software, Inc.; DataDirect Networks; Datera, Inc.; Dell, Inc.; Dot Hill Systems Corp.; Elastifile, Inc.; EMC Corporation; ETRI; Foundation for Research and Technology, Institute of Computer Science; Fujitsu America, Inc.; FusionStack; The George Washington University; Gradient Systems; Hewlett Packard Enterprise; Hitachi Data Systems; Hongik University; Huawei Technologies Co., Ltd.; IBM Corporation; Imation Corp.; Infortrend Technology, Inc.; Inspur Corporation; LSI Corporation; MacroSAN Technologies Co. Ltd.; NEC Corporation; NetApp, Inc.; Nimbus Data Systems, Inc.; Oracle Corporation; Pennsylvania State University; Pure Storage, Inc.; QLogic Corporation; Ruijie Networks Co. Ltd.; Samsung Information Systems, America; SanDisk Corporation; Sanmina; Seagate Technology LLC; Silicon Graphics International; Skyera, Inc.; SolidFire, Inc.; SuperMicro Computer, Inc.; Symantec Corporation; Telecommunication Technology Association (TTA); Toshiba America Information Systems, Inc.; University of California, Santa Cruz; University of Patras; Violin Memory, Inc.; Western Digital …
Micron: NAND Flash Architecture and Specification Trends
NAND Flash Architecture and Specification Trends
Michael Abraham ([email protected]), Applications Engineering Manager, Micron Technology, Inc.
Santa Clara, CA, USA, August 2009

Abstract
As NAND Flash continues to shrink, page sizes, block sizes, and ECC requirements are increasing while data retention, endurance, and performance are decreasing. These changes impact systems, including random write performance and more. Learn how to prepare for these changes and counteract some of them through improved block management techniques and system design. This presentation also discusses some of the tradeoff myths, for example, the myth that you can directly trade ECC for endurance.

NAND Flash: Shrinking Faster Than Moore's Law
[Chart: process resolution (nm) by year, 2000 through 2012, for Logic, DRAM, and NAND, with NAND shrinking fastest; Micron 32Gb NAND shown at 34 nm. Source: Semiconductor International, 1/1/2007]

Memory Organization Trends
Over time, NAND block size is increasing:
• Larger page sizes increase sequential throughput.
• More pages per block reduce die size.
[Chart: data bytes per page and pages per block versus block size (B), log scale from 16 to 4,194,304]

Consumer-grade NAND Flash: Endurance and ECC Trends
Process shrinks leave fewer electrons per floating gate. ECC is used to improve data retention and endurance. To adjust for increasing raw bit error rates (RBERs), ECC strength is increasing exponentially to achieve equivalent uncorrectable bit error rates (UBERs). For consumer applications, endurance becomes less important as density increases.
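The RBER-to-UBER relationship behind that last point is binomial: a code that corrects up to t bit errors per n-bit codeword fails only when more than t bits flip, so the uncorrectable rate is the binomial tail P(X > t). A hedged sketch of that calculation follows; the codeword size, correction strengths, and raw BER are illustrative assumptions, not Micron figures:

```c
#include <stdio.h>
#include <math.h>

/* Tail probability P(X > t) for X ~ Binomial(n, p), computed in log
 * space with the recurrence P(k+1) = P(k) * (n-k)/(k+1) * p/(1-p). */
static double codeword_failure_rate(int n, int t, double p)
{
    double log_pk = n * log1p(-p); /* log P(X = 0) */
    double tail = 0.0;
    for (int k = 0; k <= n; k++) {
        if (k > t)
            tail += exp(log_pk);
        if (k < n) /* advance to log P(X = k+1) */
            log_pk += log((double)(n - k) / (double)(k + 1))
                    + log(p) - log1p(-p);
    }
    return tail;
}

int main(void)
{
    const int n = 4096 * 8;  /* assumed 4 KiB codeword */
    const double p = 1e-7;   /* assumed raw bit error rate (RBER) */
    for (int t = 1; t <= 16; t *= 2)
        printf("t = %2d -> P(uncorrectable codeword) = %.3e\n",
               t, codeword_failure_rate(n, t, p));
    return 0;
}
```

Running this shows the failure rate dropping by orders of magnitude with each doubling of t, which is why modest ECC increases can absorb large RBER increases, and also why ECC strength and endurance are not directly interchangeable.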
Fast Network Connectivity Key to Unlocking All-Flash Array Performance
Fast Network Connectivity: Key to Unlocking All-flash Array Performance

The current generation of all-flash arrays offers enough performance to saturate the network connections between the arrays and application servers in the data center. In many scenarios, the key limiter to all-flash array performance is storage network bandwidth. Therefore, all-flash array vendors have been quick to adopt the latest advances in storage network connectivity.

Fast Networks are Here, and Faster Networks are Coming
Ethernet is now available with connection speeds up to 400 Gb per second. Fibre Channel now reaches speeds up to 128 Gb per second. As discussed during a recent SNIA presentation, the roadmaps for both technologies forecast another 2x to 4x increase in performance. While the fastest connections are generally used to create a storage network fabric among data center switches, many all-flash arrays support fast storage network connectivity.

All-flash Arrays Embrace Fast Network Connectivity
DCIG's research into all-flash arrays identified thirty-seven (37) models that support 32 Gb FC, seventeen (17) that support 100 Gb Ethernet, and ten (10) that support 100 Gb InfiniBand connectivity. These include products from Dell EMC, FUJITSU Storage, Hitachi Vantara, Huawei, Kaminario, NEC Storage, NetApp, Nimbus Data, Pure Storage and Storbyte. (Source: DCIG)

Other Drivers of Fast Network Connectivity
Although all-flash storage is a key driver behind fast network connectivity, there are also several other significant drivers. Each of these has implications …
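A back-of-envelope calculation shows why the network is the limiter: a nominal 32 Gb/s Fibre Channel port moves roughly 4 GB/s, so an array sustaining several GB/s needs multiple ports or faster links. The sketch below uses nominal line rates and ignores encoding and protocol overhead; the 8,000 MB/s array figure is borrowed from the Nimbus datasheet excerpted earlier, purely as an assumed workload:

```c
#include <stdio.h>
#include <math.h>

/* How many host ports does it take to carry an array's full
 * throughput? Nominal link rates only; encoding and protocol
 * overhead are ignored, so these are optimistic estimates. */
int main(void)
{
    double array_gbits = 8000.0 * 8.0 / 1000.0; /* 8,000 MB/s ~ 64 Gb/s */
    double link_gbits[] = { 16.0, 32.0, 100.0, 128.0, 400.0 };
    for (size_t i = 0; i < sizeof link_gbits / sizeof link_gbits[0]; i++)
        printf("%5.0f Gb/s link: %d port(s) needed\n",
               link_gbits[i], (int)ceil(array_gbits / link_gbits[i]));
    return 0;
}
```

At 16 or 32 Gb/s the array needs several ports to avoid being network-bound, while a single 100 Gb/s or faster link can carry the whole load, which matches the vendor adoption pattern DCIG describes.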
SSD ABC Guide – OCZ Forum
© 2014 OCZ Storage Solutions

Contents (pages 1-8)
• Identifying your SSD (p. 1)
• Products (p. 2)
• Unpacking and Handling your SSD (p. 3)
• Fitting your SSD (p. 4): Desktop (p. 4); Laptop/Notebook (p. 4)
• Powering On and Self Testing (POST) your SSD (p. 4)
• Detecting your SSD …
Flash-Aware Database Management Systems Hardock, Sergej (2020)
Flash-aware Database Management Systems Hardock, Sergej (2020) DOI (TUprints): https://doi.org/10.25534/tuprints-00014476 License: CC-BY-SA 4.0 International - Creative Commons, Attribution Share-alike Publication type: Ph.D. Thesis Division: 20 Department of Computer Science Original source: https://tuprints.ulb.tu-darmstadt.de/14476 Flash-aware Database Management Systems Zur Erlangung des akademischen Grades Doktor-Ingenieur (Dr.-Ing.) Genehmigte Dissertation von Sergej Hardock aus Kiew, Ukraine Tag der Einreichung: 30.7.2020, Tag der Prüfung: 4.11.2020 1. Gutachten: Prof. Dr. Carsten Binnig 2. Gutachten: Prof. Dr.-Ing. Ilia Petrov 3. Gutachten: Alejandro Buchmann, Ph.D. Darmstadt Computer Science Department Technical University of Darmstadt Data Management Lab Flash-aware Database Management Systems Accepted doctoral thesis by Sergej Hardock 1. Review: Prof. Dr. Carsten Binnig 2. Review: Prof. Dr.-Ing. Ilia Petrov 3. Review: Alejandro Buchmann, Ph.D. Date of submission: 30.7.2020 Date of thesis defense: 4.11.2020 Darmstadt Bitte zitieren Sie dieses Dokument als: URN: urn:nbn:de:tuda-tuprints-144769 URL: http://tuprints.ulb.tu-darmstadt.de/id/eprint/14476 Dieses Dokument wird bereitgestellt von tuprints, E-Publishing-Service der TU Darmstadt http://tuprints.ulb.tu-darmstadt.de [email protected] Die Veröffentlichung steht unter folgender Creative Commons Lizenz: Namensnennung - Weitergabe unter gleichen Bedingungen 4.0 International https://creativecommons.org/licenses/by-sa/4.0 This dissertation is dedicated to my parents who encouraged me to follow my dreams. Erklärungen laut Promotionsordnung §8 Abs. 1 lit. c PromO Ich versichere hiermit, dass die elektronische Version meiner Dissertation mit der schriftli- chen Version übereinstimmt. -
OCP19 DRAFT Prospectus 081418
OCP GLOBAL SUMMIT | March 14-15, 2019
San Jose McEnery Convention Center | San Jose, CA
SPONSOR PROSPECTUS

2018 OCP Global Summit Stats

Attendees by organizational role:
• Technology Architect: 30%
• Senior Executive: 19%
• Business Development/Account Manager: 17%
• Product Manager: 12%
• Engineer: 11%
• Marketer: 7%
• Analyst: 4%

By the numbers: 3,441 Summit attendees; 813 media/analysts; 100+ companies represented; 81 sponsors/exhibitors; 34 countries represented.

Previous Summit Exhibitors and Sponsors (OCP Global Summit 2018, San Jose, CA):
10Gtek, 3M, ADC Technologies Inc., ADLINK Technology, Advanced Micro Devices, Inc. (AMD), Ampere Computing, Amphenol, Apstra, Artesyn, ASRock Rack, Barefoot Networks, Bel Power Solutions, Big Switch Networks, Cambridge Industries America, Inc., Cavium, ColorChip, Cumulus, DCD, Dell EMC, Delta, Distributed Management Task Force (DMTF), Edgecore Networks, Eoptolink, Facebook, Fadu, Fidelity, Finisar, Flex, Geist, Geist Global, GIGABYTE, Gigalight, HPE, Huawei, Hyve Solutions, Innovium, Inspur, Intel, IP Infusion, Inc., Ixia, Jess-link Products Co. Ltd., Lenovo, Liqid Inc., Liquid Telecom, LITE-On Storage, LTO Ultrium Program, Marvell, Mellanox, Micron, Microsoft, Mojo Networks, Murata, Nephos Inc., NGD Systems, Nimbus Data, Nokia, OpenCAPI Consortium, OpenPower, OpenStack, OpenSwitch, Penguin, QCT/Quanta, Qualcomm, Radisys Corporation, Rittal, Samsung, Samtec, Schneider Electric, Seagate, SK hynix, Solarflare, Starline (Universal Electric Corp), TE Connectivity, Telecom Infra Project (TIP), The Linux Foundation, TidalScale, Toshiba Memory USA, Inc., Vertiv, Wave 2 Wave Solutions Lmtd., Wiwynn, ZT Systems