Recommended publications
- Socrates: The New SQL Server in the Cloud
Panagiotis Antonopoulos, Alex Budovski, Cristian Diaconu, Alejandro Hernandez Saenz, Jack Hu, Hanuma Kodavalla, Donald Kossmann, Sandeep Lingam, Umar Farooq Minhas, Naveen Prakash, Vijendra Purohit, Hugh Qu, Chaitanya Sreenivas Ravella, Krystyna Reisteter, Sheetal Shrotri, Dixin Tang, Vikram Wakade (Microsoft Azure & Microsoft Research).

Abstract: The database-as-a-service (DBaaS) paradigm in the cloud is becoming increasingly popular. Organizations adopt this paradigm because they expect higher security, higher availability, and lower and more flexible cost with high performance. It has become clear, however, that these expectations cannot be met in the cloud with the traditional, monolithic database architecture. This paper presents a novel DBaaS architecture, called Socrates. Socrates has been implemented in Microsoft SQL Server and is available in Azure as SQL DB Hyperscale. This paper describes the key ideas and features of Socrates, and it compares the performance of Socrates …

Introduction: The cloud is here to stay. Most start-ups are cloud-native; furthermore, many large enterprises are moving their data and workloads into the cloud. The main reasons to move into the cloud are security, time-to-market, and a more flexible "pay-as-you-go" cost model which avoids overpaying for under-utilized machines. While all these reasons are compelling, the expectation is that a database runs in the cloud at least as well as (if not better than) on premise. Specifically, customers expect a "database-as-a-service" to be highly available (e.g., 99.999% availability), support large databases (e.g., a 100TB OLTP database), and be highly performant.
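The "five nines" figure quoted in the introduction implies a very small annual downtime budget. A quick back-of-the-envelope sketch (standard availability arithmetic, not taken from the paper) makes it concrete:

```python
# Downtime budget implied by an availability target.
# Illustrative arithmetic only; not from the Socrates paper.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_budget(availability: float) -> float:
    """Return the allowed downtime in seconds per year."""
    return (1.0 - availability) * SECONDS_PER_YEAR

for label, avail in [("three nines", 0.999),
                     ("four nines", 0.9999),
                     ("five nines", 0.99999)]:
    print(f"{label} ({avail}): {downtime_budget(avail) / 60:.1f} minutes/year")

# five nines (0.99999): ~5.3 minutes/year
```

At five nines, a service has roughly five minutes of total downtime to spend across an entire year.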
- Netezza EOL: IBM IAS Is a Flawed Upgrade Path
Author: Marc Staimer. White paper: Netezza EOL: What You're Not Being Told About IBM IAS Is a Flawed Upgrade Path. Database as a Service (DBaaS).

Executive Summary: It is a common technology-vendor fallacy to compare systems with competitors' by focusing on the features, functions, and specifications one vendor has and the other doesn't, while ignoring the reverse and touting hardware specs that mean little. It doesn't matter whether those features, functions, and specifications provide anything meaningfully empirical to the business applications that rely on the system; it only matters that they demonstrate an advantage on paper. This is called specsmanship. It's similar to starting an argument with proof points: the specs, features, and functions are proof points that the system can solve specific operational problems. They are the "what" that solves the problem or problems; they mean nothing by themselves. Specsmanship is defined by Wikipedia as "inappropriate use of specifications or measurement results to establish supposed superiority of one entity over another, generally when no such superiority exists." It is the frequent, and frequently ineffective, sales process utilized by IBM. A textbook example is IBM's attempt to move its Netezza users to the IBM Integrated Analytics System (IIAS). IBM is compelling its users to move away from Netezza: in the fall of 2017, IBM announced to the Enzee community (Netezza users) that they could no longer purchase or upgrade PureData System for Analytics (the most recent IBM name for its Netezza appliances), and that all support would reach end-of-life next year.
- History of AI at IBM and How IBM Is Leveraging Watson for Intellectual Property
2019 ECC Conference, June 9-11 at Marist College. IBM Intellectual Property Management Solutions. © 2017-2019 IBM Corporation.

Who are we?

Tom Fleischman ([email protected]): At IBM for 37 years, currently in the Technology and Intellectual Property (T&IP) organization, a combination of CHQ and Research, after working as an engineer in Procurement, Testing, and MLC Packaging. Currently Lead Architect on IP Advisor with Watson, a Watson-based patent and intellectual property analytics tool. Master Inventor, with roughly 24+ patents filed and 4+ submissions in progress; consults with and educates outside companies on all things IP, from strategy to commercialization, including IP 101. Technical background: semiconductors, computers, programming/software, intellectual property and analytics.

Sue Hallen ([email protected]): Manager of the Intellectual Property Management Solutions team in CHQ under the Technology and Intellectual Property group, and current OM for the IP Advisor with Watson application, used internally and externally. Previously in Global Business Services in the PLM and Supply Chain practices. Two patents filed (2018) and two submissions in progress; consults with and educates outside companies on all things IP; Schaumburg SLE. Technical background: Registered Professional Engineer in Illinois, structural engineer by degree, with extensive software development and implementation for PLM clients.

How does IBM define AI? IBM refers to it as Augmented Intelligence: not artificial, and not meant to replace human thinking; it augments your work. AI terminology: Machine Learning provides computers with the ability to continue learning without being pre-programmed.
- Big Blue in the Bottomless Pit: The Early Years of IBM Chile
Eden Medina, Indiana University. In examining the history of IBM in Chile, this article asks how IBM came to dominate Chile's computer market and, to address this question, emphasizes the importance of studying both IBM corporate strategy and Chilean national history. The article also examines how IBM reproduced its corporate culture in Latin America and used it to accommodate the region's political and economic changes.

Thomas J. Watson Jr. was skeptical when he first heard his father's plan to create an international subsidiary. "We had endless opportunity and little risk in the US," he wrote, "while it was hard to imagine us getting anywhere abroad. Latin America, for example, seemed like a bottomless pit." [1] However, the senior Watson had a different sense of the potential for profit within the world market and believed that one day IBM's sales abroad would surpass its growing domestic business. In 1949, he created the IBM World Trade Corporation to coordinate the company's activities outside the US and appointed his younger son, Arthur K. …

The history of IBM has been documented from a number of perspectives. Former employees, management experts, journalists, and historians of business, technology, and computing have all made important contributions to our understanding of IBM's past. [3] Some works have explored company operations outside the US in detail. [4] However, most of these studies do not address company activities in regions of the developing world, such as Latin America. [5] Chile, a slender South American country bordered by the Pacific Ocean on one side and the Andean cordillera on the other, offers a rich site for studying IBM …
- The Evolution of IBM Research: Looking Back at 50 Years of Scientific Achievements and Innovations
By Chris Sciacca and Christophe Rossel, IBM Research – Zurich, Switzerland (EPN 45/2, DOI: 10.1051/epn/2014201).

By the mid-1950s IBM had established laboratories in New York City and in San Jose, California, with San Jose being the first one apart from headquarters. This provided considerable freedom to the scientists, and with its success IBM executives gained the confidence they needed to look beyond the United States for a third lab. The choice wasn't easy, but Switzerland was eventually selected based on the same blend of talent, skills and academia that IBM uses today, most recently for its decision to open new labs in Ireland, Brazil and Australia.

The Computing-Tabulating-Recording Company (C-T-R), the precursor to IBM, was founded on 16 June 1911. It was initially a merger of three manufacturing businesses, which were eventually molded into the $100 billion innovator in technology, science, management and culture known as IBM. With the success of C-T-R after World War I came …

… sorting and disseminating information was going to be a big business, requiring investment in research and development. He began hiring the country's top engineers, led by one of the world's most prolific inventors at the time: James Wares Bryce. Bryce was given the task to invent and build the best tabulating, sorting and keypunch machines.
- Treatment and Differential Diagnosis Insights for the Physician's Consideration in the Moments That Matter Most
The role of medical imaging in global health systems is fundamental. Like labs, medical images are used at one point or another in almost every high-cost, high-value episode of care. Echocardiograms, CT scans, mammograms, and x-rays, for example, "atlas" the body and help chart a course forward for a patient's care team. Imaging precision has improved as a result of technological advancements and breakthroughs in related medical research. Those advancements also bring with them exponential growth in medical imaging data.

(The capabilities referenced throughout this document are in the research and development phase and are not available for any use, commercial or non-commercial. Any statements and claims related to the capabilities referenced are aspirational only.)

There were roughly 800 million multi-slice exams performed in the United States in 2015 alone. Those studies generated approximately 60 billion medical images. At those volumes, each of the roughly 31,000 radiologists in the U.S. would have to view an image every two seconds of every working day for an entire year in order to extract potentially life-saving information from a handful of images hidden in a sea of data. In short: 31K radiologists, 800MM exams, 60B medical images.

What's worse, medical images remain largely disconnected from the rest of the relevant data (lab results, patient-similar cases, medical research) inside medical records (and beyond them), making it difficult for physicians to place medical imaging in the context of patient histories that may unlock clues to previously unconsidered treatment pathways.
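The per-radiologist workload implied by those numbers can be checked with simple arithmetic. A rough sketch follows; the working-day assumptions are illustrative guesses, not figures from the document:

```python
# Back-of-the-envelope check of the imaging-volume statistic.
# The working-day assumptions below are illustrative, not from the text.

images = 60e9          # medical images generated in the US in 2015
radiologists = 31_000  # approximate number of US radiologists

hours_per_day = 8      # assumed working day
work_days = 250        # assumed working days per year
seconds_per_year = hours_per_day * 3600 * work_days

images_each = images / radiologists
print(f"images per radiologist: {images_each:,.0f}")   # ~1.9 million
print(f"seconds per image:      {seconds_per_year / images_each:.1f}")
# ~3.7 seconds: on the order of an image every few seconds,
# all day, every working day of the year.
```

Under these assumptions the result lands at an image every few seconds, the same order of magnitude as the claim above.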
- Cell Broadband Engine (Spencer Dennis, Nicholas Barlow)
The Cell Processor
◦ Objective: "[to bring] supercomputer power to everyday life" by bridging the gap between conventional CPUs and high-performance GPUs.

History
◦ Original patent application in 2002.
◦ Generations: 90 nm (2005); 65 nm (2007, PowerXCell 8i); 45 nm (2009).
◦ Cost $400 million to develop, with a team of 400 engineers at the STI Design Center (Sony, Toshiba, IBM).

Design
◦ Employed as the CPU of the PS3, clocked at 3.2 GHz with a theoretical maximum performance of 23.04 GFLOPS.
◦ Utilized alongside the NVIDIA RSX 'Reality Synthesizer' GPU, which complemented its graphical performance.

Architecture overview
◦ 8 Synergistic Processing Elements (SPEs), a single dual-issue Power Processing Element (PPE), an Element Interconnect Bus (EIB), a Memory IO Controller (MIC), and a Bus Interface Controller (BIC).
◦ Within each SPE: a Synergistic Execution Unit (SXU), a Local Store (LS), and a Synergistic Memory Frontend (SMF).

Synergistic Processing Unit/Element (SPU/SPE)
◦ 128-bit dual-issue SIMD ("Single Instruction, Multiple Data") dataflow, optimized for data-level parallelism and designed for vectorized floating-point calculations.
◦ The workhorses of the processor: the SPEs handle most of the computational workload.
◦ Each contains its own instruction and data memory, the "Local Store" (embedded SRAM).

Power Processing Element (PPE)
◦ Responsible for governing the SPEs, which act as "extensions" of the PPE.
◦ Shares main memory with the SPEs and can initiate …
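To make the SIMD idea concrete, the sketch below contrasts a scalar loop with a data-parallel formulation, using NumPy arrays as a stand-in for the SPE's 128-bit vector registers (each of which holds four 32-bit floats). This illustrates data-level parallelism in general; it is not actual Cell/SPE code:

```python
import numpy as np

# Scalar form: one multiply-add per loop iteration, one datum at a time.
def saxpy_scalar(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

# Data-parallel (SIMD-style) form: a single expression applies the same
# multiply-add to every element at once, the way an SPE instruction
# operates on a whole vector of floats per issue.
def saxpy_vector(a, x, y):
    return a * x + y

x = np.arange(8, dtype=np.float32)
y = np.ones(8, dtype=np.float32)
assert saxpy_scalar(2.0, x, y) == list(saxpy_vector(2.0, x, y))
print(saxpy_vector(2.0, x, y))   # [ 1.  3.  5.  7.  9. 11. 13. 15.]
```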
- Cheryl Watson's Tuning Letter, 1998, No. 6
Cheryl Watson’s 1998, No. 6 TUNING Letter A PRACTICAL JOURNAL OF S/390 TUNING AND MEASUREMENT ADVICE Inside this issue... This Issue: My heart attack is important news, at least from my point of view. See page 2. But I'm feeling really great these days, thanks to A Note From Cheryl & Tom .2 modern medicine. Management Issues #20 ......3 Class Schedule .....................4 Upgrading a processor is the focus of this issue. How to size, S/390 News how to understand the difference between speed and capacity, and GRS Star .............................5 how to avoid typical problems after an upgrade are all covered MXG & CA-MICS.................5 starting on page 14. CMOS & Compression ......5 Y2K ......................................6 Java and Component Broker are featured in two articles pro- Java .....................................6 vided by Glenn Anderson of IBM Education (page 6). Component Broker ............7 LE.........................................9 IBM is now recommending that multi-system sites be at WSC Flashes ....................10 OS/390 R5 before freezing systems for Y2K. See WSC Flash The Net..............................11 98044 on page 10 for this important item. Pubs & APARs .................12 Focus: Processor Upgrades A known integrity exposure in ISPF has existed for the five Sizing Processors............14 years since ISPF V4, but new customers keep running into the Migration Issues...............23 problem. See page 29. Upgrade Problems...........26 Cheryl's Updates New Web links and important new manuals and books are Most Common Q&As.......28 listed in our S/390 News starting on page 12. TCP/IP................................28 ISPF Exposure..................29 Don't go to OS/390 R5 without checking with your TCP/IP WLM Update .....................30 vendors or you could be in serious trouble. -
- OSIsoft Corporate PowerPoint Template
HRS 2016: IIoT, Advanced Analytics, & "Big Data" Forum.

Teradata history (https://en.wikipedia.org/wiki/Teradata#History):
• 1976–1979: the concept of Teradata grows from research at the California Institute of Technology (Caltech) and from the discussions of Citibank's advanced technology group.
• 1984: Teradata releases the world's first parallel data warehouses and data marts.
• 1986: Fortune Magazine names Teradata "Product of the Year."
• 1992: Teradata creates the first system over 1 terabyte, which goes live at Wal-Mart.
• 1997: a Teradata customer creates the world's largest production database, at 24 terabytes.
• 1999: a Teradata customer has the world's largest database, with 130 terabytes.
• 2014: Teradata acquires RainStor, a company specializing in online big data archiving on Hadoop.

Hadoop history (https://en.wikipedia.org/wiki/Apache_Hadoop#History):
• The genesis of Hadoop came from the Google File System paper, published in October 2003.
• That paper spawned another research paper from Google, "MapReduce: Simplified Data Processing on Large Clusters."
• Development started in the Apache Nutch project but was moved to the new Hadoop subproject in January 2006. Doug Cutting, who was working at Yahoo! at the time, named it after his son's toy elephant.

In-memory database offerings:

Company | Product | Description
IBM | Informix | Supports dynamic in-memory (in-memory columnar processing) parallel vector processing, actionable compression, and data-skipping technologies, collectively called "Blink Technology" by IBM. Released: March 2011.
IBM | DB2 BLU | IBM DB2 for Linux, UNIX and Windows supports dynamic in-memory (in-memory columnar processing) parallel vector processing, actionable compression, and data-skipping technologies, collectively called IBM BLU Acceleration. …
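Both product descriptions list "data skipping" among their techniques: the engine keeps a small synopsis (for example, min and max) per block of column values so a scan can skip blocks that cannot contain matches. Below is a toy sketch of the general idea, not IBM's implementation:

```python
# Toy illustration of columnar data skipping: keep a (min, max) synopsis
# per block so a scan can skip blocks that cannot match the predicate.
# Conceptual sketch only; not how Informix or DB2 BLU implement it.

BLOCK_SIZE = 4

def build_synopses(column):
    blocks = [column[i:i + BLOCK_SIZE] for i in range(0, len(column), BLOCK_SIZE)]
    return [(min(b), max(b), b) for b in blocks]

def scan_greater_than(synopses, threshold):
    hits, skipped = [], 0
    for lo, hi, block in synopses:
        if hi <= threshold:          # no value in this block can qualify
            skipped += 1
            continue
        hits.extend(v for v in block if v > threshold)
    return hits, skipped

col = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]
hits, skipped = scan_greater_than(build_synopses(col), 8)
print(hits, f"(skipped {skipped} of {len(col) // BLOCK_SIZE} blocks)")
# [9] (skipped 2 of 3 blocks)
```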
- NoSQL – Factors Supporting the Adoption of Non-Relational Databases
Olli Sutinen, University of Tampere, Department of Computer Sciences, Computer Science. M.Sc. thesis, December 2010. Supervisor: Marko Junkkari.

Abstract: Relational databases have been around for decades and are still in use for general data storage needs. The web has created usage patterns for data storage and querying where current implementations of relational databases fit poorly. NoSQL is an umbrella term for various new data stores which emerged virtually simultaneously, at a time when relational databases were the de facto standard for data storage. It is claimed that the new data stores address the changed needs better than the relational databases. The simple reason behind this phenomenon is cost: if a system is too slow or can't handle the load, users will go to a competing site and continue spending their time there, watching the 'wrong' advertisements. On the other hand, scaling relational databases is hard. It can be done, and commercial RDBMS vendors have such systems available, but it is out of reach of a startup because of the price tag. This study reveals the reasons why many companies have found existing data storage solutions inadequate and developed new data stores.

Keywords: databases, database scalability, data models, NoSQL
- Oracle NoSQL Database and DMS for Enhancing IoT Environments
How Oracle NoSQL Database and Oracle Database Mobile Server Enhance an Internet of Things Data Management Solution. Oracle White Paper, January 2017.

Table of contents: Introduction; IoT Explained and Data Volumes; Business Value; Critical Components of a High-Performance IoT Environment; Basic Architecture; Edge Filtering; Oracle NoSQL Database Benefits; Oracle NoSQL Database Architecture; Edge Filtering and the Database Mobile Server; Oracle Database Mobile Server Benefits; Lambda Architecture; Existing Oracle IoT Cloud Service; Oracle Technologies; Summary.

Introduction: The Internet of Things (IoT) has become an important technology that can facilitate both personal and organizational understanding of the devices that matter. As smart devices become more ubiquitous, architectures need to be put in place that deal effectively and efficiently with the wide range of devices and environments that will exist. This includes scenarios where data is streaming over a network as well as periods where data is produced but the devices are offline. An IoT strategy can open up new businesses and increase the understanding of existing, intelligent products and services. The business benefits of understanding the massive amounts of data flowing into analytics systems will be impressive. IoT is bridging the gap between IT (information technology) and OT (operations technology) to derive insights toward controlling costs in industries such as manufacturing, retail, and logistics. For all those organizations implementing IoT strategies, data management will be at the center of this digital transformation, and a key differentiator will be the ability to derive meaningful business insights from gathered data, as opposed to merely gathering it.
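Edge filtering, listed in the contents above, is the practice of discarding or summarizing readings on the device so that only significant changes cross the network. A minimal sketch of the idea follows; the deadband value and record shape are illustrative assumptions, not Oracle's API:

```python
# Minimal edge-filtering sketch: forward a sensor reading only when it
# differs from the last forwarded value by more than a deadband.
# Threshold and data layout are illustrative assumptions, not Oracle's API.

from typing import Iterable, Iterator

def edge_filter(readings: Iterable[float], deadband: float = 0.5) -> Iterator[float]:
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > deadband:
            last_sent = value
            yield value  # would be queued for upload to the central store

raw = [20.0, 20.1, 20.2, 21.0, 21.1, 25.0, 25.05]
print(list(edge_filter(raw)))   # [20.0, 21.0, 25.0] -- 4 of 7 readings dropped
```

The same deadband idea generalizes to batching and summarizing while devices are offline, which is the scenario the white paper's mobile-server component addresses.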
- Linear Scalability of Distributed Applications
Thesis No. 5278 (2012), presented on 3 February 2012 at the School of Computer and Communication Sciences, Distributed Information Systems Laboratory, doctoral program in Computer, Communication and Information Sciences, École Polytechnique Fédérale de Lausanne, for the degree of Doctor of Science, by Nicolas Bonvin. Accepted on the proposal of the jury: Prof. B. Faltings (jury president), Prof. K. Aberer (thesis director), Prof. A. Ailamaki, Prof. D. Kossmann, and Prof. C. Pu (reviewers). Switzerland, 2012.

Abstract (translated from the French): The explosion of social applications, electronic commerce, and Internet search has created the need for new technologies and systems suited to efficiently processing and managing a considerable volume of data and users. These applications must run without interruption every day of the year, and must therefore be able to survive sudden, sharp load spikes as well as all sorts of software, hardware, human, and organizational failures. Elastically and scalably increasing (or decreasing) the resources assigned to a distributed application, while satisfying availability and performance requirements, is essential to commercial viability but presents great challenges for today's infrastructures. Indeed, cloud computing makes it possible to provision resources on demand: it is now easy to start dozens of servers in parallel (computing resources) or to store a very large number of data items (storage resources), even for a very limited time, paying only for the resources consumed. However, these complex infrastructures, built from heterogeneous low-cost resources, are prone to failures.
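The elasticity problem the abstract raises, growing or shrinking a deployment without reshuffling all stored data, is commonly tackled with consistent hashing. The sketch below shows that general technique for a key-value workload spread across nodes; it is not claimed to be the thesis's own algorithm:

```python
import bisect
import hashlib

# Minimal consistent-hashing ring: when a node joins or leaves, only the
# keys adjacent to it on the ring move, which is what makes elastic
# scaling cheap. Generic sketch, not the algorithm from the thesis.

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes=(), vnodes: int = 64):
        self.vnodes = vnodes
        self._points = []            # sorted (hash, node) pairs
        for node in nodes:
            self.add(node)

    def add(self, node: str) -> None:
        for i in range(self.vnodes):  # virtual nodes smooth the load
            bisect.insort(self._points, (_hash(f"{node}#{i}"), node))

    def remove(self, node: str) -> None:
        self._points = [p for p in self._points if p[1] != node]

    def lookup(self, key: str) -> str:
        # First ring point clockwise from the key's hash owns the key.
        idx = bisect.bisect(self._points, (_hash(key), "")) % len(self._points)
        return self._points[idx][1]

ring = Ring(["a", "b", "c"])
before = {k: ring.lookup(k) for k in map(str, range(1000))}
ring.add("d")                        # scale out by one node
moved = sum(before[k] != ring.lookup(k) for k in before)
print(f"{moved / 1000:.0%} of keys moved")   # roughly 1/4, not 100%
```

With naive modulo placement (hash(key) % node_count), adding one node would move almost every key; here only the share claimed by the new node moves.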