SQL Server to Aurora PostgreSQL Migration Playbook

Total Pages: 16

File Type: PDF, Size: 1020 KB

Microsoft SQL Server to Amazon Aurora with PostgreSQL Compatibility Migration Playbook, 1.0 Preliminary, September 2018
© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Table of Contents
- Introduction
- Tables of Feature Compatibility
- AWS Schema and Data Migration Tools
  - AWS Schema Conversion Tool (SCT): Overview; Migrating a Database; SCT Action Code Index (Creating Tables, Data Types, Collations, PIVOT and UNPIVOT, TOP and FETCH, Cursors, Flow Control, Transaction Isolation, Stored Procedures, Triggers, MERGE, Query Hints and Plan Guides, Full Text Search, Indexes, Partitioning, Backup, SQL Server Mail, SQL Server Agent, Service Broker, XML, Constraints, Linked Servers)
  - AWS Database Migration Service (DMS): Overview; Migration Tasks Performed by AWS DMS; How AWS DMS Works
- ANSI SQL
  - SQL Server Constraints → Aurora PostgreSQL Table Constraints
  - SQL Server Creating Tables → Aurora PostgreSQL Creating Tables
  - SQL Server Common Table Expressions → Aurora PostgreSQL Common Table Expressions (CTE)
  - SQL Server Data Types → Aurora PostgreSQL Data Types
  - SQL Server Derived Tables → Aurora PostgreSQL Derived Tables
  - SQL Server GROUP BY → Aurora PostgreSQL GROUP BY
  - SQL Server Table JOIN → Aurora PostgreSQL Table JOIN
  - SQL Server Temporal Tables → Aurora PostgreSQL Triggers (Temporal Tables alternative)
  - SQL Server Views → Aurora PostgreSQL Views
  - SQL Server Window Functions → Aurora PostgreSQL Window Functions
- T-SQL
  - SQL Server Service Broker Essentials → Aurora PostgreSQL AWS Lambda or DB links
  - SQL Server Cast and Convert → Aurora PostgreSQL CAST and CONVERSION
  - SQL Server Common Language Runtime (CLR) → Aurora PostgreSQL PL/Perl
  - SQL Server Collations → Aurora PostgreSQL Encoding
  - SQL Server Cursors → Aurora PostgreSQL Cursors
  - SQL Server Date and Time Functions → Aurora PostgreSQL Date and Time Functions
  - SQL Server String Functions → Aurora PostgreSQL String Functions
  - SQL Server Databases and Schemas → Aurora PostgreSQL Databases and Schemas
  - SQL Server Dynamic SQL → Aurora PostgreSQL EXECUTE and PREPARE
  - SQL Server Transactions → Aurora PostgreSQL Transactions
  - SQL Server Synonyms → Aurora PostgreSQL Views, Types & Functions
  - SQL Server DELETE and UPDATE FROM → Aurora PostgreSQL DELETE and UPDATE FROM
  - SQL Server Stored Procedures → Aurora PostgreSQL Stored Procedures
  - SQL Server Error Handling → Aurora PostgreSQL Error Handling
  - SQL Server Flow Control → Aurora PostgreSQL Control Structures
  - SQL Server Full-Text Search → Aurora PostgreSQL Full-Text Search
  - SQL Server JSON and XML → Aurora PostgreSQL JSON and XML
  - SQL Server MERGE → Aurora PostgreSQL MERGE
  - SQL Server PIVOT and UNPIVOT → Aurora PostgreSQL PIVOT and UNPIVOT
  - SQL Server Triggers → Aurora PostgreSQL Triggers
  - SQL Server TOP and FETCH → Aurora PostgreSQL LIMIT and OFFSET (TOP and FETCH Equivalent)
  - SQL Server User Defined Functions → Aurora PostgreSQL User Defined Functions
  - SQL Server User Defined Types → Aurora PostgreSQL User Defined Types
  - SQL Server Sequences and Identity → Aurora PostgreSQL Sequences and SERIAL
- Configuration
  - SQL Server Session Options → Aurora PostgreSQL Session Options
  - SQL Server Database Options → Aurora PostgreSQL Database Options
  - SQL Server Server Options → Aurora PostgreSQL Aurora Parameter Groups
- High Availability and Disaster Recovery (HADR)
  - SQL Server Backup and Restore → Aurora PostgreSQL Backup and Restore
  - SQL Server High Availability Essentials → Aurora PostgreSQL High Availability Essentials
- Indexes
  - SQL Server Clustered and Non Clustered Indexes → Aurora PostgreSQL Clustered and Non Clustered Indexes
- Management
  - SQL Server Agent → Aurora PostgreSQL Scheduled Lambda
  - SQL Server Alerting → Aurora PostgreSQL Alerting
  - SQL Server Database Mail → Aurora PostgreSQL Database Mail
  - SQL Server ETL → Aurora PostgreSQL ETL
  - SQL Server Export and Import with Text Files → Aurora PostgreSQL pg_dump and pg_restore
  - SQL Server Viewing Server Logs → Aurora PostgreSQL Viewing Server Logs
  - SQL Server Maintenance Plans → Aurora PostgreSQL Maintenance Plans
  - SQL Server Monitoring → Aurora PostgreSQL Monitoring
  - SQL Server Resource Governor → Aurora PostgreSQL Dedicated Amazon Aurora Clusters
  - SQL Server Linked Servers → Aurora PostgreSQL DBLink and FDWrapper
  - SQL Server Scripting → Aurora PostgreSQL Scripting
- Performance Tuning
  - SQL Server Execution Plans → Aurora PostgreSQL Execution Plans
  - SQL Server Query Hints and Plan Guides → Aurora PostgreSQL DB Query Planning
  - SQL Server Managing Statistics → Aurora PostgreSQL Table Statistics
- Physical Storage
  - SQL Server Columnstore Index → Aurora PostgreSQL
  - SQL Server Indexed Views → Aurora PostgreSQL Materialized Views
  - SQL Server Partitioning → Aurora PostgreSQL Table Inheritance
- Security
  - SQL Server Column Encryption → Aurora PostgreSQL Column Encryption
  - SQL Server Data Control Language → Aurora PostgreSQL Data Control Language
  - SQL Server Transparent Data Encryption → Aurora PostgreSQL Transparent Data Encryption
  - SQL Server Users and Roles → Aurora PostgreSQL Users and Roles
- Appendix A: SQL Server 2018 Deprecated Feature List
- Migration Quick Tips (Management, SQL)
- Glossary

Introduction
The migration process from SQL Server to Amazon Aurora PostgreSQL typically involves several stages. The first stage is to use the AWS Schema Conversion Tool (SCT) and the AWS Database Migration Service (DMS) to convert and migrate the schema and data. While most of the migration work can be automated, some aspects require manual intervention and adjustments to both database schema objects and database code.

The purpose of this playbook is to assist administrators tasked with migrating SQL Server databases to Aurora PostgreSQL with the aspects that can't be automatically migrated using the Amazon Web Services Schema Conversion Tool (AWS SCT). It focuses on the differences, incompatibilities, and similarities between SQL Server and Aurora PostgreSQL across a wide range of topics, including T-SQL, Configuration, High Availability and Disaster Recovery (HADR), Indexing, Management, Performance Tuning, Security, and Physical Storage.

The first section of this document provides an overview of the AWS SCT and AWS Database Migration Service (DMS) tools for automating the migration of schema, objects, and data. The remainder of the document contains individual sections for SQL Server features and their Aurora PostgreSQL counterparts. Each section provides a short overview of the feature, examples, and potential workaround solutions for incompatibilities.

You can use this playbook either as a reference to investigate the individual action codes generated by the AWS SCT tool, or to explore a variety of topics where you expect to have some incompatibility issues.
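For a flavor of the manual rewrites this involves, the following is a minimal sketch (the orders table and its columns are invented for illustration) of one common T-SQL pattern, TOP with GETDATE(), and a possible Aurora PostgreSQL equivalent using LIMIT and NOW():

    -- SQL Server (T-SQL): TOP and GETDATE()
    SELECT TOP 10 order_id, order_date
    FROM orders
    WHERE order_date < GETDATE()
    ORDER BY order_date DESC;

    -- Aurora PostgreSQL: LIMIT and NOW()
    SELECT order_id, order_date
    FROM orders
    WHERE order_date < NOW()
    ORDER BY order_date DESC
    LIMIT 10;

The playbook's TOP and FETCH, and Date and Time Functions, sections cover these mappings in detail.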
Recommended publications
  • Perl DBI API Reference
    H. Perl DBI API Reference

    This appendix describes the Perl DBI application programming interface. The API consists of a set of methods and attributes for communicating with database servers and accessing databases from Perl scripts. The appendix also describes MySQL-specific extensions to DBI provided by DBD::mysql, the MySQL database driver. I assume here a minimum version of DBI 1.50, although most of the material applies to earlier versions as well. DBI 1.50 requires at least Perl 5.6.0 (with 5.6.1 preferred). As of DBI 1.611, the minimum Perl version is 5.8.1. I also assume a minimum version of DBD::mysql 4.00. To determine your versions of DBI and DBD::mysql (assuming that they are installed), run this program:

        #!/usr/bin/perl
        # dbi-version.pl - display DBI and DBD::mysql versions
        use DBI;
        print "DBI::VERSION: $DBI::VERSION\n";
        use DBD::mysql;
        print "DBD::mysql::VERSION: $DBD::mysql::VERSION\n";

    If you need to install the DBI software, see Appendix A, "Software Required to Use This Book." Some DBI methods and attributes are not discussed here, either because they do not apply to MySQL or because they are experimental methods that may change as they are developed or may even be dropped. Some MySQL-specific DBD methods are not discussed because they are obsolete. For more information about new or obsolete methods, see the DBI or DBD::mysql documentation, available at http://dbi.perl.org or by running the following commands:

        % perldoc DBI
        % perldoc DBI::FAQ
        % perldoc DBD::mysql

    The examples in this appendix are only brief code fragments.
  • Accessing DB from Programming Languages
    Accessing DB from programming languages

    JDBC and ODBC
    • API (application-program interface) for a program to interact with a database server
    • Application makes calls to:
      – Connect with the database server
      – Send SQL commands to the database server
      – Fetch tuples of result one-by-one into program variables
    • ODBC (Open Database Connectivity) works with C, C++, C#, and Visual Basic
      – Other APIs such as ADO.NET sit on top of ODBC
    • JDBC (Java Database Connectivity) works with Java

    JDBC
    • JDBC is a Java API for communicating with database systems supporting SQL.
    • JDBC supports a variety of features for querying and updating data, and for retrieving query results.
    • JDBC also supports metadata retrieval, such as querying about relations present in the database and the names and types of relation attributes.
    • Model for communicating with the database:
      – Open a connection
      – Create a "statement" object
      – Execute queries using the Statement object to send queries and fetch results
      – Exception mechanism to handle errors

    JDBC Code

        public static void JDBCexample(String dbid, String userid, String passwd) {
            try {
                Class.forName("oracle.jdbc.driver.OracleDriver");
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@db.yale.edu:2000:univdb", userid, passwd);
                Statement stmt = conn.createStatement();
                // ... Do Actual Work ...
                stmt.close();
                conn.close();
            } catch (SQLException sqle) {
                System.out.println("SQLException : " + sqle);
            }
        }

    JDBC Code (Cont.)
    • Update to database:

        try {
            stmt.executeUpdate("insert ...
  • Delivering Oracle Compatibility
    DELIVERING ORACLE COMPATIBILITY
    A White Paper by EnterpriseDB (www.EnterpriseDB.com)

    Table of Contents: Executive Summary; Introducing EnterpriseDB Advanced Server; SQL Compatibility; PL/SQL Compatibility; Data Dictionary Views; Programming Flexibility and Drivers; Transfer Tools; Database Replication; Enterprise-Class Reliability and Scalability; Oracle-Like Tools; Conclusion; Downloading EnterpriseDB

    EXECUTIVE SUMMARY
    Enterprises running Oracle® are generally interested in alternative databases for at least three reasons. First, these enterprises are experiencing budget constraints and need to lower their database Total Cost of Ownership (TCO). Second, they are trying to gain greater licensing flexibility to become more agile within the company and in the larger market. Finally, they are actively pursuing vendors who will provide superior technical support and a richer customer experience. And, subsequently, enterprises are looking for a solution that will complement their existing infrastructure and skills.

    The traditional database vendors have been unable to provide the combination of all three benefits. While Microsoft SQL Server™ and IBM DB2™ may provide the flexibility and rich customer experience, they cannot significantly reduce TCO. Open source databases, on the other hand, can provide the TCO benefits and the flexibility. However, these open source databases either lack the enterprise-class features that today's mission-critical applications require or they are not equipped to provide the enterprise-class support required by these organizations. Finally, none of the databases mentioned above provide the database compatibility and interoperability that complements their existing applications and staff. The fear of the costs of changing databases, including costs related to application re-coding and personnel re-training, outweighs the expected savings and, therefore, these enterprises remain paralyzed and locked into Oracle.
  • Insert Query in Java Using Prepared Statement
    Insert Query in Java Using Prepared Statement

    This page discusses running an INSERT query from Java with a PreparedStatement, for example a statement of the form INSERT INTO foo (firstname, lastname, email) VALUES (...) prepared through the database driver. When you prepare a statement, the server (Cassandra is given as one example) parses the query string once and caches the result, so the statement can then be executed repeatedly with different bound parameter values; this is also the normal method for inserting multiple rows. Related tutorials referenced include "Java JDBC Tutorial: Inserting Data with User Input" (luv2code) and "Update PostgreSQL Record using Prepared Statement in Java". The typical workflow of using a prepared statement is as follows.
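    At the SQL level (independent of the Java PreparedStatement API), the same prepare-once, execute-many workflow can be sketched with PostgreSQL's PREPARE and EXECUTE; the foo table and its columns are borrowed from the fragment above and are illustrative only:

        -- Prepare the parameterized INSERT once; the server parses and plans it a single time
        PREPARE insert_foo (text, text, text) AS
            INSERT INTO foo (firstname, lastname, email)
            VALUES ($1, $2, $3);

        -- Execute it repeatedly with different bound values
        EXECUTE insert_foo ('Ada', 'Lovelace', 'ada@example.com');
        EXECUTE insert_foo ('Alan', 'Turing', 'alan@example.com');

        -- Remove the prepared statement when done
        DEALLOCATE insert_foo;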
  • Automated Fix Generator for SQL Injection Attacks
    Automated Fix Generator for SQL Injection Attacks
    Fred Dysart and Mark Sherriff
    Department of Computer Science, University of Virginia, Charlottesville, VA 22903
    {ftd4q, sherriff}@virginia.edu

    Abstract
    A critical problem facing today's internet community is the increasing number of attacks exploiting flaws found in Web applications. This paper specifically targets input validation vulnerabilities found in SQL queries that may lead to SQL Injection Attacks (SQLIAs). We introduce a tool that automatically detects and suggests fixes to SQL queries that are found to contain SQL Injection Vulnerabilities (SQLIVs). Testing was performed against phpBB v2.0, an open source forum package, to determine the accuracy and efficacy of our software.

    1. Introduction
    According to the National Institute of Standards and Technology, SQL Injection Vulnerabilities (SQLIVs) amounted to 14% of the total Web application vulnerabilities in 2006 [3].

    In Thomas's solution, any variable that defines a prepared statement's structure must be within the same scope as the execution of that statement. Our solution does not incur this limitation because the generation of the prepared statement fix is not dependent on the actual statement declaration in the code.

    3. SecurePHP
    Our solution is called SecurePHP. It was written in C# and utilized the .NET framework. There were two main design decisions that guided us through development. First, we wanted the tool to be easy to use. While all applications should be simple for a user to interact with, the reasoning behind our decision was that the increasing amount of SQLIVs present in software might be from legacy software that is not maintained by a developer with experience fixing
  • Support Aggregate Analytic Window Function Over Large Data by Spilling
    Support Aggregate Analytic Window Function over Large Data by Spilling
    Xing Shi and Chao Wang
    Guangdong University of Technology, Guangzhou, Guangdong 510006, China
    North China University of Technology, Beijing 100144, China

    Abstract. An analytic function, also called a window function, queries an aggregation of data over a sliding window. For example, a simple query over an online stock platform is to return the average price of a stock over the last three days. These functions are commonly used features in SQL databases and are supported in most commercial databases. With the increasing use of cloud data infrastructure and machine learning technology, the frequency of queries with analytic window functions rises. Some analytic functions require only constant space in memory to store their state, such as SUM and AVG, while others require linear space, such as MIN and MAX. When the window is extremely large, the memory needed to store the state may be too large. In this case, we need to spill the state to disk, which is a heavy operation. In this paper, we propose an algorithm that manipulates the state data on disk to reduce disk I/O and make spilling feasible and efficient. We analyze the complexity of the algorithm under different data distributions.

    1. Introduction
    In this paper, we develop novel spill techniques for analytic window functions in SQL databases, and we discuss various types of aggregate queries, e.g., COUNT, AVG, SUM, MAX, MIN, over a relational table. An aggregate analytic function, also called an aggregate window function, queries the aggregation of data over a sliding window.
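    For reference, the running example from the abstract (the average price of a stock over the last three days) corresponds to a sliding-window aggregate along the lines of the following sketch; the trades table and its column names are assumed for illustration:

        SELECT stock_id,
               trade_day,
               AVG(price) OVER (
                   PARTITION BY stock_id
                   ORDER BY trade_day
                   ROWS BETWEEN 2 PRECEDING AND CURRENT ROW  -- current day plus the two previous days
               ) AS avg_price_3d
        FROM trades;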
  • SQL Standards Update
    SQL Standards Update
    Keith W. Hare, SC32 WG3 Convenor, JCC Consulting, Inc. October 20, 2017

    Introduction
    • What is SQL?
    • Who develops the SQL standards
    • A brief history
    • SQL 2016 published
    • SQL Technical Reports
    • What's next? SQL/MDA, Streaming SQL, Property Graphs
    • Summary

    Who am I?
    • Senior Consultant with JCC Consulting, Inc. since 1985: high performance database systems; replicating data between database systems
    • SQL Standards committees since 1988
    • Convenor, ISO/IEC JTC1 SC32 WG3 since 2005
    • Vice Chair, ANSI INCITS DM32.2 since 2003
    • Vice Chair, INCITS Big Data Technical Committee since 2015
    • Education: Muskingum College, 1980, BS in Biology and Computer Science; Ohio State, 1985, Masters in Computer & Information Science

    What is SQL?
    • SQL is a language for defining databases and manipulating the data in those databases
    • The SQL Standard uses SQL as a name, not an acronym (it might stand for SQL Query Language)
    • SQL queries are independent of how the data is actually stored: specify what data you want, not how to get it

    Who Develops the SQL Standards?
    • In the international arena, the SQL Standard is developed by ISO/IEC JTC1 SC32 WG3
    • Officers: Convenor – Keith W. Hare (USA); Editor – Jim Melton (USA)
    • Active participants include: Canada – Standards Council of Canada; China – Chinese Electronics Standardization Institute; Germany – DIN Deutsches
  • A First Attempt to Computing Generic Set Partitions: Delegation to an SQL Query Engine Frédéric Dumonceaux, Guillaume Raschia, Marc Gelgon
    A First Attempt to Computing Generic Set Partitions: Delegation to an SQL Query Engine
    Frédéric Dumonceaux, Guillaume Raschia, and Marc Gelgon
    LINA (UMR CNRS 6241), Université de Nantes, France
    [email protected]

    To cite this version: Frédéric Dumonceaux, Guillaume Raschia, Marc Gelgon. A First Attempt to Computing Generic Set Partitions: Delegation to an SQL Query Engine. DEXA 2014 (Database and Expert System Applications), Sep 2014, Munich, Germany. pp. 433-440. HAL Id: hal-00993260, https://hal.archives-ouvertes.fr/hal-00993260, submitted on 28 Oct 2014.

    HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

    Abstract. Partitions are a very common and useful way of organizing data, in data engineering and data mining. However, partitions currently lack efficient and generic data management functionalities. This paper proposes advances in the understanding of this problem, as well as elements for solving it. We formulate the task as efficient processing, evaluating and optimizing queries over set partitions, in the setting of relational databases.
  • Look out the Window Functions and Free Your SQL
    Look Out The Window Functions and free your SQL
    Gianni Ciolli, 2ndQuadrant Italia
    PostgreSQL Conference Europe 2011, October 18-21, Amsterdam

    Outline
    1. Concepts: Aggregates; Different aggregations; Partitions; Window frames
    2. Syntax: Frames from 9.0; Frames in 8.4
    3. Other: A larger example; Question time

    Aggregates 1: Example of an aggregate
    • Problem: How many rows are there in table a?
    • Solution: SELECT count(*) FROM a;
    • Here count is an aggregate function (SQL keyword AGGREGATE).

    Aggregates 2: Functions and Aggregates
    • FUNCTIONs: input is one row; output is either one row or a set of rows
    • AGGREGATEs: input is a set of rows; output is one row

    Different aggregations 1: Without window functions, and with them
    • GROUP BY col1, ..., coln: supported by any PostgreSQL version; computes aggregates by creating groups; output is one row for each group
    • Window functions: only PostgreSQL version 8.4+; compute aggregates via partitions and window frames; output is one row for each input row

    Different aggregations 2: Without window functions, and with them
    • GROUP BY col1, ..., coln: only one way of aggregating different rows in the same ... for each group
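    The contrast drawn in these slides can be made concrete with the speaker's count(*) example on table a; the following minimal sketch assumes a column col1 on a. A GROUP BY aggregate returns one row per group, while the same aggregate used as a window function returns one row per input row:

        -- One output row per group
        SELECT col1, count(*) AS n
        FROM a
        GROUP BY col1;

        -- One output row per input row, each carrying its partition's count
        SELECT col1, count(*) OVER (PARTITION BY col1) AS n
        FROM a;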
  • Lecture 3: Advanced SQL
    Advanced SQL
    Lecture 3: Advanced SQL

    Relational Language
    • User only needs to specify the answer that they want, not how to compute it.
    • The DBMS is responsible for efficient evaluation of the query.
      – Query optimizer: re-orders operations and generates query plan

    SQL History
    • Originally "SEQUEL" from IBM's System R prototype.
      – Structured English Query Language
      – Adopted by Oracle in the 1970s.
      – IBM releases DB2 in 1983.
      – ANSI Standard in 1986. ISO in 1987.
      – Structured Query Language
    • Current standard is SQL:2016
      – SQL:2016 → JSON, Polymorphic tables
      – SQL:2011 → Temporal DBs, Pipelined DML
      – SQL:2008 → TRUNCATE, Fancy sorting
      – SQL:2003 → XML, windows, sequences, auto-gen IDs
      – SQL:1999 → Regex, triggers, OO
    • Most DBMSs at least support SQL-92
    • Comparison of different SQL implementations

    Relational Language
    • Data Manipulation Language (DML)
    • Data Definition Language (DDL)
    • Data Control Language (DCL)
    • Also includes: view definition; integrity & referential constraints; transactions
    • Important: SQL is based on bag semantics (duplicates), not set semantics (no duplicates).

    Today's Agenda
    • Aggregations + Group By
    • String / Date / Time Operations
    • Output Control + Redirection
    • Nested Queries
    • Join
    • Common Table Expressions
    • Window Functions

    Example Database (SQL Fiddle: Link)

        students:                                enrolled:
        sid | name  | login     | age | gpa      sid | cid | grade
        1   | Maria | maria@cs  | 19  | 3.8      1   | 1   | B
        2   | Rahul | rahul@cs  | 22  | 3.5      1   | 2   | A
        3   | Shiyi | shiyi@cs  | 26  | 3.7      2   | 3   | B
        4   | Peter | peter@ece | 35  | 3.8      4   | 2   | C

        courses:
        cid | name
        1   | Computer Architecture
        2   | Machine Learning
        3   | Database Systems
        4   | Programming Languages

    Aggregates
    • Functions that return a single value from a bag of tuples:
      – AVG(col) → return the average col value.
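    As a quick illustration of the first agenda item (aggregations with GROUP BY) against the example database above, a query like the following sketch returns the average GPA and the number of enrolled students per course:

        SELECT c.name     AS course,
               AVG(s.gpa) AS avg_gpa,
               COUNT(*)   AS enrolled_students
        FROM courses  c
        JOIN enrolled e ON e.cid = c.cid
        JOIN students s ON s.sid = e.sid
        GROUP BY c.name;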
  • SQL Version Analysis
    Rory McGann
    SQL Version Analysis

    Structured Query Language, or SQL, is a powerful tool for interacting with and utilizing databases through the use of relational algebra and calculus, allowing for efficient and effective manipulation and analysis of data within databases. There have been many revisions of SQL, some minor and others major, since its standardization by ANSI in 1986, and in this paper I will discuss several of the changes that led to improved usefulness of the language. In 1970, Dr. E. F. Codd published a paper through the Association for Computing Machinery titled "A Relational Model of Data for Large Shared Data Banks," which detailed a model for relational database management systems (RDBMS) [1]. In order to make use of this model, a language was needed to manage the data stored in these RDBMSs. In the early 1970s, SQL was developed by Donald Chamberlin and Raymond Boyce at IBM, accomplishing this goal. In 1986 SQL was standardized by the American National Standards Institute as SQL-86, and also by the International Organization for Standardization in 1987. The structure of SQL-86 was largely similar to SQL as we know it today, with functionality being implemented through the Data Manipulation Language (DML), which defines verbs such as select, insert into, update, and delete that are used to query or change the contents of a database. SQL-86 defined two ways to process DML: direct processing, where actual SQL commands are used, and embedded SQL, where SQL statements are embedded within programs written in other languages. SQL-86 supported Cobol, Fortran, Pascal, and PL/1.
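    For context, the four DML verbs named above look essentially the same in SQL-86 direct processing as they do today; the employees table in this sketch is invented purely for illustration:

        SELECT name, dept FROM employees WHERE dept = 'Sales';
        INSERT INTO employees (name, dept) VALUES ('Ada', 'Engineering');
        UPDATE employees SET dept = 'Research' WHERE name = 'Ada';
        DELETE FROM employees WHERE name = 'Ada';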
  • Modèle Bull 2009 Blanc Fr
    CNAF PostgreSQL project
    Philippe BEAUDOIN, Project leader, [email protected]
    December 7, 2010

    CNAF - Caisse Nationale des Allocations Familiales
    - Key organization of the French social security system
    - Distributes benefits to help families and poor people
    - 123 CAF (local organizations) all over France
    - 11 million families and 30 million people
    - 69 billion € in benefits distributed (2008)

    The project ... in a few clicks
    [Architecture diagrams: the CRISTAL and SDP Cobol applications on GCOS 8 with DBSP and RFM-II, migrating to PostgreSQL on RHEL Linux, all on Bull NovaScale 9000 servers (mainframes); GCOS 8 connected over an InfiniBand link to Linux surrogate clients (S.C., C + SQL) in front of PostgreSQL.]

    CNAF I.S.
    - CRISTAL + SDP: heart of the Information System
    - Same application running on Bull (GCOS 8) and IBM (z/OS + DB2) mainframes
    - Also J2E servers under AIX, and a lot of peripheral applications
    - 6 to 8 CRISTAL versions per year!

    Migration to PostgreSQL project plan
    - A lot of teams all over France (developers, testers, production, ...)
    - 3 domains: CRISTAL application and SDP application; infrastructure and production; national project leading
    - [Timeline from 9/08 to 3/10: CRISTAL development and tests, first CAF deployment, production preparation and deployment; SDP development and tests.]

    How BULL participated in the project
    - Assistance to project leading
    - Technical expertise