Uniform Data Access Platform for SQL and NoSQL Database Systems


Uniform data access platform for SQL and NoSQL database systems
Ágnes Vathy-Fogarassy*, Tamás Hugyák
University of Pannonia, Department of Computer Science and Systems Technology, P.O. Box 158, Veszprém, H-8201 Hungary
Information Systems 69 (2017) 93–105. Journal homepage: www.elsevier.com/locate/is. http://dx.doi.org/10.1016/j.is.2017.04.002. © 2017 Elsevier Ltd. All rights reserved.
* Corresponding author. E-mail addresses: [email protected], [email protected] (Á. Vathy-Fogarassy); [email protected] (T. Hugyák).

Article history: Received 8 August 2016; Revised 1 March 2017; Accepted 18 April 2017; Available online 4 May 2017.

Keywords: Uniform data access; Relational database management systems; NoSQL database management systems; MongoDB; Data integration; JSON

Abstract

Integration of data stored in heterogeneous database systems is a very challenging task that may hide several difficulties. As NoSQL databases grow in popularity, the integration of different NoSQL systems and the interoperability of NoSQL systems with SQL databases become increasingly important issues. In this paper, we propose a novel data integration methodology to query data individually from different relational and NoSQL database systems. The suggested solution does not support joins and aggregates across data sources; it only collects data from the different, separated database management systems according to the filtering options and migrates them. The proposed method is based on a metamodel approach and covers the structural, semantic, and syntactic heterogeneities of the source systems. To demonstrate the applicability of the proposed methodology, we developed a web-based application, which convincingly confirms the usefulness of the novel method.

1. Introduction

Integration of existing information systems is becoming more and more inevitable in order to satisfy customer and business needs. General data retrieval from different database systems is a challenging task and has become a widely researched topic [1–7]. In many cases, we need information simultaneously from different database systems that are separated from each other (e.g., querying patient information from separate hospital information systems, or managing computer-integrated manufacturing). In these cases, users need homogeneous logical views of data physically distributed over heterogeneous data sources. These views have to represent the collected information as if the data were stored in a uniform way. The basic problem originates from the fact that the source systems were built separately and the need for information linking arose later. If the source systems were designed and built irrespectively of each other, the implemented database schemas and the semantics of the data may differ significantly. The main tasks of general data access are (i) to identify those data elements in the source systems that match each other and (ii) to formulate global database access methods that can handle the differences between the source systems. To solve the problem, we need an intermediate solution that retrieves data from the heterogeneous source systems and delivers them to the user.

There exist many solutions that aim to integrate data from separate relational systems into a new one. But the problem is further complicated if the source database systems are heterogeneous, namely if they implement different types of data models (e.g., the relational data model vs. different non-relational data models). While all relational database management systems are based on the well-known relational data model, NoSQL systems implement various semistructured data models (column stores, key-value stores, document databases, graph databases) [8,9]. As data models define how the logical structure of a database is modeled, they play a significant role in data access. The aim of this paper is to suggest a generalized solution for querying relational and non-relational database systems simultaneously. The contribution of this work is to present a novel data integration method and workflow, which implement data integration of different source systems in such a way that users do not need any programming skills. The central element of the proposed method is an intermediate data model presented in JSON.
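Purely as an illustration of what such a JSON-based intermediate description might look like, the following sketch composes a source-spanning query as a Python dictionary and prints it as JSON. Every field, system, and mapping name below is our own invention for the example; the paper's actual metamodel is defined in Section 4.

```python
import json

# Hypothetical query description: fetch the same logical fields from a
# relational and a document source, applying one filter to both.
# All keys are assumed names, not the paper's schema.
query_description = {
    "fields": ["patient_id", "name", "birth_date"],
    "filter": {"birth_date": {"gt": "1975-01-01"}},
    "sources": [
        {"type": "rdbms", "dbms": "MySQL", "table": "patients",
         "mapping": {"patient_id": "id", "name": "full_name"}},
        {"type": "document", "dbms": "MongoDB", "collection": "patients",
         "mapping": {"patient_id": "_id"}},
    ],
}
print(json.dumps(query_description, indent=2))
```

A description of this kind is what would let a non-programmer express a single query while per-source mappings absorb the structural differences.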
The organization of this paper is as follows. Section 2 gives a brief overview of data integration methods. In Section 3, the main problems of data integration processes are introduced. Section 4 presents the proposed solution from a theoretical aspect, and in Section 5 we introduce a data integration application that implements the proposed method. Finally, Section 6 concludes the paper.

2. Background

2.1. Data integration methods

Data integration can be performed on different abstraction levels [10], and according to these levels the applied techniques can be grouped as (i) manual integration, (ii) supplying the user with a common user interface, (iii) developing an integration application, (iv) using a middleware interface, (v) working out uniform data access, and (vi) building a common data storage [11]. The first four approaches offer simpler solutions, but each has its own significant drawbacks. Manual data integration is a very time-consuming and costly process. An integration application can access various databases and return the merged result to the user; the main drawback of this solution is that it can handle only those databases which are integrated into it, so as the number and variety of source systems grow, the application becomes more and more complex. In the case of common user interfaces, data is still presented separately, and homogenization and integration of the data have to be done by the users [11]. Middleware, such as OLE DB or .NET providers, or ODBC and Java Database Connectivity (JDBC) database access drivers, provides generalized functionality to query and manipulate data in different database management systems, but it cannot resolve the structural and semantic differences of the source databases.
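To make the middleware limitation concrete, here is a minimal, runnable sketch using Python's standard DB-API (the role ODBC/JDBC drivers play): one calling convention reaches two sources, yet reconciling their divergent schemas remains the application's job. The table and column names are invented for the example.

```python
import sqlite3

# Two source systems store the same concept ("patient") under different schemas.
hospital_a = sqlite3.connect(":memory:")
hospital_a.execute("CREATE TABLE patient (pid INTEGER, full_name TEXT, born TEXT)")
hospital_a.execute("INSERT INTO patient VALUES (1, 'Kis Anna', '1980-05-01')")

hospital_b = sqlite3.connect(":memory:")
hospital_b.execute(
    "CREATE TABLE persons (id INTEGER, family_name TEXT, given_name TEXT, birth_date TEXT)")
hospital_b.execute("INSERT INTO persons VALUES (7, 'Nagy', 'Peter', '1975-11-23')")

# The driver API is uniform across both connections ...
rows_a = hospital_a.execute("SELECT pid, full_name, born FROM patient").fetchall()
rows_b = hospital_b.execute(
    "SELECT id, family_name || ' ' || given_name, birth_date FROM persons").fetchall()

# ... but mapping both result sets onto one logical view is still manual work.
patients = [{"id": i, "name": n, "birth_date": b} for (i, n, b) in rows_a + rows_b]
print(patients)
```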
Building a common data storage and migrating all the required information into a new database provides a more complex solution. This process is usually performed programmatically and includes data extraction, transformation, and loading steps (the ETL process). The most important step of the ETL process is the data transformation, which maps the data types and structures of the source systems onto the data types and structure of the target system. Currently, due to the problems of the whole migration process (e.g., the diversity of source storage structures and data types, data inconsistency, and the lack of management strategies), migration is typically done manually or in a semi-automatic way [12]. Above all, the most important drawback of the data migration process is that it requires storage capacity, and the migrated data set does not automatically follow changes to the original data in the source systems. If we want to follow changes of the data stored in the source systems, the migration process must be repeated frequently, and so data migration becomes very time-consuming and costly.

Uniform data access provides a logical integration of data and eliminates many of the aforementioned disadvantages. In this case, the required information is not stored in a target database, so users and developers do not have to deal with the […] manner [16]. Such ontology-based data integration methods are widely used in commerce and e-commerce [17–19], and furthermore in geospatial information systems [20,21], web search [22,23], and the life sciences [3,24,25].

2.2. Data integration solutions

Generally, data integration methods principally aim to integrate data arising from different SQL systems. However, the problem of data integration is further complicated by the fact that data can be stored in database management systems that implement different data models (e.g., relational database management systems vs. different NoSQL systems). As the role of NoSQL systems has increased significantly in recent years, the demand for data interoperability of NoSQL systems has also increased. In the last few years, only a few case studies and methodological recommendations have been published that aim to migrate data stored in NoSQL systems (e.g., [26–28]). In [29], Atzeni and his co-authors present a very impressive work that offers a uniform interface called Save Our Systems (SOS), which allows access to data stored in different NoSQL systems (HBase, Redis, and MongoDB). The architecture of the proposed system contains a Common Interface, which implements uniform data retrieval and manipulation methods, and a Common Data Model, which is responsible for managing the translations from the uniform data model to the specific structures of the NoSQL systems. In this solution, simultaneous data access is implemented instead of data migration. The main drawback of the proposed method is that the expressive power of the implemented methods is limited exclusively to single objects of the NoSQL systems. Furthermore, this solution does not provide users with data migration (e.g., schema matching) facilities.

To the best of our knowledge, only a few works currently exist that query data from NoSQL and relational database systems simultaneously. In [30], the Unified Modelset Definition is given, which provides a basis for querying data from relational database management systems (RDBMS) and NoSQL systems. OPEN-PaaS-DataBase API (ODBAPI) [31] is a unified REST-based API that enables the execution of CRUD operations on relational and NoSQL data stores. It provides users with a set of operators to manage different types of databases, but data migration and JOIN functions are not implemented in this solution. Roijackers and his colleagues [32] present a framework to seamlessly bridge the gap between SQL and document stores: in the proposed method, documents are logically incorporated into the relational database, and querying is performed via an extended SQL language.
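The pattern these systems, and the paper's own proposal, share can be sketched in a few lines: push the same filter to each source independently and concatenate the result sets on the client, with no join or aggregate spanning the systems. The sqlite3 part below runs as-is; the MongoDB part assumes a locally running server and the pymongo driver, and all database, collection, and field names are invented for the example.

```python
import sqlite3
from pymongo import MongoClient

# Relational source (Python standard library).
rdb = sqlite3.connect(":memory:")
rdb.execute("CREATE TABLE patients (id INTEGER, name TEXT, age INTEGER)")
rdb.executemany("INSERT INTO patients VALUES (?, ?, ?)",
                [(1, "Kis Anna", 44), (2, "Nagy Peter", 29)])
sql_rows = [{"id": i, "name": n, "age": a}
            for (i, n, a) in rdb.execute(
                "SELECT id, name, age FROM patients WHERE age > 30")]

# Document source (assumed: MongoDB on localhost, pymongo installed).
mongo = MongoClient("mongodb://localhost:27017")
doc_rows = list(mongo.clinic.patients.find(
    {"age": {"$gt": 30}},                       # same filter, Mongo syntax
    {"_id": 0, "id": 1, "name": 1, "age": 1}))  # project comparable fields

# Each source is filtered separately; the results are merely concatenated.
# No cross-source join or aggregate is attempted.
print(sql_rows + doc_rows)
```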