HP NonStop SQL/MX Release 3.1 Query Guide

Total Pages: 16

File Type: PDF, Size: 1020 KB

Abstract
This guide describes how to understand query execution plans and write optimal queries for an HP NonStop™ SQL/MX database. It is intended for database administrators and application developers who use NonStop SQL/MX to query an SQL/MX database and who have a particular interest in issues related to query performance.

Product Version
NonStop SQL/MX Release 3.1

Supported Release Version Updates (RVUs)
This publication supports J06.12 and all subsequent J-series RVUs and H06.23 and all subsequent H-series RVUs, until otherwise indicated by its replacement publications.

Part Number: 663851-001
Published: October 2011

Document History
  Part Number 640323-001, NonStop SQL/MX Release 3.0, published February 2011
  Part Number 663851-001, NonStop SQL/MX Release 3.1, published October 2011

Legal Notices
Copyright 2011 Hewlett-Packard Development Company, L.P.

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Export of the information contained in this publication may require authorization from the U.S. Department of Commerce.

Microsoft, Windows, and Windows NT are U.S. registered trademarks of Microsoft Corporation. Intel, Itanium, Pentium, and Celeron are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Java is a registered trademark of Oracle and/or its affiliates. Motif, OSF/1, UNIX, X/Open, and the "X" device are registered trademarks, and IT DialTone and The Open Group are trademarks, of The Open Group in the U.S. and other countries. Open Software Foundation, OSF, the OSF logo, OSF/1, OSF/Motif, and Motif are trademarks of the Open Software Foundation, Inc. OSF MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THE OSF MATERIAL PROVIDED HEREIN, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. OSF shall not be liable for errors contained herein or for incidental consequential damages in connection with the furnishing, performance, or use of this material.

© 1990, 1991, 1992, 1993 Open Software Foundation, Inc. This documentation and the software to which it relates are derived in part from materials supplied by the following: © 1987, 1988, 1989 Carnegie-Mellon University. © 1989, 1990, 1991 Digital Equipment Corporation. © 1985, 1988, 1989, 1990 Encore Computer Corporation. © 1988 Free Software Foundation, Inc. © 1987, 1988, 1989, 1990, 1991 Hewlett-Packard Company. © 1985, 1987, 1988, 1989, 1990, 1991, 1992 International Business Machines Corporation. © 1988, 1989 Massachusetts Institute of Technology. © 1988, 1989, 1990 Mentat Inc. © 1988 Microsoft Corporation.
© 1987, 1988, 1989, 1990, 1991, 1992 SecureWare, Inc. © 1990, 1991 Siemens Nixdorf Informationssysteme AG. © 1986, 1989, 1996, 1997 Sun Microsystems, Inc. © 1989, 1990, 1991 Transarc Corporation. This software and documentation are based in part on the Fourth Berkeley Software Distribution under license from The Regents of the University of California. OSF acknowledges the following individuals and institutions for their role in its development: Kenneth C.R.C. Arnold, Gregory S. Couch, Conrad C. Huang, Ed James, Symmetric Computer Systems, Robert Elz. © 1980, 1981, 1982, 1983, 1985, 1986, 1987, 1988, 1989 Regents of the University of California.

Printed in the US

Contents
  Index
  Examples
  Figures
  Tables
  Legal Notices

What’s New in This Manual  ix
  Manual Information  ix
  New and Changed Information  ix
    Changes to the SQL/MX 3.1 manual  ix
    Changes to the H06.22/J06.11 manual  ix
    Changes to the H06.20/J06.09 manual  x
    Changes to the H06.19/J06.08 manual  x
    Changes to the H06.16/J06.05 manual  x
    Changes to the H06.05 manual  xi

About This Manual  xiii
  Audience  xiii
  Organization  xiii
  Related Documentation  xiv
  Notation Conventions  xvii
    Hypertext Links  xvii
    General Syntax Notation  xvii
    Change Bar Notation  xix

1. Compiling and Executing a Query
  Overview  1-1
  How the Compiler Works  1-2
  Compilation Steps  1-2
  Parsing, Binding, and Normalizing  1-3
  Query Plan Caching  1-5
  Improving Query Performance  1-6
  Factors That Can Affect Compile Time  1-19
  How the Executor Processes the Plan  1-19

2. Accessing SQL/MX Data
  Access Methods  2-1
  Storage-Key Access  2-1
  Index-Only Access  2-2
  Alternate Index Access  2-4
  Full Table Scan  2-5
  Understanding Unexpected Access Paths  2-9
  MultiDimensional Access Method (MDAM)  2-13
  Specifying MDAM  2-13
  Comparing MDAM With Single Subset Access  2-14
  How MDAM Processes Queries  2-15
  Influencing the Optimizer to Use MDAM  2-17
  Controlling the Number of Key Columns Used by MDAM  2-18
  MDAM’s Use of DENSE and SPARSE Algorithms  2-18

3. Keeping Statistics Current
  Histogram Statistics  3-1
  Updating Histogram Statistics  3-2
  Sampling and UPDATE STATISTICS  3-5
  Performance Issues and Accuracy in Sampling  3-7
  Collecting Statistics for Multiple Columns  3-8
  Testing the Results of UPDATE STATISTICS  3-8
  Testing the Results for SQL/MP Tables  3-8
  Testing the Results for SQL/MX Tables  3-9

4. Reviewing Query Execution Plans
  Displaying Execution Plans  4-1
  Using the EXPLAIN Function  4-1
  Using the EXPLAIN Statement Shortcut  4-2
  Using the Visual Query Planner  4-2
  The Optimizer and Executor  4-3
  Description of the EXPLAIN Function Results  4-3
  Displaying Selected Columns of the Execution Plan  4-5
  Extracting EXPLAIN Output From Embedded SQL Programs  4-6
  Using the EXPLAIN Statement to Review the Execution Plan  4-6
  Optimization Tips  4-9
  Verifying DAM Access  4-13
  Using the Visual Query Planner  4-14
  Visual Query Planner Requirements  4-14
  Getting Help for Visual Query Planner  4-14
  Graphically Displaying Execution Plans  4-14
  Reviewing Run-Time Statistics  4-21
  Simple Query Example  4-22
  Using Measure  4-22

5. Forcing Execution Plans
  Why Force a Plan?  5-1
  Checklist for Forcing Plans  5-2
  Displaying the Optimized Plan  5-2
  Reviewing the Optimized Plan  5-3
  Translating the Operator Tree to Text Format  5-5
  Using SHOWSHAPE and SET SHOWSHAPE to View the Text Format  5-5
  Using Visual Query Planner to Get the Shape  5-6
  Manually Writing the Shape  5-7
  Writing the Forced Shape Statement  5-8
  Scope of CONTROL QUERY SHAPE  5-8
  Shaping Portions of an Operator Tree  5-9
  Using Logical and Physical Specifications  5-9
  Forcing Shapes on Views  5-10
  What Happens if No Plan Is Returned?  5-10
  Migrating Forced Shapes From NonStop SQL/MP  5-10
  Forcing Group By Operations to the Data Access Manager  5-10
  Forcing Parallel Plans  5-13

6. Query Plan Caching
  Overview  6-1
  Types of Cacheable Queries  6-2
  Examples of Cacheable Expressions  6-3
  Examples of Queries That Are Not Cacheable  6-4
  Choosing an Appropriate Size for the Query Cache  6-6
  Query Plan Caching Statistics  6-6
  SYSTEM_DEFAULTS Table Settings for Query Plan Caching Attributes  6-7
  QUERYCACHE Function  6-9
  QUERYCACHEENTRIES Function  6-11
  Querying the Query Plan Caching Virtual Tables  6-13
  Reviewing the Query Plan Caching Statistics With the DISPLAY_QC and DISPLAY_QC_ENTRIES Commands  6-15

7. SQL/MX Operators
  Operators  7-5
  BLOCKED_UNION Operator  7-5
  CALL Operator  7-6
  CURSOR_DELETE Operator  7-8
  CURSOR_UPDATE Operator  7-9
  ESP_ACCESS Operator  7-11
  ESP_EXCHANGE Operator  7-12
  EXPLAIN Operator  7-16
  EXPR Operator  7-16
  EXPLAIN_CMD Operator  7-17
  FILE_SCAN Operator  7-17
  FILE_SCAN_UNIQUE Operator  7-19
  FirstN Operator  7-22
  HASH_GROUPBY Operator  7-22
  HASH_PARTIAL_GROUPBY_LEAF Operator  7-24
  HASH_PARTIAL_GROUPBY_ROOT Operator  7-25
  HYBRID_HASH_ANTI_SEMI_JOIN Operator  7-26
  HYBRID_HASH_JOIN Operator  7-27
  HYBRID_HASH_SEMI_JOIN Operator  7-28
  INDEX_SCAN Operator  7-29
  INDEX_SCAN_UNIQUE Operator  7-31
  INSERT Operator  7-33
  INSERT_VSBB Operator  7-34
  LEFT_HYBRID_HASH_JOIN Operator  7-35
  LEFT_MERGE_JOIN Operator  7-36
  LEFT_NESTED_JOIN Operator  7-38
  LEFT_ORDERED_HASH_JOIN Operator  7-39
  MATERIALIZE Operator  7-40
  MERGE_ANTI_SEMI_JOIN Operator  7-41
  MERGE_JOIN Operator  7-42
  MERGE_SEMI_JOIN Operator  7-44
  MERGE_UNION Operator  7-45
  MultiUnion Operator  7-46
  NESTED_ANTI_SEMI_JOIN Operator  7-46
  NESTED_JOIN Operator  7-47
  NESTED_SEMI_JOIN Operator  7-49
  NEXTVALUEFOR Operator  7-50
  ORDERED_HASH_JOIN Operator  7-52
  ORDERED_HASH_SEMI_JOIN Operator  7-53
  ORDERED_UNION Operator  7-54
  PACK Operator

8. Parallelism
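To make the topics in chapters 3 and 4 concrete, here is a minimal sketch of the workflow the guide covers: keeping histogram statistics current and then displaying the execution plan of a prepared statement with the EXPLAIN function. This is an illustration only, not an excerpt from the manual; the EMPLOYEE table is hypothetical, and exact syntax, options, and EXPLAIN result columns are described in the guide itself.

```sql
-- Sketch only; see chapters 3 and 4 of the guide for exact syntax and options.
-- EMPLOYEE is a hypothetical table used for illustration.

-- Keep histogram statistics current so the optimizer has good cardinality estimates.
UPDATE STATISTICS FOR TABLE employee ON EVERY COLUMN;

-- Prepare a query, then display its execution plan with the EXPLAIN function.
PREPARE s1 FROM
  SELECT dept, COUNT(*)
  FROM employee
  WHERE salary > 50000
  GROUP BY dept;

SELECT * FROM TABLE(EXPLAIN(NULL, 'S1'));
```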
Recommended publications
  • SAQE: Practical Privacy-Preserving Approximate Query Processing for Data Federations
SAQE: Practical Privacy-Preserving Approximate Query Processing for Data Federations
Johes Bater (Northwestern University), Yongjoo Park (University of Illinois (UIUC)), Xi He (University of Waterloo), Xiao Wang (Northwestern University), Jennie Rogers (Northwestern University)

ABSTRACT: A private data federation enables clients to query the union of data from multiple data providers without revealing any extra private information to the client or any other data providers. Unfortunately, this strong end-to-end privacy guarantee requires cryptographic protocols that incur a significant performance overhead, as high as 1,000× compared to executing the same query in the clear. As a result, private data federations are impractical for common database workloads. This gap reveals the following key challenge in a private data federation: offering significantly fast and accurate query answers without compromising strong end-to-end privacy. To address this challenge, we propose SAQE, the Secure Approximate Query ...

1. INTRODUCTION: Querying the union of multiple private data stores is challenging due to the need to compute over the combined datasets without data providers disclosing their secret query inputs to anyone. Here, a client issues a query against the union of these private records and he or she receives the output of their query over this shared data. Presently, systems of this kind use a trusted third party to securely query the union of multiple private datastores. For example, some large hospitals in the Chicago area offer services for a certain percentage of the residents; if we can query the union of these databases, it may serve as invaluable resources for accurate diagnosis, informed immunization, timely epidemic control, and so on.
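For intuition only (this is an illustration, not taken from the paper), the hospital scenario amounts to running an ordinary aggregate over the union of the providers' tables; the difficulty the paper addresses is computing such a query without any provider revealing its rows. The schema and table names below are hypothetical.

```sql
-- Hypothetical per-hospital tables with identical schemas (patient_id, diagnosis, visit_date).
-- In a private data federation, the union below must be computed without
-- either hospital disclosing its underlying rows to the client or to each other.
SELECT diagnosis, COUNT(*) AS cases
FROM (
    SELECT patient_id, diagnosis, visit_date FROM hospital_a.visits
    UNION ALL
    SELECT patient_id, diagnosis, visit_date FROM hospital_b.visits
) AS federated_visits
WHERE visit_date >= DATE '2020-01-01'
GROUP BY diagnosis;
```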
  • Veritas: Shared Verifiable Databases and Tables in the Cloud
Veritas: Shared Verifiable Databases and Tables in the Cloud
Lindsey Allen, Panagiotis Antonopoulos, Arvind Arasu, Johannes Gehrke, Joachim Hammer, James Hunter, Raghav Kaushik, Donald Kossmann, Jonathan Lee, Ravi Ramamurthy, Srinath Setty, Jakub Szymaszek, Ramarathnam Venkatesan (Microsoft Corporation); Alexander van Renen (Technische Universität München)

ABSTRACT: In this paper we introduce shared, verifiable database tables, a new abstraction for trusted data sharing in the cloud.

KEYWORDS: Blockchain, data management

1 INTRODUCTION: Our economy depends on interactions – buyers and sellers, suppliers and manufacturers, professors and students, etc. Consider the scenario where there are two companies A and B, where B supplies widgets to A. A and B are exchanging data about the latest price for widgets and dates of when widgets were shipped. ... A recent class of new technology under the umbrella term "blockchain" [11, 13, 22, 28, 33] promises to solve this conundrum. For example, companies A and B could write the state of these shared tables into a blockchain infrastructure (such as Ethereum [33]). The blockchain then gives them transaction properties, such as atomicity of updates, durability, and isolation, and the blockchain allows a third party to go through its history and verify the transactions. Both companies A and B would have to write their updates into the blockchain, wait until they are committed, and also read any state changes from the blockchain as a means of sharing data across A and B. To implement permissioning, A and B can use a permissioned blockchain variant that allows transactions only from a set of well-defined participants; e.g., Ethereum [33].
  • SQL Database Management Portal
APPENDIX A: SQL Database Management Portal

This appendix introduces you to the online SQL Database Management Portal built by Microsoft. It is a web-based application that allows you to design, manage, and monitor your SQL Database instances. Although the current version of the portal does not support all the functions of SQL Server Management Studio, the portal offers some interesting capabilities unique to SQL Database.

Launching the Management Portal
You can launch the SQL Database Management Portal (SDMP) from the Windows Azure Management Portal. Open a browser and navigate to https://manage.windowsazure.com, and then log in with your Live ID. Once logged in, click SQL Databases on the left and click on your database from the list of available databases. This brings up the database dashboard, as seen in Figure A-1. To launch the management portal, click the Manage icon at the bottom. (Figure A-1. SQL Database dashboard in Windows Azure Management Portal.)

Note: You can also access the SDMP directly from a browser by typing https://sqldatabasename.database.windows.net, where sqldatabasename is the server name of your SQL Database.

A new web page opens up that prompts you to log in to your SQL Database server. If you clicked through the Windows Azure Management Portal, the database name will be automatically filled and read-only. If you typed the URL directly in a browser, you will need to enter the database name manually. Enter a user name and password; then click Log on (Figure A-2).
  • Plan Stitch: Harnessing the Best of Many Plans
Plan Stitch: Harnessing the Best of Many Plans
Bailu Ding, Sudipto Das, Wentao Wu, Surajit Chaudhuri, Vivek Narasayya (Microsoft Research, Redmond, WA 98052, USA)

ABSTRACT: Query performance regression due to the query optimizer selecting a bad query execution plan is a major pain point in production workloads. Commercial DBMSs today can automatically detect and correct such query plan regressions by storing previously-executed plans and reverting to a previous plan which is still valid and has the least execution cost. Such reversion-based plan correction has relatively low risk of plan regression since the decision is based on observed execution costs. However, this approach ignores potentially valuable information of efficient subplans collected from other previously-executed plans. In this paper, we propose a novel technique, Plan Stitch, that automatically and opportunistically combines efficient subplans of previously-executed ...

... optimizer chooses multiple query plans for the same query, especially for stored procedures and query templates. The query optimizer often makes good use of the newly-created indexes and statistics, and the resulting new query plan improves in execution cost. At times, however, the latest query plan chosen by the optimizer has significantly higher execution cost compared to previously-executed plans, i.e., a plan regresses [5, 6, 31]. The problem of plan regression is painful for production applications as it is hard to debug. It becomes even more challenging when plans change regularly on a large scale, such as in a cloud database service with automatic index tuning. Given such a large-scale production environment, detecting and correcting plan regressions in an automated manner becomes crucial.
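The reversion-based plan correction the abstract contrasts against is roughly what SQL Server exposes through the Query Store. As a hedged illustration of that baseline (this is not Plan Stitch itself, and the query_id/plan_id values are hypothetical), a DBA could inspect previously executed plans and force a cheaper one:

```sql
-- Illustration of reversion-based plan correction with SQL Server Query Store,
-- the baseline the paper improves on. IDs below are hypothetical.
SELECT q.query_id, p.plan_id, rs.avg_duration
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p          ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;

-- Revert query 42 to a previously executed, cheaper plan (here plan_id 3).
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 3;
```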
  • Exploring Query Re-Optimization in a Modern Database System
Exploring Query Re-Optimization in a Modern Database System
Master Thesis, Computing Science
Radboud University Nijmegen, Institute of Computing and Information Sciences
Author: Laurens N. Kuiper BSc
Supervisor: Prof. dr. ir. Arjen P. de Vries
Second reader: Prof. dr. ir. Djoerd Hiemstra
May 2021

Abstract: The standard approach to query processing in database systems is to optimize before executing. When available statistics are accurate, optimization yields the optimal plan, and execution is as quick as it can be. However, when queries become more complex, the quality of statistics degrades, which leads to sub-optimal query plans, sometimes up to several orders of magnitude worse. Improving statistics for these queries is infeasible because the cardinality estimation error propagates exponentially through the plan. This problem can be alleviated through re-optimization, which is to progressively optimize the query, executing parts of the plan at a time, improving plan quality at the cost of additional overhead. In this work, re-optimization is implemented and simulated in DuckDB, a main-memory database system designed for analytical workloads, and evaluated on the Join Order Benchmark. Total plan cost of the benchmark is reduced by up to 44% using this method, and a reduction in end-to-end query latency of up to 20% is observed using only simple re-optimization schemes, showing the potential of this approach.

Acknowledgements: The past year has been a challenging and exciting year. From breaking my leg to starting a job, and finally finishing my thesis. I would like to thank everyone who supported me during this period, and helped me get here.
  • QueryGuard: Privacy-Preserving Latency-Aware Query Optimization for Edge Computing
QueryGuard: Privacy-preserving Latency-aware Query Optimization for Edge Computing
Runhua Xu, Balaji Palanisamy, and James Joshi (School of Computing and Information, University of Pittsburgh, Pittsburgh, PA, USA)

Abstract: The emerging edge computing paradigm has enabled applications having low response time requirements to meet the quality of service needs of applications by moving the computations to the edge of the network that is geographically closer to the end-users and end-devices. Despite the low latency advantages provided by the edge computing model, there are significant privacy risks associated with the adoption of edge computing services for applications dealing with sensitive data. In contrast to cloud data centers where system infrastructures are managed through strict and regularized policies, edge computing nodes are scattered geographically and may not have the same degree of regulatory and monitoring oversight. This can lead to higher privacy risks for the data processed and stored at ...

... adopting distributed database techniques in edge computing brings new challenges and concerns. First, in contrast to cloud data centers where cloud servers are managed through strict and regularized policies, edge nodes may not have the same degree of regulatory and monitoring oversight. This may lead to higher privacy risks compared to that in cloud servers. In particular, when dealing with a join query, traditional distributed query processing techniques sometimes ship selected or projected data to different nodes, some of which may be untrusted or semi-trusted. Thus, such techniques may lead to greater disclosure of private information within the edge nodes, thus making them less trusted.
  • Lesson 4: Optimize a Query Using the Sybase IQ Query Plan
Getting Started with SAP Sybase IQ Column Store Analytics Server
Lesson 4: Optimize a Query using the SAP Sybase IQ Query Plan
Author: Courtney Claussen – SAP Sybase IQ Technical Evangelist
Contributor: Bruce McManus – Director of Customer Support at Sybase

Copyright (C) 2012 Sybase, Inc. All rights reserved. Unpublished rights reserved under U.S. copyright laws. Sybase and the Sybase logo are trademarks of Sybase, Inc. or its subsidiaries. SAP and the SAP logo are trademarks or registered trademarks of SAP AG in Germany and in several other countries all over the world. All other trademarks are the property of their respective owners. (R) indicates registration in the United States. Specifications are subject to change without notice.

Table of Contents
  1. Introduction
  2. SAP Sybase IQ Query Processing
  3. SAP Sybase IQ Query Plans
  4. What a Query Plan Looks Like
  5. What a Query Plan Will Tell You
    5.1 Query Tree Nodes and Node Detail
      5.1.1 Root Node Details
  • Query Processing for SQL Updates
Query Processing for SQL Updates
César A. Galindo-Legaria, Stefano Stefani, Florian Waas (Microsoft Corp., One Microsoft Way, Redmond, WA 98052)

ABSTRACT: A rich set of concepts and techniques has been developed in the context of query processing for the efficient and robust execution of queries. So far, this work has mostly focused on issues related to data-retrieval queries, with a strong backing on relational algebra. However, update operations can also exhibit a number of query processing issues, depending on the complexity of the operations and the volume of data to process. Such issues include lookup and matching of values, navigational vs. set-oriented algorithms and trade-offs between plans that do serial or random I/Os. In this paper we present an overview of the basic techniques used to support SQL DML (Data Manipulation Language) in Microsoft SQL Server.

... tradeoffs between plans that do serial or random I/Os. This motivates the integration of update processing in the general framework of query processing. To be successful, this integration needs to model update processing in a suitable way and consider the special requirements of updates. One of the few papers in this area is [2], which deals with delete operations, but there is little documented on the integration of updates with query processing, to the best of our knowledge. In this paper, we present an overview of the basic concepts used to support SQL Data Manipulation Language (DML) by the query processor in Microsoft SQL Server. We focus on the query processing aspects of the problem, how data is modeled, primitive opera...
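As a simple illustration (not drawn from the paper), even a routine DML statement raises exactly these query-processing choices: the update below can be executed per row via navigational index lookups or as a set-oriented join against the subquery, with different serial versus random I/O patterns. The table and column names are hypothetical.

```sql
-- Hypothetical tables: orders(order_id, status, customer_id) and customers(customer_id, region).
-- The optimizer must choose how to locate the qualifying rows (navigational index
-- lookups vs. a set-oriented scan/join) before applying the modification.
UPDATE orders
SET status = 'on_hold'
WHERE customer_id IN (
    SELECT customer_id
    FROM customers
    WHERE region = 'EMEA'
);
```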
  • CPS216: Advanced Database Systems Notes 03:Query Processing
CPS216: Advanced Database Systems
Notes 03: Query Processing (Overview, contd.)
Shivnath Babu

These lecture notes trace the path of a SQL query through a DBMS; the original slides are mostly diagrams, so only the recoverable outline is kept here.

Overview of SQL query processing: SQL query -> parse -> parse tree -> query rewriting (using rewrite rules and statistics) -> logical query plan -> physical plan generation -> physical query plan -> query execution -> result.

Query rewriting example: push the selection on R.A below the join, rewriting
  π B,D ( σ R.A = "c" AND R.C = S.C ( R × S ) )
into
  π B,D ( σ R.C = S.C ( σ R.A = "c" (R) × S ) ).
This example is revisited toward the end of the lecture.

Physical plan generation: starting from the best logical plan, enumerate the possible physical plans P1, P2, ..., Pn (for example, a hash join of an index scan of R with a table scan of S), compute their costs C1, C2, ..., Cn, and pick the plan with minimum cost.

Plans for query execution, roadmap: path of a SQL query; operator trees; physical vs. logical plans; plumbing: materialization vs. pipelining.

Modern DBMS architecture: applications issue SQL to the DBMS; the parser produces a logical query plan, the query optimizer turns it into a physical query plan, and the query executor runs it through access-method API calls to the storage manager, which in turn issues storage system and file system API calls to the OS and disk(s).
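As a rough illustration (not part of the slides), the relational-algebra rewrite above corresponds to a SQL query of the following form, where R and S are assumed to be tables with columns A, B, C and C, D respectively; the rewrite pushes the filter on R.A below the join so it is applied during the scan of R.

```sql
-- Hypothetical tables R(A, B, C) and S(C, D), used only to mirror the algebra example.
-- Logically: project B and D after joining R and S on C and filtering R.A = 'c'.
SELECT R.B, S.D
FROM R
JOIN S ON R.C = S.C   -- join predicate R.C = S.C
WHERE R.A = 'c';      -- a rewriter can push this selection down to the scan of R
```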
  • Protecting User Privacy Using Declarative Preferences During Distributed Query Processing
Don’t Reveal My Intension: Protecting User Privacy Using Declarative Preferences during Distributed Query Processing
Nicholas L. Farnan (1), Adam J. Lee (1), Panos K. Chrysanthis (1), and Ting Yu (2)
(1) Department of Computer Science, University of Pittsburgh
(2) Department of Computer Science, North Carolina State University

Abstract. In a centralized setting, the declarative nature of SQL is a major strength: a user can simply describe what she wants to retrieve, and need not worry about how the resulting query plan is actually generated and executed. However, in a decentralized setting, two query plans that produce the same result might actually reveal vastly different information about the intensional description of a user’s query to the servers participating in its evaluation. In cases where a user considers portions of her query to be sensitive, this is clearly problematic. In this paper, we address the specification and enforcement of querier privacy constraints on the execution of distributed database queries. We formalize a notion of intensional query privacy called (I,A)-privacy, and extend the syntax of SQL to allow users to enforce strict (I,A)-privacy constraints or partially ordered privacy/performance preferences over the execution of their queries.

1 Introduction
Applications are increasingly becoming more decentralized, relying on data exchange between autonomous and distributed data stores. This reliance is in some cases a design decision motivated by the need for scalability; for example, when user demand for a service exceeds what a system at a single site would be able to provide and replication becomes a necessity. In other instances, it is a fact of life that must be dealt with.
  • AnyLog: A Grand Unification of the Internet of Things
AnyLog: A Grand Unification of the Internet of Things
Daniel Abadi (University of Maryland), Owen Arden (University of California, Santa Cruz), Faisal Nawab (University of California, Santa Cruz), Moshe Shadmon (AnyLog)

ABSTRACT: AnyLog is a decentralized platform for data publishing, sharing, and querying IoT (Internet of Things) data that enables an unlimited number of independent participants to publish and access the contents of IoT datasets stored across the participants. AnyLog provides decentralized publishing and querying functionality over structured data in an analogous fashion to how the world wide web (WWW) enables decentralized publishing and accessing of unstructured data. However, AnyLog differs from the traditional WWW in the way that it provides incentives and financial reward for performing tasks that are critical to the well-being of the system as a whole, including contribution, integration, storing, and processing of data, as well as protecting the confidentiality, integrity, and availability of that data.

... specify the permissions of that data. Some data will be published with open access—in which case it will be queryable by any user of AnyLog. Other data will be published in encrypted form, in which case only users with access to the decryption key may access it. AnyLog is designed to provide a powerful query interface to the entire wealth of data produced by IoT devices. Questions such as "What was the maximum temperature reported in Palo Alto on June 21, 2008?" or "What was the difference in near accidents between self-driving cars that used deep-learning model X vs. self-driving cars that used deep-learning model Y?" or "How many cars passed the toll bridge in the last hour?" or "How many malfunctions were reported by a turbine of a particular model in all deployments in the last year?" can all be expressed using clean and clearly specified ...
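As a loose illustration (not taken from the paper), the first of these natural-language questions might map onto a SQL-style query over a hypothetical sensor_readings table:

```sql
-- Hypothetical schema: sensor_readings(city, metric, value, reading_time).
-- "What was the maximum temperature reported in Palo Alto on June 21, 2008?"
SELECT MAX(value) AS max_temperature
FROM sensor_readings
WHERE city = 'Palo Alto'
  AND metric = 'temperature'
  AND reading_time >= DATE '2008-06-21'
  AND reading_time <  DATE '2008-06-22';
```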
  • DAX Query Plans Introduction to Performance Analysis and DAX Optimizations Using Query Plans
DAX Query Plans: Introduction to performance analysis and DAX optimizations using query plans
Author: Alberto Ferrari
Published: Version 1.0, Revision 2 – July 17, 2012

Summary: This paper is an introduction to query optimization of DAX code through the usage of DAX query plans. It uses the Contoso database, which you can download from here: http://sdrv.ms/131eTUK, and the Tabular version of AdventureWorks, available on CodePlex.

Acknowledgments: I would like to thank the peer reviewers that helped me improve this document: Marco Russo, Chris Webb, Greg Galloway, Ashvini Sharma, Owen Graupman, and all our ssas-insiders friends. I would also like to give a special thanks to T.K. Anand, Marius Dumitru, Cristian Petculescu, Jeffrey Wang, Ashvini Sharma, and Akshai Mirchandani, who constantly answer all of our fancy questions about SSAS.

BI professionals always face the need to produce fast queries and measures. In order to obtain the best performance, a correct data model is needed but, once the model is in place, to proceed further with improvements, DAX optimization is the next step. Optimizing DAX requires some knowledge of the xVelocity engine internals and the ability to correctly read and interpret a DAX query plan. In this paper we focus on very basic optimizations and guide you through the following topics:
  • How to find the DAX query plan
  • The difference between the logical and physical query plan
  • A brief description of the difference between formula engine and storage engine
  • Some first insights into the query plan operators
The goal of the paper is not that of showing complex optimization techniques.