Approximate Partition Selection for Big-Data Workloads Using Summary Statistics

Kexin Rong†∗, Yao Lu†, Peter Bailis∗, Srikanth Kandula†, Philip Levis∗
†Microsoft, ∗Stanford
∗{krong, pbailis, [email protected]}, †{lu.yao, [email protected]}

ABSTRACT

Many big-data clusters store data in large partitions that support access at a coarse, partition-level granularity. As a result, approximate query processing via row-level sampling is inefficient, often requiring reads of many partitions. In this work, we seek to answer queries quickly and approximately by reading a subset of the data partitions and combining partial answers in a weighted manner, without modifying the data layout. We illustrate how to efficiently perform this query processing using a set of pre-computed summary statistics, which inform the choice of partitions and weights. We develop novel means of using the statistics to assess the similarity and importance of partitions. Our experiments on several datasets and data layouts demonstrate that, to achieve the same relative error as uniform partition sampling, our techniques offer from 2.7× to 70× reduction in the number of partitions read, and the statistics stored per partition require fewer than 100KB.

Figure 1: Our system PS3 makes novel use of summary statistics to perform importance- and similarity-aware sampling of partitions. (The figure shows an offline Stats Builder computing precomputed and query-specific statistics over partitions 1..N, and an online Partition Picker that, given a query such as SELECT X, SUM(Y) and a budget of 10 partitions, performs weighted selection, e.g., partition 1 with weight 10, partition 3 with weight 2.)

PVLDB Reference Format:
Kexin Rong, Yao Lu, Peter Bailis, Srikanth Kandula, Philip Levis. Approximate Partition Selection for Big-Data Workloads using Summary Statistics. PVLDB, 13(11): 2606-2619, 2020.
DOI: https://doi.org/10.14778/3407790.3407848

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. For any use beyond those covered by this license, obtain permission by emailing [email protected]. Copyright is held by the owner/author(s). Publication rights licensed to the VLDB Endowment.
Proceedings of the VLDB Endowment, Vol. 13, No. 11. ISSN 2150-8097.
DOI: https://doi.org/10.14778/3407790.3407848

1. INTRODUCTION

Approximate Query Processing (AQP) systems allow users to trade off between accuracy and query execution speed. In applications such as data exploration and visualization, this trade-off is not only acceptable but often desirable. Sampling is a common approximation technique, wherein the query is evaluated on a subset of the data, and much of the literature focuses on row-level samples [12, 15, 22].

When data is stored in media that do not support random access (e.g., flat files in data lakes and columnar stores [1, 52]), constructing a row-level sample can be as expensive as scanning the entire dataset. For example, if data is split into partitions with 100 rows, a 1% uniform row sample would in expectation require fetching 64% (1 − 0.99^100) of the partitions; a 10% uniform row sample would touch almost all partitions. As a result, recent work from a production AQP system shows that row-level sampling only offers significant speedups for complex queries where substantial query processing remains after the sampling [41].

In contrast to row-level sampling, the I/O cost of constructing a partition-level sample is proportional to the sampling fraction.¹ In our example above, a 1% partition-level sample would read only 1% of the data. We are especially interested in big-data clusters, where data is stored in chunks of tens to hundreds of megabytes, instead of disk blocks or pages, which are typically a few kilobytes [32, 52]. Partition-level sampling is already used in production due to its appealing performance: commercial databases create statistics using partition samples [5, 28], and several big-data stores allow sampling partitions of tables [2, 6, 8].

However, a key challenge remains: how to construct partition-level samples that answer a given query accurately. Since all or none of the rows in a partition are included in the sample, correlation between rows (e.g., due to layout) can lead to inaccurate answers. A uniformly random partition-level sample does not make a representative sample of the dataset unless the rows are randomly distributed among partitions [23], which happens rarely in practice [19]. In addition, even a uniform random sample of rows can miss rare groups in the answer, or miss the rows that contribute substantially to SUM-like aggregates. It is not known how to compute stratified [12] or measure-biased [30] samples over partitions, which help with queries with GROUP BYs and complex aggregates.

In this work, we introduce PS3 (Partition Selection with Summary Statistics), a system that supports AQP via weighted partition selection (Figure 1). Our primary use case is in large-scale production query processing systems such as Spark [14], F1 [51], and SCOPE [20], where queries only read data and datasets are bulk appended. Our goal is to minimize the approximation error given a sampling budget, or fraction of data that can be read. Motivated by observations from production clusters at Microsoft and in the literature that many datasets remain in the order that they were ingested [42], PS3 does not require any specific layout or re-partitioning of data. Instead of storing precomputed samples [13, 15, 21], which requires significant storage budgets to offer good approximations for a wide range of queries [24, 43], PS3 performs sampling exclusively during query optimization. Finally, similar to the query scope studied in prior work [13, 47, 53], PS3 supports single-table queries with SUM, COUNT(*), and AVG aggregates, GROUP BY on columnsets with moderate distinctiveness, and predicates that are conjunctions, disjunctions, or negations over single-column clauses.

To select partitions that are most relevant to a query, PS3 leverages the insight that partition-level summary statistics are relatively inexpensive to compute and store. The key question is which statistics to use. Systems such as Spark SQL and ZoneMaps already maintain statistics such as the maximum and minimum values of a column to assist in query optimization [42]. Following similar design considerations, we look for statistics with small space requirements that can be computed for each partition in one pass at ingest time. For functionality, we look for statistics that are discriminative enough to support decisions such as whether a partition contributes disproportionally large values of the aggregates. We propose such a set of statistics for partition sampling (measures, heavy hitters, distinct values, and histograms) which include but expand on conventional catalog-level statistics. The total storage overhead scales with the number of partitions instead of with the dataset size. We only maintain single-column statistics to keep the overhead low. The resulting storage overhead can be orders of magnitude smaller than approaches using auxiliary indices to reduce the cost of random access [30]. While the set of statistics is by no means complete, we show that each type of statistics contributes to the sampling performance and, in aggregate, delivers effective AQP results.

Using these statistics, PS3 sorts partitions into several importance groups and allocates the sampling budget across groups such that the more important groups get a greater proportion of the budget. The training overhead is a one-time cost for each dataset and workload, and for high-value datasets in clusters that are frequently queried, this overhead is amortized over time.

In addition, we leverage the redundancy and skewness of the partitions for further optimization. For two partitions that output similar answers to an input query, it suffices to include only one of them in the sample. While directly comparing the contents of the two partitions is expensive, we can use the query-specific summary statistics as a proxy for the similarity between partitions. We also observe that datasets commonly exhibit significant skew in practice. For example, in a prototypical production service request log dataset at Microsoft, the most popular application version out of the 167 distinct versions accounts for almost half of the dataset. Inspired by prior works in AQP that recognize the importance of outliers [15, 22], we use summary statistics (e.g., the occurrences of heavy hitters in a partition) to identify a small number of partitions that are likely to contain rare groups, and dedicate a portion of the sampling budget to evaluate these partitions exactly.

In summary, this paper makes the following contributions:

1. We introduce PS3, a system that makes novel uses of summary statistics to perform weighted partition selection for many popular queries. Given the query semantics, summary statistics, and a sampling budget, the system intelligently combines a few sampling techniques to produce a set of partitions to sample and the weight of each partition.

2. We propose a set of lightweight sketches for data partitions that are not only practical to implement, but can also produce rich partition summary statistics. While the sketches are well known, this is the first time the statistics are used for weighted partition selection.

3. We evaluate on a number of real-world datasets with real and synthetic workloads. Our evaluation shows that each component of PS3 contributes meaningfully to the final accuracy and that, together, the system outperforms alternatives across datasets and layouts, delivering from 2.7× to 70× reduction in data read given the same error compared to uniform partition sampling.

¹In the paper, we use "partition" to refer to the finest granularity at which the storage layer maintains statistics.
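The introduction's back-of-envelope figure (a 1% row sample over 100-row partitions fetches 64% of partitions, i.e., 1 − 0.99^100) can be checked directly. A minimal sketch, not from the paper's code:

```python
# Expected fraction of partitions that a uniform row-level sample must fetch:
# a partition is read unless none of its rows happen to be sampled.
def fraction_partitions_touched(row_fraction, rows_per_partition):
    return 1.0 - (1.0 - row_fraction) ** rows_per_partition

# 1% row sample, 100-row partitions: ~64% of partitions are read,
# while a 1% partition-level sample reads exactly 1% of the data.
print(round(fraction_partitions_touched(0.01, 100), 3))  # → 0.634
```

The same function confirms the paper's second claim: at a 10% row sample, essentially every partition is touched.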
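The paper combines partial answers from selected partitions "in a weighted manner." The snippet below is a generic inverse-probability (Horvitz-Thompson) sketch of that idea for a SUM aggregate; the inclusion probabilities here are illustrative stand-ins, not the statistics-driven weights PS3 actually computes.

```python
import random

def weighted_sum_estimate(partitions, probs, seed=0):
    """Estimate SUM over all rows from a weighted partition-level sample.

    Each partition i is read independently with probability probs[i]; its
    partial sum is up-weighted by 1/probs[i], keeping the estimator unbiased.
    """
    rng = random.Random(seed)
    estimate = 0.0
    for rows, p in zip(partitions, probs):
        if rng.random() < p:           # pay I/O for this partition only
            estimate += sum(rows) / p  # compensate for the sampling rate
    return estimate

partitions = [[1, 2, 3], [100, 200], [4, 5]]  # toy 3-partition dataset
probs = [0.5, 0.9, 0.5]                       # the skewed partition gets a higher probability
avg = sum(weighted_sum_estimate(partitions, probs, s) for s in range(5000)) / 5000
# Averaged over many trials, the estimate approaches the true sum (315).
```

Giving high-contribution partitions larger inclusion probabilities, as sketched here, is the intuition behind importance-aware selection: it reduces the variance of the estimate for the same budget.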
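The four statistic types named above (measures, heavy hitters, distinct values, and histograms) can all be produced in a single pass over a partition's column at ingest time. This sketch shows that one-pass, per-partition shape only; it uses exact counting where PS3 would use the streaming sketches the paper proposes, and all names are illustrative.

```python
from collections import Counter

def build_partition_stats(values, num_buckets=4, num_heavy_hitters=2):
    """One-pass summary statistics for one column of one partition."""
    counts = Counter(values)                  # stand-in for streaming sketches
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_buckets or 1      # avoid zero width for constant columns
    hist = [0] * num_buckets
    for v in values:
        hist[min(int((v - lo) / width), num_buckets - 1)] += 1
    return {
        "measures": {"min": lo, "max": hi, "sum": sum(values), "count": len(values)},
        "heavy_hitters": counts.most_common(num_heavy_hitters),
        "distinct": len(counts),
        "histogram": hist,
    }

stats = build_partition_stats([1, 1, 1, 2, 5, 9, 9, 10])
print(stats["distinct"], stats["measures"]["max"])  # → 5 10
```

The storage cost of such a summary is fixed per partition (a few numbers, a short heavy-hitter list, and a small histogram), which is why the total overhead scales with the number of partitions rather than the dataset size.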