
DBKDA 2014 : The Sixth International Conference on Advances in Databases, Knowledge, and Data Applications

Efficient Data Integrity Checking for Untrusted Database Systems

Anderson Luiz Silvério, Marcelo Carlomagno Carlos, Ronaldo dos Santos Mello and Ricardo Felipe Custódio

Anderson Luiz Silvério and Ricardo Felipe Custódio
Laboratório de Segurança em Computação
Universidade Federal de Santa Catarina
Florianópolis, Brazil
Email: [email protected], [email protected]

Marcelo Carlomagno Carlos
Information Security Group
Royal Holloway University of London
Egham, Surrey, TW20 0EX, UK
Email: marcelo.carlos.2009@rhul.ac.uk

Ronaldo dos Santos Mello
Grupo de Banco de Dados
Universidade Federal de Santa Catarina
Florianópolis, Brazil
Email: [email protected]

Abstract—Unauthorized changes to database contents can result in significant losses for organizations and individuals. This brings the need for mechanisms capable of assuring the integrity of stored data. Existing solutions either make use of costly cryptographic functions, with great impact on performance, or require the modification of the database engine. Modifying the database engine may be infeasible in real world environments, especially for systems already deployed. In this paper, we propose a technique that uses low cost cryptographic functions and is independent of the database engine. Our approach allows for the detection of malicious data update, insertion and deletion operations. This is achieved by the insertion of a small amount of protection data in the database. The protection data is calculated by the data owner using Message Authentication Codes. In addition, our experiments have shown that the overhead of calculating and storing the protection data is lower than in previous work.

Keywords–Data Integrity; Outsourced Data; Untrusted Database; Data Security.

I. INTRODUCTION

Database security has been studied extensively by both the database and cryptographic communities. In recent years, some schemes have been proposed to check the integrity of the data, that is, to check if the data has not been modified, inserted or deleted by an unauthorised user or process. These schemes often address one of the following aspects of the data [1], [2]:

• Correctness: From the viewpoint of data integrity, correctness means that the data has not been tampered with.
• Completeness: When a client poses a query to the database server, it receives a set of tuples that satisfies the query. The completeness aspect of integrity means that all tuples that satisfy the posed query are returned by the server.

Many techniques have been proposed to assure data integrity [3], [4], [5], [6]. However, most of them rely on techniques that require the modification of the database kernel or the development of new database management systems. Such requirements make the adoption of integrity assurance mechanisms in real-world scenarios difficult. This becomes more evident when we consider adding integrity protection to already deployed database systems.

Most of the remaining work uses authenticated structures [7], [8], [9], based on Merkle Hash Trees (MHT) [10] or Skip-Lists [11]. These works are simpler to put into practice, since they do not require modifications to the kernel of the Database Management System (DBMS). However, the use of authenticated structures limits their use to static databases. Authenticated structures are not efficient in dynamic databases because, for each update, the structure must be recalculated.

In this paper, we address the problem of ensuring data integrity and authenticity in outsourced database scenarios. Moreover, we provide efficient and secure means of ensuring data integrity and authenticity while incurring minimal computational overhead. We provide techniques based on Message Authentication Codes (MACs) to detect malicious and/or unauthorized insertions, updates and deletions of data. In this paper, we extend the work of [12] by enhancing the experimental evaluation, providing the algorithms for the proposed techniques and presenting a technique to provide completeness assurance of queries.

The remainder of this paper is divided into five sections. In Section II, we discuss related work. In Section III, we present techniques for providing data integrity and authenticity assurance.
In Section IV, we analyse the performance impact of our proposed method, and Section V presents our final considerations and future works.

II. RELATED WORK

Most of the integrity verification techniques found in the literature are based on authenticated structures, namely Merkle Hash Trees (MHT) [10] and Skip-Lists [11].

Li et al. [4] present the Merkle B-Tree (MB-Tree), where the B+-tree of a relational table is extended with digest information as in an MHT. The MB-Tree is then used to provide proofs of correctness and completeness for queries posed to the server. Despite presenting an interesting idea and showing good results in their experiments, their approach suffers from a major drawback: to deploy it, the database server needs to be adapted, as the B+-tree needs to be extended to support an MHT. Such modifications may not be feasible in real world environments, especially those that are already in use.

Di Battista and Palazzi [7] propose to implement an authenticated skip list into a relational table. They create a new table, called security table, which stores an authenticated skip list. The new table is then used to provide assurance of the authenticity and completeness of posed queries. This approach overcomes the requirement of a new DBMS that is present in the previous approach. Since only a new table is necessary, its implementation can be done as a plug-in to the DBMS. However, the experimental results are superficial: it is not clear what the actual overhead is for each SQL operation. Moreover, their experiments show that the overhead increases as the database grows, while in our approach the overhead is constant with respect to the database size.

Miklau and Suciu [9] implement a hash tree into a relational table, providing integrity checks for the data owner. The data owner needs to securely store the root node of the tree. To verify the integrity, the client needs to rebuild the tree and compare the calculated root node with the stored one. If they match, the data was not tampered with. Despite using simple cryptographic functions, such as hashes, the use of trees compromises the efficiency of their method. A tuple insert using their method is 10 times slower than a normal insert, while a query is executed 6 times slower. In our experiments, presented in Section IV, we show that the naive implementation of our method is as good as their method.

Mykletun et al. [13] study the problem of providing correctness assurance of the data. Their work is the most closely related to what we present in this paper. They present an approach for verifying data integrity based on digital signatures. The client has a key pair and uses its private key to sign each tuple he/she sends to the server. When retrieving a tuple, the client uses the corresponding public key to verify the integrity of the retrieved tuple. This work was extended by Narasimha and Tsudik [14] to also provide proof of completeness. The motivation of the authors to use digital signatures is to allow integrity checking in multi-querier and multi-owner models. Therefore, for multi-querier and multi-owner models, their work is preferable. On the other hand, if the querier and the data owner are the same, our work can provide integrity assurance more efficiently. Moreover, our method can provide the same security level while consuming fewer computational resources. Their focus is on query integrity, while in our work we are focused on the integrity of the data itself.

III. PROVIDING INTEGRITY ASSURANCE FOR DATABASE CONTENT

To achieve a low cost method to provide integrity and authenticity, we propose to perform the cryptographic operations on the client side (application), using Message Authentication Codes (MACs) [17], [18]. The implementation consists of adding a new column to each table. This new column stores the output of the MAC function applied to the concatenation (||) of the attributes (all columns, or a subset of them) of a row n, as shown in (1). The function also utilises a key k, which is only known by the application. The value of the MAC column is later used to verify integrity and authenticity.

MAC_n = MAC(k, Column_1 || ... || Column_i)    (1)
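As an illustration of how the MAC column from (1) can be computed and verified on the client side, the following sketch uses HMAC-SHA-256 as the MAC function, a separator byte between attributes, and a hypothetical account table held in SQLite; these concrete choices are assumptions for illustration, not requirements of the proposed technique.

import hmac
import hashlib
import sqlite3

# Client-side secret key k; it is never shared with the database server.
KEY = b"example-secret-key"

def row_mac(key, columns):
    # MAC_n = MAC(k, Column_1 || ... || Column_i), cf. equation (1).
    # A separator byte avoids ambiguity between ("ab", "c") and ("a", "bc").
    data = b"\x1f".join(str(c).encode("utf-8") for c in columns)
    return hmac.new(key, data, hashlib.sha256).digest()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER, owner TEXT, balance REAL, mac BLOB)")

def protected_insert(row_id, owner, balance):
    # The application computes the MAC and stores it in the extra column.
    mac = row_mac(KEY, (row_id, owner, balance))
    conn.execute("INSERT INTO account VALUES (?, ?, ?, ?)",
                 (row_id, owner, balance, mac))

def verify_row(row_id):
    # Recompute the MAC over the retrieved attributes and compare it
    # with the stored value to detect tampering.
    rid, owner, balance, stored = conn.execute(
        "SELECT id, owner, balance, mac FROM account WHERE id = ?",
        (row_id,)).fetchone()
    return hmac.compare_digest(stored, row_mac(KEY, (rid, owner, balance)))

protected_insert(1, "alice", 100.0)
assert verify_row(1)                                             # untouched row verifies
conn.execute("UPDATE account SET balance = 999.0 WHERE id = 1")  # tampering by the server
assert not verify_row(1)                                         # modification is detected

Because the key is held only by the application, the server cannot recompute a valid MAC for a row it has modified, so any unauthorized change to the protected attributes is detected at verification time.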
The use of a MAC function ensures the integrity of the INSERT and UPDATE operations. However, the table is still vulnerable to the unauthorized deletion of rows. To overcome this issue, we propose a new algorithm for linking sequential rows, called “Chained-MAC” (CMAC). The result of the CMAC is then stored in a new column. The value of this column, given a row n, a key k, and MAC_n as the MAC value of the row n, is calculated as shown in (2), where ⊕ denotes the exclusive OR operation (XOR).

CMAC_n = MAC(k, (MAC_{n-1} ⊕ MAC_n))    (2)

The use of the CMAC provides an interesting property to the data stored in the table where it is used: the CMAC links the rows in such a way that an attacker cannot delete a row without being detected, since he does not have access to the secret key needed to produce a valid value for the CMAC column of the adjacent rows. Moreover, calculating the CMAC is very efficient, since we calculate only two MACs and a ⊕. Updating rows is also efficient: the CMAC is not a cascading operation, that is, it only needs to be updated when the MAC of a given row is updated. Figure 1 shows an example of a table with the MAC and CMAC columns. The circles represent the values of the MAC/CMAC and the arrows show the MACs used to calculate a specific CMAC.
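To make the chaining of (2) concrete, the sketch below builds the CMAC column for a short sequence of row MACs and re-checks the chain after a row is removed. It again assumes HMAC-SHA-256; the all-zero value used in place of MAC_0 for the first row is also an assumption, since the excerpt does not define the boundary case.

import hmac
import hashlib

KEY = b"example-secret-key"

def mac(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cmac(key, prev_mac, cur_mac):
    # CMAC_n = MAC(k, (MAC_{n-1} XOR MAC_n)), cf. equation (2).
    return mac(key, xor(prev_mac, cur_mac))

# Row MACs as produced by equation (1); fabricated here for illustration.
macs = [mac(KEY, b"row-%d" % i) for i in range(1, 4)]

ZERO = bytes(32)          # assumed stand-in for MAC_0 of the first row
cmacs, prev = [], ZERO
for m in macs:
    cmacs.append(cmac(KEY, prev, m))
    prev = m

def chain_ok(macs, cmacs):
    # Recompute every CMAC from the stored row MACs and compare.
    prev = ZERO
    for m, c in zip(macs, cmacs):
        if not hmac.compare_digest(c, cmac(KEY, prev, m)):
            return False
        prev = m
    return True

assert chain_ok(macs, cmacs)
# Deleting the middle row breaks the link between its neighbours: the next
# row's CMAC was computed from the deleted row's MAC, so verification fails.
assert not chain_ok(macs[:1] + macs[2:], cmacs[:1] + cmacs[2:])

Under (2), an update to row n touches only MAC_n and the two CMAC values that depend on it (CMAC_n and CMAC_{n+1}), which is why the chain does not cascade across the rest of the table.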