PERSONAL DATA ATTIC AS ALTERNATIVE TO PUBLIC CLOUD

BRIAN POLLACK

Submitted in partial fulfillment of the requirements for the degree of

Master of Science

Department of Electrical Engineering and Computer Science

CASE WESTERN RESERVE UNIVERSITY

May 2019

CASE WESTERN RESERVE UNIVERSITY

SCHOOL OF GRADUATE STUDIES

We hereby approve the thesis/dissertation of

Brian Pollack

candidate for the degree of Master of Science*.

Committee Chair

Michael Rabinovich

Committee Member

Vincenzo Liberatore

Committee Member

Xusheng Xiao

Date of Defense

March 6, 2019

*We also certify that written approval has been obtained

for any proprietary material contained therein.

Contents

List of Tables

List of Figures

1 Introduction
  1.1 Concept
  1.2 Application in Healthcare
    1.2.1 Current Medical Record Problems
    1.2.2 Existing Solutions
    1.2.3 Benefits of Our Approach in Healthcare
  1.3 Application in SaaS Cloud Offerings
    1.3.1 Reasons for Keeping Data Private
    1.3.2 Potential Benefits to Using Data Attic
    1.3.3 Potential Implementation
  1.4 Security Considerations

2 Related Work
  2.1 Dropbox
  2.2 ownCloud

3 Choosing a Protocol
  3.1 Performance
    3.1.1 Local Storage
    3.1.2 NFS
    3.1.3 iSCSI
    3.1.4 WebDAV
    3.1.5 Conclusion
  3.2 Security
    3.2.1 NFS
    3.2.2 iSCSI
    3.2.3 WebDAV
    3.2.4 Conclusion
  3.3 Ease of Use
    3.3.1 NFS
    3.3.2 iSCSI
    3.3.3 WebDAV
    3.3.4 Conclusion
  3.4 Conclusion

4 Implementation
  4.1 Data Attic Server
    4.1.1 Server Specification
    4.1.2 Server Installation
    4.1.3 Server Management
  4.2 Data Attic Client
    4.2.1 WebDAV Client
    4.2.2 Data Attic Daemon
    4.2.3 Client Application Library
    4.2.4 Client Configuration Utility
    4.2.5 Client Installation
    4.2.6 Sharing Access to Data Attic

5 Evaluation
  5.1 Servers
  5.2 Performance Test Script
  5.3 Performance Test Results
    5.3.1 Write, no bandwidth restrictions
    5.3.2 Write, 25/3 network speed
    5.3.3 Write, 100/20 network speed
    5.3.4 Write, 1000/1000 network speed
    5.3.5 Read, no bandwidth restrictions
    5.3.6 Read, 25/3 network speed
    5.3.7 Read, 100/20 network speed
    5.3.8 Read, 1000/1000 network speed
    5.3.9 Summary
  5.4 Gzip Test
    5.4.1 Gzip Test Results

6 Future Work
  6.1 QR Codes for Sharing Credentials
  6.2 Federation
  6.3 Synchronization and Backup Offsite
  6.4 Synchronization Client for Personal Computer

7 Conclusion

A Performance Test Data

List of Tables

3.1 Preliminary performance test results for writing over local network.
3.2 Preliminary performance test results for reading over local network.
3.3 Preliminary performance test results for writing over simulated internet.
3.4 Preliminary performance test results for reading over simulated internet.

A.1-A.48 Execution times of the performance test script comparing the data attic to NFS. Tables A.1-A.24 cover the write operation and Tables A.25-A.48 the read operation. Within each set, the six added RTT latencies (0, 5, 10, 20, 50, and 100 ms) are crossed with four upload/download bandwidth settings: unrestricted, 3/25 mbps, 20/100 mbps, and 1000/1000 mbps.

List of Figures

1.1 High level overview of data attic. A user keeps the device on his or her premises and allows access over the internet to collaborators.

3.1 Preliminary write comparison on local network.
3.2 Preliminary read comparison on local network.
3.3 Preliminary write comparison on simulated internet.
3.4 Preliminary read comparison on simulated internet.

4.1 This figure shows an overview of the Data Attic Server (in the owner's premises) and an example Data Attic client (in a collaborator's premises). The Data Attic client shows an existing C program recompiled using the Data Attic Client Application Library.
4.2 Login screen for data attic management.
4.3 User management screen for data attic server.
4.4 Group management screen for data attic server.
4.5 Directory management screen for data attic server.
4.6 Login screen for data attic client management.
4.7 Main management page for data attic client management.
4.8 Data attic detail page for data attic client management.
4.9 The process to add a new collaborator.

5.1-5.48 Write comparisons (5.1-5.24) and read comparisons (5.25-5.48) of the data attic and NFS. Within each set, the six added RTT latencies (none, 5, 10, 20, 50, and 100 ms) are crossed with four bandwidth settings: no restriction; 25 mbps download / 3 mbps upload; 100 mbps download / 20 mbps upload; and 1000 mbps download / 1000 mbps upload.
5.49-5.51 Gzip comparisons with 50 ms added RTT latency at the 25/3, 100/20, and 1000/1000 mbps bandwidth settings.

6.1 The process to add a new collaborator with a QR code.

Personal Data Attic as Alternative to Public Cloud

Abstract

by

BRIAN POLLACK

In this thesis we explore an alternative for a user who does not wish to relinquish his or her data to a cloud service provider. We propose the data attic: a storage server that can be installed on a user's premises. A data attic user may grant access to institutions such as medical providers, whose software could automatically save a patient's information to the patient's own data attic, allowing the patient to share medical information among all healthcare facilities he or she visits and to assemble the medical records from all those providers in one place, under the patient's control. We provide a prototype data attic server, a data attic client, and a C library used to recompile existing C programs, retrofitting them to store data on a data attic. Our performance tests show that the data attic is a viable approach that does not present insurmountable implementation or performance challenges.

Introduction

With the increased use of cloud applications such as social networking sites (Facebook or Twitter), Software as a Service (SaaS) providers (Google Drive), or document sharing services (Dropbox), users increasingly trust cloud application providers with their sensitive personal information. The current model for cloud applications has many shortcomings. A user's personal data is scattered among numerous applications, and the user has no control over how a cloud application handles that data: the user cannot easily switch to a new provider and cannot easily ensure that his or her information is deleted or backed up according to the user's needs.

Consider a patient's healthcare records as an example. A patient has healthcare records at each of the doctor's offices that he or she visits. One doctor's office does not typically have the ability to access or update medical records at other offices. If a patient visits a hospital he or she has never been to before, that hospital will have no medical records for the patient.

Cloud providers do offer value to the user. SaaS, for example, removes the need to install and support software on a user's equipment because the user accesses the software in a web browser. This value, however, comes at the cost of dispersed and hard-to-control data. In this thesis, we propose a solution that lets a user keep the value of cloud applications while gaining control over the storage of his or her own data.

In this thesis we propose a solution called the data attic. The data attic is a storage server that can be installed anywhere, including at the user's home or business. The user retains full control over the storage of his or her data while having the ability to share it with collaborators or cloud applications over the internet.

We have evaluated alternative protocols and frameworks for the data attic and argued for WebDAV over HTTPS as the most appropriate choice. We have implemented a proof-of-concept data attic server, a proof-of-concept data attic client to be installed on a user's computer, and a proof-of-concept data attic client library that can be used to recompile existing C applications so that they can interact with the data attic. We demonstrated that the data attic can work with off-the-shelf applications and evaluated its performance. We conclude that the data attic is a viable approach that neither presents insurmountable implementation challenges nor imposes prohibitive performance penalties.

1.1 Concept

We propose a server kept at the user's premises. The premises can be the user's home, office, or even a data center. The server consists of a computer with multiple programs installed; this computer can be the user's personal computer, a virtual machine, or a physical server. We refer to this server as the data attic server, and it stores the user's data. The data attic server retains multiple layers of access control (firewalls, login passwords, etc.) but may be shared with outside collaborators, which can be cloud providers or other people or businesses, if the user chooses to enable sharing.

Figure 1.1: High level overview of data attic. A user keeps the device on his or her premises and allows access over the internet to collaborators.

Each collaborator that wishes to access someone's data attic server needs software on his or her machine. The collaborator installs software that we refer to as the data attic client. The data attic client may be as simple as a commercially available file transfer program or as complex as custom software designed to interact with a data attic server. A user may grant a collaborator access to one or multiple directories on his or her data attic, and may grant multiple collaborators access to the same directory. All access is restricted with a username and password, and we advise the user to maintain a firewall in front of the data attic server. Because the data attic is under the full control of the user, the user may ensure that every detail of the system precisely fits his or her expectations. Our proposed data attic has many benefits:

1. The user can ensure that backups are taken and stored offsite. With currently available cloud systems the user must trust the cloud provider not to lose data; our solution gives the user full control.

2. If information is deleted on our proposed system, the user can verify that it is removed. Conversely, the user can ensure that certain data is never deleted, to meet his or her data compliance requirements.

3. Users can share directories with multiple users to allow collaboration.

4. Users can revoke access to a shared directory at any time.

5. All data remains physically on the user's premises. While nothing stops a collaborator from retaining copies of data to which he or she has access, this is a limitation of any collaboration tool. It might be important to a law firm, for example, that its data is not stored in a cloud service off its premises.
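To illustrate how simple the data attic client described above could be, the sketch below frames an authenticated upload as a single HTTPS PUT with a Basic-auth header, mirroring a WebDAV-style transfer. The host, directory, and credentials are hypothetical, and Python is used here for brevity even though the thesis prototype library is written in C; this only shows the shape of the request a minimal client would send, not the prototype's actual API.

```python
import base64

def build_put_request(host, directory, filename, user, password):
    """Return (method, url, headers) for uploading a file to a data attic.

    All names are illustrative: a real client would hand these values to an
    HTTPS library and send the file body with the request.
    """
    # HTTP Basic auth: base64("user:password"), sent only over HTTPS.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    url = f"https://{host}/{directory.strip('/')}/{filename}"
    headers = {
        "Authorization": f"Basic {token}",   # per-collaborator credentials
        "Content-Type": "application/octet-stream",
    }
    return "PUT", url, headers

# Hypothetical example: a clinic uploading a scan to a patient's attic.
method, url, headers = build_put_request(
    "attic.example.org", "medical-records", "scan.pdf", "clinic", "s3cret")
print(method, url)
# → PUT https://attic.example.org/medical-records/scan.pdf
```

Because the request is plain HTTPS, any off-the-shelf WebDAV or HTTP client can play the role of the data attic client.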

1.2 Application in Healthcare

Throughout this thesis, we explain our proposed data attic using the example of a healthcare provider, such as a doctor's office. Our proposed system applies equally to similar scenarios.

1.2.1 Current Medical Record Problems

While medical professionals are mandated to use electronic medical records (EMRs), there is currently no widespread system for sharing a patient's records among multiple healthcare professionals. A few prototypes are emerging, but none has captured a meaningful share of the market. As a result, a person's medical records are spread among all the providers he or she has previously visited. This situation poses multiple issues:

1. Procedure and logistical delays: care may be delayed while the necessary medical information is gathered; a healthcare provider may need to call another provider, for example. In a health emergency, for example when a person is unconscious, the patient cannot convey the source of relevant data and may not be able to consent to having records sent from one healthcare provider to another.

2. Redundant or unnecessary procedures: a healthcare provider may perform procedures or tests that have already been completed at another healthcare facility, or that would not be performed at all if the provider had complete access to the patient's medical history.

3. Lack of information: an incomplete medical history can complicate medical decisions by healthcare staff and may lead to suboptimal treatment.

4. User control: patients rarely have access to their own healthcare files. A patient has no control over the retention, backup, or security of the data kept at each healthcare provider's facility.

1.2.2 Existing Solutions

A major current approach is for a patient to simply use one large healthcare system, the Cleveland Clinic, for example, for all of his or her needs. Two obvious drawbacks are (a) that the patient is locked into one system regardless of convenience or price, and (b) that in an emergency, or whenever the primary healthcare system is unavailable to the patient, other healthcare providers will have no access to the patient's medical history and will not update the patient's medical records with his or her main provider.

Other approaches, such as Microsoft's HealthVault, MedeFile, or Apple Health, aggregate medical history from multiple providers into one system. A drawback of these services is that the patient must entrust these companies with the safeguarding of the patient's data: the patient has no way of verifying that a backup is made or of setting data retention policies. Additionally, a large service that centrally holds medical information poses an enticing target for attackers, because it holds so many patients' records. Some services, like Apple Health, do not store information centrally (Apple Health stores information on the patient's iPhone), which highlights another inadequacy: users of Apple Health cannot benefit from sharing records among multiple healthcare providers; information is only aggregated for the user to see.

1.2.3 Benefits of Our Approach in Healthcare

By using our proposed data attic, patients could store their medical data in their own homes. Each patient would provide credentials to any healthcare institution (or family member, etc.) that he or she chooses, and can set access policies to control who can see which information. Additionally, a patient could keep emergency access credentials in his or her wallet or phone for use by a hospital if he or she is incapacitated.
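The per-collaborator access policies just described amount to a grant table mapping credentials to permitted directories. The sketch below illustrates the idea; the collaborator and directory names are hypothetical, and the thesis prototype enforces such policies on the data attic server itself rather than in Python code like this.

```python
# Hypothetical grant table: which collaborators may read which directories.
GRANTS = {
    "dr-smith":  {"cardiology", "imaging"},
    "emergency": {"imaging", "medications"},   # wallet-card credentials
}

def may_access(collaborator, directory):
    """True if the collaborator has been granted access to the directory."""
    return directory in GRANTS.get(collaborator, set())

def revoke(collaborator):
    """Revoking a collaborator's access is just removing the grant entry."""
    GRANTS.pop(collaborator, None)

print(may_access("dr-smith", "imaging"))      # → True
print(may_access("dr-smith", "medications"))  # → False
```

Revocation (benefit 4 of the data attic) is immediate: once the entry is gone, subsequent requests under those credentials fail.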

Besides enabling the sharing of data among healthcare providers, the data attic lets the patient set data retention and backup policies to ensure the data remains safe. Because of the decentralized nature of this system, there is no single large, enticing target for attackers. Each individual patient's storage would be secured separately, and while an individual user has fewer resources than a large service to secure the data, an attacker would have to breach millions of individual users' systems to amass comparable information.

With the emergence of high-speed internet connections to homes, including connections of hundreds of megabits per second, it is feasible to store even large files such as imaging files, video, and test results. The data attic would be a vast improvement over the current situation, which often amounts to no data sharing at all.

1.3 Application in SaaS Cloud Offerings

Software as a service (SaaS) products, such as Google Drive, are very popular and almost always require the user to relinquish control of his or her data to the company offering the software. For example, with Google Drive a user uploads documents (or creates documents with the online tools) and Google stores the documents for access over the internet. For many users, this storage mechanism is exactly what they want and a large part of why they use Google Drive: they do not want to store the data themselves. For some people or businesses, however, the lack of control over their data can be a strong reason not to use a product such as Google Drive.

1.3.1 Reasons for Keeping Data Private

There are numerous reasons a user might not want to give up control over his or her data to a SaaS product. The SaaS provider (in this example, Google) most likely performs analysis and data mining on all data in its service. The provider may suffer a security breach and accidentally expose the data to a malicious party. There could even be legal reasons prohibiting certain data from being stored by the SaaS provider. Additionally, data access will almost always be faster from local storage than over the internet. Imagine terabytes of data stored in the cloud: not only may that exceed a user's monthly data allowance from his or her ISP, but it would take a long time (almost a day for 1 terabyte over a 100 mbps link) to either upload or download that data.
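The "almost a day" figure above follows directly from the link arithmetic; a quick back-of-the-envelope sketch:

```python
def transfer_hours(bytes_total, link_mbps):
    """Hours to move bytes_total over a link of link_mbps megabits/second,
    ignoring protocol overhead and assuming the link is fully utilized."""
    bits = bytes_total * 8
    seconds = bits / (link_mbps * 1_000_000)
    return seconds / 3600

hours = transfer_hours(10**12, 100)   # 1 terabyte over a 100 mbps link
print(f"{hours:.1f} hours")           # → 22.2 hours, i.e. almost a full day
```

Real transfers are slower still, since TCP and application overhead keep the link below its nominal rate.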

1.3.2 Potential Benefits to Using Data Attic

There are numerous potential benefits to storing data for SaaS applications locally, rather than in centralized data centers.

User Control

Data attics give the user complete control over the storage, retention, access, and backup of his or her data. A user can choose to make local and/or offsite backups, and can choose to keep incremental backups; keeping control with the user offers more flexibility to guard against data loss. For certain types of data, the user may wish to exercise complete control over retention by forcing data either never to be deleted or to be deleted after a certain period of time. Additionally, if the SaaS provider suffers a data breach, it simply will not hold the user's data to be stolen. It would still be possible for the user's own storage to be compromised, or for the SaaS provider to be infiltrated with malicious code that leaks the user's data, but such an attack is potentially less likely to succeed at scale because the provider does not hold all users' data in one place.
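The retention policies mentioned above reduce to a simple rule evaluated against each stored item. The sketch below is our own illustration, not part of the prototype; the policy encoding (a maximum age in days, with None meaning "never delete") is an assumption made for this example.

```python
import datetime

def should_delete(stored_on, policy_days, today):
    """True if the item should be purged under the owner's retention policy.

    policy_days: None means keep forever; otherwise the maximum age in days.
    """
    if policy_days is None:
        return False                      # keep-forever policy
    return (today - stored_on).days > policy_days

today = datetime.date(2019, 3, 6)
old = datetime.date(2012, 1, 1)
print(should_delete(old, 365, today))   # → True: older than one year
print(should_delete(old, None, today))  # → False: keep-forever policy
```

A daemon on the data attic server could run such a check periodically, giving the owner enforceable retention guarantees that no public cloud offers.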

Provider Independence

Someone who uses a data attic to retain data separately from a SaaS provider could choose to switch SaaS providers at any time. Providers would have to offer a way to read each other's data, but progress has already been made on open-source data portability standards with the Data Transfer Project (https://datatransferproject.dev) [10].

Data Availability

With a traditional SaaS offering, if the user's internet connection goes offline, the user cannot access his or her content. By using an on-premises data attic, the user retains access to the data even in the event of a loss of internet connectivity. Additionally, data access is almost always faster when the data is already on-premises than when it must be transferred over the internet.

Data Access

By using the data attic, a user can access his or her data at any time. Likewise, the SaaS provider can easily be shut out from accessing the data. This may be useful for resisting a subpoena or other legal demands, for example if the SaaS provider is in a country whose government is more likely to require access to all data.

1.3.3 Potential Implementation

SaaS providers do not necessarily want to integrate their products with a data attic. Many of the potential benefits we outlined would not help the SaaS provider; chief among them, a data attic makes it easier for a user to switch providers and restricts the provider from mining the stored data. Additionally, the SaaS provider would no longer make money on the amount of data stored (by selling storage by the gigabyte, as Google does with Google Drive) and would need to change its business model to a flat rate for the service or to a different usage-based model. It may still be necessary to allow the SaaS provider to perform data mining on the data attic. It may be possible to restrict the mining or put some form of limitation on it, but even this would be better in some cases than the SaaS provider having full and exclusive access to the data.

1.4 Security Considerations

Data stored in the cloud is not as secure from the government as data retained on-premises. The Fourth Amendment to the Constitution of the United States protects American citizens from “unreasonable searches and seizures”. If the government wants to search a person's (or company's) diary or computer or perform some other search, the government needs to demonstrate to a judge that the requested search is reasonable [13]. The Supreme Court of the United States has ruled that information provided to a third party is not protected by the Fourth Amendment. To obtain access to information stored by a third party, the government simply needs to obtain a subpoena, which is much easier to obtain than a search warrant [13]. For information stored in the cloud, Dropbox or Google or any other cloud service provider is that “third party” who can easily be compelled to disclose a user's personal data. If the user instead implements our proposed data attic, he or she retains full control of the data, along with the constitutional protections of the Fourth Amendment.

2 Related Work

Dropbox (https://dropbox.com) and ownCloud (https://owncloud.org) are two solutions similar to our proposed data attic:

2.1 Dropbox

Dropbox is a cloud storage product that has over 500 million users in over 180 countries [5]. Dropbox stores data on their servers and provides a sync client for users to install on their computers and mobile devices, as well as a web interface. The sync client is written in Python, uploads files in chunks of up to 4 megabytes, and uses a hash to avoid re-uploading the same chunk multiple times. Additionally, the sync client uses delta encoding to reduce the amount of data exchanged when updating files [4]. Users may elect to share certain files or folders with other users of Dropbox [5]. Dropbox is different from the data attic because, with Dropbox, the main (authoritative) datastore is in the cloud (on Dropbox's servers) and a user accesses this data from their personal machines. Dropbox does not allow a user to run his or her own Dropbox server at home.

2.2 ownCloud

ownCloud is a cloud storage product that is managed entirely by the user. ownCloud consists of two components: a server and a client. To use ownCloud, the user first installs the ownCloud server on the user's premises. The ownCloud server acts as the data storage repository for any data the user chooses to put into ownCloud. The user then installs the ownCloud sync client on the user's machines. The sync client allows the user to synchronize documents from the user's local machine to the ownCloud server. Notice that ownCloud works similarly to Dropbox. The difference is that the user installs the ownCloud server wherever the user wants his or her data to be stored, whereas Dropbox's storage is fully controlled by Dropbox. Typically, users install the ownCloud server on-premises for fast data access [11].

ownCloud is a similar product to our proposed data attic. We believe that the data attic is a better approach for a user who shares data with many people, including companies who have custom software, because ownCloud is mainly designed for collaborative editing of documents. ownCloud has built-in collaborative editing software that allows multiple users to edit documents in their web browsers and see each other's changes in real time [11]. The data attic is designed mostly for programmatic access by a software program, rather than real-time editing of documents. Once one client checks out a file, the data attic locks that file for a period of time to allow the client to make changes and upload those changes.

3 Choosing a Protocol

Having storage in a separate geographic area and on a separate network from the application presents a number of challenges, with increased latency and limited bandwidth chief among them. The main requirements for the protocol we use include:

1. Fast - must be able to save and retrieve data quickly.

2. Secure - must be able to provide end-to-end encryption for data.

3. Easy to Set Up and Maintain - must be easy to initialize and maintain the link between storage server and storage client.

3.1 Performance

To choose an appropriate protocol for this storage solution we began by investigating NFS and iSCSI, two popular protocols used for intranet-based network storage. We also investigated WebDAV, a popular protocol for HTTP-based internet file sharing. We set up two physical machines on the same local network: one as a storage server and the other as a storage client. We simulated remote network conditions of 30 megabits per second of bandwidth and 25 ms of latency (50 ms round trip) for each upload and download between the storage server and client. We used the Linux traffic control utility tc to artificially limit transfer speeds and add latency. By enabling tc on each machine we were able to simulate the desired 30 megabit upload, 30 megabit download, and 50 ms round-trip latency. We believe this environment simulates an internet connection that a consumer in most areas of the United States could reasonably maintain. We set up an NFS, iSCSI, and WebDAV server on the storage server machine and mounted the respective drives on the storage client machine.

We wrote a test script to copy files between 1 megabyte and 5 gigabytes to and from the respective storage mounts. This range of file sizes is similar to what we expect the data attic to handle. The test script ran each evaluation five times, and we compared the means of the results. The test script first ran on the unthrottled local network, then again with the simulated slowdown. The results for each protocol are as follows:
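The core of such a test script is a timed copy loop. A minimal sketch of the harness (file paths and sizes here are illustrative, not the exact script we used):

```python
import shutil
import time
from statistics import mean

def time_copy(src: str, dst: str, runs: int = 5) -> float:
    """Copy src to dst `runs` times and return the mean wall-clock time
    in seconds, mirroring the five-run averaging described above."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        shutil.copyfile(src, dst)  # dst lives on the mounted protocol under test
        samples.append(time.perf_counter() - start)
    return mean(samples)

# Example usage (paths illustrative):
# for size in ["1M", "100M", "5G"]:
#     print(size, time_copy(f"testfiles/{size}.bin", f"/mnt/nfs/{size}.bin"))
```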

3.1.1 Local Storage

We first ran the performance test using local hard disk storage to provide a baseline. The hard disks in the test machines are traditional spinning disks. As expected for a spinning hard drive, the read speeds are significantly faster than write speeds.

3.1.2 NFS

We mounted the NFS share using NFSv4. The sync option was enabled on the server, requiring all writes to be committed to the hard disk before the server confirms to the client that the files have been written. The async option was enabled on the client, which buffers writes in local memory before sending them to the server. During our preliminary tests NFS was significantly slower when synchronous writes were forced on the client (using sync). The sync option on the client forces every call to fwrite, for example, to commit the write to the server, rather than buffering the writes and only committing after a close, flush, or sync [12].

Our test results, as shown in Figure 3.1, indicate that NFS writes over the local network are approximately the same speed as writing to local disk. This result is interesting; it indicates that, when using NFS locally, the bottleneck for writing is the hard drive. NFS reads, as shown in Figure 3.2, tend to be slower than local reads. For the 5GB file, the write time was 122 seconds both locally and over NFS. The NFS read, however, required 59.2 seconds, 71% more time than reading from local disk.

When we simulate internet speeds, both NFS writes and reads are significantly slower than the local disk, as shown in Figures 3.3 and 3.4. For the 5GB file, the write time was 1400 seconds and the read time was 1390 seconds. This is 5.03% and 4.28% more time than the theoretical minimum transfer time for writing and reading, respectively. The theoretical minimum transfer time for a 5GB file over a 30 megabit per second (both upload and download) link is approximately 1333 seconds.
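The theoretical minimum quoted above follows directly from the file size and link rate; a quick check using the 5 GB file and the 30 Mbps simulated link:

```python
def min_transfer_seconds(size_mb: float, link_mbps: float) -> float:
    """Theoretical minimum transfer time, ignoring protocol overhead."""
    return (size_mb * 1e6 * 8) / (link_mbps * 1e6)

t_min = min_transfer_seconds(5000, 30)   # ~1333 seconds
overhead = (1400 / t_min - 1) * 100      # NFS write overhead, ~5%
print(f"{t_min:.0f} s minimum, {overhead:.1f}% overhead")
```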

3.1.3 iSCSI

Next we mounted the iSCSI target using all default options. We created an ext4 filesystem on the iSCSI target. ext4 is a filesystem that is commonly used on Linux and is capable of very large disk capacities [8].

On the local network, the iSCSI mount performed slightly worse than local disk and NFS for writing, as shown in Figure 3.1. The iSCSI mount performed similarly to NFS for reading, as shown in Figure 3.2. For the 5GB file, the write time was 171 seconds and the read time was 61.1 seconds. The file transfer times were 40.2% and 76.6% more time than local disk for writing and reading over the local network, respectively.

On the simulated internet connection, the iSCSI mount performed similarly to NFS for both writing and reading, as shown in Figures 3.3 and 3.4. For the 5GB file, the write time was 1410 seconds and the read time was 1390 seconds. This is 5.78% and 4.28% more time than the theoretical minimum time for writing and reading, respectively.

3.1.4 WebDAV

Last we mounted the WebDAV directory using davfs2. It prompts for the WebDAV share's username and password, then mounts the directory similarly to NFS and iSCSI. For performance reasons davfs2 does not immediately upload writes to the server. It first writes to a temporary file and uploads in the background after a number of seconds specified in the configuration file. Even an operation such as sync() will not initiate an upload [3]. Because of this limitation we did not run write tests, as they would only show results consistent with writing to a temporary file on the local disk. We did, however, run read tests.

On the local network, the WebDAV mount performed the worst of the options tested, as shown in Figure 3.2. The read time was 102 seconds for the 5GB file, which is 195% more time than local disk or 72.3% more time than NFS.

On the simulated internet connection, the WebDAV mount performed about the same as both NFS and iSCSI, as shown in Figure 3.4. The read time was 1440 seconds for the 5GB file, which is 8.03% more time than the theoretical minimum or 3.60% more time than both NFS and iSCSI.

3.1.5 Conclusion

We conclude that any of the three protocols tested would perform about equally for our data attic implementation. The main limiting factor appears to be the internet connection speed, and NFS, iSCSI, and WebDAV should all perform similarly over the internet.

Figure 3.1: Preliminary write comparison on local network.

Table 3.1: Preliminary performance test results for writing over local network.

File Size (MB)   Local Disk (s)   NFS (s)   iSCSI (s)
             1           0.0418    0.0713      0.0732
             2           0.0733    0.113       0.112
             5           0.167     0.245       0.261
            20           0.63      0.902       0.945
            50           1.56      2.22        2.31
           100           3.13      4.41        4.6
           500          13.1      17.2        20.1
          1000          25.3      26.4        36.5
          5000         122       122         171

Figure 3.2: Preliminary read comparison on local network.

Table 3.2: Preliminary performance test results for reading over local network.

File Size (MB)   Local Disk (s)   NFS (s)   iSCSI (s)   WebDAV (s)
             1           0.0173    0.0466      0.0362       0.0424
             2           0.0226    0.0437      0.0374       0.0424
             5           0.0344    0.077       0.0659       0.0906
            20           0.143     0.252       0.374        0.307
            50           0.351     0.61        0.669        0.746
           100           0.718     1.21        1.25         1.49
           500           3.46      5.7         6.13         7.34
          1000           6.92     11.9        12.3         14.8
          5000          34.6      59.2        61.1        102

Figure 3.3: Preliminary write comparison on simulated internet.

Table 3.3: Preliminary performance test results for writing over simulated internet.

File Size (MB)   Local Disk (s)   NFS (s)   iSCSI (s)
             1           0.0397     2.46        2.88
             2           0.0733     2.09        2.42
             5           0.164      2.71        3.43
            20           0.628      6.68        6.5
            50           1.57      16.1        15.5
           100           3.13      31.6        30.7
           500          13.3      149         149
          1000          25.4      289         289
          5000         122       1400        1410

Figure 3.4: Preliminary read comparison on simulated internet.

Table 3.4: Preliminary performance test results for reading over simulated internet.

File Size (MB)   Local Disk (s)   NFS (s)   iSCSI (s)   WebDAV (s)
             1           0.0146     2.17       0.952        0.593
             2           0.022      2.16       0.661        0.639
             5           0.0345     3.17       1.5          1.52
            20           0.142      6.46       5.68         5.75
            50           0.348     14.2       14.1         14.2
           100           0.721     28.2       28           28.3
           500           3.45     140        140          141
          1000           6.92     279        279          282
          5000          34.6     1390       1390         1440

3.2 Security

3.2.1 NFS

NFSv4 allows encryption using Kerberos 5. If enabled, all NFS network traffic is encrypted [12].

3.2.2 iSCSI

iSCSI does not provide encryption itself, although IPsec could be used with iSCSI to encrypt the communication [1]. Alternatively, the traffic could be tunneled through an encrypted VPN connection. Either setup would be more difficult to maintain.

3.2.3 WebDAV

WebDAV can use HTTPS for all communication [6]. HTTPS is ubiquitous and provides sufficient security for our application.

3.2.4 Conclusion

Both NFS and WebDAV provide security within their protocols without requiring additional layers of complexity such as a VPN tunnel or IPsec. We feel that any of the three protocols could be implemented with sufficient security, but either NFS or WebDAV would be easier to implement securely.

3.3 Ease of Use

3.3.1 NFS

NFS was fairly easy to set up. It required a directory to be shared and the configuration of NFS server software, which is commonly available for most Linux distributions. The client typically mounts an NFS share like a regular disk mount, and the application is unaware that the filesystem is remote. All files placed into NFS are available to the server machine with no additional setup.

3.3.2 iSCSI

iSCSI had the most difficult setup of the three protocols tested. The initial iSCSI server setup was similar to that of NFS. A major difference between NFS and iSCSI is that NFS uses the filesystem of the server, but iSCSI can use any filesystem, and it is the client's task to format the iSCSI target [12, 1]. When using NFS, the files in a shared folder are available for the server and multiple clients to access; no formatting is necessary and the files can be accessed as soon as the client connects to the server [12]. When using iSCSI, however, the client must format the iSCSI target before it may be used. An iSCSI target, like a new physical hard drive, is initially blank. To be used, it must be formatted with a filesystem such as ext4. Note that the iSCSI target must be formatted only once, even if multiple clients access it [1]. In our testing we used the ext4 filesystem, which is commonly used on Linux and is capable of very large disk capacities [8], but other filesystems may be used. Many clients may technically connect to the same iSCSI target, although this is inadvisable: if file locking is not handled by the filesystem chosen when formatting the target, data corruption may permanently destroy a user's files [1].

Additionally, the iSCSI target must be formatted using a filesystem that is compatible with the computer the client is using. If the target was formatted with a filesystem that Linux does not support, for example, a collaborator who is using Linux may not be able to open it.

3.3.3 WebDAV

WebDAV utilizes HTTP or HTTPS to expose files through a web server. Clients are allowed to perform remote authoring operations through defined HTTP methods [6]. To set up a WebDAV share, we enabled the mod_dav module in the ubiquitous Apache web server. The client then simply connects using either local-disk-like mounting software such as davfs2 or one of numerous libraries for programming languages. One unique aspect of WebDAV is that it may be easier to traverse certain firewalls. Because WebDAV uses HTTP and HTTPS, even if a network uses a firewall blocking certain outbound traffic, or mandates that outbound traffic use proxies or some other complex configuration, WebDAV traffic may be allowed through where other protocols would be blocked. Such restrictions would probably not be seen in many environments, but could affect the data attic in large organizations, such as hospitals, with complex networks.

3.3.4 Conclusion

Due to the complexity of setting up iSCSI, we feel that it is not appropriate for our application. We conclude that both NFS and WebDAV are sufficiently simple to implement.

3.4 Conclusion

Due to limitations such as the lack of a simple secure implementation and the necessity to format a fixed amount of space, we eliminate iSCSI from consideration as a viable protocol for our data attic.

Either NFS or WebDAV seems robust enough to set up and maintain connections from at-home storage servers to clients. Both had fairly simple setups and would be easy to maintain.

Because of the widespread use of Apache and its modular system, which allows further development of functionality, we choose WebDAV for our data attic implementation. We can use the standard distribution of mod_dav as the WebDAV server interface, or easily modify it if we determine custom server options are necessary.

4 Implementation

There are two main components to the data attic: the data attic server and the data attic client. The data attic server is located at the user's premises and holds all the data being stored. The data attic client is used by one or many collaborators, who can be anywhere in the world. The data attic server and data attic client communicate over HTTPS (or HTTP) using WebDAV. Refer to Figure 4.1 for a high-level diagram of an example data attic server and data attic client.

Figure 4.1: This figure shows an overview of the Data Attic Server (in the owner's premises) and an example Data Attic client (in a collaborator's premises). The Data Attic client shows an existing C program recompiled using the Data Attic Client Application Library.

4.1 Data Attic Server

The data attic server is the component that is located at the user’s premises. It holds all the data that is being stored and manages authentication for collaborators.

4.1.1 Server Specification

We built a prototype data attic server on Red Hat Enterprise Linux 7. While we used a machine dedicated for use by our data attic server, the machine could be shared with other applications if the user wishes. The software required for our prototype data attic includes:

1. Apache (we used the httpd package). The Apache server provides the main file server. It also serves the web-based user interface that allows the user to configure the data attic server.

2. mod_dav. The mod_dav module for Apache implements the WebDAV protocol used for communication between the data attic server and data attic client.

3. PHP, Python, and various Python libraries. These are for the web-based user interface that allows the user to configure the data attic server.

Additionally, the machine needs to be accessible from the internet. We had our server behind a NAT, which is typical of a home internet connection, and we used Universal Plug and Play (UPnP) to forward the HTTP and HTTPS ports to our prototype server.

4.1.2 Server Installation

We built a Python script to automate the installation of the data attic server software. The script can be run on an existing machine, but we recommend a fresh installation of Red Hat Enterprise Linux 7 or CentOS 7 for best results. The installation script, in order, accomplishes the following tasks:

1. Disable SELinux. This is necessary so SELinux does not treat the data attic's activity as a security violation.

2. Install EPEL. We require packages not included in the default repositories, so Extra Packages for Enterprise Linux (EPEL) is needed.

3. Update all server packages. To be sure the server is running the latest software, use yum to update all of the server's installed packages.

4. Install HTTPD and PHP. HTTPD is required for the data attic communi- cation. Both HTTPD and PHP are required for the web-based user interface that allows the user to configure the data attic server.

5. Create the base directory for all data attic storage. By default the data attic uses /var/www/storage/ as the base directory.

6. Create the directory that holds the data attic server configuration files. By default the data attic uses the /etc/storage directory to hold configuration files.

7. Install pip. pip is used to install Python packages. We use pip to install Flask and Gunicorn, which are required for the web-based user interface that allows the user to configure the data attic server.

8. Create a Linux user for the data attic.

9. Copy configuration files into place.

10. Initialize the database used by the data attic management tool.

11. Create, enable, and start the service for the data attic management tool.

12. Copy web administration files into place for the web-based user interface that allows the user to configure the data attic server.

13. Set proper Linux ownership of installed files.

14. Create a default user for the data attic.

15. Allow ports 80 and 443 through the firewall.

16. Forward ports 80 and 443 from the NAT appliance to the data attic machine using UPnP.
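The steps above can be sketched as a small driver that shells out to the system tools. This is a simplified, hypothetical outline of such a script, not our actual installer (package names like epel-release are assumptions); it is shown here in dry-run form so it only records the commands it would run:

```python
import subprocess

def run(cmd, dry_run=False, log=None):
    """Execute (or, in dry-run mode, merely record) one installation step."""
    if log is not None:
        log.append(" ".join(cmd))
    if not dry_run:
        subprocess.run(cmd, check=True)

def install_server(dry_run=True):
    log = []
    steps = [
        ["setenforce", "0"],                       # 1. disable SELinux (runtime)
        ["yum", "-y", "install", "epel-release"],  # 2. install EPEL
        ["yum", "-y", "update"],                   # 3. update all packages
        ["yum", "-y", "install", "httpd", "php"],  # 4. install HTTPD and PHP
        ["mkdir", "-p", "/var/www/storage/"],      # 5. base storage directory
        ["mkdir", "-p", "/etc/storage"],           # 6. configuration directory
        ["pip", "install", "flask", "gunicorn"],   # 7. web UI dependencies
        # ... remaining steps: user creation, config files, database,
        # service setup, file ownership, firewall rules, UPnP forwarding
    ]
    for cmd in steps:
        run(cmd, dry_run=dry_run, log=log)
    return log
```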

4.1.3 Server Management

We built management software to make it easy for a user to manage the collaborators that have access to the data attic server. The management software is broken into two parts: a Python REST tool that runs on Flask, and an HTML tool that uses PHP to communicate with the Python REST tool. The Python REST tool provides a REST interface to perform all management tasks, including user, group, and directory management, and a utility that generates Apache configuration files. The HTML tool provides a convenient frontend that allows a user to easily interact with the data attic. The user can perform all operations enabled in the Python REST interface using the HTML application.

To manage the data attic, a user uses his or her web browser to connect to the data attic server's hostname or IP address. The user is shown a login screen for authentication (see Figure 4.2). The user can then modify users, groups, and directories:

1. User: people who have the ability to access the data attic.

2. Group: one or multiple users.

3. Directory: a directory that is shared with one or multiple groups. Directories directly map to directories on the underlying filesystem.

We use groups as a way to easily distinguish permission sets. For example, a user may create a group called Medical containing a user for each doctor's office that has access to medical files. Users in the Medical group would only be able to access directories that are shared with the Medical group. A directory may be shared with any number of groups, including none.

Refer to Figures 4.3, 4.4, and 4.5 for screenshots of the management pages.

When the user is finished modifying permissions, he or she must click the “Reload Apache” button; an Apache configuration file is then generated and Apache is reloaded.
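The regeneration step translates the user/group/directory model into per-directory Apache access rules. A hedged sketch of such a generator (the directive names follow Apache's mod_dav and group-authorization conventions, but the exact template our tool emits may differ, and real output would also configure the authentication provider):

```python
def generate_directory_block(directory, groups):
    """Render one Apache <Directory> block granting WebDAV access to the
    given groups. Illustrative only."""
    requires = "\n".join(f"    Require group {g}" for g in groups)
    return (
        f'<Directory "/var/www/storage/{directory}">\n'
        f"    Dav On\n"
        f"    AuthType Basic\n"
        f'    AuthName "Data Attic"\n'
        f"{requires}\n"
        f"</Directory>\n"
    )

print(generate_directory_block("medical-records", ["Medical"]))
```

The management tool would emit one such block per shared directory, write the result to an Apache configuration file, and then reload Apache.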

Figure 4.2: Login screen for data attic management.

Figure 4.3: User management screen for data attic server.

Figure 4.4: Group management screen for data attic server.

Figure 4.5: Directory management screen for data attic server.

4.2 Data Attic Client

The data attic client is the component that is located at a collaborator's premises. There are many options available for client software. Because the data attic server uses the WebDAV protocol, a client can use any commonly available WebDAV client implementation to interact with the data attic server. Custom software can also be used to communicate with the data attic server. We have built a daemon that can be run on a client machine to interact with the data attic server. We have also built a library that can be used to convert an existing C program to communicate with the data attic.

4.2.1 WebDAV Client

A user can use any commonly available WebDAV client to interact with the data attic server. Cyberduck is a commonly used cloud storage browser that can connect to the data attic. Users can use Cyberduck to upload and download files to and from the data attic as if it were a local folder on their computer [2].

4.2.2 Data Attic Daemon

We built a daemon that runs on a client machine and helps client applications interact with the data attic. We call the daemon atticd, and it is meant to be kept running in the background at all times. The daemon serves as a layer of abstraction between the data attic and the software on the client machine that wants to access it. The daemon reads configuration files to determine the hostname, username, and password of the remote data attic. It can also handle asynchronous file uploads and downloads, so the software has the option of exiting instead of waiting for long-running data attic synchronization tasks.
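The daemon's socket interface is a simple line-oriented text protocol. The sketch below shows the general shape of such an exchange with made-up command names ("DOWNLOAD", "OK"); the actual command set atticd accepts is not reproduced here:

```python
import socket
import threading

def daemon(server_sock):
    """Toy stand-in for atticd: read one text command, reply with a status."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024).decode().strip()
        verb, _, path = request.partition(" ")
        if verb == "DOWNLOAD" and path:
            conn.sendall(b"OK " + path.encode() + b"\n")  # would fetch via curl
        else:
            conn.sendall(b"ERR unknown command\n")

# Client side: send a command string over the socket and read the response.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=daemon, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
client.sendall(b"DOWNLOAD reports/scan.dat\n")
reply = client.recv(1024).decode().strip()
print(reply)
client.close()
server.close()
```

Because the protocol is plain text over a socket, adding support for it to a client application in any language requires only a few lines of code.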

A client program communicates with the local daemon through a socket. The client program sends a text string containing well-known parameters to the socket and listens for a response. This makes it easy to implement custom code in a client application: the application only needs to communicate with this daemon and does not need to implement functionality to connect to the data attic directly.

The data attic daemon communicates with the data attic using curl. The daemon reads a configuration file for the hostname, username, password, port number, and filepath of the remote data attic with which to connect. The default location for the data attic daemon's configuration files is /etc/storage/. The daemon uses an HTTP PUT request to upload a file to the data attic server and an HTTP GET request to download a file. When downloading a file, the daemon sets the HTTP header If-Modified-Since to the time the file was last modified on the local machine. This asks the data attic server to send the file back only if it is newer than the local copy; there is no need to re-download the same version of a file. Additionally, the daemon uses file locks on the data attic server to ensure that another user is not modifying files at the same time as the local client.
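The conditional-download logic centers on building the If-Modified-Since header from the local file's modification time and, on the server side, comparing it against the stored copy's timestamp. A sketch using Python's standard library date helpers (the function names are illustrative; the daemon itself performs the equivalent via curl):

```python
import os
from email.utils import formatdate, parsedate_to_datetime

def if_modified_since_header(local_path):
    """Build the conditional-download header from the local file's mtime,
    as the daemon does before issuing its GET request."""
    mtime = os.path.getmtime(local_path)
    return {"If-Modified-Since": formatdate(mtime, usegmt=True)}

def server_should_send(header_value, remote_mtime):
    """Server-side view: send the file only if the stored copy is newer
    than the client's timestamp (otherwise respond 304 Not Modified)."""
    client_time = parsedate_to_datetime(header_value).timestamp()
    return remote_mtime > client_time
```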

4.2.3 Client Application Library

Additionally, we built a library that allows many existing C programs to communicate with the data attic simply by recompiling them. No changes to the C code are necessary to add data attic functionality. This client application library does not interact directly with the data attic server.

Instead, it communicates via a socket with the atticd daemon described above. The daemon communicates with the data attic server on the client application library's behalf.

The easiest way to utilize our data attic is to recompile an existing C program with our data attic library. The existing program should be compiled using the gcc compiler. To implement our data attic library in an existing program, we replace any existing call to open, close, fopen, and fclose with a call to our own version of each of these four functions. Our versions of open and fopen synchronize the local copy of the file to be opened with the data attic server. If the local machine has the newest copy of the file, our version of open or fopen simply calls the default open or fopen, respectively. If the data attic server holds the newest version, our library downloads it before calling open or fopen. Likewise, our versions of close and fclose first call the default close or fclose, respectively, and then upload the newly saved version of the file to the data attic server.

To accomplish the replacement of the default open, close, fopen, and fclose functions, we pass the option --wrap to the linker. --wrap allows us to define a new function that overrides an existing one. For example, any reference to open is replaced with a reference to __wrap_open, and we define __wrap_open to be our own implementation of that function [9].

We use the GCC flag -Wl to pass the --wrap flags to the linker [7]. Additionally, the data attic library itself must be compiled and either added to the computer's library path, or manually included when compiling the existing program, using the GCC -L flag to explicitly set the path to the compiled data attic library [7]. When the application attempts to open a file, the data attic client library, communicating with the data attic server through atticd, will open the newest version of that file. If the data attic server has the newest version (or the only version, if the file is not present locally), the file will be downloaded. Likewise, upon saving the file, the data attic client library will upload the newest version of that file to the data attic server. The newest version of a file is determined by comparing the last-modified timestamp of that file between the local version and the version stored on the data attic. If the file is only present on one machine, that version is used.

Sometimes the client application may want to keep data such as support files locally and never share them with the data attic. This may be relevant for an image processing application, for example, that maintains a large amount of metadata. Consider an image processing application that, given an image of a human, identifies the person and adds a label with the person's name. The source images and labels would be stored on the data attic, but it makes sense that metadata such as decision trees would be stored within the application that provides the image processing. If the application requires advanced functionality, such as uploading only certain types of files to the data attic server or customizing when files are uploaded or downloaded, the application will have to be modified with custom code to interact directly with either the data attic server or the atticd daemon; our data attic client library does not currently support such functionality. If we wanted to add functionality to synchronize only certain files, we could add a configuration file to the client application library or the atticd daemon to exclude certain file names or file types from synchronization.
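The version-selection rule just described can be stated compactly. The sketch below is illustrative, not the library's actual API (the library itself is C and splits the work between open-time downloads and close-time uploads; here both directions are folded into one decision function for clarity):

```python
import os

def sync_action(local_path, remote_mtime):
    """Decide how to reconcile a file: 'download', 'upload', or 'none'.
    remote_mtime is None when the attic holds no copy of the file."""
    local_exists = os.path.exists(local_path)
    if not local_exists and remote_mtime is None:
        return "none"        # brand-new file, nothing to reconcile
    if not local_exists:
        return "download"    # only the attic has it
    if remote_mtime is None:
        return "upload"      # only this machine has it
    local_mtime = os.path.getmtime(local_path)
    if remote_mtime > local_mtime:
        return "download"    # attic copy is newer
    if local_mtime > remote_mtime:
        return "upload"      # local copy is newer
    return "none"            # timestamps match
```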

4.2.4 Client Configuration Utility

The client's data attic daemon, atticd, described above, needs to be configured with the hostname, username, password, port number, and filepath of the remote data attic to gain access to the files. All of this information can be set in configuration files on the client machine.

Because a client may be in an environment where it communicates with multiple data attics (a doctor's office, for example), we built a management tool to make it easy to change the active data attic. This management interface is web-based so the user can access it simply with a web browser. The management software is broken into two parts: a Python REST tool that runs on Flask, and an HTML tool that uses PHP to communicate with the Python REST tool. The Python REST tool provides a REST interface to perform all management tasks, including storage of data attic credentials and choosing one data attic to be “active”. The active data attic is identified by a text file in the /etc/storage directory that is read by applications using our client library. The HTML tool provides a convenient frontend that allows a user to easily interact with the management tool. The user can perform all operations enabled in the Python REST interface using the HTML application.

To manage the data attic client tool, a user logs in by pointing his or her browser to http://localhost. The user is shown a login screen for authentication (see Figure 4.6). The user can then manage the data attics to which he or she has access. The user can also select one data attic to become active and be used by any programs that utilize our provided library. Refer to Figures 4.7 and 4.8 for screenshots of the management pages.
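A client application might read the active data attic along the following lines. The thesis places the marker file under /etc/storage but does not give its layout, so the key=value format and filename here are assumptions:

```python
# Sketch of reading the "active" data attic record that the management
# tool writes. The /etc/storage location comes from the text; the
# key=value layout and the file name "active_attic" are assumptions.

def parse_active_attic(text):
    """Parse key=value lines (e.g. host, port, username, path) into a dict."""
    attic = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        attic[key.strip()] = value.strip()
    return attic

def load_active_attic(path="/etc/storage/active_attic"):
    """Load and parse the active data attic description from disk."""
    with open(path) as f:
        return parse_active_attic(f.read())
```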

4.2.5 Client Installation

Our goal was to provide an easy way for existing applications to utilize our data attic. We provided a Python script that will install the components required by the data attic client daemon atticd and the data attic client configuration management utility. The installation script, in order, accomplishes the following tasks:

1. Disable SELinux. This is necessary so SELinux does not flag the data attic client daemon as a security vulnerability.

2. Install EPEL. We require packages not included in the default repositories, so Extra Packages for Enterprise Linux (EPEL) is needed.

Figure 4.6: Login screen for data attic client management.

Figure 4.7: Main management page for data attic client management.

Figure 4.8: Data attic detail page for data attic client management.

3. Update all server packages. To be sure the server is running the latest software, use yum to update all of the server's installed packages.

4. Create a Linux user for the data attic client. The data attic client daemon and management utility will run as this user.

5. Create a directory to hold the data attic client configuration files. By default the data attic client uses the /etc/data_attic directory to hold configuration files.

6. Install HTTPD and PHP, which are required to run the data attic client management utility.

7. Install curl-devel, which is required by the data attic client daemon and is used to communicate via HTTPS with the data attic server.

8. Install pip. pip is used to install packages for Python. Use pip to install flask, gunicorn, and pycrypto, which are required for the web-based data attic configuration manager.

9. Copy configuration files into place.

10. Install, enable, and start the data attic client daemon, atticd.

11. Initialize the database used by the data attic client management backend Flask application.

12. Create, enable, and start a service for the data attic client management backend Flask application.

13. Copy web administration files into place.

14. Set proper ownership of all installed files.
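The steps above can be sketched as the shell commands such an installer might issue. Package and service names follow the text where given; the exact command forms, and the database-initialization command in particular, are assumptions:

```python
# Dry-run sketch of the installation steps listed above, expressed as
# the shell commands a script might run. The function only builds the
# command list, so it can be inspected without touching the system.

def install_commands(user="attic", confdir="/etc/data_attic"):
    """Return the commands for steps 1-14, in order."""
    return [
        "setenforce 0",                          # 1. disable SELinux (runtime)
        "yum install -y epel-release",           # 2. install EPEL
        "yum update -y",                         # 3. update all packages
        f"useradd -r {user}",                    # 4. service user
        f"mkdir -p {confdir}",                   # 5. configuration directory
        "yum install -y httpd php",              # 6. web server and PHP
        "yum install -y curl-devel",             # 7. HTTPS support for atticd
        "yum install -y python-pip",             # 8a. pip itself
        "pip install flask gunicorn pycrypto",   # 8b. Python dependencies
        f"cp -r conf/. {confdir}",               # 9. configuration files
        "systemctl enable --now atticd",         # 10. client daemon
        "attic-manage initdb",                   # 11. hypothetical DB init command
        "systemctl enable --now attic-manage",   # 12. management backend service
        "cp -r web/. /var/www/html/",            # 13. web administration files
        f"chown -R {user}: {confdir}",           # 14. ownership
    ]
```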

4.2.6 Sharing Access to Data Attic

If the owner of the data attic wants to enable a collaborator, a doctor's office, for example, to access his or her data attic, the owner needs to create a new username/password combination for the collaborator and share that information with the collaborator. Figure 4.9 shows the process to add a new collaborator.

Figure 4.9: The process to add a new collaborator.
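The owner-side step of creating collaborator credentials might look as follows. The thesis does not specify how passwords are generated or what the credential record contains, so both are assumptions here:

```python
# Sketch of creating a credential record the data attic owner would
# share with a collaborator. Random generation via the secrets module
# and the record layout are our assumptions, not the thesis's design.

import secrets
import string

def new_collaborator(username, host, port=443):
    """Generate a random password and bundle the connection details."""
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(16))
    return {"username": username, "password": password,
            "host": host, "port": port}
```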

5 Evaluation

To evaluate the performance of our data attic, we configured two identical machines on the same network; one was set up as the data attic server and the other as a client. We wrote two different tests to simulate usage of the data attic compared with similar usage of NFS. We wanted to evaluate (a) whether the data attic performed similarly to NFS and (b) whether the data attic performed fast enough to be useful in the field.

5.1 Servers

Both machines were in the same room on the same network subnet and had gigabit ethernet connectivity. Each had the following specifications:

• Red Hat Enterprise Linux 7.5

• 4x Intel Xeon 5140 @ 2.33 GHz

• 4 GB RAM

5.2 Performance Test Script

We wrote a test script to compare file transfer times of the data attic with file transfer times of NFS. The test script was designed to provide an approximation of the data

attic's use of the WebDAV protocol in comparison to a similar implementation with the already-accepted NFS protocol.

It is important to note how we designed the tests. The data attic client executes writes in two steps: (1) the file is saved to disk, then (2) the file is uploaded to the remote data attic server. The data attic client executes reads in two steps: (1) the file is downloaded from the remote data attic server to local disk, and (2) the file is opened and read by the client program that requested it. Therefore, it would be an unfair comparison to simply read or write to (or from) NFS directly. To mirror the data attic client behavior in our NFS tests, we performed writes in two steps: (1) write the file to local disk and (2) copy that file to the NFS server. We also performed reads in two steps: (1) download the file from the NFS server and (2) re-open and read that file. This allows NFS and the data attic client to be compared in an equitable way. When creating a file to be written, we always create a file of the specified size using random bytes generated with the C rand() function.

First, we ran each test with no added restrictions (recall that the two test machines are on a 1 Gbps link) and then added latency and restricted bandwidth. We added five RTT latencies: 5, 10, 20, 50, and 100 ms. We also restricted bandwidth to three settings: up 3/down 25, up 20/down 100, and up 1000/down 1000 mbps. The bandwidth restrictions are from the perspective of the server, so a 3 up/25 down link means that the server, which in our use case is the user's data attic server at their home or premises, has an internet connection from their ISP capable of 3 mbps uploading and 25 mbps downloading. Both latency and bandwidth restrictions were added using tc. tc adds the specified restrictions to the network egress scheduler, so for each machine we can effectively control the upload rate. We use tc on both machines to achieve the desired network limitations. We chose the up 3/down 25 and up 20/down 100 bandwidth limitations by researching common internet speeds for ISPs such as Spectrum, Comcast, and AT&T. Up 1000/down 1000 represents the gigabit speeds that some ISPs across the nation advertise.
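The tc-based shaping described above can be sketched as a command builder. Applying half the target RTT as egress delay on each machine (since both ends are shaped) is one common approach; the exact qdisc parameters such as burst and queue latency are our assumptions, not the thesis's values:

```python
# Sketch of the per-machine tc setup: netem adds egress delay (half the
# target RTT, because both machines are shaped) and a token-bucket
# filter (tbf) caps the egress rate. Commands are returned as strings
# so the sketch can be inspected without root privileges.

def shaping_commands(dev, rtt_ms, egress_mbit):
    """Return the tc commands one machine would run on its egress device."""
    cmds = [f"tc qdisc del dev {dev} root"]  # clear any earlier setup
    # Each end contributes half of the round-trip delay.
    cmds.append(f"tc qdisc add dev {dev} root handle 1: netem delay {rtt_ms // 2}ms")
    if egress_mbit is not None:
        # Rate limit chained under netem to cap the upload rate.
        cmds.append(
            f"tc qdisc add dev {dev} parent 1: handle 10: "
            f"tbf rate {egress_mbit}mbit burst 32kbit latency 400ms"
        )
    return cmds
```

For the up 3/down 25 scenario at 50 ms RTT, the data attic server's machine would run `shaping_commands("eth0", 50, 3)` and the client's machine `shaping_commands("eth0", 50, 25)`.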

5.3 Performance Test Results

The test results can be categorized into eight groups:

1. Write, no bandwidth restrictions

2. Write, 25/3 network speed

3. Write, 100/20 network speed

4. Write, 1000/1000 network speed

5. Read, no bandwidth restrictions

6. Read, 25/3 network speed

7. Read, 100/20 network speed

8. Read, 1000/1000 network speed

For each group we show the effect of adding between 0 and 100ms of RTT latency on the network between the data attic server and client. For each group we compare the performance of the data attic to the performance of NFS and we examine the effect of latency on the results. For all write comparisons, we show asynchronous write times. When requesting a write, the data attic client can request the write asynchronously. If the write is requested asynchronously, the data attic client daemon will immediately return a success to the data attic client, and will continue the upload in a separate thread in the background. We included the asynchronous write speeds in the performance tests to demonstrate that they take the same amount of time as simply writing to the local disk.
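The asynchronous write path described above can be sketched as follows. The upload callable stands in for the daemon's real transfer code, and the function name is illustrative:

```python
# Sketch of the asynchronous write path: the call returns as soon as
# the local write finishes, while the upload to the data attic server
# continues on a background thread. `upload` is a stand-in for the
# daemon's actual transfer routine.

import threading

def attic_write(path, data, upload, asynchronous=True):
    """Write locally, then upload; return before the upload completes if async."""
    with open(path, "wb") as f:
        f.write(data)                      # step 1: save to local disk
    if asynchronous:
        t = threading.Thread(target=upload, args=(path,), daemon=True)
        t.start()                          # step 2 proceeds in the background
        return t                           # caller may join() if it cares
    upload(path)                           # synchronous: wait for the upload
    return None
```

This is why the asynchronous measurements track the local-disk times: the caller never waits on the network.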

5.3.1 Write, no bandwidth restrictions

For tests with less than 100ms latency, we observe that NFS consistently takes more time than the data attic to write a file, as shown in Figures 5.1, 5.2, 5.3, 5.4, and 5.5. For tests with 100ms of latency added, as shown in Figure 5.6, we observe NFS writing a file at approximately the same speed as the data attic for all file sizes tested. We do not know why these tests showed NFS taking longer to upload files at low latencies. We note that it was only for tests that utilized the full 1Gbps network link and only at latencies less than 100 ms that we observed the poor NFS performance. We did not test WebDAV write speeds during our preliminary comparison so we cannot compare this behavior to our preliminary tests. The rest of the observed results, however, are fairly consistent with our preliminary comparison results.

5.3.2 Write, 25/3 network speed

For all tests in this group, we observe NFS writing a file at approximately the same speed as the data attic for all file sizes tested, as shown in Figures 5.7, 5.8, 5.9, 5.10, 5.11, and 5.12.

5.3.3 Write, 100/20 network speed

For all tests in this group, it appears that NFS writes a file slightly slower than the data attic. For all of the tests, however, the error bars on the graph are overlapping and we conclude that NFS writes the files at the same speed as the data attic, as shown in Figures 5.13, 5.14, 5.15, 5.16, 5.17, and 5.18. One interesting item we noticed during this group of tests is that the asynchronous writes to the data attic tended to be faster than writing to the local disk. We do not know why this occurred.

5.3.4 Write, 1000/1000 network speed

This group of tests should produce the same results as the group without bandwidth restrictions because the network between the client and server was on a gigabit link. Figures 5.19, 5.20, 5.21, 5.22, 5.23, and 5.24 confirm that the results are identical to the tests with no bandwidth restrictions.

5.3.5 Read, no bandwidth restrictions

For reads with an added latency less than 50ms, it appears that NFS and the data attic perform similarly for files less than 500 megabytes. For larger files, however, NFS tended to perform slightly faster, as shown in Figures 5.25, 5.26, 5.27, and 5.28. For reads with an added latency of 50ms or greater, NFS tended to read all files slightly faster than the data attic. This is shown in Figures 5.29 and 5.30.

5.3.6 Read, 25/3 network speed

For all read tests at this network speed, the data attic tended to read slightly slower than NFS, as shown in Figures 5.31, 5.32, 5.33, 5.34, 5.35, and 5.36.

5.3.7 Read, 100/20 network speed

For all read tests at this network speed, the data attic tended to read slightly slower than NFS, as shown in Figures 5.37, 5.38, 5.39, 5.40, 5.41, and 5.42.

5.3.8 Read, 1000/1000 network speed

This group of tests should produce the same results as the group without bandwidth restrictions because the network between the client and server was on a gigabit link. Figures 5.43, 5.44, 5.45, 5.46, 5.47, and 5.48 confirm that the results are identical to the tests with no bandwidth restrictions.

5.3.9 Summary

We were impressed that the data attic performed similarly to or better than NFS in all write comparisons. We conclude that for writing, speed will not be a limiting factor in a person's decision to adopt the data attic. On slower networks the data attic performed similarly to NFS in the read comparisons; NFS performed better relative to the data attic as the network got faster. We do not believe that the difference in read times is significant enough to prevent adoption of the data attic.

Figure 5.1: Write comparison with no added latency and no bandwidth restriction.

Figure 5.2: Write comparison with 5ms added RTT latency and no bandwidth restriction.

Figure 5.3: Write comparison with 10ms added RTT latency and no bandwidth restriction.

Figure 5.4: Write comparison with 20ms added RTT latency and no bandwidth restriction.

Figure 5.5: Write comparison with 50ms added RTT latency and no bandwidth restriction.

Figure 5.6: Write comparison with 100ms added RTT latency and no bandwidth restriction.

Figure 5.7: Write comparison with no added latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.8: Write comparison with 5ms added RTT latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.9: Write comparison with 10ms added RTT latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.10: Write comparison with 20ms added RTT latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.11: Write comparison with 50ms added RTT latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.12: Write comparison with 100ms added RTT latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.13: Write comparison with no added latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.14: Write comparison with 5ms added RTT latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.15: Write comparison with 10ms added RTT latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.16: Write comparison with 20ms added RTT latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.17: Write comparison with 50ms added RTT latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.18: Write comparison with 100ms added RTT latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.19: Write comparison with no added latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

Figure 5.20: Write comparison with 5ms added RTT latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

Figure 5.21: Write comparison with 10ms added RTT latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

Figure 5.22: Write comparison with 20ms added RTT latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

Figure 5.23: Write comparison with 50ms added RTT latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

Figure 5.24: Write comparison with 100ms added RTT latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

Figure 5.25: Read comparison with no added latency and no bandwidth restriction.

Figure 5.26: Read comparison with 5ms added RTT latency and no bandwidth restriction.

Figure 5.27: Read comparison with 10ms added RTT latency and no bandwidth restriction.

Figure 5.28: Read comparison with 20ms added RTT latency and no bandwidth restriction.

Figure 5.29: Read comparison with 50ms added RTT latency and no bandwidth restriction.

Figure 5.30: Read comparison with 100ms added RTT latency and no bandwidth restriction.

Figure 5.31: Read comparison with no added latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.32: Read comparison with 5ms added RTT latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.33: Read comparison with 10ms added RTT latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.34: Read comparison with 20ms added RTT latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.35: Read comparison with 50ms added RTT latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.36: Read comparison with 100ms added RTT latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.37: Read comparison with no added latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.38: Read comparison with 5ms added RTT latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.39: Read comparison with 10ms added RTT latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.40: Read comparison with 20ms added RTT latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.41: Read comparison with 50ms added RTT latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.42: Read comparison with 100ms added RTT latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.43: Read comparison with no added latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

Figure 5.44: Read comparison with 5ms added RTT latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

Figure 5.45: Read comparison with 10ms added RTT latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

Figure 5.46: Read comparison with 20ms added RTT latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

Figure 5.47: Read comparison with 50ms added RTT latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

Figure 5.48: Read comparison with 100ms added RTT latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

5.4 Gzip Test

To demonstrate the data attic's ability to work with unmodified applications, we compiled gzip 1.2.4 with our data attic library. We attempted to use the latest version of gzip (1.9), but version 1.9 uses functions such as openat that we did not build into our data attic library. With gzip version 1.2.4, we were able to compile gzip against our data attic library and run a performance test.

We wrote a script that created eight random text files of sizes ranging from 1 megabyte to 1 gigabyte. We ran three tests using these files:

First, we put these files on the data attic server. We compiled gzip 1.2.4 with our data attic library and compressed each of the eight random text files. The files were on the data attic server only, and not stored locally on the test client. The modified gzip program downloaded each file from the data attic server, compressed each, then uploaded the compressed version to the data attic server.

Next, we moved the eight random text files to the client machine. Again using the modified gzip program, we compressed each of the eight random text files. This time, gzip already had each file locally, so it compressed each file and uploaded the compressed version to the data attic server.

Last, as a baseline, we used gzip 1.2.4 compiled normally, without the data attic library, to compress each file. The files were not transmitted to or from the data attic server for this test.

We repeated the aforementioned gzip tests five times each, for the following

bandwidths: 25mbps download/3mbps upload, 100mbps download/20mbps upload, and 1000mbps download/1000mbps upload, all with 50ms of RTT latency. The bandwidth restrictions are from the perspective of the data attic server, so a 3 up/25 down link means that the data attic server has an internet connection from its ISP that is capable of 3mbps uploading and 25mbps downloading.
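The file generation and the baseline compression leg of these tests can be sketched as follows. We use Python's gzip module here as a stand-in for the gzip 1.2.4 binary the thesis actually benchmarked, so the timings are only illustrative:

```python
# Sketch of the gzip experiment's building blocks: create a random text
# file of a given size and time its compression. Python's gzip module
# substitutes for the gzip 1.2.4 binary used in the real tests.

import gzip
import random
import string
import time

def make_random_text_file(path, size_bytes):
    """Write size_bytes of random printable text to path."""
    chars = string.ascii_letters + string.digits + " \n"
    with open(path, "w") as f:
        f.write("".join(random.choice(chars) for _ in range(size_bytes)))

def timed_gzip(src, dst):
    """Compress src into dst, returning elapsed wall-clock seconds."""
    start = time.perf_counter()
    with open(src, "rb") as fin, gzip.open(dst, "wb") as fout:
        fout.write(fin.read())
    return time.perf_counter() - start
```

The two data attic legs of the experiment would wrap `timed_gzip` with a download before compression and an upload afterward, mirroring the three trials described above.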

5.4.1 Gzip Test Results

The following graphs show the results for our gzip comparisons. Because we repeated each test five times, the line on the graph is the mean for each sample, and there are error bars on each data point showing the minimum and maximum recorded value. The variability was very low so most of the error bars are not visible.

As shown in Figure 5.49, we see that downloading the random data files to the client takes the longest amount of time in the compression process. The upload link from the data attic server is only 3mbps, so it makes sense that when the client needs to first download the random text files, it takes significantly longer than when the client only needs to upload the compressed files.

As shown in Figure 5.50, we see that all three of the gzip comparisons have completion times more similar to each other than in the previous trial. When the client needs to first download the random text files, it still takes longer than when it only needs to upload the compressed versions.

Lastly, as shown in Figure 5.51, we see that all three of the gzip comparisons have completion times even more similar to each other than in the previous trials.

This suggests that the most time-consuming part of the gzip process is the compression, and as the internet speed gets faster, the data attic adds minimal overhead to the gzip process.

Figure 5.49: Gzip comparison with 50ms added RTT latency and 25 mbps download, 3 mbps upload bandwidth restriction.

Figure 5.50: Gzip comparison with 50ms added RTT latency and 100 mbps download, 20 mbps upload bandwidth restriction.

Figure 5.51: Gzip comparison with 50ms added RTT latency and 1000 mbps download, 1000 mbps upload bandwidth restriction.

6 Future Work

The system design described in this thesis lays the foundation for a viable Data Attic, although further features could be implemented to provide additional functionality for users.

6.1 QR Codes for Sharing Credentials

Earlier, in Figure 4.9, we showed the process of granting a collaborator access to a data attic. Rather than manually sharing a username, password, IP address, and port number with a collaborator, who then needs to enter that information into their data attic client, we could use QR codes to share that data. Upon creating a new user, the data attic server management interface could display a QR code that the data attic owner could print or send to the collaborator. This is demonstrated in Figure 6.1.

Figure 6.1: The process to add a new collaborator with a QR code.

6.2 Federation

A system to federate among multiple data attics could provide a useful way for users to share access to directories on their data attics. Without federation, if two users each own their own Data Attic, when User 1 wants to share a folder with User 2, User 2 needs to log into User 1's Data Attic to access the shared folder using the credentials that User 1 provides. User 2 then needs to log into two separate Data Attics to access all of his or her files: his or her own Data Attic and User 1's Data Attic. With federation, when User 1 shares a folder with User 2, that folder is added to User 2's Data Attic. The two Data Attic servers communicate with each other to keep the shared folder synchronized between User 1's and User 2's Data Attics. Now, User 2 only needs to log into his or her own Data Attic to access all of his or her files.

6.3 Synchronization and Backup Offsite

One major reason a user may choose to use a Data Attic is to maintain control over the backup process for his or her data. A user may, for example, want to ensure that the data stored in the Data Attic is backed up every night, or that there is a daily snapshot of all the data in case a file is mistakenly deleted. We could implement a feature that automatically backs up all of the data stored in the data attic on a scheduled interval. This backup could be integrated with a cloud storage provider such as Amazon S3, with NFS over the network, or even with another Data Attic server.

6.4 Synchronization Client for Personal Computer

We built the Data Attic Client Library to demonstrate the Data Attic integrating with a computer program. Another useful Data Attic client utility would be a synchronization client for a personal computer. The synchronization client could act similarly to the Dropbox synchronization client and could synchronize all of the files between a user's Data Attic and their computer.

7 Conclusion

In this thesis we have described the architecture of a Data Attic that allows users to retain control of their data when using external applications (such as SaaS) or when letting collaborators create or utilize user data (e.g., collecting all user medical history generated across various health care providers in one place under user control). We have provided an example implementation and evaluated its performance. We have implemented a Data Attic server, a Data Attic Client Daemon, and a Data Attic Client Library, and demonstrated the performance of a sample client uploading and downloading data over the internet.

We have implemented a proof-of-concept Data Attic server using Apache and the mod_dav Apache plugin. The proof-of-concept server allows remote access over the internet with the WebDAV protocol.

We have implemented a proof-of-concept Data Attic client using two main components: the atticd data attic daemon and the Data Attic Client Library. The atticd daemon runs on a client computer and interfaces with a Data Attic server, allowing uploading and downloading of files between the client machine and the server. The Data Attic Client Library is a C library that can be included with an existing C program; the program can be recompiled so that all files are synchronized with the Data Attic server. The Data Attic Client Library communicates with the atticd data attic client daemon, which in turn communicates with the Data Attic server to upload and download files.

We performed two performance tests. The first measured transferring files of various sizes across the internet and compared the Data Attic to NFS; it concluded that NFS and the Data Attic showed similar overall upload and download performance. The second measured the time it takes to gzip a file located on the Data Attic at various internet speeds; it concluded that the processing overhead due to the Data Attic (downloading and uploading the required data over the internet) becomes small relative to the time spent on the compression itself as internet speed increases. Both tests support the theory that the Data Attic is a viable solution for a user who wishes to keep data on premises and allow the sharing of certain data over the internet.

A Performance Test Data

Table A.1: Execution times of performance test script comparing data attic to NFS. Operation: write. RTT latency added: 0 ms. Upload bandwidth: Max mbps. Download bandwidth: Max mbps.

File Size (mb)   Trial 1 (sec.)   Trial 2 (sec.)   Trial 3 (sec.)   Trial 4 (sec.)   Trial 5 (sec.)

Local only
1        0.059358    0.050298    0.059480    0.059834    0.054829
2        0.124036    0.127822    0.112225    0.119757    0.120033
5        0.302653    0.287406    0.299972    0.319449    0.318047
20       1.169976    1.026058    1.016456    0.926263    0.918519
50       2.556964    2.416031    2.380915    2.163834    2.546807
100      5.063996    4.598417    4.838216    4.503420    4.459823
500      20.591808   21.027002   21.055264   19.323055   18.025129
1000     37.403057   39.131771   34.888695   38.356434   39.004379

Data Attic, Synchronous
1        0.153231    0.163815    0.162754    0.167221    0.157563
2        0.202295    0.185804    0.184091    0.192167    0.194342
5        0.305823    0.407475    0.294274    0.365634    0.413257
20       1.090701    1.171379    1.646899    1.372628    1.182264
50       2.729394    2.547476    3.417260    2.996106    2.875734
100      5.488574    5.823744    5.867588    5.680115    5.937920
500      24.639418   23.910990   23.321096   23.363100   24.156494
1000     50.482327   45.306938   46.196098   47.689106   44.159363

Data Attic, Asynchronous
1        0.164380    0.110407    0.200688    0.131313    0.128704
2        0.174737    0.182488    0.177866    0.176875    0.172495
5        0.253061    0.245606    0.362106    0.297579    0.346903
20       1.016745    1.141299    1.119020    1.012184    1.152835
50       2.794158    2.565769    2.733261    2.519390    2.457881
100      4.928754    4.942327    4.695787    4.658017    5.226766
500      20.615452   21.713570   21.169800   19.473696   21.150778
1000     40.854912   37.752930   36.832359   39.814625   39.133896

NFS
1        0.144061    0.178759    0.149599    0.167216    0.138731
2        0.195938    0.187958    0.238347    0.219863    0.220554
5        0.470792    0.427680    0.519685    0.399754    0.479040
20       1.479899    1.538878    1.723194    1.806637    1.526892
50       4.152783    4.032967    4.363979    4.155839    3.956857
100      7.601775    7.797457    8.480111    7.936622    7.719589
500      31.720402   31.461967   35.003040   34.275803   32.747318
1000     61.622421   60.554527   66.881020   62.792801   62.983868

Table A.2: Execution times of performance test script comparing data attic to NFS. Operation: write. RTT latency added: 5 ms. Upload bandwidth: Max mbps. Download bandwidth: Max mbps.

File Size (mb)   Trial 1 (sec.)   Trial 2 (sec.)   Trial 3 (sec.)   Trial 4 (sec.)   Trial 5 (sec.)

Local only
1        0.056803    0.064498    0.061405    0.056621    0.063424
2        0.124042    0.119026    0.107787    0.119767    0.097176
5        0.204136    0.197117    0.307443    0.310721    0.299485
20       0.940965    0.920404    1.269596    0.900401    0.871934
50       2.492426    2.192241    2.763786    2.789302    2.214220
100      4.653791    4.318310    4.729844    5.014293    4.491293
500      20.939953   18.643089   21.246750   20.715361   22.069550
1000     37.175110   38.012299   43.304039   40.307365   42.469383

Data Attic, Synchronous
1        0.268543    0.249687    0.255437    0.249873    0.255637
2        0.286905    0.281559    0.245921    0.248735    0.283202
5        0.465435    0.480871    0.472705    0.490309    0.470964
20       1.365120    1.254090    1.351791    1.156294    1.494564
50       2.919111    3.391150    3.147485    3.282971    3.276717
100      5.072980    6.036745    5.571825    5.702149    5.919055
500      24.741882   24.361113   24.469622   23.926434   23.984015
1000     46.125797   46.841984   50.599285   47.077404   47.562183

Data Attic, Asynchronous
1        0.166524    0.151221    0.149569    0.157908    0.166509
2        0.195632    0.196557    0.205604    0.194701    0.208311
5        0.443137    0.388992    0.395757    0.401938    0.317314
20       0.883292    1.182994    1.030039    1.111356    1.107729
50       2.409962    2.972433    2.524377    2.934837    2.617103
100      4.852707    5.381643    4.958538    4.721032    5.112540
500      21.599274   20.702740   21.228062   20.960527   20.691130
1000     34.623642   41.056538   39.587627   35.241146   35.390411

NFS
1        0.436107    0.185858    0.187702    0.189238    0.187186
2        0.435655    0.251153    0.223833    0.254625    0.249205
5        0.799507    0.511552    0.519635    0.463704    0.507683
20       2.158339    1.611418    1.714731    1.758430    1.629272
50       4.719280    4.008450    4.188687    4.106264    4.104477
100      8.164529    7.429842    7.802768    7.817723    7.664912
500      31.680065   31.581764   33.744621   32.434868   34.759747
1000     58.391613   62.533451   61.702156   62.129711   60.471939

Table A.3: Execution times of performance test script comparing data attic to NFS. Operation: write. RTT latency added: 10 ms. Upload bandwidth: Max mbps. Download bandwidth: Max mbps.

File Size (mb)   Trial 1 (sec.)   Trial 2 (sec.)   Trial 3 (sec.)   Trial 4 (sec.)   Trial 5 (sec.)

Local only
1        0.062289    0.057211    0.062262    0.055847    0.061009
2        0.124302    0.124064    0.088049    0.118629    0.125533
5        0.280204    0.242481    0.190899    0.268710    0.193904
20       0.880073    1.150134    0.945661    0.931855    0.962679
50       2.673040    2.260882    2.104868    2.304948    2.335115
100      5.017885    4.733715    4.674635    4.685786    5.054148
500      20.455975   21.091423   19.114882   21.480022   21.482222
1000     34.361191   34.740173   34.160118   35.173038   34.664593

Data Attic, Synchronous
1        0.350606    0.460066    0.357468    0.348381    0.351308
2        0.386725    0.379519    0.392787    0.393980    0.345788
5        0.484693    0.587716    0.648646    0.637456    0.544965
20       1.283927    1.316337    1.555949    1.458863    1.503701
50       3.050696    3.074695    3.155631    2.978532    3.192526
100      5.935033    5.881456    6.367491    5.491650    6.024215
500      23.685532   23.535019   26.974308   24.325375   23.976620
1000     47.981426   46.733959   53.199257   46.485008   48.353699

Data Attic, Asynchronous
1        0.210790    0.169347    0.179406    0.187189    0.183599
2        0.198116    0.195852    0.208886    0.205809    0.190403
5        0.403609    0.324024    0.311561    0.394666    0.397819
20       1.255176    0.999578    1.125873    1.158493    1.129046
50       2.748521    2.580289    2.793062    2.438677    2.419918
100      4.810205    5.472314    5.017850    4.833544    4.513020
500      20.869192   20.433304   20.692381   21.282875   21.069126
1000     35.567474   34.434208   34.897160   37.130192   39.515640

NFS
1        0.861511    0.242413    0.252726    0.274107    0.252414
2        0.689639    0.257709    0.271807    0.242527    0.288823
5        1.165044    0.549796    0.430461    0.512853    0.455700
20       3.031223    1.452523    1.767729    1.805507    1.642588
50       5.842335    4.008316    4.019854    4.124762    4.240124
100      9.595013    8.060550    7.905543    7.765759    8.600776
500      33.819244   33.252316   32.469379   32.565533   33.140694
1000     65.204926   62.016422   63.412285   61.704327   62.891926

Table A.4: Execution times of performance test script comparing data attic to NFS. Operation: write. RTT latency added: 20 ms. Upload bandwidth: Max mbps. Download bandwidth: Max mbps.

File Size (mb)   Trial 1 (sec.)   Trial 2 (sec.)   Trial 3 (sec.)   Trial 4 (sec.)   Trial 5 (sec.)

Local only
1        0.058624    0.056584    0.064792    0.051203    0.067358
2        0.091804    0.119421    0.118290    0.115797    0.116060
5        0.299461    0.196780    0.299481    0.307435    0.200592
20       0.850404    0.901233    1.086992    1.146849    1.004008
50       2.449651    2.339526    2.589016    2.997252    2.424987
100      5.147657    4.946518    4.865900    4.671009    4.451906
500      21.154154   21.327976   21.036798   20.422424   21.417795
1000     42.435791   36.069427   38.968227   37.902107   36.365852

Data Attic, Synchronous
1        0.538537    0.543005    0.536950    0.539511    0.529852
2        0.548905    0.570134    0.548370    0.609030    0.558564
5        0.736761    0.816384    0.794671    0.815025    0.811052
20       1.713411    1.704466    1.815093    1.581801    1.495889
50       3.126741    3.498632    3.390552    3.524629    3.390805
100      6.756967    6.075339    6.608936    6.100894    6.204084
500      26.018978   23.161930   26.458523   27.052792   23.017128
1000     44.390079   49.578835   47.891777   50.032127   50.647594

Data Attic, Asynchronous
1        0.247051    0.247978    0.247689    0.237675    0.234786
2        0.275502    0.266176    0.263588    0.260599    0.261731
5        0.376040    0.385830    0.389725    0.420376    0.360012
20       1.358376    0.992291    1.071219    1.014147    1.082563
50       2.603354    2.534506    2.419747    3.025571    2.434174
100      5.098548    4.888424    5.041903    5.244440    5.166879
500      20.772543   21.736614   20.456924   20.603933   20.506628
1000     36.047367   41.741364   42.720676   34.663681   39.355865

NFS
1        1.188848    0.342096    0.352335    0.340182    0.357684
2        1.123065    0.348438    0.339625    0.334150    0.339801
5        1.774435    0.499605    0.607562    0.625172    0.491686
20       4.460316    1.958314    1.594658    1.793023    1.882620
50       7.545892    4.452210    4.759741    4.345419    4.560389
100      11.392103   8.428973    8.137911    8.268535    8.017531
500      34.206490   32.393261   32.449333   33.812683   32.985161
1000     60.454117   64.055008   64.127060   64.103333   64.545227

Table A.5: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 50 ms. Upload bandwidth: Max Mbps. Download bandwidth: Max Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.062197    0.058814    0.057857    0.056696    0.068757
    2    0.125418    0.119722    0.119769    0.119753    0.124623
    5    0.308156    0.243801    0.306478    0.307453    0.255570
   20    1.160465    0.890125    1.090890    0.968227    0.916384
   50    2.514079    2.323509    2.487222    2.285532    2.025977
  100    4.818583    4.623064    4.741066    4.794693    5.014274
  500   20.963043   21.646259   20.907646   21.630335   21.178646
 1000   40.241375   35.649872   34.865170   40.706795   35.372372

Data Attic, Synchronous
    1    1.104776    1.108374    1.117226    1.113022    1.160845
    2    1.181237    1.183804    1.176323    1.153678    1.171276
    5    1.365758    1.418047    1.450209    1.469064    1.479620
   20    2.663469    2.432061    2.679507    2.574066    2.679834
   50    4.479808    5.069049    4.816079    4.580398    4.604435
  100    8.517258    8.474209    8.436048    8.411600    7.982908
  500   31.482016   31.130037   31.976063   31.215647   32.447514
 1000   58.353779   62.060608   56.082100   56.572468   59.939465

Data Attic, Asynchronous
    1    0.444924    0.438579    0.421349    0.418477    0.445999
    2    0.457240    0.470269    0.459066    0.451759    0.434436
    5    0.638545    0.550627    0.616686    0.545865    0.564020
   20    1.341268    1.191869    1.217321    1.163190    1.253343
   50    2.801208    2.464995    2.740358    2.593966    2.726135
  100    5.647057    4.848766    4.786773    4.734175    4.802253
  500   23.738216   21.201855   20.579885   20.550694   20.083406
 1000   44.983826   37.881779   33.428570   32.425793   34.685604

NFS
    1    0.674573    0.683974    0.678830    0.683830    0.690553
    2    0.713504    0.556977    0.559363    0.574983    0.552295
    5    1.348956    0.785966    0.887341    0.873554    0.811407
   20    4.118844    2.184663    2.454101    2.391694    2.298449
   50    9.447221    5.244682    5.335679    5.074458    5.191660
  100   14.818768    9.882912    9.676320    9.308350    9.521309
  500   36.266777   36.729523   36.250107   37.105427   36.960999
 1000   69.990120   70.929169   72.176300   70.161774   68.794403

Table A.6: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 100 ms. Upload bandwidth: Max Mbps. Download bandwidth: Max Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.066841    0.068948    0.055924    0.065692    0.054083
    2    0.115781    0.104051    0.084027    0.096096    0.156078
    5    0.307408    0.242963    0.268160    0.220868    0.192596
   20    0.883873    0.878390    0.893258    0.970176    1.092032
   50    2.521614    2.544235    2.135910    2.154412    2.156851
  100    4.597057    4.725768    5.026237    4.656465    5.210831
  500   20.757526   21.587887   21.741585   18.228508   20.438696
 1000   39.502975   34.977772   34.883961   36.189476   36.857792

Data Attic, Synchronous
    1    2.092135    2.053705    2.062195    2.061936    2.068773
    2    2.192832    2.187956    2.189246    2.183138    2.154714
    5    2.455205    2.451184    2.563529    2.509922    2.529629
   20    4.079446    4.025866    4.243835    4.002354    4.164248
   50    6.752228    7.045014    6.762201    6.857125    6.861023
  100   10.957293   11.482757   11.033351   11.276647   11.434264
  500   40.220028   43.452293   42.940006   42.715233   42.561085
 1000   78.139000   82.682365   81.065872   80.475151   82.041283

Data Attic, Asynchronous
    1    0.723854    0.725828    0.728887    0.736342    0.734196
    2    0.736775    0.777215    0.748177    0.776690    0.806175
    5    0.854432    0.880279    0.954888    0.839980    0.909246
   20    1.513223    1.525832    1.506461    1.475685    1.593266
   50    2.681642    2.682925    2.735415    2.559718    2.627210
  100    4.686013    4.630188    4.692730    4.602915    4.647031
  500   19.320793   19.581120   19.231167   19.920399   20.060621
 1000   32.014320   30.538605   32.729183   35.983223   31.925053

NFS
    1    1.801550    1.253546    1.242466    1.231984    1.236941
    2    2.506887    0.883153    0.883243    0.867301    0.915339
    5    4.983238    1.137858    1.283050    1.138999    1.160529
   20    9.244273    2.990625    3.129293    3.325027    3.368984
   50    9.186757    6.958145    6.886787    6.949972    6.978370
  100   12.496395   12.439646   12.725327   12.305946   12.212942
  500   40.082359   43.930676   44.431755   41.593914   44.943161
 1000   80.536346   83.142487   80.996948   80.058029   82.484650

Table A.7: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 0 ms. Upload bandwidth: 3 Mbps. Download bandwidth: 25 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.059160    0.065407    0.061898    0.058962    0.059971
    2    0.112008    0.120031    0.126933    0.121273    0.123752
    5    0.298270    0.252149    0.307439    0.288157    0.291497
   20    0.930286    0.876959    0.994252    1.000425    0.974147
   50    2.560969    2.244278    2.228899    2.259146    2.343491
  100    4.629745    4.490677    4.509649    4.527399    5.301697
  500   21.407793   21.298685   21.443720   18.150545   20.823364
 1000   34.977604   35.330933   40.138454   36.429127   37.533478

Data Attic, Synchronous
    1    0.513599    0.518496    0.517675    0.508089    0.514015
    2    0.828171    0.859626    0.846409    0.828570    0.869345
    5    2.044849    1.969545    2.040643    2.041303    1.954202
   20    7.711234    7.813971    7.590280    7.971976    7.552229
   50   19.398729   19.253099   19.153027   19.281237   19.313932
  100   38.584671   38.628700   38.511486   38.606628   38.252838
  500  186.105499  187.421356  187.995026  187.335770  186.808243
 1000  374.390228  374.025024  375.100006  374.388336  374.252136

Data Attic, Asynchronous
    1    0.139073    0.164202    0.153716    0.150218    0.133700
    2    0.181539    0.205938    0.206317    0.227272    0.183882
    5    0.424082    0.375691    0.384501    0.386020    0.432447
   20    1.043801    1.062776    1.045861    1.128333    1.026541
   50    2.339534    2.216968    2.353451    2.332667    2.226599
  100    4.347022    4.312332    4.302952    4.366248    4.364864
  500   17.247473   17.450966   17.207855   16.946457   17.401468
 1000   33.466423   33.080570   33.615414   33.034271   32.280621

NFS
    1    0.549135    0.472560    0.467112    0.462652    0.527541
    2    0.843375    0.886656    0.895375    0.836480    0.879155
    5    2.134504    2.014874    2.030283    2.042142    2.114084
   20    8.409149    8.376854    8.144245    8.152560    8.296658
   50   20.108383   20.312960   20.228294   20.587002   20.576214
  100   38.832008   38.191814   38.348129   38.244186   39.394577
  500  188.281311  190.045242  189.565857  189.798782  190.184296
 1000  375.092041  377.718170  374.924591  374.069672  376.107300

Table A.8: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 5 ms. Upload bandwidth: 3 Mbps. Download bandwidth: 25 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.061506    0.064098    0.066714    0.069329    0.060506
    2    0.112015    0.120066    0.125306    0.127763    0.120065
    5    0.318210    0.300155    0.299458    0.223635    0.206894
   20    0.918414    0.852395    1.015905    0.958207    1.017682
   50    2.260897    2.670494    2.413551    2.254607    2.436966
  100    4.789893    4.672313    4.854598    4.648166    4.493721
  500   21.094772   21.011793   21.198767   18.619978   21.151129
 1000   40.354057   35.029671   39.758682   36.820740   41.340054

Data Attic, Synchronous
    1    0.577748    0.569662    0.671236    0.618632    0.567527
    2    0.957250    0.970823    0.963330    0.907987    0.953898
    5    1.999456    2.115459    2.097185    2.029820    2.064694
   20    7.904115    7.921248    7.848856    7.613979    7.776449
   50   19.383394   19.166733   19.016949   19.507278   19.298571
  100   39.392094   38.698631   38.566799   38.985424   38.018444
  500  188.307312  188.229767  187.172516  187.974838  187.498459
 1000  378.072968  373.798279  374.600037  373.994476  375.445526

Data Attic, Asynchronous
    1    0.157421    0.174964    0.166830    0.173477    0.155370
    2    0.328843    0.292169    0.359384    0.343622    0.324799
    5    0.474957    0.464468    0.439148    0.481101    0.473378
   20    1.147676    1.140856    1.212361    1.088833    1.188019
   50    2.258274    2.333692    2.306527    2.343745    2.271082
  100    4.356986    4.305491    4.373015    4.356146    4.329666
  500   16.698109   17.203709   16.518030   17.078497   17.185699
 1000   35.295677   33.455936   33.320545   35.165844   34.063663

NFS
    1    0.673499    0.656152    0.669205    0.586221    0.598225
    2    0.964058    0.927239    0.925286    0.899847    0.928082
    5    2.118284    2.118060    2.106148    2.078412    2.042323
   20    7.995384    8.140414    8.312229    8.237212    8.300911
   50   20.237368   20.432785   20.368788   20.747604   20.536644
  100   38.811760   37.948109   38.257870   39.730282   39.477482
  500  188.383133  188.557648  190.149765  187.086914  187.947403
 1000  372.677490  374.870331  374.829224  374.630585  375.117676

Table A.9: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 10 ms. Upload bandwidth: 3 Mbps. Download bandwidth: 25 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.061887    0.060639    0.060756    0.060189    0.054686
    2    0.140071    0.118105    0.123768    0.118959    0.124056
    5    0.292971    0.299479    0.299477    0.311435    0.284185
   20    0.927605    1.055146    0.894349    0.890354    1.092499
   50    2.171213    2.641797    2.271693    2.231733    2.296894
  100    4.759479    4.677007    4.942486    4.854929    4.633808
  500   21.593319   21.559975   21.946140   21.489904   21.154236
 1000   36.469036   35.549156   36.436756   38.612934   35.335239

Data Attic, Synchronous
    1    0.649859    0.648797    0.649643    0.650189    0.658136
    2    0.997365    1.005446    0.977198    0.996723    0.986608
    5    2.074707    2.165501    2.199616    2.080542    2.182279
   20    8.088141    8.003193    7.850478    7.872001    7.869211
   50   19.711594   19.558907   19.519957   19.375296   19.780363
  100   38.489456   39.189034   38.774036   38.726791   38.257961
  500  187.670578  188.186127  187.391922  187.114288  190.153458
 1000  373.900299  373.068909  374.208130  374.576141  375.765747

Data Attic, Asynchronous
    1    0.196035    0.218820    0.241061    0.235804    0.184731
    2    0.223854    0.236136    0.219932    0.240415    0.251786
    5    0.427282    0.399813    0.414969    0.451947    0.412156
   20    1.027134    1.079316    1.040659    1.225524    1.036552
   50    2.377158    2.355784    2.250728    2.858298    2.292293
  100    4.246502    4.361794    4.284990    5.337105    4.486803
  500   17.210115   17.365477   17.059244   19.584217   16.778830
 1000   33.146938   33.187744   34.183983   39.039036   33.826366

NFS
    1    0.852982    0.799081    0.911839    0.796227    0.807136
    2    0.899300    0.903297    0.943363    0.923324    0.942670
    5    2.185177    2.178331    2.169386    2.214217    2.076907
   20    8.402326    8.255925    8.164455    8.177412    8.268407
   50   20.634926   20.132677   20.221863   20.472317   20.537420
  100   38.360119   39.810875   37.912483   38.659569   39.135490
  500  188.552902  189.204697  189.076935  188.452866  188.134552
 1000  373.654968  374.993683  374.843506  375.824005  374.447327

Table A.10: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 20 ms. Upload bandwidth: 3 Mbps. Download bandwidth: 25 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.058282    0.063662    0.061783    0.062409    0.055851
    2    0.127745    0.116053    0.108061    0.103817    0.117170
    5    0.263875    0.248171    0.306880    0.287663    0.295775
   20    0.941882    0.996840    0.925715    0.994020    1.141557
   50    2.319578    2.188472    2.292922    2.207764    2.523151
  100    4.814700    4.726197    4.805944    4.734840    4.982669
  500   21.278948   18.428196   21.387627   21.506289   20.691526
 1000   38.174255   36.600750   35.357918   35.404846   39.590504

Data Attic, Synchronous
    1    0.809138    0.794456    0.822312    0.791681    0.801569
    2    1.144361    1.146674    1.146833    1.138711    1.153882
    5    2.334117    2.282834    2.328169    2.333872    2.230578
   20    8.034048    7.853633    8.100262    8.192041    8.120604
   50   19.360661   19.464352   19.508865   19.155380   19.429312
  100   38.374084   38.992069   38.863083   38.617229   38.395287
  500  186.964630  188.339432  186.997635  187.314392  188.264313
 1000  373.779602  374.349396  377.718414  374.566620  375.097473

Data Attic, Asynchronous
    1    0.257841    0.257138    0.345493    0.265651    0.267009
    2    0.305688    0.264893    0.271600    0.266999    0.296949
    5    0.455443    0.518579    0.578864    0.556484    0.484727
   20    1.121801    1.073882    1.189894    1.070428    1.125993
   50    2.342569    2.318602    2.922625    2.292073    2.311047
  100    4.356382    4.256178    4.671367    4.329873    4.343600
  500   16.733952   16.708139   16.093994   17.107922   16.680353
 1000   33.373035   32.827011   33.326260   33.588421   33.030987

NFS
    1    1.361777    1.365700    1.419938    1.342490    1.366991
    2    1.166929    1.218855    1.142947    1.147065    1.151114
    5    2.286862    2.250197    2.142225    2.125206    2.269330
   20    8.323359    8.413031    8.180858    8.346460    8.210312
   50   20.348944   20.263922   20.388277   20.359308   20.551292
  100   39.539330   38.659771   40.058151   38.100468   39.107628
  500  188.852264  189.075043  188.091034  189.594650  187.925049
 1000  372.964661  374.035400  375.358215  376.136078  376.896667

Table A.11: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 50 ms. Upload bandwidth: 3 Mbps. Download bandwidth: 25 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.057476    0.061190    0.049584    0.065812    0.057165
    2    0.084044    0.096017    0.119767    0.116074    0.115790
    5    0.300178    0.297834    0.294449    0.300124    0.303676
   20    0.921361    0.958781    0.931292    1.135850    1.108482
   50    2.495984    2.404986    2.275603    2.076843    2.420926
  100    4.853833    4.744818    4.848194    4.251094    4.622640
  500   21.497971   21.933388   20.626045   21.953068   21.418205
 1000   40.332527   40.551273   35.031490   40.428936   40.413059

Data Attic, Synchronous
    1    1.343448    1.287213    1.284136    1.275075    1.323363
    2    1.629548    1.672590    1.628559    1.646525    1.595575
    5    2.804063    2.825353    2.811393    2.715196    2.828766
   20    8.552694    8.322612    8.487167    8.624421    8.505118
   50   19.867884   20.014042   20.277176   20.066229   20.011110
  100   39.361050   39.049805   39.219948   39.185780   38.911835
  500  191.160980  186.839294  187.605896  188.736084  188.763855
 1000  374.014160  373.372894  375.538147  374.675018  373.887970

Data Attic, Asynchronous
    1    0.452499    0.443640    0.433180    0.432938    0.486451
    2    0.470520    0.442476    0.440572    0.494173    0.460683
    5    0.658498    0.639138    0.645964    0.710982    0.630049
   20    1.316932    1.340321    1.290882    1.205318    1.347991
   50    2.280695    2.312294    2.314508    2.645152    2.266545
  100    4.289359    4.242897    4.209763    4.351503    4.273125
  500   16.876667   16.402552   16.713985   16.737932   16.556805
 1000   33.482918   33.214027   34.299908   33.103516   33.448917

NFS
    1    2.994738    3.005516    3.098603    3.090075    3.006593
    2    2.477969    2.513486    2.483591    2.489699    2.484166
    5    3.213435    3.227553    3.109102    3.225205    3.110682
   20    8.413051    8.701352    8.667971    8.429187    8.643976
   50   20.867121   20.742886   20.676439   20.387228   20.560585
  100   39.226803   40.151413   39.694393   38.244461   38.575550
  500  190.215652  188.398788  190.231308  190.040253  188.806580
 1000  377.412384  373.116364  374.258484  376.927795  376.216827

Table A.12: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 100 ms. Upload bandwidth: 3 Mbps. Download bandwidth: 25 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.065002    0.062754    0.053485    0.050809    0.057808
    2    0.119753    0.095829    0.113113    0.113624    0.116952
    5    0.267529    0.295650    0.224176    0.192127    0.294323
   20    0.926808    0.914111    0.952381    0.872406    1.074335
   50    2.368925    2.311555    2.256898    2.584983    2.630557
  100    4.972303    4.722824    4.525908    4.993866    4.520128
  500   21.777590   22.424164   18.691498   20.895590   21.644772
 1000   34.560909   42.449856   34.688183   40.838840   38.069607

Data Attic, Synchronous
    1    2.160016    2.180039    2.177022    2.171808    2.163533
    2    2.517706    2.487259    2.514706    2.518254    2.522466
    5    3.690053    3.662192    3.714370    3.590044    3.602731
   20    9.394938    9.343427    9.634742    9.447462    9.439835
   50   20.922453   20.830166   20.853025   20.875793   20.789211
  100   40.762852   40.231556   40.286057   39.744907   39.816296
  500  189.331512  187.916046  190.497513  188.667191  189.704742
 1000  375.797180  374.603424  378.910156  375.176575  376.964661

Data Attic, Asynchronous
    1    0.782001    0.750625    0.738012    0.742114    0.746862
    2    0.739648    0.777607    0.765871    0.744525    0.764001
    5    1.043169    0.908544    0.970608    0.945152    0.957388
   20    1.693398    1.728669    1.607846    1.643800    1.552325
   50    3.084804    2.988798    3.092665    2.982097    2.962060
  100    4.434602    4.449274    4.428823    4.400446    4.386806
  500   18.711128   18.946508   19.058283   18.948414   18.628584
 1000   29.809494   29.242397   34.576866   29.218006   29.722006

NFS
    1    5.405625    5.379889    5.370632    5.439360    5.383942
    2    2.812814    2.874534    2.848373    2.885685    2.885692
    5    3.142452    3.335816    3.276913    3.295542    3.330444
   20    8.886298    8.702420    8.879772    8.990577    9.136034
   50   21.455997   20.799044   20.897640   21.422203   21.369850
  100   39.130753   39.474972   40.414383   40.296387   38.677757
  500  190.742920  189.487793  189.770081  194.180542  189.400681
 1000  380.280090  374.428986  378.121857  385.746246  377.965576

Table A.13: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 0 ms. Upload bandwidth: 20 Mbps. Download bandwidth: 100 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.056652    0.050237    0.151840    0.058743    0.066594
    2    0.127185    0.115776    0.103142    0.089226    0.091812
    5    0.300153    0.298433    0.203632    0.208146    0.295790
   20    0.972414    0.996425    0.933749    1.020424    0.870035
   50    2.524958    2.701011    2.591475    2.494910    2.395379
  100    4.834370    5.042701    4.798672    4.583078    4.818631
  500   21.187115   20.630566   21.319069   20.892210   21.142872
 1000   40.542683   35.244709   34.225601   34.428356   39.734310

Data Attic, Synchronous
    1    0.274074    0.250710    0.291441    0.251462    0.254437
    2    0.336652    0.358951    0.314516    0.351644    0.314363
    5    0.749562    0.770479    0.674814    0.685356    0.771987
   20    2.767947    2.706634    2.572598    2.651968    2.779820
   50    6.546412    6.811458    6.528470    6.688406    6.321375
  100   13.063154   13.159188   12.784981   13.391679   13.448707
  500   63.181980   62.437778   63.864456   62.683292   63.110653
 1000  122.058685  123.603477  121.796219  121.242950  119.733658

Data Attic, Asynchronous
    1    0.128685    0.123327    0.134515    0.129459    0.131529
    2    0.142554    0.156829    0.163567    0.158310    0.162466
    5    0.280361    0.290705    0.260911    0.265329    0.285656
   20    0.903892    1.061317    0.936285    1.067291    1.095315
   50    2.040452    2.025670    2.096637    2.121525    2.093777
  100    3.924642    4.043474    4.028017    3.857406    3.960844
  500   17.850386   18.022566   18.012005   17.409363   17.460861
 1000   30.148266   30.197565   30.028631   30.316864   30.313272

NFS
    1    0.211587    0.226257    0.230208    0.208535    0.209629
    2    0.371679    0.353763    0.355771    0.379789    0.367762
    5    0.851377    0.795170    0.746481    0.817133    0.743521
   20    3.237405    2.888041    3.062597    3.263767    3.200129
   50    7.645815    7.746148    7.565890    7.865938    7.893309
  100   15.275534   15.163006   15.459991   15.219057   15.845028
  500   66.644638   66.146347   65.861572   65.981003   66.473175
 1000  127.291626  124.910797  124.529907  122.770638  125.241287

Table A.14: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 5 ms. Upload bandwidth: 20 Mbps. Download bandwidth: 100 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.048299    0.074471    0.072202    0.066574    0.062718
    2    0.092052    0.121248    0.111792    0.119768    0.118630
    5    0.242735    0.299491    0.287648    0.343382    0.284620
   20    0.925803    1.237645    1.185631    0.986158    0.939967
   50    2.597003    2.587394    2.555122    2.437887    2.573372
  100    5.017916    4.435097    5.150748    5.187422    5.050215
  500   20.919794   21.071115   20.958591   20.847572   21.211340
 1000   42.052509   42.709160   45.200207   34.920273   40.920174

Data Attic, Synchronous
    1    0.337564    0.322831    0.322097    0.317357    0.327169
    2    0.412703    0.438310    0.429129    0.433036    0.394807
    5    0.771127    0.888489    0.753230    0.868797    0.909154
   20    2.799701    2.839676    2.739855    2.743484    2.881575
   50    6.994622    6.438366    6.677980    6.879724    6.443520
  100   13.447961   12.990962   13.446190   13.099155   12.798742
  500   62.156269   62.793915   62.734051   63.148388   63.558182
 1000  123.070251  122.463860  124.139580  122.913673  122.725243

Data Attic, Asynchronous
    1    0.158637    0.153976    0.165542    0.172498    0.174150
    2    0.171913    0.171229    0.194988    0.169806    0.186753
    5    0.285837    0.337187    0.305009    0.322441    0.290339
   20    1.081860    0.944459    0.951452    0.852259    0.940294
   50    2.180214    2.029442    2.202774    2.137030    2.092657
  100    4.099010    4.008259    3.993056    4.011773    4.142473
  500   17.379648   17.484921   17.483685   17.886168   17.287304
 1000   30.153688   29.639395   30.212580   30.196243   30.249903

NFS
    1    0.500524    0.528288    0.583329    0.496781    0.505878
    2    0.519747    0.535518    0.519661    0.493472    0.528283
    5    0.891295    0.963266    0.979366    0.837741    0.958645
   20    3.241527    3.008989    3.037591    3.190731    3.114512
   50    7.857540    8.050489    7.648610    7.548967    7.533037
  100   15.719011   15.216230   15.781233   15.697230   15.403597
  500   68.704041   63.068966   62.251877   68.315659   62.619068
 1000  130.506699  128.203217  128.029648  131.420441  128.495193

Table A.15: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 10 ms. Upload bandwidth: 20 Mbps. Download bandwidth: 100 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.057845    0.075072    0.058307    0.061486    0.069147
    2    0.120063    0.122955    0.087853    0.084021    0.123784
    5    0.260532    0.305244    0.195680    0.382787    0.291343
   20    0.902334    0.984475    1.133893    0.782588    1.000454
   50    2.423339    2.356945    2.467279    2.204129    2.740590
  100    4.642999    5.370021    4.523210    4.882217    4.327625
  500   21.189985   20.687721   21.274605   21.739918   20.850313
 1000   36.341648   42.679916   36.653397   35.205719   40.310726

Data Attic, Synchronous
    1    0.418306    0.410737    0.397875    0.389827    0.390076
    2    0.531764    0.508406    0.497816    0.497875    0.459386
    5    0.984144    0.815519    0.930664    0.942397    0.823335
   20    3.217335    2.783468    2.986201    2.928924    2.872741
   50    7.474137    6.736957    7.261557    7.113383    7.254833
  100   14.618139   13.644247   12.967666   13.777299   13.660068
  500   66.641060   62.903900   61.973415   62.027699   62.311848
 1000  130.582275  121.511848  122.934280  121.289078  122.914391

Data Attic, Asynchronous
    1    0.189344    0.185143    0.192309    0.194400    0.251310
    2    0.212874    0.235214    0.235102    0.216908    0.232351
    5    0.321935    0.422411    0.331712    0.319422    0.324539
   20    1.033792    0.950114    0.953396    0.976130    0.953424
   50    2.032868    2.172808    2.121790    2.063365    2.916815
  100    4.047435    4.067168    4.166242    3.994529    4.767362
  500   17.276556   17.123955   17.710333   17.886290   16.664816
 1000   29.385485   30.017080   30.125422   29.889975   29.226480

NFS
    1    0.730075    0.869079    0.781156    0.816003    0.852684
    2    0.679588    0.707400    0.723432    0.673037    0.695623
    5    1.219076    1.234955    1.258495    1.227126    1.198976
   20    3.365768    3.301000    3.138297    3.332914    3.189508
   50    7.831756    7.948837    7.877567    8.149231    7.685733
  100   15.541264   15.493191   15.531872   15.768909   15.871292
  500   67.420471   66.667038   64.314995   65.650665   65.717262
 1000  132.147354  127.138458  131.255768  127.490929  128.894974

Table A.16: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 20 ms. Upload bandwidth: 20 Mbps. Download bandwidth: 100 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.061891    0.060947    0.055923    0.059251    0.057587
    2    0.124056    0.115906    0.132062    0.087851    0.096753
    5    0.210294    0.311322    0.296182    0.188033    0.264166
   20    0.830472    0.961293    0.936502    0.888392    1.008453
   50    2.400651    2.324899    2.584888    2.292946    2.351023
  100    5.021882    4.578645    5.297999    4.941825    4.967724
  500   21.489153   21.130777   21.047607   21.048443   21.823980
 1000   35.933895   38.073841   37.208393   39.434978   41.287323

Data Attic, Synchronous
    1    0.572125    0.567532    0.563985    0.567882    0.568126
    2    0.643134    0.669634    0.703021    0.714189    0.701706
    5    1.068913    1.007249    1.102563    1.074247    1.024172
   20    3.079742    2.893526    3.159401    3.249569    3.044400
   50    6.577515    7.049737    7.214568    6.992394    6.853275
  100   13.867879   13.382878   13.617109   13.677602   13.693284
  500   61.514111   61.611938   62.336311   61.717003   61.626419
 1000  124.257484  123.808960  125.339851  123.293831  120.199081

Data Attic, Asynchronous
    1    0.318375    0.258525    0.242629    0.249795    0.243040
    2    0.295014    0.276346    0.294206    0.272542    0.271850
    5    0.371978    0.370529    0.391974    0.367828    0.370106
   20    0.959136    0.962050    0.941430    0.986925    0.941796
   50    2.080091    2.296157    2.291802    2.114464    2.281238
  100    4.125208    4.243598    3.988413    4.123827    4.123937
  500   17.642639   17.350021   17.760422   17.661234   17.689476
 1000   30.083353   29.822655   29.987852   29.695084   30.185345

NFS
    1    1.360423    1.348032    1.293007    1.422493    1.360623
    2    1.108984    1.138973    1.131127    1.137035    1.156032
    5    1.794440    1.805587    1.694693    1.706513    1.813228
   20    4.411937    4.352875    4.476434    4.382797    4.451790
   50    8.288333    7.985145    8.021527    8.593178    8.730942
  100   15.709219   16.004320   15.787161   15.657173   16.550289
  500   65.640190   62.816647   63.515907   63.689060   72.210632
 1000  123.790024  128.929993  127.087585  127.842003  139.435928

Table A.17: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 50 ms. Upload bandwidth: 20 Mbps. Download bandwidth: 100 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.065636    0.054065    0.049166    0.064565    0.061829
    2    0.116048    0.115808    0.128080    0.115796    0.123761
    5    0.294729    0.266357    0.228163    0.275655    0.293831
   20    0.898322    1.027279    1.036447    0.972439    0.864421
   50    2.379408    2.499179    2.436973    2.745070    2.450892
  100    4.790731    4.543181    4.861800    5.049881    4.894497
  500   21.565989   21.302460   20.605560   20.855616   20.480026
 1000   34.723431   37.387089   35.638927   36.941441   39.463173

Data Attic, Synchronous
    1    1.124022    1.128124    1.123880    1.132152    1.117379
    2    1.194252    1.198667    1.188695    1.192931    1.221121
    5    1.594469    1.582067    1.667346    1.657801    1.660129
   20    3.538554    3.581827    3.600000    3.427639    3.522959
   50    7.621759    7.811487    7.590448    7.420263    7.619318
  100   14.373152   14.144315   13.904218   14.426879   13.861417
  500   64.036636   63.033520   64.019958   64.232239   64.241608
 1000  124.728523  120.683617  123.987358  126.282608  123.091660

Data Attic, Asynchronous
    1    0.435112    0.428266    0.421296    0.437937    0.437205
    2    0.474890    0.468095    0.480717    0.464796    0.466388
    5    0.566108    0.587815    0.579891    0.569274    0.569517
   20    1.145387    1.178898    1.143559    1.122108    1.177480
   50    2.349889    2.234135    2.385856    2.333984    2.361417
  100    4.148040    4.279788    4.180872    4.255090    4.322849
  500   16.677250   16.671568   17.327492   16.675337   16.251122
 1000   29.191765   29.811680   29.803543   29.941120   29.661547

NFS
    1    2.944688    2.950288    2.759447    2.946715    3.006845
    2    2.437727    2.410040    2.421651    2.434324    2.408566
    5    3.177175    3.109475    3.284791    3.221248    3.218417
   20    5.171477    5.266695    5.067619    5.183842    4.891193
   50    8.441146    8.194493    8.233611    8.139811    8.177483
  100   15.930936   15.535341   15.751827   16.264915   15.817064
  500   63.876633   64.238754   67.568253   64.051193   64.228577
 1000  124.682068  126.403030  125.413551  128.322586  128.922577

Table A.18: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 100 ms. Upload bandwidth: 20 Mbps. Download bandwidth: 100 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.064506    0.055422    0.068428    0.081630    0.057491
    2    0.124031    0.092040    0.117868    0.100123    0.088043
    5    0.303043    0.216146    0.187689    0.298318    0.210922
   20    0.822459    0.888776    0.995062    1.102339    1.141717
   50    2.652054    2.543119    2.398449    2.969815    2.192847
  100    4.510060    4.711391    4.966351    5.441410    4.785790
  500   22.300425   21.445917   20.609489   22.512024   21.047623
 1000   40.390018   39.377048   40.012066   48.341450   39.054173

Data Attic, Synchronous
    1    2.105059    2.154669    2.080833    2.076468    2.147362
    2    2.209422    2.203605    2.196220    2.202135    2.208167
    5    2.624382    2.633346    2.641235    2.565287    2.639162
   20    4.807465    4.607589    4.586599    4.637392    4.608846
   50    8.598458    8.514937    8.856984    8.540647    8.819521
  100   15.566928   15.027549   15.625415   15.380994   15.092664
  500   63.325523   63.822960   64.330376   63.719818   64.031174
 1000  127.438202  121.517204  128.502884  125.579353  125.361031

Data Attic, Asynchronous
    1    0.741210    0.729199    0.746194    0.728097    0.744706
    2    0.745659    0.742995    0.761417    0.770662    0.762588
    5    0.898515    0.899219    0.854333    0.849901    0.844699
   20    1.542120    1.505748    1.442633    1.445979    1.534060
   50    2.620354    2.737929    2.595373    2.657288    2.560628
  100    4.606757    4.564988    4.612124    4.635629    4.783839
  500   18.869804   19.236456   20.088394   19.082211   19.025850
 1000   30.208822   30.092018   30.549765   30.530102   29.765985

NFS
    1    5.377262    5.344446    5.323671    5.149697    5.341882
    2    2.809694    2.832441    2.829292    2.925522    2.818961
    5    3.189486    3.249417    3.301404    3.281373    3.324005
   20    5.791339    5.707298    5.819295    6.115080    5.883800
   50    8.901457    9.218386    9.084502    9.312484    9.434959
  100   16.465868   16.308743   16.750061   17.274006   16.448423
  500   66.469070   65.753860   63.219795   67.707672   66.899208
 1000  128.846649  129.577881  128.034531  129.450500  127.829819

Table A.19: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 0 ms. Upload bandwidth: 1000 Mbps. Download bandwidth: 1000 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.052802    0.066693    0.058544    0.058868    0.072141
    2    0.118418    0.117314    0.123746    0.115766    0.092048
    5    0.228874    0.295449    0.307470    0.210062    0.191377
   20    1.030058    1.071902    1.177748    0.984446    0.925148
   50    2.551085    2.329583    2.475248    2.434572    2.508971
  100    4.629217    4.529012    5.214812    4.940187    4.801981
  500   21.755299   17.675806   21.155722   21.843941   21.661758
 1000   41.667580   39.000687   36.247448   37.585606   34.990551

Data Attic, Synchronous
    1    0.185855    0.165984    0.260192    0.170946    0.162623
    2    0.197186    0.201858    0.203697    0.190426    0.198998
    5    0.416334    0.391224    0.418739    0.296046    0.377656
   20    1.194709    1.414429    1.213643    1.261597    1.230931
   50    2.667909    2.942886    3.523650    3.031737    2.966442
  100    5.204189    5.990384    6.274388    5.797007    5.173704
  500   24.645071   24.027498   23.370947   23.792370   24.640005
 1000   44.500999   43.883499   48.410431   47.920685   45.091621

Data Attic, Asynchronous
    1    0.119918    0.135226    0.112782    0.123200    0.123494
    2    0.170719    0.172674    0.163064    0.173597    0.174033
    5    0.321626    0.291146    0.354366    0.356412    0.292747
   20    0.980043    1.044713    1.247021    1.138503    1.216352
   50    2.900109    2.402220    2.782984    2.590402    2.427419
  100    5.484600    5.847372    5.040667    4.998085    5.116876
  500   20.042292   20.974445   20.490625   21.350510   20.835278
 1000   36.015991   39.247730   39.609039   37.631168   40.037735

NFS
    1    0.132276    0.137116    0.151901    0.134761    0.158906
    2    0.227862    0.219831    0.195826    0.183881    0.194907
    5    0.495164    0.463765    0.475729    0.487637    0.492666
   20    1.535491    1.738534    1.650647    1.478720    1.674604
   50    4.348532    4.212065    4.076813    4.236404    4.028272
  100    7.777729    7.809450    7.521788    8.089312    7.689456
  500   32.864735   31.471039   32.868843   33.600163   34.179859
 1000   63.615067   61.905033   61.560997   63.403728   59.631332

Table A.20: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 5 ms. Upload bandwidth: 1000 Mbps. Download bandwidth: 1000 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.066079    0.060085    0.058599    0.066079    0.059561
    2    0.111755    0.120058    0.131765    0.099810    0.130476
    5    0.308176    0.308185    0.187699    0.303491    0.303479
   20    0.894332    1.004458    1.153845    1.209709    1.054760
   50    2.654908    2.689085    2.491225    2.399440    2.196893
  100    4.733984    4.864127    4.726852    5.082154    4.765851
  500   20.872192   21.542088   21.322533   20.915335   21.968424
 1000   34.760387   40.734795   37.618828   34.633160   39.411354

Data Attic, Synchronous
    1    0.264992    0.255636    0.346547    0.263238    0.255482
    2    0.279822    0.252443    0.292351    0.282692    0.314531
    5    0.402927    0.489988    0.494913    0.556057    0.475760
   20    1.341610    1.365536    1.465139    1.337033    1.275087
   50    3.195807    3.062260    2.982951    3.347876    2.926886
  100    6.171115    5.922306    5.734056    6.368814    5.660809
  500   23.415213   24.276159   23.754898   23.353498   24.663715
 1000   47.717056   44.210415   46.567455   46.790451   48.542603

Data Attic, Asynchronous
    1    0.144337    0.156385    0.165308    0.159189    0.154507
    2    0.204883    0.168710    0.191289    0.202075    0.216571
    5    0.338626    0.384824    0.381111    0.273487    0.395040
   20    1.205848    1.031601    0.952871    0.974623    1.049833
   50    2.554513    2.189801    2.503795    2.656783    2.874230
  100    4.930090    5.267166    4.635101    4.884575    4.837085
  500   20.910566   20.681545   22.028847   21.517256   21.628183
 1000   39.047089   36.790844   35.828159   38.835724   39.121304

NFS
    1    0.289073    0.191494    0.198805    0.188495    0.203538
    2    0.433926    0.243884    0.211836    0.291864    0.255858
    5    0.895406    0.463741    0.507815    0.403725    0.408500
   20    2.591310    1.642198    1.514823    1.626724    1.626793
   50    5.226505    3.965564    4.396465    4.036750    3.923349
  100    9.456245    7.985509    7.767923    7.813695    8.023010
  500   35.842632   31.458055   32.361736   33.492371   33.826340
 1000   70.557426   60.697830   60.267525   62.911922   61.330849

Table A.21: Execution times of performance test script comparing Data Attic to NFS. Operation: write. RTT latency added: 10 ms. Upload bandwidth: 1000 Mbps. Download bandwidth: 1000 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1    0.059947    0.057768    0.065794    0.062635    0.057648
    2    0.119745    0.084045    0.112042    0.124058    0.084041
    5    0.211969    0.300200    0.250548    0.232650    0.245024
   20    1.062388    1.156485    0.874375    0.899917    0.907535
   50    2.583035    2.493018    2.523197    2.493016    2.705589
  100    4.781908    4.890125    4.447437    4.737915    4.797366
  500   21.491064   20.408449   17.930639   17.847034   17.679092
 1000   37.319664   35.392494   34.997295   39.114601   35.268810

Data Attic, Synchronous
    1    0.350203    0.351033    0.354970    0.345641    0.358032
    2    0.363031    0.381948    0.436370    0.393468    0.384511
    5    0.486820    0.576733    0.593699    0.475037    0.483944
   20    1.420765    1.403482    1.326527    1.327772    1.224687
   50    3.034193    3.088941    3.156598    3.166182    3.633196
  100    5.898386    5.657500    5.893041    5.812803    5.804256
  500   23.948648   23.521902   24.226698   23.990162   23.239834
 1000   43.130703   48.449902   43.954231   48.777012   49.927437

Data Attic, Asynchronous
    1    0.177280    0.182546    0.202056    0.184778    0.187692
    2    0.232312    0.198750    0.196426    0.233840    0.243565
    5    0.321891    0.308345    0.321625    0.315449    0.413644
   20    1.154452    1.302144    1.071974    1.250894    1.272065
   50    2.285730    2.662540    2.642736    2.530391    2.389327
  100    5.361224    4.936982    4.589179    4.876550    4.604817
  500   20.920923   20.333437   24.838356   21.612688   21.433899
 1000   39.162601   39.186367   40.992363   37.987411   39.196220

NFS
    1    0.239133    0.249746    0.238424    0.242046    0.283767
    2    0.295820    0.279785    0.275819    0.279807    0.307603
    5    0.586849    0.439719    0.559585    0.490793    0.539620
   20    2.174798    1.686155    1.533856    1.763358    1.778041
   50    4.496090    4.236693    3.937366    3.844660    4.684948
  100    8.623884    7.601957    7.861274    7.808679    8.328626
  500   33.077911   31.977888   32.968937   33.587437   36.310772
 1000   62.540543   62.404739   63.396378   62.750622   67.232498

Table A.22: Execution times of performance test script comparing data attic to NFS. Operation: write. RTT latency added: 20 ms. Upload bandwidth: 1000 Mbps. Download bandwidth: 1000 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.055501   0.063173   0.060924   0.060423   0.055923
2     0.131762   0.124195   0.109880   0.123780   0.124068
5     0.227965   0.220020   0.268256   0.187718   0.295010
20    1.089534   0.944428   1.010543   0.950635   0.834456
50    2.259647   2.492982   2.413654   2.274240   2.336040
100   4.714800   4.961868   5.025897   4.469689   5.013885
500   21.258814  21.027180  20.852999  21.364969  21.801348
1000  38.414028  38.198502  40.673565  40.421665  37.832481

Data Attic, Synchronous
1     0.566871   0.544924   0.540744   0.544871   0.540603
2     0.624051   0.573636   0.606730   0.543881   0.548725
5     0.722158   0.797167   0.784159   0.792350   0.797244
20    1.581985   1.718343   1.419558   1.588982   1.568140
50    3.520948   3.268345   3.165108   3.491501   3.089764
100   5.943084   6.046779   6.284189   6.071392   5.928220
500   23.037170  23.733538  22.728954  22.563906  23.585514
1000  44.578018  45.915146  46.377670  47.309555  46.619480

Data Attic, Asynchronous
1     0.243033   0.251854   0.251211   0.250212   0.247522
2     0.296613   0.296835   0.267780   0.307758   0.263830
5     0.382615   0.428368   0.394796   0.372545   0.385176
20    1.225607   1.073046   0.987438   0.999633   1.251048
50    2.733363   2.718635   2.587143   2.541181   2.678784
100   5.286828   5.268546   4.859350   4.975677   5.037382
500   20.551563  20.595284  21.774206  21.482588  20.970373
1000  36.017902  39.117336  42.228127  35.304607  37.150211

NFS
1     1.212569   0.343352   0.352160   0.352260   0.352415
2     1.099076   0.343727   0.347793   0.344902   0.323793
5     1.662560   0.561974   0.526485   0.611529   0.499023
20    4.316051   1.760092   1.536088   1.894336   1.775288
50    7.293535   4.463104   4.034930   4.364958   4.448431
100   11.249035  7.897581   8.348235   8.592210   8.193405
500   33.861740  33.096752  33.403370  32.337364  32.551746
1000  66.289307  63.371407  62.276512  63.611877  62.093452

Table A.23: Execution times of performance test script comparing data attic to NFS. Operation: write. RTT latency added: 50 ms. Upload bandwidth: 1000 Mbps. Download bandwidth: 1000 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.062831   0.055605   0.065583   0.051753   0.066489
2     0.100096   0.132053   0.120059   0.128055   0.116052
5     0.304093   0.295086   0.293756   0.215063   0.210096
20    0.848396   1.085960   0.850818   0.914294   0.998484
50    2.328900   2.484612   2.529009   2.320180   2.480995
100   5.082086   4.553826   4.785810   4.323038   5.402017
500   21.330454  17.710424  21.392979  18.819323  20.928644
1000  34.812134  38.252522  35.559917  37.183823  44.030136

Data Attic, Synchronous
1     1.115748   1.113355   1.094419   1.111827   1.102031
2     1.157426   1.144169   1.142997   1.184110   1.176643
5     1.325514   1.309963   1.316824   1.408336   1.428291
20    2.308694   2.383437   2.249962   2.617759   2.436753
50    4.526013   4.853253   4.508476   4.509760   4.818467
100   7.852029   7.306141   7.987390   8.189612   7.433278
500   32.128235  32.635960  32.803082  32.277878  31.863895
1000  60.708698  62.165150  61.818989  58.708336  57.515579

Data Attic, Asynchronous
1     0.431546   0.411167   0.433525   0.424937   0.419482
2     0.439895   0.475726   0.469031   0.443176   0.473475
5     0.556643   0.555758   0.624846   0.571798   0.542286
20    1.221703   1.275075   1.112928   1.123864   1.280145
50    2.521950   2.733016   2.705331   2.578811   2.600212
100   4.656781   5.464221   4.899717   4.752986   4.979419
500   20.422659  20.227709  20.567400  20.893068  20.500105
1000  38.424648  36.248627  38.415234  38.030464  37.632050

NFS
1     2.808470   0.698469   0.699585   0.688454   0.699990
2     2.381737   0.564761   0.552853   0.555565   0.544791
5     3.621689   0.739509   0.963332   0.779693   0.735404
20    5.707276   2.118108   2.660313   2.058440   2.206156
50    7.318247   5.159272   5.602747   5.031861   5.494914
100   10.879398  10.203527  11.073581  9.621702   9.987600
500   34.611362  38.400375  40.247299  35.866276  36.490063
1000  71.104813  71.634674  77.545654  70.765991  72.244377

Table A.24: Execution times of performance test script comparing data attic to NFS. Operation: write. RTT latency added: 100 ms. Upload bandwidth: 1000 Mbps. Download bandwidth: 1000 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.068806   0.060968   0.121402   0.061507   0.066294
2     0.116045   0.125863   0.112056   0.116056   0.083665
5     0.273835   0.295492   0.295110   0.241570   0.308179
20    0.966171   0.831238   0.926268   1.007026   0.944436
50    2.443271   2.191003   2.452155   2.384929   2.324909
100   5.022232   4.755607   4.610181   4.781770   4.537703
500   20.859209  17.950569  21.805426  21.013464  18.838869
1000  37.691055  34.278599  38.712391  39.052361  37.420616

Data Attic, Synchronous
1     2.072731   2.045890   2.076608   2.069917   2.061959
2     2.187199   2.179461   2.182323   2.188589   2.227470
5     2.475732   2.488208   2.366383   2.370559   2.443721
20    3.633843   3.669876   3.765541   3.668965   3.763146
50    6.275428   6.269781   6.260333   6.557391   6.745384
100   11.172023  10.796264  10.802117  10.879519  10.796483
500   44.212929  43.701107  40.247608  40.038937  43.987354
1000  79.627876  81.442657  82.472786  81.082001  83.550537

Data Attic, Asynchronous
1     0.717627   0.720205   0.768717   0.729590   0.733555
2     0.742726   0.735921   0.753272   0.775187   0.748191
5     0.878177   0.943826   0.866635   0.902414   0.947994
20    1.469007   1.456314   1.463002   1.394245   1.469913
50    2.761544   2.801467   2.714287   2.829308   2.601946
100   4.502186   4.765174   4.644105   4.596310   4.582359
500   20.302292  19.968840  19.885090  19.439665  19.395405
1000  33.053555  30.689039  31.612782  31.593538  33.063087

NFS
1     5.142686   1.237602   1.243057   1.239086   1.239659
2     4.156747   0.925223   0.879368   0.915288   0.898354
5     4.092753   1.264720   1.275029   1.203221   1.251799
20    6.170901   3.055249   2.929652   3.327403   2.941410
50    8.354902   7.057573   6.754668   6.997530   7.077900
100   12.108686  12.292353  12.634588  12.432456  12.269976
500   41.736568  43.067371  42.183937  42.053177  42.070782
1000  82.352875  80.256226  82.087891  81.020851  82.622467

Table A.25: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 0 ms. Upload bandwidth: unlimited. Download bandwidth: unlimited.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.012801   0.013784   0.012379   0.012967   0.012642
2     0.023955   0.024017   0.024013   0.024013   0.023973
5     0.053949   0.053369   0.053369   0.053370   0.053942
20    0.215749   0.216965   0.216951   0.216955   0.215744
50    0.539119   0.535666   0.535670   0.573932   0.535171
100   1.065099   1.072263   1.070581   1.069083   1.065059
500   5.333475   5.340612   5.345096   5.389964   5.333950
1000  10.686331  10.750601  10.678097  10.678558  11.652473

Data Attic
1     0.101050   0.100077   0.099343   0.113229   0.086078
2     0.097291   0.091198   0.099901   0.094604   0.097787
5     0.150071   0.159613   0.147792   0.148915   0.149821
20    0.420775   0.416116   0.417059   0.416804   0.425226
50    0.977456   0.978034   0.987897   0.954356   0.975713
100   1.925107   1.911911   1.909377   1.945242   1.909735
500   9.453255   9.232305   9.449267   8.994619   9.067406
1000  18.728111  19.021381  18.819431  19.115232  19.137753

NFS
1     0.038988   0.038427   0.040844   0.040855   0.040575
2     0.056879   0.056768   0.056780   0.061028   0.057124
5     0.118746   0.118449   0.124315   0.120634   0.118613
20    0.430807   0.431703   0.433042   0.441604   0.450231
50    1.073084   1.096899   1.107034   1.095231   1.067978
100   2.130063   2.118756   2.144855   2.144905   2.151281
500   8.963226   9.402972   9.039831   9.146280   9.055970
1000  15.426827  15.134356  15.364738  15.135463  15.235631

Table A.26: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 5 ms. Upload bandwidth: unlimited. Download bandwidth: unlimited.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.013081   0.013545   0.015925   0.015203   0.012998
2     0.023306   0.023967   0.023306   0.023308   0.023313
5     0.054079   0.053237   0.054080   0.054083   0.054071
20    0.216964   0.216451   0.220955   0.216960   0.216959
50    0.539671   0.539138   0.532463   0.563673   0.539677
100   1.068270   1.065089   1.067570   1.072292   1.068290
500   5.425174   5.333539   5.373194   5.741084   5.464783
1000  11.366941  10.686639  10.722236  10.738928  10.748539

Data Attic
1     0.199159   0.198389   0.198974   0.198951   0.208846
2     0.209112   0.911998   0.208445   0.209976   0.213202
5     0.288214   0.267730   0.262934   0.261630   0.259755
20    0.549531   0.540855   0.533690   0.533522   0.534676
50    1.108983   1.102420   1.092366   1.105619   1.098162
100   2.031048   1.981190   2.038359   2.032826   2.040599
500   9.326080   9.365308   9.209297   9.325895   9.231532
1000  19.117905  18.926908  19.517826  19.039885  19.019054

NFS
1     0.090955   0.092277   0.094730   0.086390   0.092052
2     0.084664   0.084958   0.084630   0.080698   0.085014
5     0.146471   0.150846   0.147120   0.154932   0.146075
20    0.455432   0.451487   0.442290   0.447368   0.456848
50    1.088158   1.093409   1.097659   1.093572   1.125845
100   2.125244   2.223048   2.122255   2.174116   2.183759
500   9.220222   9.223392   9.265257   9.218376   9.242928
1000  15.132147  15.167054  15.137794  15.459641  15.423551

Table A.27: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 10 ms. Upload bandwidth: unlimited. Download bandwidth: unlimited.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.012297   0.015995   0.013077   0.014535   0.014109
2     0.023962   0.023301   0.023955   0.023960   0.023315
5     0.053246   0.054083   0.053252   0.057245   0.054073
20    0.216455   0.216945   0.216451   0.216450   0.216947
50    0.535143   0.532429   0.535131   0.535134   0.535684
100   1.105015   1.075559   1.069095   1.065098   1.072257
500   6.072072   5.361010   5.453747   5.417348   5.401129
1000  10.670626  10.729923  10.686373  10.654661  10.930715

Data Attic
1     0.326712   0.327669   0.321399   0.326212   0.325633
2     0.341069   0.339767   0.345088   0.338004   0.340375
5     0.391571   0.395687   0.394861   0.396130   0.393493
20    0.667541   0.668677   0.658497   0.667201   0.662053
50    1.203418   1.231336   1.247759   1.236380   1.228114
100   2.157506   2.167683   2.155143   2.178856   2.088369
500   9.456156   9.491174   9.396219   9.658755   9.291300
1000  19.132454  19.082943  20.072985  18.972439  19.428530

NFS
1     0.136828   0.134892   0.137876   0.136327   0.138738
2     0.116815   0.120680   0.112682   0.118879   0.120962
5     0.202917   0.182884   0.175550   0.182433   0.178805
20    0.463990   0.481808   0.475734   0.468550   0.468081
50    1.100556   1.150034   1.165886   1.111669   1.138834
100   2.194682   2.108364   2.200386   2.210879   2.141339
500   9.267393   9.092831   9.418684   9.092290   9.179915
1000  15.404186  15.597685  15.169136  15.389689  15.262370

Table A.28: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 20 ms. Upload bandwidth: unlimited. Download bandwidth: unlimited.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.013888   0.015024   0.015471   0.013328   0.014061
2     0.023957   0.023308   0.023305   0.023961   0.023298
5     0.053249   0.054068   0.054078   0.053240   0.054081
20    0.216447   0.216962   0.220963   0.216456   0.216961
50    0.535120   0.535663   0.535664   0.535149   0.539660
100   1.069105   1.072281   1.082582   1.068095   1.068277
500   5.325983   5.337099   5.373131   5.328583   5.360649
1000  10.654445  11.226827  10.674137  10.654666  10.786672

Data Attic
1     0.583182   0.574000   0.578146   0.594716   0.574157
2     0.604909   0.603624   0.613426   0.610527   0.606927
5     0.707890   0.688398   0.692279   0.698827   0.690850
20    1.105367   1.061144   1.063340   1.061831   1.075037
50    1.752868   1.654385   1.653235   1.686637   1.664580
100   2.653264   2.576894   2.616322   2.572437   2.607989
500   9.835962   10.031301  10.092889  9.860900   10.061152
1000  19.455175  19.582781  20.738596  19.177282  19.440638

NFS
1     0.241651   0.233359   0.235579   0.233357   0.239493
2     0.172539   0.172963   0.172887   0.174965   0.177104
5     0.246840   0.255031   0.249889   0.246289   0.246723
20    0.505498   0.511090   0.525680   0.508098   0.551375
50    1.233099   1.248696   1.215405   1.240245   1.203812
100   2.235475   2.279783   2.274091   2.306447   2.281700
500   9.283020   9.335595   9.319242   9.426291   9.253763
1000  15.502158  15.734213  15.561103  15.447322  15.470838

Table A.29: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 50 ms. Upload bandwidth: unlimited. Download bandwidth: unlimited.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.011547   0.013431   0.012023   0.017097   0.013836
2     0.023962   0.023300   0.023952   0.023288   0.023961
5     0.053250   0.054080   0.053238   0.054085   0.053243
20    0.216447   0.216953   0.216470   0.216947   0.220447
50    0.535134   0.535659   0.539109   0.535677   0.535128
100   1.068095   1.070567   1.072105   1.070556   1.064107
500   5.332552   5.341118   5.368488   5.421149   5.332564
1000  10.766180  10.710132  10.718532  10.711699  10.654665

Data Attic
1     1.329898   1.333818   1.325659   1.324522   1.331090
2     1.425808   1.415530   1.415496   1.419662   1.411664
5     1.624668   1.614463   1.618480   1.630300   1.587299
20    2.603487   2.498615   2.500811   2.471262   2.467997
50    4.010689   3.821101   3.814516   3.798267   3.792828
100   5.825681   5.747650   5.704259   5.703539   5.638037
500   21.105896  21.105520  20.877398  21.276108  21.106815
1000  40.381447  40.400650  40.637341  40.843784  40.699085

NFS
1     0.535993   0.536193   0.535454   0.547325   0.534545
2     0.356194   0.352222   0.352824   0.360520   0.352316
5     0.458630   0.458049   0.459033   0.454983   0.457794
20    0.826061   0.838408   0.822213   0.816783   0.829057
50    1.825447   1.816532   1.816432   1.812544   1.813896
100   3.336429   3.311938   3.310294   3.222954   3.283461
500   12.954087  12.942183  12.824565  13.029607  12.848473
1000  22.097645  21.900936  22.023121  21.971760  21.768532

Table A.30: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 100 ms. Upload bandwidth: unlimited. Download bandwidth: unlimited.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.013516   0.013494   0.012755   0.013425   0.015514
2     0.023970   0.023317   0.023959   0.020593   0.023951
5     0.053235   0.054068   0.053246   0.053364   0.053244
20    0.216459   0.216952   0.216457   0.216957   0.216457
50    0.535128   0.539668   0.555098   0.540371   0.535138
100   1.069093   1.068278   1.069083   1.067562   1.081072
500   5.337490   5.344634   5.333502   5.345358   6.064561
1000  11.684439  11.642979  10.686392  10.678131  10.658468

Data Attic
1     2.575430   2.578762   2.579003   2.572103   2.583624
2     2.766240   2.767203   2.765207   2.770922   2.765608
5     3.173102   3.082716   3.081183   3.169417   2.990880
20    5.017485   4.801803   4.725445   4.717177   4.511675
50    7.776534   7.351033   7.282643   7.268884   6.887718
100   11.383486  11.105704  11.158765  11.161022  10.768575
500   41.553631  41.559563  41.155617  41.471409  40.774239
1000  79.666580  78.532852  78.777260  78.592171  78.588272

NFS
1     1.033895   1.034917   1.036903   1.048687   1.051054
2     0.671618   0.667124   0.661557   0.673982   0.655407
5     0.814907   0.805255   0.805332   0.806826   0.819415
20    1.360046   1.299861   1.304776   1.296036   1.368970
50    2.900097   2.919883   2.935391   2.944232   2.904192
100   5.233956   5.182717   5.230026   5.304803   5.181508
500   19.561600  19.725739  19.640047  19.523405  19.823124
1000  35.469177  35.373699  35.594646  35.517548  35.351673

Table A.31: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 0 ms. Upload bandwidth: 3 Mbps. Download bandwidth: 25 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.009958   0.008951   0.009463   0.006847   0.009888
2     0.024020   0.024020   0.024009   0.023959   0.024018
5     0.054071   0.054089   0.054083   0.053243   0.054073
20    0.216245   0.216236   0.216236   0.220444   0.216254
50    0.536372   0.536387   0.532452   0.534423   0.536372
100   1.083564   1.071533   1.072267   1.065799   1.071773
500   5.381818   5.381236   5.340383   5.361438   5.415966
1000  10.738580  10.677598  10.674775  11.527441  10.802048

Data Attic
1     3.202392     3.206329     3.245574     3.209043     3.202403
2     6.327407     6.317030     6.317194     6.306697     6.312512
5     15.686652    15.687139    15.683418    15.683372    15.695961
20    62.552094    62.594421    62.618023    62.572124    62.568359
50    156.328400   156.406021   156.387741   156.329666   156.348679
100   312.751587   312.722534   312.617889   312.613190   312.748505
500   1563.219849  1563.528931  1563.058228  1563.289795  1563.005005
1000  3126.386475  3126.416260  3126.604004  3126.355469  3126.698975

NFS
1     2.826491     2.835220     2.843811     2.862402     2.831759
2     5.629289     5.636590     5.625874     5.630885     5.630920
5     14.049678    14.038133    14.035391    14.032538    14.031934
20    55.930504    55.868889    55.908592    55.934269    55.903877
50    139.596558   139.661316   139.639893   139.596649   139.723206
100   279.227661   279.133728   279.180817   279.164398   279.211578
500   1395.767090  1395.968872  1395.598267  1395.999512  1395.838135
1000  2791.701172  2791.589844  2791.645264  2791.488037  2791.552002

Table A.32: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 5 ms. Upload bandwidth: 3 Mbps. Download bandwidth: 25 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.013174   0.013257   0.014240   0.013018   0.014764
2     0.023313   0.023311   0.023304   0.023964   0.023311
5     0.054066   0.054074   0.054073   0.053243   0.054072
20    0.216970   0.216956   0.216953   0.216462   0.216961
50    0.535675   0.539661   0.535667   0.531227   0.535670
100   1.076292   1.068279   1.072280   1.097750   1.072279
500   5.405215   5.448666   5.405118   5.348607   5.368637
1000  10.710801  10.698597  10.682614  10.658774  10.814655

Data Attic
1     3.272948     3.264307     3.276912     3.293533     3.269032
2     6.379263     6.378499     6.374300     6.375760     6.377610
5     15.758662    15.751980    15.752834    15.751326    15.752108
20    62.673874    62.643562    62.614323    62.644566    62.620083
50    156.392639   156.413788   156.412964   156.410828   156.427017
100   312.685669   312.746307   312.755615   312.703400   312.733246
500   1563.422363  1563.415649  1563.117065  1563.227661  1563.283203
1000  3126.492432  3126.238037  3125.972656  3126.647705  3126.644775

NFS
1     2.864913     2.862604     2.861433     2.857448     2.853079
2     5.663224     5.674509     5.654951     5.657968     5.659042
5     14.068113    14.055810    14.064473    14.063193    14.062115
20    55.955540    55.849754    55.947754    55.949574    55.880386
50    139.646194   139.704910   139.662949   139.715195   139.616074
100   279.180389   279.121307   279.265533   279.242950   279.191833
500   1395.816406  1395.790039  1395.801147  1395.846802  1396.125610
1000  2791.626221  2791.147705  2791.795166  2791.372070  2791.751221

Table A.33: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 10 ms. Upload bandwidth: 3 Mbps. Download bandwidth: 25 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.014100   0.013260   0.014691   0.078732   0.014012
2     0.023305   0.023917   0.023961   0.023305   0.023304
5     0.054075   0.053251   0.053246   0.054072   0.054069
20    0.216954   0.216441   0.216460   0.216960   0.220958
50    0.539677   0.545681   0.531222   0.535668   0.535674
100   1.066643   1.072265   1.093757   1.084279   1.068270
500   5.345060   5.337089   5.352582   5.337105   5.372648
1000  10.702144  10.734610  10.678701  11.763024  10.738622

Data Attic
1     3.340918     3.344593     3.342342     3.345226     3.342699
2     6.456185     6.452004     6.456654     6.445892     6.451810
5     15.834831    15.840876    15.835675    15.822229    15.823122
20    62.690590    62.713188    62.704781    62.687027    62.695045
50    156.487457   156.506332   156.485565   156.482880   156.453049
100   312.718750   312.806396   312.759674   312.760315   312.835022
500   1563.255737  1563.238770  1563.181274  1563.084229  1563.422852
1000  3126.728271  3126.483154  3126.605713  3126.390869  3126.622803

NFS
1     2.891419     2.890239     2.894913     2.895188     2.892471
2     5.687049     5.685907     5.683142     5.684261     5.682752
5     14.092794    14.087283    14.104679    14.095231    14.087827
20    55.964592    55.944973    55.894711    55.995647    55.973820
50    139.685440   139.726242   139.782867   139.731445   139.603455
100   279.314209   279.160797   279.255829   279.271149   279.280304
500   1395.755859  1395.686279  1395.759399  1395.745728  1395.717407
1000  2791.821533  2791.079834  2791.464111  2791.531494  2791.905273

Table A.34: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 20 ms. Upload bandwidth: 3 Mbps. Download bandwidth: 25 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.014396   0.012905   0.012401   0.012461   0.011656
2     0.023958   0.023958   0.023959   0.023313   0.023961
5     0.053245   0.053243   0.053242   0.054081   0.053254
20    0.216447   0.216452   0.216456   0.220944   0.216447
50    0.531223   0.539123   0.539116   0.535656   0.535136
100   1.069795   1.124986   1.065114   1.068267   1.069087
500   5.328587   6.276135   5.345923   5.344629   5.357453
1000  10.686608  10.686377  10.794165  11.811011  10.726285

Data Attic
1     3.510644     3.494565     3.510625     3.490841     3.512266
2     6.601064     6.602816     6.605989     6.601748     6.602426
5     15.974097    15.970911    15.978126    15.973613    15.974466
20    62.843861    62.853081    62.846176    62.841274    62.863022
50    156.605774   156.646332   156.621323   156.607285   156.621750
100   312.863007   312.990021   312.991852   312.965027   312.953857
500   1563.156006  1563.456177  1563.538574  1563.632080  1563.054443
1000  3126.506348  3126.554199  3126.659912  3126.772461  3126.935303

NFS
1     2.946069     2.961861     2.962199     2.943344     2.946831
2     5.737832     5.731077     5.739010     5.722547     5.723512
5     14.139112    14.139986    14.140188    14.127716    14.131248
20    55.996593    56.005192    55.990517    55.937618    55.992283
50    139.813950   139.684464   139.843094   139.788025   139.773148
100   279.269287   279.406647   279.306061   279.341522   279.347626
500   1395.710083  1396.141479  1395.806763  1396.088135  1396.170776
1000  2791.792480  2791.822998  2791.752441  2791.308105  2791.635498

Table A.35: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 50 ms. Upload bandwidth: 3 Mbps. Download bandwidth: 25 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.013263   0.015948   0.013110   0.012623   0.013358
2     0.023307   0.023309   0.023955   0.023963   0.023958
5     0.054080   0.054082   0.053248   0.053244   0.053263
20    0.216948   0.216960   0.216457   0.216450   0.216444
50    0.539661   0.560464   0.535138   0.535133   0.539114
100   1.066576   1.075574   1.065080   1.069080   1.065094
500   5.345106   5.345202   5.358844   5.964709   5.377869
1000  10.678115  10.694277  10.682537  10.678344  10.658409

Data Attic
1     4.004158     4.008830     4.015088     4.002739     4.026192
2     7.114697     7.116794     7.119052     7.113482     7.113140
5     16.485811    16.489161    16.485605    16.483124    16.504620
20    63.366611    63.386959    63.395840    63.367844    63.364651
50    157.143570   157.186203   157.167007   157.134918   157.157425
100   313.475891   313.521576   313.509857   313.474365   313.488861
500   1564.068848  1564.187256  1564.314819  1563.947388  1563.957153
1000  3127.023438  3127.435791  3127.975586  3127.821289  3127.307861

NFS
1     3.123166     3.129868     3.126827     3.124901     3.122628
2     5.876701     5.874565     5.881501     5.879925     5.871020
5     14.278117    14.289805    14.282673    14.282882    14.283781
20    56.097797    56.064995    56.099304    56.094959    56.086998
50    139.926682   140.004227   139.890045   139.998077   139.948822
100   279.542206   279.585358   279.451172   279.470154   279.482635
500   1396.017456  1396.296387  1396.153442  1396.007690  1396.070190
1000  2791.854736  2792.581055  2791.857910  2791.323975  2791.928223

Table A.36: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 100 ms. Upload bandwidth: 3 Mbps. Download bandwidth: 25 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.014408   0.014576   0.012113   0.015107   0.013377
2     0.023305   0.023962   0.023957   0.023312   0.023312
5     0.054078   0.053243   0.053245   0.054075   0.054071
20    0.216953   0.216459   0.216454   0.216944   0.216948
50    0.539669   0.535129   0.539123   0.571671   0.539673
100   1.068288   1.069083   1.065091   1.068264   1.068276
500   6.161447   5.377400   5.349912   5.493151   5.376619
1000  10.682670  10.726257  10.754217  10.682570  11.182756

Data Attic
1     4.959634     4.986181     4.953514     4.955634     4.954262
2     8.092593     8.065809     8.064820     8.062017     8.067725
5     17.437199    17.434725    17.441088    17.432547    17.436598
20    64.317390    64.323051    64.315926    64.328316    64.324120
50    158.139160   158.119919   158.113403   158.119125   158.114273
100   314.484283   314.532257   314.492706   314.465576   314.502411
500   1564.744385  1565.115112  1565.072388  1564.775635  1564.880371
1000  3127.746094  3128.167480  3128.198975  3128.105957  3128.046143

NFS
1     3.423364     3.429612     3.423814     3.428534     3.425334
2     6.131079     6.120411     6.124256     6.125934     6.123095
5     14.527884    14.538430    14.531332    14.532060    14.527902
20    56.186703    56.270054    56.151783    56.152508    56.159885
50    140.247986   140.230560   140.237869   140.330200   140.278000
100   279.874603   279.733826   279.800995   279.794617   279.829224
500   1396.353149  1396.069824  1396.173340  1396.171753  1396.306152
1000  2792.031250  2792.239014  2792.462158  2792.021240  2792.005127

Table A.37: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 0 ms. Upload bandwidth: 20 Mbps. Download bandwidth: 100 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.014505   0.015254   0.075289   0.015466   0.011576
2     0.023948   0.023963   0.091993   0.024017   0.024009
5     0.053247   0.053242   0.228806   0.053919   0.053918
20    0.216451   0.216456   0.228593   0.216398   0.220386
50    0.538419   0.535136   0.534419   0.536365   0.536366
100   1.073779   2.086104   1.069793   1.071562   1.067566
500   5.333485   5.328561   5.333466   5.409799   5.377341
1000  10.702304  10.710503  10.722239  10.721869  10.718082

Data Attic
1     0.574519    0.573466    0.570053    0.576455    0.572611
2     1.059613    1.059623    1.060159    1.061613    1.062281
5     2.566059    2.571183    2.572260    2.566203    2.573058
20    10.117306   10.120498   10.112135   10.121973   10.119271
50    25.221779   25.222275   25.203032   25.199062   25.184456
100   50.373802   50.441643   50.379120   50.427475   50.346050
500   251.681290  251.771332  251.578354  251.805191  251.722458
1000  503.694153  503.375702  503.507599  503.348816  503.294342

NFS
1     0.451924    0.450984    0.448623    0.450537    0.447094
2     0.881178    0.879164    0.877239    0.901263    0.879391
5     2.181232    2.174882    2.173691    2.180179    2.178299
20    8.632182    8.641666    8.629431    8.640265    8.641224
50    21.561703   21.563316   21.587738   21.557947   21.561758
100   42.313496   42.410908   42.284439   42.212589   42.243401
500   210.456345  210.556808  210.414490  210.450256  210.558212
1000  420.052979  419.958191  420.135651  420.178467  420.378784

Table A.38: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 5 ms. Upload bandwidth: 20 Mbps. Download bandwidth: 100 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.012272   0.013081   0.013561   0.013877   0.013794
2     0.023963   0.023966   0.023964   0.023300   0.023318
5     0.053242   0.053238   0.053237   0.054079   0.054069
20    0.220447   0.216473   0.213461   0.232957   0.216961
50    0.535142   0.535129   0.538421   0.535681   0.535668
100   1.065089   1.069080   1.065820   1.072275   1.072284
500   5.333500   5.353508   5.332810   5.340676   5.344648
1000  10.710333  11.568725  10.758275  10.694888  10.682849

Data Attic
1     0.646322    0.658974    0.663708    0.659579    0.660457
2     1.150288    1.158597    1.139090    1.148077    1.150904
5     2.661861    2.661002    2.660190    2.661801    2.658157
20    10.202509   10.200012   10.200859   10.202778   10.218682
50    25.313883   25.319210   25.323540   25.310781   25.314678
100   50.525169   50.459045   50.517262   50.494919   50.485157
500   251.713028  251.822372  251.845947  251.994370  251.855743
1000  503.670807  503.340027  503.498352  503.385284  503.377106

NFS
1     0.474501    0.475342    0.475513    0.476020    0.473846
2     0.905208    0.903503    0.898865    0.907161    0.901319
5     2.211712    2.199660    2.202546    2.194682    2.207482
20    8.673900    8.656266    8.706400    8.655240    8.642509
50    21.616798   21.617786   21.597519   21.618116   21.605438
100   42.319099   42.493122   42.373539   42.463516   42.307369
500   210.535538  210.652924  210.460327  210.687286  210.360794
1000  420.278687  420.100220  420.406372  420.247375  420.171570

Table A.39: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 10 ms. Upload bandwidth: 20 Mbps. Download bandwidth: 100 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.012209   0.014907   0.014902   0.014125   0.014928
2     0.023950   0.023311   0.023966   0.023307   0.023963
5     0.053239   0.054080   0.053234   0.054079   0.053244
20    0.216457   0.216947   0.216457   0.216952   0.216451
50    0.535139   0.535665   0.535141   0.535667   0.539147
100   1.065103   1.072265   1.069070   1.116306   1.065087
500   5.357465   5.392666   6.196305   6.897714   5.381875
1000  10.830149  10.678818  10.678364  10.814687  10.698339

Data Attic
1     0.739170    0.734457    0.732286    0.728080    0.730336
2     1.227151    1.226590    1.226537    1.223224    1.223227
5     2.734043    2.731025    2.733821    2.732158    2.729090
20    10.285460   10.282348   10.280229   10.275315   10.289103
50    25.378128   25.398132   25.363564   25.394247   25.396193
100   50.504940   50.579166   50.600586   50.547752   50.575600
500   251.816498  251.970505  251.709610  251.866852  251.797318
1000  503.656525  503.498840  503.578186  503.450256  503.496246

NFS
1     0.501777    0.500818    0.501857    0.506806    0.504194
2     0.932240    0.930030    0.927457    0.925265    0.927870
5     2.229921    2.226783    2.299566    2.232399    2.239624
20    8.659801    8.657225    8.765673    8.669767    8.654137
50    21.632347   21.638670   21.703163   21.632696   21.633886
100   42.417915   42.363953   42.373817   42.415913   42.513714
500   209.941513  210.350479  210.116730  210.464706  210.110809
1000  420.522369  420.113403  420.176392  420.414948  420.357330

Table A.40: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 20 ms. Upload bandwidth: 20 Mbps. Download bandwidth: 100 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.013543   0.015042   0.014522   0.013723   0.012092
2     0.023958   0.023961   0.023960   0.023311   0.023961
5     0.053233   0.053245   0.053239   0.054078   0.053242
20    0.232430   0.216447   0.216450   0.216953   0.216450
50    0.535131   0.535135   0.547109   0.532446   0.539123
100   1.069085   1.089040   1.105022   1.075555   1.065097
500   6.411862   5.349919   5.333489   5.345114   5.393372
1000  10.702356  10.790157  10.670584  10.694106  10.860584

Data Attic
1     0.929746    0.926206    0.939159    0.922957    0.920546
2     1.414759    1.417225    1.416243    1.415090    1.416539
5     2.930924    2.925749    2.923278    2.921917    2.926517
20    10.484479   10.481408   10.475486   10.479676   10.469843
50    25.571327   25.570354   25.598158   25.580103   25.585814
100   50.703053   50.705093   50.776867   50.727341   50.714748
500   252.014114  252.080475  252.029358  252.161606  252.073563
1000  503.861969  503.625702  503.861237  504.030548  503.747101

NFS
1     0.576450    0.551649    0.551047    0.583713    0.548870
2     0.976798    0.999027    0.976202    0.976411    0.976264
5     2.272378    2.270322    2.287227    2.268921    2.288985
20    8.683164    8.703392    8.689393    8.690175    8.710640
50    21.682409   21.670296   21.681890   21.677109   21.690973
100   42.341629   42.506599   42.357639   42.408363   42.412571
500   210.333481  210.156296  210.681442  210.623901  210.009399
1000  420.202240  420.016785  420.145172  420.583618  420.352753

Table A.41: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 50 ms. Upload bandwidth: 20 Mbps. Download bandwidth: 100 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
1     0.012222   0.015730   0.014847   0.012977   0.014531
2     0.023952   0.023312   0.023321   0.023963   0.023304
5     0.053239   0.054081   0.054063   0.053241   0.054086
20    0.220445   0.216947   0.216942   0.216455   0.216964
50    0.535138   0.539673   0.559686   0.575064   0.539639
100   1.103842   1.068279   1.068274   1.065074   1.068277
500   5.369088   5.381084   5.357084   5.353927   5.344616
1000  10.704161  10.682593  10.850651  10.670388  10.690808

Data Attic
1     1.567353    1.564143    1.563937    1.575571    1.651211
2     2.058644    2.054007    2.058541    2.055722    2.058800
5     3.562298    3.563328    3.569591    3.565878    3.585876
20    11.102778   11.106563   11.110718   11.123807   11.116936
50    26.229858   26.205578   26.194012   26.224567   26.212738
100   51.409389   51.404984   51.381996   51.414337   51.380875
500   252.704803  252.798050  252.672302  252.740570  252.658295
1000  504.327209  504.554779  504.502197  504.337128  504.066986

NFS
1     0.777788    0.707036    0.725005    0.721611    0.719800
2     1.132709    1.125396    1.129265    1.128045    1.126728
5     2.422137    2.427534    2.435764    2.426025    2.417929
20    8.775098    8.786154    8.795289    8.781536    8.777965
50    21.780754   21.802725   21.799616   21.815811   21.849356
100   42.557476   42.614780   42.705166   42.758602   42.614758
500   210.786255  210.834320  211.055435  210.798279  210.841293
1000  420.622253  420.470978  420.352325  420.779785  420.310974

Table A.42: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 100 ms. Upload bandwidth: 20 Mbps. Download bandwidth: 100 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1      0.014515    0.014148    0.013249    0.013397    0.013128
    2      0.024022    0.023893    0.023900    0.024013    0.024014
    5      0.053361    0.053310    0.053304    0.053378    0.053370
   20      0.216950    0.216447    0.216454    0.216950    0.216966
   50      0.576394    0.558384    0.554381    0.536360    0.536362
  100      1.070565    1.081788    1.065793    1.467394    1.071561
  500      5.340406    5.349284    6.231509    5.377391    5.373323
 1000     10.698818   10.662548   10.658384   10.690569   10.674073

Data Attic
    1      2.722313    2.715011    2.732630    2.720617    2.717519
    2      3.236086    3.234526    3.234225    3.233158    3.235818
    5      4.741321    4.740405    4.741404    4.745071    4.740558
   20     12.295375   12.300757   12.292222   12.295055   12.290298
   50     27.377523   27.417814   27.382536   27.408001   27.413166
  100     52.593300   52.604149   52.587761   52.600105   52.524349
  500    254.005249  254.122803  254.003281  254.124405  254.003922
 1000    505.982758  505.455017  505.868835  505.578827  505.462341

NFS
    1      0.951962    0.945474    0.946180    0.947900    0.946210
    2      1.385412    1.378178    1.381829    1.380797    1.378263
    5      2.672921    2.674875    2.675254    2.682872    2.669572
   20      8.927579    8.926731    8.929219    8.922600    8.924067
   50     22.053967   22.069128   22.058529   22.048027   22.080389
  100     42.800854   42.833454   42.942188   42.804363   42.920433
  500    210.947449  211.131729  210.553925  211.218079  211.075897
 1000    420.654083  420.838501  421.011414  420.689606  420.797302

Table A.43: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 0 ms. Upload bandwidth: 1000 Mbps. Download bandwidth: 1000 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1      0.011945    0.012015    0.016590    0.013683    0.013666
    2      0.023960    0.023946    0.023966    0.023962    0.024021
    5      0.053250    0.053252    0.057232    0.081191    0.053916
   20      0.216741    0.216453    0.216453    0.216450    0.216401
   50      0.535148    0.534419    0.534439    0.538415    0.540369
  100      1.065079    1.068115    1.065792    1.065806    1.067568
  500      5.341456    5.329249    5.373157    5.333479    5.345316
 1000     10.714270   10.674587   10.718982   11.704374   10.742591

Data Attic
    1      0.095839    0.099990    0.095088    0.109347    0.100209
    2      0.101252    0.105719    0.103363    0.106858    0.107477
    5      0.159241    0.153263    0.161270    0.159600    0.155042
   20      0.440763    0.442512    0.427815    0.426092    0.431074
   50      0.994377    0.976555    0.999252    1.004847    0.991111
  100      1.862909    1.909359    1.934612    1.957417    1.908165
  500      9.353525    9.156355    9.063665    9.169987    9.061100
 1000     19.249550   18.977049   19.031811   18.844490   18.882013

NFS
    1      0.044861    0.038193    0.039769    0.039004    0.039521
    2      0.060786    0.060731    0.058452    0.056783    0.060141
    5      0.122469    0.118733    0.118706    0.118793    0.119576
   20      0.428337    0.425364    0.426424    0.435693    0.430202
   50      1.060672    1.089206    1.079605    1.074996    1.106824
  100      2.165933    2.172843    2.083941    2.084490    2.110491
  500      9.176386    9.315820    9.144845    9.261391    9.413330
 1000     15.407029   15.111933   15.411137   15.538260   15.169400

Table A.44: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 5 ms. Upload bandwidth: 1000 Mbps. Download bandwidth: 1000 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1      0.014312    0.013258    0.125240    0.014934    0.013498
    2      0.023299    0.023969    0.023309    0.023953    0.023304
    5      0.054077    0.057228    0.054078    0.053249    0.054077
   20      0.213257    0.216462    0.220955    0.216451    0.216952
   50      0.540372    0.535143    0.535672    0.531215    0.535684
  100      1.067563    1.065089    1.072271    1.065809    1.070565
  500      5.365365    5.785085    5.360645    5.336574    5.341126
 1000     10.686766   10.682388   10.766647   10.674624   10.726148

Data Attic
    1      0.231299    0.207456    0.210872    0.202498    0.220859
    2      0.214218    0.212470    0.213523    0.213756    0.214784
    5      0.271689    0.272026    0.263771    0.277019    0.266604
   20      0.556397    0.540932    0.538534    0.543130    0.801016
   50      1.091882    1.105468    1.088153    1.109255    1.098998
  100      2.030252    2.038642    2.040077    2.033295    2.030826
  500      9.260099    9.364548    9.208427    9.185124    9.125246
 1000     19.465603   18.740974   19.195414   19.130281   18.883484

NFS
    1      0.098570    0.085918    0.086261    0.088024    0.086850
    2      0.097167    0.086583    0.084973    0.084712    0.088390
    5      0.150552    0.153201    0.146741    0.150469    0.151295
   20      0.438154    0.462089    0.448921    0.439265    0.451306
   50      1.091057    1.133171    1.115047    1.100985    1.093386
  100      2.193097    2.163142    2.173291    2.190594    2.132412
  500      9.080558    9.149271    8.979112    9.206995    9.162634
 1000     15.651818   15.280517   15.394970   15.563126   15.208155

Table A.45: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 10 ms. Upload bandwidth: 1000 Mbps. Download bandwidth: 1000 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1      0.014995    0.014472    0.013197    0.012892    0.015352
    2      0.023957    0.023960    0.023954    0.023970    0.023318
    5      0.053242    0.053239    0.053253    0.053244    0.054069
   20      0.216471    0.216469    0.220442    0.216457    0.216964
   50      0.539115    0.535131    0.531218    0.559085    0.535666
  100      1.065101    1.069108    1.089774    1.072111    1.072282
  500      5.353471    5.349513    5.332621    5.332635    5.432680
 1000     10.702371   10.678683   10.682684   10.672022   10.710872

Data Attic
    1      0.324816    0.332452    0.331968    0.335270    0.334667
    2      0.348177    0.344422    0.349974    0.339348    0.352759
    5      0.397255    0.392281    0.397923    0.402695    0.392481
   20      0.667265    0.678583    0.676612    0.672178    0.678885
   50      1.231160    1.266360    1.239021    1.233405    1.204625
  100      2.174609    2.137823    2.172406    2.090364    2.127868
  500      9.297504    9.283559    9.398632    9.299617    9.591501
 1000     19.438564   19.037067   19.062597   19.191475   19.197802

NFS
    1      0.137207    0.136354    0.137129    0.136177    0.137589
    2      0.116616    0.120433    0.125968    0.121004    0.120937
    5      0.178404    0.183425    0.174319    0.180846    0.190173
   20      0.472999    0.464744    0.475286    0.470276    0.478278
   50      1.182322    1.137511    1.124481    1.132724    1.124164
  100      2.155873    2.168211    2.215777    2.201844    2.143308
  500      9.207791    9.388331    9.260987    9.517526    9.339321
 1000     15.381142   15.382339   15.181826   15.344382   15.513127

Table A.46: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 20 ms. Upload bandwidth: 1000 Mbps. Download bandwidth: 1000 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1      0.014012    0.014376    0.029776    0.014192    0.013119
    2      0.023938    0.023965    0.043920    0.023312    0.023966
    5      0.053240    0.053250    0.053253    0.054080    0.053240
   20      0.244408    0.216462    0.361258    0.216961    0.216458
   50      0.535138    0.535125    0.539656    0.535659    0.535123
  100      1.065086    1.065095    1.068285    1.072266    1.069088
  500      5.341469    5.333492    5.393097    5.340629    5.357463
 1000     11.536700   10.710325   10.702610   10.682817   10.798151

Data Attic
    1      0.585253    0.575164    0.582280    0.576260    0.582344
    2      0.608915    0.607288    0.609558    0.611980    0.611182
    5      0.677078    0.673144    0.685416    0.678408    0.678276
   20      0.951306    0.966444    0.962410    0.959414    0.961215
   50      1.503323    1.512594    1.524236    1.527188    1.537219
  100      2.449073    2.430100    2.415965    2.455801    2.459199
  500      9.971152    9.772855    9.772682    9.889025    9.955913
 1000     19.629475   19.715509   19.449205   19.596937   19.109718

NFS
    1      0.239332    0.240285    0.235611    0.243157    0.238416
    2      0.176515    0.176602    0.176815    0.173331    0.184945
    5      0.245209    0.244390    0.246117    0.238558    0.244134
   20      0.536667    0.513875    0.516329    0.517519    0.515985
   50      1.297457    1.254623    1.232977    1.229618    1.238181
  100      2.337497    2.205348    2.285600    2.296578    2.197381
  500      9.585058    9.207819    9.395561    9.450255    9.331936
 1000     15.464273   15.609755   15.476225   15.701265   15.284601

Table A.47: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 50 ms. Upload bandwidth: 1000 Mbps. Download bandwidth: 1000 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1      0.013936    0.014098    0.015217    0.014441    0.015220
    2      0.023958    0.023961    0.023302    0.023314    0.023270
    5      0.053249    0.053246    0.054073    0.054072    0.054069
   20      0.216443    0.216458    0.220957    0.216954    0.216960
   50      0.535146    0.535149    0.535665    0.567680    0.535658
  100      1.069083    1.068085    1.088285    1.070581    1.072285
  500      5.353928    6.310680    5.353090    5.369123    6.177397
 1000     10.674421   10.654681   10.698616   10.682139   10.682613

Data Attic
    1      1.327508    1.327537    1.328230    1.329041    1.323218
    2      1.419235    1.417890    1.419932    1.433298    1.422295
    5      1.546429    1.579297    1.547539    1.543415    1.542073
   20      2.154391    2.135829    2.130220    2.136349    2.178776
   50      3.353392    3.321645    3.330185    3.352096    3.339505
  100      5.302264    5.240001    5.288125    5.226161    5.300101
  500     20.861240   20.651667   20.789122   20.957628   20.738808
 1000     40.606426   40.081741   40.302311   40.230686   40.398174

NFS
    1      0.536005    0.535580    0.533607    0.536646    0.535341
    2      0.364054    0.355169    0.356968    0.355054    0.353164
    5      0.481711    0.457854    0.461574    0.450285    0.455680
   20      0.844609    0.867897    0.836679    0.837967    0.834969
   50      1.890677    1.881653    1.869351    1.848944    1.841646
  100      3.408702    3.350919    3.318367    3.354725    3.341660
  500     12.988369   12.958642   13.216334   13.099484   13.066463
 1000     21.904961   21.940199   21.933634   22.029299   22.111099

Table A.48: Execution times of performance test script comparing data attic to NFS. Operation: read. RTT latency added: 100 ms. Upload bandwidth: 1000 Mbps. Download bandwidth: 1000 Mbps.

File Size (MB)  Trial 1 (sec.)  Trial 2 (sec.)  Trial 3 (sec.)  Trial 4 (sec.)  Trial 5 (sec.)

Local only
    1      0.013688    0.016104    0.011850    0.014713    0.015314
    2      0.023945    0.023302    0.023947    0.023967    0.023302
    5      0.053245    0.054073    0.053234    0.053239    0.054073
   20      0.216460    0.216958    0.216460    0.216462    0.216956
   50      0.543107    0.535668    0.535126    0.535139    0.535664
  100      1.069077    1.100279    1.655944    1.077055    1.584149
  500      5.361901    5.341076    5.325963    5.337471    5.385407
 1000     10.730240   10.750600   10.678383   11.121507   10.642846

Data Attic
    1      2.581544    2.571464    2.578320    2.584034    2.577116
    2      2.768815    2.770740    2.767468    2.765352    2.770235
    5      2.992150    2.990522    2.991731    2.990354    2.993527
   20      4.128263    4.129865    4.127096    4.141631    4.134083
   50      6.443527    6.421541    6.392045    6.394361    6.426034
  100     10.202190   10.251856   10.248662   10.250010   10.248103
  500     40.715759   40.460876   40.563328   40.405365   40.349304
 1000     78.922302   78.135826   78.180153   78.225075   78.434685

NFS
    1      1.036652    1.038579    1.034361    1.034917    1.036411
    2      0.663571    0.656573    0.681114    0.668559    0.668853
    5      0.805849    0.804070    0.811229    0.804287    0.811229
   20      1.377500    1.383162    1.374107    1.390428    1.413378
   50      3.002639    2.990387    3.013238    3.017627    2.973926
  100      5.273320    5.279652    5.232237    5.256146    5.312119
  500     19.717278   19.853085   19.971788   19.631626   19.783358
 1000     35.649261   35.353550   35.605354   35.431969   35.448105
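The raw trials above can be condensed into per-configuration summary statistics. The following sketch (not part of the thesis's test tooling; the trial values are copied from the 1000 MB read rows of Table A.48, 100 ms RTT, 1000 Mbps up/down) shows one way to compute the mean and sample standard deviation for each storage configuration:

```python
from statistics import mean, stdev

# Trial times in seconds for 1000 MB reads, copied from Table A.48
# (RTT latency added: 100 ms; upload/download bandwidth: 1000 Mbps).
trials = {
    "Local only": [10.730240, 10.750600, 10.678383, 11.121507, 10.642846],
    "Data Attic": [78.922302, 78.135826, 78.180153, 78.225075, 78.434685],
    "NFS":        [35.649261, 35.353550, 35.605354, 35.431969, 35.448105],
}

for name, times in trials.items():
    # Sample mean and sample standard deviation over the five trials.
    print(f"{name}: mean = {mean(times):.3f} s, stdev = {stdev(times):.3f} s")
```

The small standard deviations relative to the means confirm that five trials per configuration were sufficient for stable read-time measurements in this setting.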
