
The Anonymity Engine, Minimizing Quasi-Identifiers to Strengthen k-Anonymity

by

ERIC MATTHEW LOBATO
B.S., University of Colorado, 2015

A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Master of Science, Interdisciplinary Telecommunications Department, 2017

This thesis entitled: The Anonymity Engine, Minimizing Quasi-Identifiers to Strengthen k-Anonymity, written by Eric Matthew Lobato, has been approved for the Interdisciplinary Telecommunications Department.

(Joe McManus)
(David Reed)
(Levi Perigo)

Date ____________________

The final copy of this thesis has been examined by the signatories, and we find that both the content and the form meet acceptable presentation standards of scholarly work in the above-mentioned discipline.

IRB protocol # ____________________
IACUC protocol # __________________

Lobato, Eric Matthew (M.S., ITP)
The Anonymity Engine, Minimizing Quasi-Identifiers to Strengthen k-Anonymity
Thesis directed by Scholar in Residence Joe McManus

The k-anonymity model has become a standard for anonymizing data. However, almost all applications of k-anonymity are used to anonymize large data sets of personally identifiable information owned by a trusted third party before the data is given to analysts. To further research in this area, this study created a tool called the Anonymity Engine. The tool was built as a web browser plugin that analyzes the headers of all web traffic exiting the system and builds a database of relevant quasi-identifiers. Users are notified in real time if a data packet would compromise their identity and are given the option to not send the data. The tool has also been used to generate data showing that modifying data before implementing k-anonymity can impact the results: depending on the traffic, such modification can make some users more anonymous while reducing the level of privacy for others.

CONTENTS

CHAPTER
I. INTRODUCTION AND RESEARCH QUESTION
II. REVIEW OF THE LITERATURE
III. RESEARCH DESIGN AND METHODOLOGY
    The Anonymity Engine
    Design
    Testing Plans
IV. RESULTS AND CONCLUSIONS
    Results
    Conclusions
BIBLIOGRAPHY
APPENDIX
A. Code Submission
B. Raw Data

TABLES

Table
1. Hospital Records, without Anonymization
2. Hospital Records with k Value of 3
3. Database Schema
4. Application Structure
5. Experiment Users, URLs and Traits
6. Experiment 1 Data with Raw Request Counts
7. Experiment 1 Data with k Values of 3
8. Experiment 2 Data with k Values of 3

FIGURES

Figure
1. Anonymity Engine GET Blocking Example
2. Anonymity Engine POST Blocking Example (form data)
3. Anonymity Engine POST Blocking Example (binary data)
I. Introduction and Research Question

Imagine a user who logs onto a computer and visits the following websites:

    www.denverpost.com to check the news
    www.apple.com/support/products to look at warranty plans for their computer
    forecast.weather.gov to check the weather
    www.ebay.com to bid on a tie

This may appear to be a normal browsing session, but an advanced user with access to a traffic sniffer could use this session to ascertain several identifying traits about the browsing user. First, that the person is from Colorado, identified from the zip code entered on the weather site and from the choice of news site. Second, that the person has recently broken their Apple product. Finally, that the person is shopping for neckwear. Perhaps the advanced user knows a friend in real life who matches these descriptions. If so, they could easily learn more facts about their friend by following the traffic of this specific user, potentially learning things that their friend would wish to keep private.

As this example shows, maintaining personal privacy when using the internet has become one of the most important issues of the modern day. Unfortunately, nontechnical users struggle to understand how their browsing patterns could link information about themselves to the traffic they generate. Meanwhile, technical users might not always be convinced that they are truly anonymous when using privacy software such as VPNs, proxies, or the Tor browser. This research aims to create a new tool that will give users a clear idea of what their web traffic says about them and whether or not they are anonymous over time. This fills a niche among privacy tools that currently remains open, as few such tools describe what actual network packets say about a user.

The tool that this research will produce is based on the existing model for privacy known as k-anonymity [1]. The k-anonymity model determines anonymity based on the idea that knowing a minimal set of identifiers can link an individual in a data set. For a data set to meet k-anonymity it must satisfy the following definition:

Let $RT(A_1, \ldots, A_n)$ be a table and $QI_{RT}$ be the quasi-identifier associated with it. $RT$ is said to satisfy k-anonymity if and only if each sequence of values in $RT[QI_{RT}]$ appears with at least $k$ occurrences in $RT[QI_{RT}]$.    Definition (1)

This definition leaves much to be desired for those who are not mathematicians. Thankfully, there are many examples which clearly show what k-anonymity is attempting to illustrate.
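Before turning to the classic example, Definition (1) can be restated operationally: group the rows of the table by their quasi-identifier values and verify that no group is smaller than k. The following minimal sketch illustrates this check; the function name, record layout, and attribute names are hypothetical illustrations, not code from this thesis.

```python
from collections import Counter

def satisfies_k_anonymity(rows, quasi_identifiers, k):
    """Definition (1), restated: every combination of quasi-identifier
    values appearing in the table must occur at least k times."""
    if not rows:
        return True
    # Project each row onto its quasi-identifier attributes only.
    projections = [tuple(row[attr] for attr in quasi_identifiers)
                   for row in rows]
    # The table is k-anonymous iff its rarest projection still
    # appears at least k times.
    return min(Counter(projections).values()) >= k

# Hypothetical records shaped like the hospital example below.
records = [
    {"age": 30, "zip": "80301", "condition": "Heart Disease"},
    {"age": 30, "zip": "80301", "condition": "Chest Pains"},
    {"age": 30, "zip": "80301", "condition": "Cancer"},
    {"age": 45, "zip": "80302", "condition": "Flu"},
]

# The (30, 80301) group has three rows, but the (45, 80302) group
# has only one, so the table as a whole fails k = 3.
print(satisfies_k_anonymity(records, ["age", "zip"], k=3))  # False
```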
The classic example, first published with the definition of k-anonymity, is to imagine a scenario where you wake up one morning to find an ambulance taking your neighbor, Bob, to a hospital. Let's say, for example, that you are the nosy type, and you investigate what happened by entering Bob's house. Conveniently, you find a sheet of hospital records on his counter that looks like Table 1.

Table 1. Hospital Records, without Anonymization

While the hospital data may look anonymized because it contains no names or obvious identifiers, you, as Bob's neighbor, can leverage knowledge that you have about him to infer which patient he is. Let's say that you know Bob is 30 years old because you attended his party last week, and you know his zip code because it is the same as yours. These pieces of information are Bob's quasi-identifiers; individually they mean nothing, but taken together they identify Bob. With just this background knowledge you can find that there is only one record that fits the description, and you realize that a heart attack caused by chest pains is what sent your dear neighbor to the hospital.

Now let's apply the theory of k-anonymity to the data set. We want to make sure that there are at least k rows with the same quasi-identifier values. Table 2 does this by enforcing a k value of 3.

Table 2. Hospital Records with k of 3

This means that, at a minimum, the data has been anonymized so that there are 3 rows with matching data. Now, if you had found this sheet in Bob's house, you would be able to narrow down your choices, but you would not be able to say for certain whether Bob was sent to the hospital due to heart disease or cancer. This implementation of k-anonymity successfully prevented Bob's privacy from being breached.

When data such as this is used in the real world, a trusted third party, such as the hospital record keepers, sanitizes the data before it is shared with any relevant parties such as researchers. This system works well as long as the third party who is scrubbing the data is itself trustworthy. One of the big differences with digital data is that unless the user is specifically trying to anonymize their data, unwanted third parties could potentially perform these linking attacks based on the user's browsing history. What's more, as more and more technical users take steps to anonymize their traffic, it becomes easier to link the data of users who are not taking steps to protect theirs, because they stand out in the crowd.

Returning to the original example of a hacker observing a person's browsing history, we find that a similar tactic can be employed. Had the user viewed a weather website that was encrypted, and had they gone to a more general news site as opposed to the Denver Post, the attacker would not have learned that the person was from Colorado and could not have guessed that they knew this user in real life.

This research intends to find out what happens if the user chooses to take up the job of that trusted third party and only sends data that would not violate a k-anonymity table. This would not be possible in the real world; after all, you wouldn't tell your doctor that your age is "greater than 20" if he asked you how old you were. Yet, in the computer world it is possible to generalize or withhold such values before they ever leave the user's machine.
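To make that idea concrete, the sketch below shows one way a user-side tool might generalize quasi-identifying values in an outgoing request before it is sent, analogous to replacing an exact age with a range. The parameter names and generalization rules here are hypothetical illustrations, not the Anonymity Engine's actual implementation.

```python
def generalize_age(age, bucket=10):
    """Replace an exact age with a decade range, e.g. 34 -> '30-39'."""
    low = (age // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

def generalize_zip(zip_code, keep=3):
    """Truncate a ZIP code, e.g. '80301' -> '803**'."""
    return zip_code[:keep] + "*" * (len(zip_code) - keep)

def scrub_outgoing(params):
    """Generalize known quasi-identifiers in an outgoing request's
    parameters before the request leaves the machine."""
    scrubbed = dict(params)
    if "age" in scrubbed:
        scrubbed["age"] = generalize_age(int(scrubbed["age"]))
    if "zip" in scrubbed:
        scrubbed["zip"] = generalize_zip(scrubbed["zip"])
    return scrubbed

print(scrub_outgoing({"age": "30", "zip": "80301", "q": "forecast"}))
# -> {'age': '30-39', 'zip': '803**', 'q': 'forecast'}
```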