Web Tracking SNET2 Seminar Paper - Summer Term 2011
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
- Requests Documentation, Release 2.2.1 (Kenneth Reitz)
Requests Documentation, Release 2.2.1. Kenneth Reitz, January 15, 2016.

Contents: 1 Testimonials; 2 Feature Support; 3 User Guide (Introduction, Installation, Quickstart, Advanced Usage, Authentication); 4 Community Guide (Frequently Asked Questions, Integrations, Articles & Talks, Support, Community Updates, Software Updates); 5 API Documentation (Developer Interface); 6 Contributor Guide (Development Philosophy, How to Help, Authors); Python Module Index.

Requests is an Apache2 Licensed HTTP library, written in Python, for human beings. Python's standard urllib2 module provides most of the HTTP capabilities you need, but the API is thoroughly broken. It was built for a different time — and a different web.
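As a quick, hedged illustration of the library the excerpt above describes, here is a minimal Requests call. The URL and parameters are placeholders; requests.get(), response.status_code and response.json() are the library's standard API.

```python
import requests

# Fetch a resource over HTTPS; the URL and query parameters are only placeholders.
response = requests.get(
    "https://httpbin.org/get",
    params={"q": "web tracking"},
    timeout=10,
)

print(response.status_code)                 # e.g. 200
print(response.headers["Content-Type"])     # response headers behave like a dict
print(response.json())                      # parsed JSON body, if the server returned JSON
```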
- The Web Never Forgets: Persistent Tracking Mechanisms in the Wild
The Web Never Forgets: Persistent Tracking Mechanisms in the Wild. Gunes Acar (1), Christian Eubank (2), Steven Englehardt (2), Marc Juarez (1), Arvind Narayanan (2), Claudia Diaz (1). (1) KU Leuven, ESAT/COSIC and iMinds, Leuven, Belgium, {name.surname}@esat.kuleuven.be; (2) Princeton University, {cge,ste,arvindn}@cs.princeton.edu.

Abstract: We present the first large-scale studies of three advanced web tracking mechanisms — canvas fingerprinting, evercookies and use of "cookie syncing" in conjunction with evercookies. Canvas fingerprinting, a recently developed form of browser fingerprinting, has not previously been reported in the wild; our results show that over 5% of the top 100,000 websites employ it. We then present the first automated study of evercookies and respawning and the discovery of a new evercookie vector, IndexedDB. Turning to cookie syncing, we present novel techniques for detecting and analysing ID flows, and we quantify the amplification of privacy-intrusive tracking practices due to cookie syncing. Our evaluation of the defensive techniques used by …

1. Introduction: A 1999 New York Times article called cookies comprehensive privacy invaders and described them as "surveillance files that many marketers implant in the personal computers of people." Ten years later, the stealth and sophistication of tracking techniques had advanced to the point that Edward Felten wrote "If You're Going to Track Me, Please Use Cookies" [18]. Indeed, online tracking has often been described as an "arms race" [47], and in this work we study the latest advances in that race. The tracking mechanisms we study are advanced in that they are hard to control, hard to detect and resilient to blocking or removing.
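The excerpt mentions techniques for detecting cookie syncing by following ID flows across parties. The sketch below is a rough, hypothetical illustration of that idea rather than the paper's actual pipeline: it flags identifier-like query-string tokens that appear in requests to more than one third-party domain. All domains and tokens are invented.

```python
import re
from collections import defaultdict
from urllib.parse import urlparse, parse_qsl

# Hypothetical request log: (third-party domain, full request URL).
requests_log = [
    ("tracker-a.example", "https://tracker-a.example/sync?uid=9f8e7d6c5b4a3f2e"),
    ("tracker-b.example", "https://tracker-b.example/pixel?partner_id=9f8e7d6c5b4a3f2e"),
    ("cdn.example",       "https://cdn.example/lib.js?v=3"),
]

ID_LIKE = re.compile(r"^[A-Za-z0-9_-]{12,}$")  # only long, opaque-looking tokens

domains_per_token = defaultdict(set)
for domain, url in requests_log:
    for _, value in parse_qsl(urlparse(url).query):
        if ID_LIKE.match(value):
            domains_per_token[value].add(domain)

# A token shared by two or more distinct third parties suggests ID syncing.
for token, domains in domains_per_token.items():
    if len(domains) >= 2:
        print(f"possible cookie sync: {token} seen at {sorted(domains)}")
```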
- Do Not Track: A Nokia Browser Look
Do Not Track: Nokia Browser Position. Vikram Malaiya. Copyright: Nokia Corporation, 2011.

Our understanding of Do Not Track (DNT): DNT is a technology that enables users to opt out of third-party web tracking. There is no agreed-upon definition of DNT; there are currently three major technology proposals for responding to third-party privacy concerns: 1. Stanford University and Mozilla's DNT HTTP header technique. 2. Blacklist-based techniques such as Microsoft's 'Tracking Protection', which is part of IE9. 3. The Network Advertising Initiative's model of a per-company opt-out cookie; the opt-out cookie approach is being promoted by Google.

DNT as HTTP header: the browser adds 'DNT' / 'X-Do-Not-Track' to its HTTP headers. The header is sent to the server with every web request and acts as a signal suggesting that the user wishes to opt out of tracking. Adoption: Firefox 4, IE9. Pros: scope (the server could apply restrictions to all third-party entities and tracking mechanisms), persistence (no reconfiguration needed once set), and simplicity (easy to implement on the browser side). Cons: it only works as long as the server honors the user's preference, and there is no way to enforce national regulations or legislation on servers located beyond country boundaries.

Block (black) list / Tracking Protection: a consumer opt-in mechanism that blocks web connections to known tracking domains compiled on a list. Adoption: 'Tracking Protection' in Internet Explorer 9. The downloadable Tracking Protection Lists enable IE9 consumers to control what third-party site content can track them when they're online. (The original slide illustrates this with a first-party page making third-party requests and allow/deny rules for example domains such as cy.analytix.com, xy.ads.com and ads.tracker.com.)
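For illustration, here is a minimal sketch of the header-based approach described above, as seen from an HTTP client written in Python. The target URL is a placeholder; "DNT: 1" (1 = opt out of tracking) is the conventional form of the header the excerpt attributes to Firefox 4 and IE9.

```python
import requests

# Express the user's opt-out preference on an outgoing request.
headers = {"DNT": "1"}
response = requests.get("https://example.com/", headers=headers, timeout=10)

# Whether anything changes is entirely up to the server: the header is only a
# signal, which is the main weakness the excerpt lists under "Cons".
print(response.status_code)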
- Effects and Opportunities of Native Code Extensions for Computationally Demanding Web Applications
Effects and Opportunities of Native Code Extensions for Computationally Demanding Web Applications. Dissertation submitted for the academic degree of Dr. phil. in Library and Information Science at the Philosophische Fakultät I, Humboldt-Universität zu Berlin, by Dipl.-Inform. Dennis Jarosch. President of Humboldt-Universität zu Berlin: Prof. Dr. Jan-Hendrik Olbertz. Dean of the Philosophische Fakultät I: Prof. Michael Seadle, Ph.D. Reviewers: 1. Prof. Dr. Robert Funk, 2. Prof. Michael Seadle, Ph.D. Submitted: 28.10.2011; oral examination: 16.12.2011.

Abstract: The World Wide Web is amidst a transition from interactive websites to web applications. An increasing number of users perform their daily computing tasks entirely within the web browser — turning the Web into an important platform for application development. The Web as a platform, however, lacks the computational performance of native applications. This problem has motivated the inception of Microsoft Xax and Google Native Client (NaCl), two independent projects that facilitate the development of native web applications. Native web applications allow the extension of conventional web applications with compiled native code, while maintaining operating system portability. This dissertation determines the benefits and drawbacks of native web applications. It also addresses the question of how the performance of JavaScript web applications compares to that of native applications and native web applications. Four application benchmarks are introduced that focus on different performance aspects: number crunching (serial and parallel), 3D graphics performance, and data processing. A performance analysis is undertaken in order to determine and compare the performance characteristics of native C applications, JavaScript web applications, and NaCl native web applications.
- The Internet and Web Tracking
The Internet and Web Tracking. Tim Zabawa, Grand Valley State University, ScholarWorks@GVSU, Technical Library, School of Computing and Information Systems, 2020. ScholarWorks citation: Zabawa, Tim, "The Internet and Web Tracking" (2020). Technical Library. 355. https://scholarworks.gvsu.edu/cistechlib/355

Tim Zabawa, 12/17/20, Capstone Project. Contents: 1. Introduction; 2. How we are tracked online (2.1 Cookies, 2.2 Browser Fingerprinting, 2.3 Web Beacons); 3. Defenses Against Web Tracking (3.1 Cookies, 3.2 Browser Fingerprinting, 3.3 Web Beacons); 4. Technological Examples; 5. Why consumer data is sought after; 6. Conclusion; 7. References.

1. Introduction: Can you remember the last time you didn't visit at least one website throughout your day? For most people, the common response might be "I cannot". Surfing the web has become such a mainstay in our day to day lives that a lot of us have a hard time imagining a world without it. What seems like an endless trove of data is right at our fingertips. On the surface, using the Internet seems to be a one-sided exchange of information. We, as users, request data from companies and use it as we deem fit.
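One mechanism named in the outline above, the web beacon (tracking pixel), amounts to embedding a tiny image whose URL carries identifying parameters. Below is a hypothetical sketch of how such a beacon URL might be assembled; the endpoint, identifiers and pages are all made up for illustration.

```python
from urllib.parse import urlencode

# Hypothetical beacon endpoint and identifiers; a real tracker would define its own.
BEACON_ENDPOINT = "https://tracker.example/pixel.gif"

params = {
    "uid": "9f8e7d6c5b4a3f2e",                   # identifier stored in a cookie or fingerprint
    "page": "https://news.example/article/42",   # page the user is currently viewing
    "ref": "https://search.example/?q=shoes",    # where the user came from
}

beacon_url = f"{BEACON_ENDPOINT}?{urlencode(params)}"

# Embedded as an invisible 1x1 image, every page view triggers a request that
# delivers these parameters to the tracking server.
img_tag = f'<img src="{beacon_url}" width="1" height="1" style="display:none" alt="">'
print(img_tag)
```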
Re-Architecting Web and Mobile Information Access for Emerging Regions
Re-architecting Web and Mobile Information Access for Emerging Regions, by Jay Chen. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Mathematics, Courant Institute of Mathematical Sciences, New York University, September 2011. Advisor: Professor Lakshminarayanan Subramanian. © Jay Chen, All Rights Reserved, 2011.

Acknowledgments: I would like to start by expressing my deepest gratitude to my advisor, Lakshminarayanan Subramanian (or just "Lakshmi"). It was Lakshmi who set me on the path toward my eventual area of research. Lakshmi has always been generous with his time, and never short on ideas or enthusiasm. Without Lakshmi's courage to pursue the research that inspires him, I would not have found my own passion: to build systems that benefit people - as many people as much as possible - by inventing ways to bring technology to people living outside of the privileged regions of the world.

Contributors to this dissertation: this thesis is based on research that I performed over the past five years, with many colleagues contributing directly to the work in this dissertation. Many people helped me along the way whose help I could not have done without. The RuralCafe user study would not have been possible without the help of Saleema Amershi and Aditya Dhananjay (Chapter 6.6). Our low-bandwidth transport modeling and analysis (Chapter 3.1) was an effort largely attributable to Janardhan Iyengar and long discussions with Bryan Ford. Russell Power implemented the feature reduction algorithm for CIPs (Chapter 7.2.2) in his "spare time". Our ELF deployments (Chapters 2.2 and 5.3) were only possible with help from David Hutchful.
- Introducing 2D Game Engine Development with JavaScript
Chapter 1: Introducing 2D Game Engine Development with JavaScript.

Video games are complex, interactive, multimedia software systems. These systems must, in real time, process player input, simulate the interactions of semi-autonomous objects, and generate high-fidelity graphics and audio outputs, all while trying to engage the players. Attempts at building video games can quickly be overwhelmed by the need to be well versed in software development as well as in how to create appealing player experiences. The first challenge can be alleviated with a software library, or game engine, that contains a coherent collection of utilities and objects designed specifically for developing video games. The player engagement goal is typically achieved through careful gameplay design and fine-tuning throughout the video game development process. This book is about the design and development of a game engine; it will focus on implementing and hiding the mundane operations and supporting complex simulations. Through the projects in this book, you will build a practical game engine for developing video games that are accessible across the Internet.

A game engine relieves the game developers from simple routine tasks such as decoding specific key presses on the keyboard, designing complex algorithms for common operations such as mimicking shadows in a 2D world, and understanding nuances in implementations such as enforcing accuracy tolerance of a physics simulation. Commercial and well-established game engines such as Unity, Unreal Engine, and Panda3D present their systems through a graphical user interface (GUI). Not only does the friendly GUI simplify some of the tedious processes of game design such as creating and placing objects in a level, but more importantly, it ensures that these game engines are accessible to creative designers with diverse backgrounds who may find software development specifics distracting.
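The chapter above describes the real-time work a game engine hides: processing input, stepping the simulation and rendering. A common way to organize that work is a fixed-timestep loop. The sketch below shows that general pattern in Python purely for illustration (the book itself builds its engine in JavaScript); input handling and rendering are stubbed out.

```python
import time

def process_input(state):
    # Poll devices and translate raw events into game actions (stubbed here).
    pass

def update(state, dt):
    # Advance the simulation by one fixed step dt (objects, physics, AI, ...).
    state["ticks"] += 1

def render(state):
    # Draw the current state; a real engine would target a canvas or the GPU.
    pass

def game_loop(updates_per_second=60, run_for_seconds=1.0):
    state = {"ticks": 0}
    dt = 1.0 / updates_per_second
    previous = time.monotonic()
    lag = 0.0
    deadline = previous + run_for_seconds
    while time.monotonic() < deadline:
        now = time.monotonic()
        lag += now - previous
        previous = now
        process_input(state)
        # Run as many fixed-size simulation steps as real time demands, so the
        # game's speed stays independent of how fast frames are rendered.
        while lag >= dt:
            update(state, dt)
            lag -= dt
        render(state)
    return state["ticks"]

print(game_loop())  # roughly 60 ticks after one simulated second
```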
- Preventing Session Hijacking Using Encrypted One-Time-Cookies
Preventing Session Hijacking using Encrypted One-Time-Cookies. Renascence Tarafder Prapty (1), Shuhana Azmin (1), Md. Shohrab Hossain (1), Husnu S. Narman (2). (1) Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Bangladesh; (2) Weisberg Division of Computer Science, Marshall University, Huntington, WV, USA. Email: [email protected], [email protected], [email protected], [email protected]

Abstract: Hypertext Transfer Protocol (HTTP) cookies are pieces of information shared between HTTP server and client to remember stateful information for the stateless HTTP protocol or to record a user's browsing activity. Cookies are often used in web applications to identify a user and the corresponding authenticated session. Thus, stealing a cookie can lead to hijacking an authenticated user's session. To prevent this type of attack, a cookie protection mechanism is required. In this paper, we have proposed a secure and efficient cookie protection system. We have used one-time cookies to prevent an attacker from performing cookie injection. To ensure cookie integrity and confidentiality, we have encrypted sensitive information in the cookie.

… cryptography-based techniques to achieve cookie confidentiality and integrity. However, each mechanism lacks in one aspect or another. The scheme described in [1] does not achieve cookie confidentiality. In [3], the web server is required to store one-time keys, which leads to key management problems. In [2] and [4], a one-time key is encrypted by the web server's public key. As public key encryption is computationally expensive, these schemes are not efficient. [5] proposes the use of cache cookies as an alternative to cookies for authentication of …
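As a loose illustration of the two ideas in the abstract, encrypting the sensitive cookie payload and letting each cookie value be accepted only once, the sketch below uses the third-party cryptography package's Fernet construction for authenticated encryption. It is an assumption-laden toy, not the paper's actual scheme; all names and the nonce bookkeeping are illustrative.

```python
import os
import json
from typing import Optional
from cryptography.fernet import Fernet  # pip install cryptography

SERVER_KEY = Fernet.generate_key()      # in practice, stored securely server-side
fernet = Fernet(SERVER_KEY)
used_nonces = set()                     # server-side record of spent one-time values

def issue_cookie(user_id: str) -> str:
    """Encrypt the session data together with a fresh single-use nonce."""
    nonce = os.urandom(16).hex()
    payload = json.dumps({"uid": user_id, "nonce": nonce}).encode()
    # Fernet provides confidentiality (AES) and integrity (HMAC) in one token.
    return fernet.encrypt(payload).decode()

def accept_cookie(cookie_value: str) -> Optional[str]:
    """Decrypt, verify, and reject any cookie whose nonce was already used."""
    try:
        payload = json.loads(fernet.decrypt(cookie_value.encode()))
    except Exception:
        return None                     # tampered or forged cookie
    if payload["nonce"] in used_nonces:
        return None                     # replayed cookie: reject
    used_nonces.add(payload["nonce"])
    return payload["uid"]

cookie = issue_cookie("alice")
print(accept_cookie(cookie))   # 'alice'
print(accept_cookie(cookie))   # None: a one-time cookie is refused on reuse
```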
- Protecting Encrypted Cookies from Compression Side-Channel Attacks
Protecting encrypted cookies from compression side-channel attacks. Janaka Alawatugoda (1), Douglas Stebila (1,2), and Colin Boyd (3). (1) School of Electrical Engineering and Computer Science, (2) School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia, [email protected], [email protected]; (3) Department of Telematics, Norwegian University of Science and Technology, Trondheim, Norway, [email protected]. December 28, 2014.

Abstract: Compression is desirable for network applications as it saves bandwidth; however, when data is compressed before being encrypted, the amount of compression leaks information about the amount of redundancy in the plaintext. This side channel has led to successful CRIME and BREACH attacks on web traffic protected by the Transport Layer Security (TLS) protocol. The general guidance in light of these attacks has been to disable compression, preserving confidentiality but sacrificing bandwidth. In this paper, we examine two techniques, heuristic separation of secrets and fixed-dictionary compression, for enabling compression while protecting high-value secrets, such as cookies, from attack. We model the security offered by these techniques and report on the amount of compressibility that they can achieve.

(This is the full version of a paper published in the Proceedings of the 19th International Conference on Financial Cryptography and Data Security (FC 2015) in San Juan, Puerto Rico, USA, January 26-30, 2015, organized by the International Financial Cryptography Association in cooperation with IACR.)

Contents: 1 Introduction; 2 Definitions (2.1 Encryption and compression schemes, 2.2 Existing security notions, 2.3 New security notions, 2.4 Relations and separations between security notions); 3 Technique 1: Separating secrets from user inputs (3.1 The scheme, 3.2 CCI security of basic separating-secrets technique) …
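The core observation in the abstract, that the compressed length leaks whether attacker-supplied text matches a secret in the same plaintext, can be reproduced with a few lines of standard-library code. This is a toy illustration of the CRIME/BREACH effect, not the paper's mitigation techniques; the cookie value and response layout are invented.

```python
import zlib

SECRET = "session=4f2a9c81d7"   # secret cookie value carried in the response

def compressed_response_size(attacker_input: str) -> int:
    # Attacker-reflected input and the secret cookie share one compressed stream,
    # as in an HTTP response that is compressed and then encrypted.
    body = f"user={attacker_input}\nSet-Cookie: {SECRET}"
    return len(zlib.compress(body.encode()))

# A guess that matches the secret compresses better: the second occurrence is
# replaced by a back-reference instead of being spelled out again as literals.
print(compressed_response_size("session=4f2a9c81d7"))  # correct guess: shorter output
print(compressed_response_size("session=abcdefghij"))  # wrong guess, same length: longer output

# CRIME/BREACH refine such guesses one character at a time by watching the
# observable ciphertext length; the paper above studies defences that keep
# compression while protecting high-value substrings like cookies.
```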
- Improving Transparency Into Online Targeted Advertising
AdReveal: Improving Transparency Into Online Targeted Advertising. Bin Liu (∗), Anmol Sheth (‡), Udi Weinsberg (‡), Jaideep Chandrashekar (‡), Ramesh Govindan (∗); (‡) Technicolor, (∗) University of Southern California.

Abstract: To address the pressing need to provide transparency into the online targeted advertising ecosystem, we present AdReveal, a practical measurement and analysis framework that provides a first look at the prevalence of different ad targeting mechanisms. We design and implement a browser-based tool that provides detailed measurements of online display ads, and develop analysis techniques to characterize the contextual, behavioral and re-marketing based targeting mechanisms used by advertisers. Our analysis is based on a large dataset consisting of measurements from 103K webpages and 139K display ads. Our results show that advertisers frequently target users based on their online interests; almost half of the ad categories employ behavioral targeting.

… ubiquitous and cover a large fraction of a user's browsing behavior, enabling them to build comprehensive profiles of their online interests. This widespread tracking of users and the subsequent personalization of ads have received a great deal of negative press; users associate adjectives such as creepy and scary with the practice [18], primarily because they lack insight into how their data is being collected and used. Our paper seeks to provide transparency into the targeted advertising ecosystem, a capability that has not been explored so far. We seek to enable end-users to reason about why ads of a certain category are being displayed to them. Consider a user that repeatedly receives ads about cures for a particularly private ailment. The user currently lacks a way …
- SWAROVsky: Optimizing Resource Loading for Mobile Web Browsing
SWAROVsky: Optimizing Resource Loading for Mobile Web Browsing. Xuanzhe Liu, Member, IEEE; Yun Ma; Xinyang Wang; Yunxin Liu, Senior Member, IEEE; Tao Xie, Senior Member, IEEE; and Gang Huang, Senior Member, IEEE. IEEE Transactions on Mobile Computing, Vol. XX, No. XX, XXXX 201X.

Abstract: Imperfect Web resource loading prevents mobile Web browsing from providing a satisfactory user experience. In this article, we design and implement the SWAROVsky system to address three main issues of current inefficient Web resource loading: (1) on-demand and thus slow loading of sub-resources of webpages; (2) duplicated loading of resources with different URLs but the same content; and (3) redundant loading of the same resource due to improper cache configurations. SWAROVsky employs a dual-proxy architecture that comprises a remote cloud-side proxy and a local proxy on mobile devices. The remote proxy proactively loads webpages from their original Web servers and maintains a resource loading graph for every single webpage. Based on the graph, the remote proxy is capable of deciding which resources are "really" needed for the webpage and their loading orders, and thus can synchronize these needed resources with the local proxy of a client efficiently and in a timely manner. The local proxy also runs an intelligent and light-weight algorithm to identify resources with different URLs but the same content, and thus can avoid duplicated downloading of the same content via the network. Our system can be used with existing Web browsers and Web servers, and does not break the normal semantics of a webpage. Evaluations with 50 websites show that on average our system can reduce the page load time by 43.1% and the network data transmission by 57.6%, while imposing marginal system overhead.
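Issue (2) above, identical content served under different URLs, is commonly handled by keying a cache on a hash of the response body instead of the URL. The sketch below illustrates that general idea; it is not SWAROVsky's actual algorithm, which the paper describes as a lighter-weight scheme, and all URLs and bodies are invented.

```python
import hashlib
from typing import Optional

url_to_digest = {}   # URL -> content fingerprint
digest_to_body = {}  # content fingerprint -> cached body

def store_response(url: str, body: bytes) -> bool:
    """Cache a fetched resource; return True if the body was already known."""
    digest = hashlib.sha256(body).hexdigest()
    url_to_digest[url] = digest
    if digest in digest_to_body:
        return True                    # identical content already cached under another URL
    digest_to_body[digest] = body
    return False

def load(url: str) -> Optional[bytes]:
    digest = url_to_digest.get(url)
    return digest_to_body.get(digest) if digest else None

# Two URLs pointing at byte-identical content share a single cached copy.
print(store_response("https://cdn-a.example/lib.js?v=1", b"function f(){}"))  # False (new content)
print(store_response("https://cdn-b.example/lib.js",     b"function f(){}"))  # True (duplicate)
print(load("https://cdn-b.example/lib.js") == load("https://cdn-a.example/lib.js?v=1"))  # True
```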
- Statement of Justin Brookman, Director, Consumer Privacy, Center for Democracy & Technology, Before the U.S. Senate Committee on Commerce, Science, and Transportation
Statement of Justin Brookman, Director, Consumer Privacy, Center for Democracy & Technology, before the U.S. Senate Committee on Commerce, Science, and Transportation. Hearing on "A Status Update on the Development of Voluntary Do-Not-Track Standards", April 24, 2013.

Chairman Rockefeller, Ranking Member Thune, and Members of the Committee: On behalf of the Center for Democracy & Technology (CDT), I thank you for the opportunity to testify today. We applaud the leadership the Chairman has demonstrated in examining the challenges in developing a consensus Do Not Track standard and appreciate the opportunity to address the continued insufficiency of self-regulatory consumer privacy protections. CDT is a non-profit, public interest organization dedicated to preserving and promoting openness, innovation, and freedom on the decentralized Internet. I currently serve as the Director of CDT's Consumer Privacy Project. I am also an active participant in the World Wide Web Consortium's Tracking Protection Working Group, where I serve as editor of the "Tracking Compliance and Scope" specification — the document that purports to define what Do Not Track should mean.

My testimony today will briefly describe the history of online behavioral advertising and the genesis of the Do Not Track initiative. I will then describe the current state of the World Wide Web Consortium's efforts to create Do Not Track standards and the challenges going forward to implement Do Not Track tools successfully. I will conclude with my thoughts on the future of Do Not Track, and why I believe that this protracted struggle demonstrates the need for fundamental reform of our nation's privacy protection framework for commercial and government collection and use of personal information.