Feasibility and Real-World Implications of Web Browser History Detection


Artur Janc                          Lukasz Olejnik
[email protected]                [email protected]

ABSTRACT

Browser history detection through the Cascading Style Sheets :visited pseudoclass has long been known to the academic security community and browser vendors, but has been largely dismissed as an issue of marginal impact.

In this paper we present several crucial real-world considerations of CSS-based history detection to assess the feasibility of conducting such attacks in the wild. We analyze Web browser behavior and the detectability of content returned via various protocols and HTTP response codes. We develop an algorithm for efficient examination of large link sets and evaluate its performance in modern browsers. Compared to existing methods our approach is up to 6 times faster, and is able to detect as many as 30,000 links per second in recent browsers on modern consumer-grade hardware.

We present a web-based system capable of effectively detecting clients' browsing histories and categorizing detected information. We analyze and discuss real-world results obtained from 271,576 Internet users. Our results indicate that at least 76% of Internet users are vulnerable to history detection; for a test of the most popular Internet websites we were able to detect, on average, 62 visited locations. We also demonstrate the potential for detecting private data such as zipcodes or search queries typed into online forms. Our results confirm the feasibility of conducting attacks on user privacy using CSS-based history detection and demonstrate that such attacks are realizable with minimal resources.

1. INTRODUCTION

Web browsers function as generic platforms for application delivery and provide various usability enhancements and performance optimizations, many of which have implications for user privacy. One of the earliest such usability improvements was the ability to style links to Web pages visited by the user differently from unvisited links, introduced by the original version of the CSS standard[2] and quickly adopted by all major Web browsers. This mechanism was quickly demonstrated to allow malicious Web authors to detect which links a client had visited[1].

Since then, a body of academic work was created on the topic, describing history detection methods[13] and discussing the potential to detect visited websites to aid in phishing[16, 21]. Countermeasures against such attacks were proposed, including client-side approaches through browser extensions[15] and server-side solutions on a per-application basis[17], but such methods have not been adopted by browser vendors or Web application developers. Simultaneously, several demonstration sites have been created to show the ability to detect known popular websites, including Web 2.0 applications[7]. Initial academic work and all known demonstrations focused on gathering information about the domain names of known sites and potentially applying it to infer user-related information.

More recently, CSS-based history detection has been applied as a powerful component of privacy research, including work to determine the amount of user-specific information obtainable by ad networks[9] and as part of a scheme for deanonymizing social network users[14].

However, there has been a notable lack of work examining several crucial aspects of history detection, including the types of browser-supported protocols and resource types which can be detected, performance considerations, and the number of users affected by such attacks. Perhaps because of the lack of such large-scale studies, no Web browser vendor has yet implemented any kind of protection against history detection attacks¹.

¹ Since writing the original draft of this article, we have learned of plans to mitigate such attacks in an upcoming release of the Mozilla Firefox browser[12].

In this paper, we provide a detailed examination of CSS-based history detection techniques and their impact on the privacy of Internet users. We provide an overview of existing work, and discuss basic cross-browser implementations of history detection using JavaScript as well as a CSS-only technique. We evaluate the detectability of resources based on the browser-supported protocols used to retrieve them, analyze the effect of loading content in frames and iframes, as well as review the impact of HTTP redirects and other response codes.

We also demonstrate an optimized algorithm for detecting visited links and its JavaScript implementation. We provide detailed performance measurements and compare it to known methods. Our approach is up to six times faster than known methods, and allows for the examination of up to 30,000 links per second on modern hardware. We also provide the first performance analysis of the CSS-only technique, demonstrating its value as an efficient, though often neglected, alternative to the scripting approach.

Based on our work on a real-world testing system[8], we provide an overview of the design of an efficient history detection application capable of providing categorized tests of various website classes, and realizable with minimal resources. We discuss approaches for the selection of links to be detected, and outline our implemented solution based on primary links (as site entry points), secondary resources, and enumeration elements.

Finally, we analyze history detection results obtained from 271,576 users. We demonstrate that a large majority (76.1%) of Internet users are vulnerable to history detection attacks. We analyze the average number of primary and secondary links found for a basic test, and provide examples of privacy-sensitive data gathered by our system.

Our results indicate that properly prepared history detection attacks have significant malicious potential and can be targeted against the vast majority of Internet users.

2. BACKGROUND

The CSS :visited pseudoclass has been applied to links visited by client browsers since the introduction of the CSS1 standard in an early stage of the Web, in 1996[2]. The feature of applying different styles to "known" links quickly became accepted by users and was recommended by various usability experts[4].

The ability to use the :visited pseudoclass for detecting Web users' browsing history was first reported to browser vendors as early as the year 2000[11, 1]. Since then, the technique has been independently rediscovered and disclosed several times[16], and has become widely known among Web browser developers and the security and Web standards communities. In fact, Chapter 5.11.2 of the CSS 2.1 standard[3], a W3C recommendation since 1998, discusses the potential for history detection using the :visited pseudoclass, and explicitly allows conforming User Agents to omit this functionality for privacy reasons, without jeopardizing their compliance with the standard.

While initial discussions of CSS-based history detection were mostly conducted in on-line forums, the issue was also disclosed to the academic community and discussed in the work of Felten et al. in conjunction with cache-based history sniffing[13].

As a response, Jakobsson and Stamm discussed potential methods for implementing server-side per-application protection measures[17]; such techniques would have to be implemented by every Web-based application and are thus an extremely unlikely solution to the problem. A viable client-side solution was a proposed modification to the algorithm for deciding which links are to be considered visited, as described in [15] and implemented in the SafeHistory extension[6] for Mozilla Firefox. Unfortunately, no such protection measures were implemented for other Web browsers, and the SafeHistory plugin is not available for more recent Firefox versions.

Other academic work in the area included a scheme for introducing voluntary privacy-oriented restrictions to the application of history detection[20]. [...] attempt to discover any intra-domain resources or attach semantic information to detected links.

3. ANALYSIS

In order to fully evaluate the implications of CSS-based history detection, it is necessary to understand how and when Web browsers apply styles to visited links. In this section we analyze various browser behaviors related to visited links, describe an efficient algorithm for link detection and evaluate its performance in several major browsers².

3.1 Basic Implementation

CSS-based history detection works by allowing an attacker to determine if a particular URL has been visited by a client's browser through applying CSS styles distinguishing between visited and unvisited links. The entire state of the client's history cannot be directly retrieved; to glean history information, an attacker must supply the client with a list of URLs to check and infer which links exist in the client's history by examining the computed CSS values on the client side. As noted in [11], there are two basic techniques for performing such detection.

The CSS-only method, as shown in Figure 1, allows an attacker's server to learn which URLs a victim's browser considers to be visited by issuing HTTP requests for background images on elements linking to visited URLs. A similar but less well-known technique is to use the :link CSS pseudoclass, which only applies if the link specified as the element's href attribute has not been visited; the two techniques are complementary.

    <style>
      #foo:visited {background: url(/?yes-foo);}
      #bar:link {background: url(/?no-bar);}
    </style>
    <a id="foo" href="http://foo.org"></a>
    <a id="bar" href="http://bar.biz"></a>

    Figure 1: Basic CSS Implementation.

This basic CSS functionality can also be accessed within JavaScript by dynamically querying the style of a link (<a>) element to detect if a particular CSS style has been applied, as shown in Figure 2. Any valid CSS property can be used to differentiate between visited and unvisited links. The scripting approach allows for more flexibility on the part of the attacker, as it enables fine-grained control over the execution [...]
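The JavaScript variant referenced above as Figure 2 is not reproduced in this excerpt. The listing below is a minimal illustrative sketch of that scripting technique, not the authors' code: it assumes a browser of the period studied, i.e. one that still exposes genuine :visited styles through getComputedStyle (current browsers restrict this), and the class name, colors, and probe URLs are placeholders.

    // Minimal sketch of scripting-based history detection (illustrative,
    // not the paper's Figure 2). Only works in browsers that expose real
    // :visited styles through getComputedStyle.
    function detectVisited(urls) {
      // Style probe links so visited and unvisited links differ in color.
      var style = document.createElement('style');
      style.textContent =
          'a.history-probe:visited { color: rgb(255, 0, 0); } ' +
          'a.history-probe:link    { color: rgb(0, 128, 0); }';
      document.head.appendChild(style);

      // Keep the probe links off-screen rather than display:none, which
      // some older browsers exclude from computed-style queries.
      var container = document.createElement('div');
      container.style.position = 'absolute';
      container.style.left = '-9999px';
      document.body.appendChild(container);

      var visited = [];
      for (var i = 0; i < urls.length; i++) {
        var a = document.createElement('a');
        a.className = 'history-probe';
        a.href = urls[i];
        container.appendChild(a);
        // A computed color matching the :visited rule indicates a history hit.
        var color = window.getComputedStyle(a, null).getPropertyValue('color');
        if (color === 'rgb(255, 0, 0)') {
          visited.push(urls[i]);
        }
      }

      document.body.removeChild(container);
      document.head.removeChild(style);
      return visited;
    }

    // Example: probe two candidate URLs (illustrative list).
    console.log(detectVisited(['http://foo.org/', 'http://bar.biz/']));

Any computed property that differs between the :visited and :link rules would serve equally well; the practical advantage over the CSS-only method is that the results are available directly to the page's script, rather than having to be inferred from background-image requests in the attacker's server logs.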