Understanding and Mitigating Attacks Targeting Web Browsers

A Dissertation presented in partial fulfillment of the requirements for the degree of

Doctor of Philosophy

in the field of

Information Assurance

by

Ahmet Salih Buyukkayhan

Northeastern University
Khoury College of Computer Sciences
Boston, Massachusetts

April 2019

To my family, teachers and mentors.

Contents

List of Figures

List of Tables

Acknowledgments

Abstract of the Dissertation

1 Introduction
  1.1 Structure of the Thesis

2 Background
  2.1 Browser Extensions
    2.1.1 Extensions
    2.1.2 Extension Security
  2.2 Vulnerabilities in Web Applications
    2.2.1 Vulnerability Reward Programs and Platforms
    2.2.2 XSS Vulnerabilities
    2.2.3 XSS Defenses

3 CrossFire: Firefox Extension-Reuse Vulnerabilities
  3.1 Overview
  3.2 Threat Model
  3.3 Design
    3.3.1 Vulnerability Analysis
    3.3.2 Exploit Generation
    3.3.3 Example Vulnerabilities
  3.4 Implementation
  3.5 Evaluation
    3.5.1 Vulnerabilities in Top Extensions
    3.5.2 Random Sample Study of Extensions
    3.5.3 Performance & Manual Effort
    3.5.4 Case Study: Submitting an Extension to Add-ons Repository
    3.5.5 Jetpack Extensions
    3.5.6 Implications on Extension Vetting Procedures
  3.6 Summary

4 SENTINEL: Securing Legacy Firefox Extensions
  4.1 Overview
  4.2 Threat Model
  4.3 Design
    4.3.1 Intercepting XPCOM Operations
    4.3.2 Intercepting XUL Document Manipulations
    4.3.3 Preventing Namespace Collision Exploits
    4.3.4 Policy Manager
    4.3.5 Limitations
  4.4 Implementation
    4.4.1 Proxy Objects
    4.4.2 XPCOM Objects as Method Arguments
    4.4.3 XUL Elements without an ID
    4.4.4 Modifications to the Browser and Extensions
  4.5 Evaluation
    4.5.1 Policy Examples
    4.5.2 Runtime Performance
    4.5.3 Applicability of the Solution
    4.5.4 Falsely Blocked Legitimate Extensions
  4.6 Summary

5 An Empirical Analysis of XSS Exploitation Techniques
  5.1 Overview
  5.2 Methodology
    5.2.1 Data Collection
    5.2.2 Feature Selection
    5.2.3 Exploit String Extraction
    5.2.4 Static Feature Extraction
    5.2.5 Exploit Execution
    5.2.6 Dynamic Feature Extraction
    5.2.7 Data Integration and Filtering
    5.2.8 Validation
  5.3 Analysis
    5.3.1 Affected Websites
    5.3.2 Sink Analysis
    5.3.3 XSS Filters
    5.3.4 Exploit Analysis
    5.3.5 Exploit Patterns
    5.3.6 Exploit Sophistication
    5.3.7 Exploit Authors
  5.4 Summary

6 Papers
  6.1 Thesis Publications
  6.2 Other Publications

7 Conclusion

Bibliography

List of Figures

3.1 An overview of the core components of CROSSFIRE.
3.2 Breakdown of true positive vulnerabilities discovered by CROSSFIRE by category.
3.3 Screenshots from Mozilla Add-ons website showing the accepted extension and its fully reviewed status.

4.1 Overview of SENTINEL from the user's perspective.
4.2 An overview of SENTINEL, demonstrating how a file deletion operation can be intercepted and checked with a policy.
4.3 Implementation of the Object Proxy using a proxy construct.
4.4 A malicious extension can redirect users visiting "https://www.bankofamerica.com" to a different website "http://example.com" and fake the browser identity indicators.

5.1 Overview of our static and dynamic analysis system.
5.2 Quarterly exploit submissions and unique affected domains in XSSED (outer) and OPENBUGBOUNTY (inset). Most new submissions are found on new domains, suggesting a large supply of vulnerable domains on the Web.
5.3 Quarterly distribution of the popularity of domains affected by exploit submissions (XSSED left, OPENBUGBOUNTY right). Domains grouped by popularity according to their Alexa ranks; the last interval includes unranked domains. More than half of submissions are for unpopular websites. The rank interval distribution is almost uniform over time, illustrating that XSS vulnerabilities continue to be found even on the most popular websites.
5.4 Quarterly tag and event handler market share in OPENBUGBOUNTY. Submissions of script tags decline in favor of other tags with event handlers.
5.5 CDF of sophistication scores. XSSED submissions tend to have lower scores than OPENBUGBOUNTY, and less score diversity due to fewer patterns.
5.6 Quarterly median sophistication score for all submissions and for distinct exploit patterns in OPENBUGBOUNTY. Inset shows XSSED data.

5.7 Correlation matrix of exploitation techniques and selected tags and attributes for (a) XSSED and (b) OPENBUGBOUNTY.

XSS Vulnerabilities

When called with a name such as <script>alert(1)</script>, the script will (unexpectedly) produce an HTML output containing script: Hello <script>alert(1)</script>. In this case, we call the printing of the string concatenation with name a sink because it reflects a user-supplied input into the output generated by the server-side script. The sink is located in a context that allows the attacker to directly inject markup. In other cases, the sink may be located in a script context (<script>{sink}</script>) or even inside a string ("Hello {sink}"). Depending on the context, the attacker might need to escape from that context first (e.g., terminating a string by injecting ") before injecting the payload.

Stored XSS is the stateful version of server-side XSS, where the injected code is stored and served to future visitors of the website. Depending on the web application, the stored user input may correspond to the title, user name or body of a blog post, for instance. Visitors can fall victim to a stored XSS attack even without following a malicious link. Web forums and the comment sections of web pages can be especially vulnerable since they permanently store the users' input and distribute it to all visitors.

DOM-Based XSS does not involve any server-side weakness; it abuses client-side code in the website that reads data from an input such as the URL or a cookie and evaluates it as code, or writes it into the page as markup. As for reflected XSS, attackers typically rely on victims following a crafted link. As an example, consider a client-side script that writes the URL to the web page: document.write("<b>" + location.href + "</b>"). When called with http://example.com/, the script will produce the (expected) HTML output <b>http://example.com/</b>, but when called with a URL such as http://example.com/#<script>alert(1)</script>, the script will (unexpectedly) produce an HTML output containing script: <b>http://example.com/#<script>alert(1)</script></b>. Note that, different from the previous XSS types, here a client-side script reflects the user-controlled input into the dynamically generated output.

Quantifying XSS: Cross-Site Scripting has been considered in a large body of scholarly research, mostly from a vulnerability detection and exploitation prevention perspective. However, only very few works quantify the occurrence of different XSS exploitation techniques. For example, in the course of their XSS filter evaluation, Bates et al. [7] classified the sink contexts of a sample of 145 exploits from XSSED. In the area of DOM-based XSS, several works [43, 65, 48] reported characteristics of exploitable data flows, such as the source and sink types, and cognitive complexity. The exploits were automatically generated by the respective authors' tools, which limits the analyzed exploits to the capabilities of the tool. Furthermore, the goal of the analysis was to characterize the "root causes" of vulnerabilities. In measuring the complexity of XSS attacks, Scholte et al. [63] studied 2,632 XSS and SQL injection attack strings found in entries of the National Vulnerability Database (NVD). The authors measured the complexity of XSS exploits according to five static features such as the use of encoded characters or event handlers.
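To make the reflected server-side example above concrete, the following is a minimal sketch of a vulnerable handler; it is our own illustration rather than code from any studied website, and the choice of Node.js, the name parameter, and the port are assumptions:

    // Minimal, intentionally vulnerable reflected-XSS handler (sketch).
    // The "name" query parameter flows into the HTML response without
    // any escaping -- the string concatenation below is the sink.
    const http = require("http");
    const url = require("url");

    http.createServer((req, res) => {
      const name = url.parse(req.url, true).query.name || "world";
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end("<html><body>Hello " + name + "</body></html>");
    }).listen(8080);

    // Requesting /?name=Jane reflects "Hello Jane" as intended, while
    // /?name=<script>alert(1)</script> reflects the payload into the page,
    // where the browser executes it in the website's origin.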

XSS Defenses

Client-Side Defenses: Kirda et al. [40] introduced the first client-side defense to mitigate XSS by leveraging the idea of personal firewalls. The in-browser filter proposed by Bates et al. [7] was later adopted by Chrome and other browsers using the WebKit or Blink rendering engines. Pelizzi and Sekar [60] proposed improvements to detect partial XSS injections at the client side. Stock et al. [64] introduced an alternative filter design against DOM-based XSS that uses dynamic taint tracking and taint-aware parsers. Firefox does not offer an in-browser XSS filter; however, the NoScript browser extension [32] uses regular-expression-based filters to detect potentially malicious outgoing HTTP requests. Once a suspicious payload is detected, NoScript relies on user-defined policies or user confirmation to block or allow the request. Internet Explorer [14] improved on NoScript's detection by generating signatures for each potentially malicious payload in outgoing HTTP requests, checking these signatures in the response, and filtering unsafe characters before loading the page. Chrome relies on the XSS Auditor [13] to detect and prevent XSS attacks. The key difference here is that it checks for injected words or tokens after the URL is decoded and the response page is parsed by the browser. Furthermore, the XSS Auditor only looks for reflected content in executable parts of the page to reduce the number of false positives.

Server-Side Defenses: In the case of reflected XSS, the vulnerability arises from server-side templating and the inability of the browser to distinguish the trusted template from the untrusted user input. To prevent attacks, the developer of the server-side code must escape sensitive characters in the user input. However, which characters are sensitive and how they can be escaped depends on the sink context. For example, inside a JavaScript string context, quotes must be escaped as \", inside an HTML attribute as the HTML entity &quot;, and outside of HTML tags they do not have a special meaning. General-purpose server-side programming languages typically do not escape user input since they are unaware of the sink context. Furthermore, in some cases developers may wish to allow certain types of markup in user input, such as style-related tags in articles or blog posts submitted by users. Unfortunately, developers often omit input sanitization entirely, use an incorrect type of escaping, or implement custom sanitization code, which is highly susceptible to being incomplete. For example, developers who would like to allow certain tags but not script may remove the string "script" from user input, but they may fail to account for case differences (e.g., <ScRiPt>).
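As a minimal sketch of how such custom sanitization fails (hypothetical code, not taken from any real application):

    // Hypothetical sanitizer that tries to strip script tags.
    function naiveSanitize(input) {
      // String.prototype.replace with a string pattern removes only the
      // first, exact, lowercase occurrence.
      return input.replace("<script>", "");
    }

    naiveSanitize("<ScRiPt>alert(1)</ScRiPt>");
    // => unchanged: the mixed-case tag is not matched at all.

    naiveSanitize("<scr<script>ipt>alert(1)</script>");
    // => "<script>alert(1)</script>": removing the inner occurrence
    //    reassembles a working script tag.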

Sink Analysis

When exploit authors manually craft their attack string, they can customize it using their understanding of the website. For example, only a restricted set of tags can trigger the onload event handler, whereas it would have no effect in other tags. Alternatively, exploit authors can use generic escape sequences that work in a variety of different sink types in order to make their exploit more versatile. In the same two random samples, 50.8 % of exploits in XSSED and 39.6 % in OPENBUGBOUNTY did not need any escaping, as the sink already had a JavaScript or HTML between-tag context suitable for the payload. However, 26.4 % of sampled XSSED exploits (OPENBUGBOUNTY: 28 %) contain an escaping sequence even though it is not necessary. This suggests that our datasets contain a significant fraction of general-purpose exploits. Around 41.6 % of sampled XSSED exploits, and 50.4 % in OPENBUGBOUNTY, contain an escape sequence that is both necessary and minimal. The remainder contains both correct, and unsuitable or ambiguous, escaping.

Many websites reflect injected exploits more than once. In XSSED and OPENBUGBOUNTY, 37.0 % and 32.5 % of submissions in our successfully merged dataset have more than one working exploit reflection. This typically occurs due to one URL parameter being used in multiple places in the server-side template, but there are also a few submissions where different exploits are injected into multiple parameters. In the manually labeled sample, multiple working reflections occur in 45.2 % for XSSED and 30.0 % for OPENBUGBOUNTY. These can be further divided into 32.4 % of the XSSED sample, and 24.4 % of OPENBUGBOUNTY, where the exploit is reflected multiple times in sink contexts of the same type, and 12.8 % and 5.6 %, respectively, with multiple reflections in different sink contexts. An exploit with correct escaping for one context can also appear in a different context where the escaping sequence may be ineffective. To that end, it is worth noting that 44.7 % of all XSSED submissions, and 52.9 % of OPENBUGBOUNTY, contain at least one additional potential exploit reflection where the exploit does not execute. This data is to be seen as a coarse approximation, as it contains false positives that are not actual exploit reflections, but matches between the request data and similar but potentially unrelated page code. For this reason, outside of this section, we only analyze executing reflections, where our dynamic analysis rules out such false positives. Overall, 62.1 % of submissions in XSSED, and 63.4 % in OPENBUGBOUNTY, contain multiple reflections, with at least one working and potentially more that do not.

In addition to exploits being reflected in a sink context for which the escaping sequence is ineffective, another possible explanation for reflections not executing is server-side transformations. A typical server-side transformation is the encoding of sensitive characters to prevent XSS. In exploit reflections that do execute, few special characters such as < or > for tags, and " or ' for attribute values or strings, are HTML entity encoded (i.e., < becomes &lt; or &#60;): less than 0.1 % for angled brackets, and no more than 0.25 % for quotes in either database. In reflections that do not execute, the share of such encoding is significantly higher, with 7.3 % of these reflections in XSSED containing HTML-encoded angled brackets (OBB: 8.7 %), and 6.0 % (8.0 %) containing encoded quotes. HTML-encoding of alphanumeric characters is close to zero in either case. Since many server responses contain both working and non-working reflections, vulnerable applications appear to sanitize some user input, but inconsistently.

Some reflections appear to mirror the full request URL instead of a single request parameter. Around 6.4 % of submissions in XSSED and 15.1 % in OPENBUGBOUNTY contain at least one URL reflection, and 2.7 % and 4.4 % of submissions, respectively, contain at least one such URL reflection that executes. Similar to HTML encoding, URL reflections that execute injected exploits rarely contain any URL-encoded characters at all (XSSED: 3.6 %, OPENBUGBOUNTY: 0.9 %), whereas non-executing URL reflections do (81.5 % in XSSED and 73.0 % in OPENBUGBOUNTY). Other factors that might prevent execution of an injected exploit, which we do not examine here, include an incompatible sink context, other types of encoding or escaping, and more complex server transformations such as taking substrings.

XSS Filters

Reflected XSS occurs due to improper sanitization of inputs in the server-side code of the web application. In this context, we aim to test whether the exploits could have been blocked by add-on, out-of-the-box filtering technology. We distinguish two scenarios. Exploits could be blocked in the network or on the server side, using a web application firewall such as ModSecurity. Since all exploits were confirmed to be working at the time of submission, their inclusion in XSSED and OPENBUGBOUNTY implies that such server-side technology was either bypassed by the exploit, or not used at all. Alternatively, some web browsers or browser extensions attempt to block exploits on the client side, such as Chrome with the XSS Auditor, Internet Explorer with the XSS Filter, or Firefox with the NoScript extension.

To test the effectiveness of these defenses against the exploits in our dataset, we proceeded as follows: We installed ModSecurity with the OWASP Core Rule Set on a local test server, requested the original URLs, and observed whether the request was blocked. For NoScript, we passed the URLs to the code responsible for filtering; to the best of our knowledge, the Firefox extension does not inspect the body of the page. We executed Chrome similarly to Section 5.2.5 and activated the XSS Auditor by setting the X-XSS-Protection HTTP header with a report URI to be notified about blocked requests. In Internet Explorer, we similarly activated the XSS Filter by setting the HTTP header, and observed whether the page was rendered or empty. Table 5.5 shows the exact browser versions used, and how many requests were blocked.

Table 5.5: Exploits blocked by XSS filters (HTTP GET only)

    XSS Filter                           Published    XSSED     OBB
    ModSecurity CRS 2.2.5                Sept. 2012   99.7 %    99.3 %
    ModSecurity CRS 3.2.0 Paranoia 1     Sept. 2018   96.0 %    89.8 %
    ModSecurity CRS 3.2.0 Paranoia 4     Sept. 2018   96.4 %    91.3 %
    Chrome 47.0.2526.73                  Dec. 2015    96.4 %    85.4 %
    Chrome 62.0.3202.62                  Aug. 2017    96.7 %    94.7 %
    Internet Explorer 11.2430.14393.0    Aug. 2018    97.9 %    97.0 %
    NoScript 5.1.8.4                     Jan. 2018    100.0 %   99.6 %

Overall, the vast majority of exploits are blocked by these filters, with detection rates between 85 % and 100 %. While the exploits may be bypassing existing server-side filters (if any are deployed), most of them fail to bypass state-of-the-art security tools in their default settings. In this context, it is interesting to note that ModSecurity with an older ruleset has a higher block rate than more recent versions. We suspect that these changes were made to reduce false positive detections (which we did not measure). In the most recent version at the time of our test, setting a higher paranoia level results in only marginally more detections. Inspection of exploits blocked by the old, but not the new version does not reveal any correlation with the use of specific exploitation techniques; rather, certain URL layouts used by web applications seem to prevent detection in the newer version.
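As a sketch of how the browser-side filters were driven in our setup (the server code below is illustrative; only the X-XSS-Protection header syntax follows the filters' documented format, and the report path is an assumption):

    // Test server that opts the visiting browser into its XSS filter and
    // asks for violation reports (sketch).
    const http = require("http");

    http.createServer((req, res) => {
      if (req.url.startsWith("/xss-report")) {
        // Chrome POSTs a report here when the XSS Auditor blocks a reflection.
        req.on("data", chunk => console.log("blocked:", chunk.toString()));
        req.on("end", () => res.end());
        return;
      }
      res.writeHead(200, {
        "Content-Type": "text/html",
        // "1" enables the filter; "report=..." requests notifications.
        "X-XSS-Protection": "1; report=/xss-report",
      });
      // Reflective test page: echoes the requested URL into the body.
      res.end("<html><body>Hello " + req.url + "</body></html>");
    }).listen(8080);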

We tested Chrome's XSS Auditor in two different versions. Chrome 47 was published in December 2015, in the middle of our OPENBUGBOUNTY dataset. Exploits submitted before this version was published are detected at a rate of 80.5 %, whereas exploits submitted afterwards have a slightly worse detection rate of 75.9 %. On the other hand, Chrome 62, which was published after the end of our data collection, blocks both classes of exploits at a similar rate: 85.3 % of submissions before the release of version 47, and 86.7 % afterwards. These numbers do not reveal whether some exploit authors actively attempted to bypass the then-current version of the XSS Auditor, but they do illustrate that exploits were submitted (and accepted) despite being blocked by browser filters at the time of submission.

Exploit Analysis

Nearly all exploit submissions contain simple proof-of-concept payloads showing a JavaScript dialog with a message. In XSSED, the earlier dataset covering five years since 2007, 99.7 % of all submissions use alert(). However, the prevalence of alert() appears to be decreasing over time in favor of prompt(). The former is used in 52.4 % of OPENBUGBOUNTY submissions, the latter in 40.7 %, and 7.6 % use confirm(). These numbers may be heavily dependent on the behavior of a few very active users, as 86.1 % of all authors in OPENBUGBOUNTY submit at least one exploit with alert(), and the fraction of active authors with at least one such exploit remains over 77.4 % in each quarter.

Overview of Exploitation Techniques

We group exploitation techniques into five categories, with the full list in Table 5.7: Context (Escaping and Creation), Syntax Confusion, String Obfuscation & Quote Avoidance, Control Flow Modification, and String Interpretation. Each category corresponds to a specific goal of exploit authors, such as bypassing (incomplete) server-side sanitization, or setting up a context where JavaScript code can be executed. The techniques within each category are alternative means of achieving that goal. Very few submissions (17.3 % in XSSED, 3.8 % in OPENBUGBOUNTY) use no special technique at all; they are simple exploits such as <script>alert(1)</script>.


Table 5.6: Aggregated Use of Exploitation Techniques

                                          % Submissions        % Authors
                                          XSSED     OBB        XSSED     OBB
    Context (Escaping and Creation)       75.5      83.8       68.6      83.6
    Syntax Confusion                      13.0      47.8       14.2      39.7
    String Obfuscation & Quote Avoid.     18.2      71.1       25.0      67.7
    Control Flow Modification             4.1       70.8       11.3      67.7
    String Interpretation                 0.8       0.4        1.7       4.5
    (no technique used at all)            17.3      3.8        50.7      42.2
    (no technique or Context Creation)    67.6      15.6       84.0      61.4

As an overview, Table 5.6 shows how many exploits use at least one technique from a category. Very few submissions (17.3 % in XSSED, 3.8 % in OPENBUGBOUNTY) use none of these techniques at all. The most common category is context escaping and creation with 75.5 % of submissions in XSSED, and 83.8 % in OPENBUGBOUNTY, most likely because techniques of this category are often needed to set up the proper execution context for the exploit, depending on the sink type. The remaining categories appear in only a small fraction of exploits in the early XSSED, but become more popular in the later OPENBUGBOUNTY. As an illustration, 67.6 % of submissions in XSSED use no technique other than possible context escaping and creation, showing that older exploits tend to be relatively simple. In OPENBUGBOUNTY, this percentage is only 15.6 % of submissions, as some techniques gained popularity and more authors have submitted at least one exploit with a technique from the other categories. While categories such as control flow modification and syntax confusion have an upwards trend in OPENBUGBOUNTY, both in terms of submissions and authors, the string interpretation category remains rare, appearing in less than 1 % of submissions in either database, and used by only 1.9 % of XSSED and 5.0 % of OPENBUGBOUNTY authors. In the following, we look into each category in more detail.

Context (Escaping and Creation)

Depending on the sink context (Section 5.3.2), exploits need to escape from the current context and set up their own, in which the payload can run. In our two datasets, most exploits (73.8 % in XSSED, and 77.6 % in OPENBUGBOUNTY) close the previous tag (C3 in Table 5.7), escaping to a context where new HTML tags can be inserted. Only a few exploits escape from an HTML attribute but remain inside the tag to insert an event handler (C4, 1.1 % in XSSED and 3.1 % in OPENBUGBOUNTY). Interestingly, though, a much higher fraction of authors, 20.4 % in OPENBUGBOUNTY, have submitted one such exploit at least once. Even fewer exploits insert their dialog-based payload directly into a JavaScript context by possibly terminating a string and chaining the statement with ; or an operator such as + (C5, 0.9 % in XSSED and 1.8 % in OPENBUGBOUNTY). While the latter finding is probably in large part due to JavaScript sinks being much less common in our datasets than HTML sinks, the dominance of HTML tag escaping over remaining inside the tag is more likely attributed to the preferences of exploit authors, as we found both types of HTML sinks to be similarly common.

As most exploits close the previous tag, they must insert a new tag in order to be able to execute the payload. Indeed, 98 % of exploits in XSSED, and 93.5 % in OPENBUGBOUNTY, contain at least one tag. In the older XSSED dataset, with 95.6 % of submissions, this is nearly always a script tag.

Table 5.7: Exploitation Techniques

Context (Escaping and Creation) (category score weight: 1)

    ID  Technique                            Score  Example                       Submissions (XSSED / OBB)  Authors (XSSED / OBB)
    C1  HTML comment escape                  5      -->                           2.4 % / 10.5 %             9.2 % / 21.7 %
    C2  JavaScript comment                   5      /*/**/prompt(1)//             0.5 % / 3.3 %              2.7 % / 12.9 %
    C3  HTML tag escape and insertion        4      "><script>alert(1)</script>   73.8 % / 77.6 %            66.7 % / 80.7 %
    C4  HTML attr. escape and event handler  4      " autofocus onfocus=alert(1)  1.1 % / 3.1 %              5.6 % / 20.4 %
    C5  chaining onto prior JS expression    6      "-alert(1) or ;prompt(1)      0.9 % / 1.8 %              5.0 % / 14.4 %

Syntax Confusion (category score weight: 2)

    ID  Technique                            Score  Example                              Submissions (XSSED / OBB)  Authors (XSSED / OBB)
    S1  extraneous parentheses               4      (alert)(1)                           0.0 % / 0.3 %              0.0 % / 1.8 %
    S2  mixed case                           3      <ScRiPt>alert(1)</ScRiPt>            4.9 % / 4.4 %              8.8 % / 19.8 %
    S3  JavaScript encoding (uni, hex, oct)  10     \u0061lert(1) or top["\x61lert"](1)  0.0 % / 0.1 %              0.1 % / 2.3 %
    S4  malformed img tag                    5      "> (rest of example lost)            0.2 % / 0.2 %              0.6 % / 1.6 %
    S5  whitespace characters                6      (example lost)                       0.0 % / 0.0 %              0.1 % / 0.7 %
    S6  slash separator instead of space     4      <svg/onload=alert(1)>                0.1 % / 42.8 %             0.1 % / 31.7 %
    S7  multiple brackets (parse confusion)  4      < (rest of example lost)             8.2 % / 1.7 %              5.8 % / 8.8 %

String Obfuscation & Quote Avoidance (category score weight: 2)

    ID  Technique                   Score  Example                               Submissions (XSSED / OBB)  Authors (XSSED / OBB)
    O1  character code to string    6      alert(String.fromCharCode(88,83,83))  4.5 % / 3.7 %              11.1 % / 11.7 %
    O2  regular expression literal  3      prompt(/XSS/)                         13.7 % / 65.1 %            16.7 % / 62.8 %
    O3  base64 encoding             10     alert(atob("WFNT"))                   0.0 % / 0.1 %              0.0 % / 0.6 %
    O4  backtick                    4      prompt`XSS`                           0.0 % / 2.4 %              0.1 % / 8.3 %

Control Flow Modification (category score weight: 3)

    ID  Technique                       Score  Example                       Submissions (XSSED / OBB)  Authors (XSSED / OBB)
    F1  automatically triggered events  3      <svg onload=alert(1)>         1.2 % / 48.2 %             5.9 % / 38.1 %
    F2  exploit-triggered events        5      <img src=x onerror=alert(1)>  (percentages lost)
    F3  user-interaction events         10     " onmouseover=alert(1)        (percentages lost)
    (row F4 not recovered)

String Interpretation (category score weight: 4)

    ID  Technique       Score  Example                                      Submissions (XSSED / OBB)  Authors (XSSED / OBB)
    I1  document.write  4      document.write("<script>alert(1)</script>")  (percentages lost)
    I2  eval            4      eval(x)                                      (percentages lost)
    (rows I3 and I4 not recovered)

Table 5.8: The Top Exploit Patterns, Ranked by Submissions and by Authors

XSSED

    Rank (subm. / auth.)  Techniques  Score  Example                                                      Submissions  Authors
    1 / 1                 C3          4      "><script>alert(1)</script>                                  49.0 %       53.2 %
    2 / 2                 (none)      0      <script>alert(1)</script>                                    17.3 %       50.7 %
    3 / 3                 C3, O2      10     "><script>alert(/XSS/)</script>                              7.8 %        11.0 %
    4 / 9                 C3, S7      12     ">> (rest of example lost)                                   7.3 %        3.8 %
    5 / 4                 O2          6      <script>alert(/XSS/)</script>                                3.4 %        8.0 %
    6 / 5                 C3, S2      10     "><ScRiPt>alert(1)</ScRiPt>                                  2.9 %        5.4 %
    7 / 8                 C3, O1      16     "><script>alert(String.fromCharCode(88,83,83))</script>      1.6 %        4.0 %
    8 / 6                 O1          12     <script>alert(String.fromCharCode(88,83,83))</script>        1.2 %        5.0 %
    9 / -                 C1, C3, O1  17     -->"><script>alert(String.fromCharCode(88,83,83))</script>   0.8 %        2.8 %
    10 / -                S2          6      <ScRiPt>alert(1)</ScRiPt>                                    0.7 %        2.1 %
    - / 7                 C3, C1      5      -->"><script>alert(1)</script>                               0.7 %        4.4 %
    - / 10                C4, F3      34     " onmouseover=alert(1)                                       0.7 %        3.5 %

OPENBUGBOUNTY

    Rank (subm. / auth.)  Techniques          Score  Example                              Submissions  Authors
    1 / 6                 C3, S6, F1, O2      27     "><svg/onload=prompt(/XSS/)>         30.9 %       18.5 %
    2 / 3                 C3, F2, O2          25     "><img src=x onerror=prompt(/XSS/)>  9.6 %        27.7 %
    3 / 1                 C3                  4      "><script>alert(1)</script>          8.5 %        42.1 %
    4 / 4                 C3, O2              10     "><script>alert(/XSS/)</script>      4.7 %        26.0 %
    5 / 2                 (none)              0      <script>alert(1)</script>            3.8 %        42.1 %
    6 / -                 C1, C3, S6, F1, O2  28     -->"><svg/onload=prompt(/XSS/)>      2.7 %        4.8 %
    7 / 5                 F2, O2              21     <img src=x onerror=prompt(/XSS/)>    2.7 %        23.4 %
    8 / -                 S6, F1, O2          23     <svg/onload=prompt(/XSS/)>           2.5 %        11.2 %
    9 / 8                 C3, F2              19     "><img src=x onerror=alert(1)>       2.3 %        17.0 %
    10 / -                C3, F1              13     "><svg onload=alert(1)>              2.2 %        11.9 %
    - / 7                 O2                  6      <script>prompt(/XSS/)</script>       2.1 %        17.3 %
    - / 9                 C3, S6, F1          21     "><svg/onload=alert(1)>              1.9 %        14.2 %
    - / 10                F2                  15     <img src=x onerror=alert(1)>         0.8 %        13.0 %

Exploit Patterns

Our approach results in 178 distinct exploit patterns in XSSED, out of which 60.1 % are submitted at least twice and 51.7 % are used by at least two authors. In OPENBUGBOUNTY, we detect 484 exploit patterns, with 65.9 % used multiple times and 54.3 % used by multiple authors. For comparison, exact string matching finds 15,567 and 31,337 unique exploit strings, with only 13.5 % and 12.6 % of them used more than once (2.9 % and 3.4 % used by multiple authors). The ten most frequent patterns account for 92 % of all submissions in XSSED, and 69.9 % in OPENBUGBOUNTY. Out of all authors, 93.8 % in XSSED, and 85.4 % in OPENBUGBOUNTY, have submitted at least one exploit based on one of these top ten patterns. Example string representations of these patterns are shown in Table 5.8.

An example for the most frequently submitted pattern in XSSED is "><script>alert(1)</script>; it accounts for 49.0 % of all submissions and is used by 53.2 % of users. The same pattern, or its variant without tag closing, is also the most popular in OPENBUGBOUNTY when considering the number of authors who have submitted it at least once. However, both patterns together account for only 12.3 % of total submissions. On the other hand, the most frequently submitted OPENBUGBOUNTY pattern is "><svg/onload=prompt(/XSS/)>, adding space and quote avoidance, and indirect code execution using an automatically triggered event handler. It accounts for 31.3 % of all submissions, but is used by only 19.1 % of users, which implies that these participants are disproportionately active.
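One way to read the pattern definition used above is that exploits are grouped by the set of techniques they contain rather than by their exact string; a minimal sketch of such a grouping follows (the technique labels are the IDs from Table 5.7, and the detection step itself is the analysis of Section 5.2):

    // Group exploit submissions into patterns by their detected technique
    // sets (sketch; detection of the techniques is assumed to be done).
    const submissions = [
      { techniques: ["C3", "S6", "F1", "O2"] },
      { techniques: ["O2", "F1", "S6", "C3"] },  // same pattern, different string
      { techniques: ["C3"] },
    ];

    function patternKey(techniques) {
      return [...new Set(techniques)].sort().join("+");
    }

    const patternCounts = new Map();
    for (const s of submissions) {
      const key = patternKey(s.techniques);
      patternCounts.set(key, (patternCounts.get(key) || 0) + 1);
    }
    console.log(patternCounts);  // Map { "C3+F1+O2+S6" => 2, "C3" => 1 }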

Exploit Sophistication

In order to compare different exploits, and to investigate exploit authors' technical skills, we develop a metric that scores exploits based on the sophistication of the techniques they are using. In doing so, we aim to characterize the injected exploit, but not the difficulty of discovering the corresponding vulnerability. Our score reflects the difficulty of detection by web application firewalls, and, to a lesser extent, the knowledge of JavaScript/HTML required to develop and use the exploit. We assign a score of up to 10 to each technique in Table 5.7, loosely following the CVSS guidelines for attack complexity [21]. For example, exploits depending on external conditions such as user interaction might evade dynamic detection and are considered more sophisticated than those that always trigger. Similarly, exploits with obfuscated payloads might evade static detection and are considered more sophisticated than those using mixed-case letters. We further give a weight to each category. To avoid score outliers due to users who show off their skills by submitting exploits with many redundant, alternative techniques, the exploit score takes into account only the highest score achieved in each category. The final score ranges from zero for exploits without any technique, such as <script>alert(1)</script>, to a maximum of 100 when the highest rated technique in each category is used. Table 5.9 lists the highest scored exploits.

For scoring purposes, we distinguish three classes of event handlers. Simple event handlers such as onload (F1) are triggered automatically. Medium-sophistication event handlers such as onfocus or onerror (F2) require a certain condition in order to be triggered. This condition can be caused by the exploit, e.g., by adding an autofocus attribute or specifying an invalid URL.


Table 5.9: The Top 5 Exploits by Sophistication Score

XSSED

    Score  Techniques              Example
    68     C3, S2, F3, O1, O2, I2  "> (rest of example lost)
    56     C4, F3, O2, I2          "onmouseover="x='aler';x=x+'t(/XSS/)';eval(x);alert().aspx
    56     C3, F3, F4, O2, I2      (example lost)
    53     C4, S2, F2, O1, I1, I2  "onFocus=document.write(String.fromCharCode(120,115,115));eval(String.fromCharCode(97,108,101,114,116,40,49,41)) autofocus b="
    (one further row not recovered)

OPENBUGBOUNTY

    Score  Techniques                  Example
    79     C2, C4, S3, F3, O2, O4, I2  " onmouseover=eval(`\\u0061lert(/XSS/)`)//
    66     C4, S2, S3, F3, O1          " onMouseover=\u0061lert(String.fromCharCode(88,83,83))>
    62     C4, S2, S3, F3, O4          '> (rest of example lost)
    60     C4, F3, O2, I4              " onmouseover=window['a'+'le'+'rt'](/XSS/) a="
    55     C3, F2, O3, I2              "> (rest of example lost)

High-sophistication event handlers such as onmouseover or onscroll (F3) require some level of user interaction. In these cases, exploit authors often inject the event handler into the preceding HTML tag where the sink is located, or they inject large elements such as containers or images in order to entice the desired user activity.

The most submitted exploits are relatively simple in both databases. The median sophistication score in XSSED is just four, corresponding to a large number of submissions of the most popular pattern "><script>alert(1)</script>. Around 17.3 % of submissions have a score of zero, with no detected techniques at all. OPENBUGBOUNTY submissions have a higher sophistication than XSSED. The median submission has a score of 25; 97.6 % of XSSED submissions are less sophisticated. The CDF in Figure 5.5 also shows that OPENBUGBOUNTY has a higher diversity of scores due to the larger number of exploit patterns. The most popular pattern has a score of 27. While high scores exist (the maximum is 79), only around 0.6 % of OPENBUGBOUNTY submissions have a score higher than 40.

The difference in sophistication between XSSED (2007 to 2015) and OPENBUGBOUNTY (2014 to 2017) suggests a trend of increasing exploit sophistication over time. While we could not observe a clear trend within the time range covered by XSSED, a steady increase in quarterly median scores in OPENBUGBOUNTY is visible in Figure 5.6. When considering all submissions, the median score peaks in the last quarter of 2015 and the first quarter of 2016 before slightly decreasing.


The most popular exploit pattern appears to receive a particularly high number of submissions during that time. In order to exclude possible bias arising from the activity of a few very active users, we also consider the median score in terms of unique exploit patterns used during each quarter, which shows a similar increasing trend. In absolute terms, the increase in sophistication is moderate; it corresponds to adding an exploit-triggered event handler (F2).

It is tempting to use the sophistication of exploits to infer potential XSS defenses on the affected websites. However, authors do not necessarily submit minimum working exploits. To the contrary, our findings suggest that many authors use generic exploits and automated tools to cover large numbers of websites rather than individually testing websites and developing customized exploits. Consequently, such an analysis would be problematic given the available dataset.
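The scoring scheme described above can be summarized in a few lines; the sketch below uses the per-technique scores and category weights from Table 5.7 (the weight of 1 for the Context category is our assumption, consistent with the pattern scores in Table 5.8):

    // Sophistication score (sketch): per category, only the highest-scored
    // technique counts; category weights are then applied.
    const technique = {  // abridged; scores and categories from Table 5.7
      C3: { cat: "context",     score: 4 },
      S6: { cat: "syntax",      score: 4 },
      F1: { cat: "flow",        score: 3 },
      F2: { cat: "flow",        score: 5 },
      O2: { cat: "obfuscation", score: 3 },
    };
    const weight = { context: 1, syntax: 2, obfuscation: 2, flow: 3, interpretation: 4 };

    function sophisticationScore(used) {
      const best = {};  // highest technique score per category
      for (const id of used) {
        const { cat, score } = technique[id];
        best[cat] = Math.max(best[cat] || 0, score);
      }
      return Object.entries(best)
        .reduce((sum, [cat, s]) => sum + weight[cat] * s, 0);
    }

    // The most submitted OPENBUGBOUNTY pattern uses C3, S6, F1 and O2:
    // 1*4 + 2*4 + 3*3 + 2*3 = 27, matching its score in Table 5.8.
    console.log(sophisticationScore(["C3", "S6", "F1", "O2"]));  // 27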

Exploit Authors

As the last step of our analysis, we tie the exploit pattern and sophistication results to the users who submit them, so that we can infer the technical skill and behavior of exploit authors. Ruohonen and Allodi [62] classified exploit contributors into the least active half (low), next quarter (medium), and most active quarter (high) of users according to their submission activity.

Figure 5.8 plots the sophistication score CDFs of all exploits collectively contributed by the three classes of authors. In XSSED, perhaps due to the overall high similarity in scores, exploits submitted by the three user productivity classes appear to be similar in terms of sophistication, as they all have the same median score. Exploits contributed by low and medium productivity users in OPENBUGBOUNTY are similarly indistinguishable, albeit at a three times higher median than XSSED, in line with the overall increase in sophistication between the two databases. High productivity users, however, submit exploits that are noticeably more sophisticated on average, with a median score twice that of low and medium productivity users, and over six times the median of XSSED. In aggregate, these users appear to contribute not only a higher quantity of exploits, but also a higher "quality" than the remainder of the user population.

When authors submit multiple exploits, it is rare that they all have the same degree of sophistication. Figure 5.9 shows the difference between the least and most sophisticated exploit submitted by each of the 18.1 % of XSSED authors, and 37.1 % of OPENBUGBOUNTY authors, with ten or more contributions.


Figure 5.8: CDF of exploit sophistication according to author productivity. Low productivity users contribute fewer low-score exploits, causing a higher median.

In OPENBUGBOUNTY, 81.4 % of these authors submit exploits spanning a range of 21 or more points, which is more than the increase in median sophistication over the three-year time period of the dataset. For medium productivity users, the average score difference is lower, whereas it is higher for high productivity users. In XSSED, related to the limited use of exploitation techniques, only 24.3 % of users with 10+ submissions have a score difference of 21 or more.

Given that users submit exploits of varying scores, we investigate whether they improve and produce more sophisticated exploits over time. For different submission and duration thresholds, we observed roughly as many users where the later exploits had a better score than before as users where the later exploits had worse scores. There is no clear "learning effect" in our data. Score variations over time are likely dominated by other, external factors, such as the website being tested, or the choice of automated tools.

To estimate the exploitation skills of a user, we draw on the most sophisticated exploit submitted by that user.


Figure 5.9: Difference between the lowest and highest scored exploit per user with 10+ submissions (CDF). Most users submit exploits of varying sophistication.

Figure 5.10: CDF of the median and maximum sophistication score per author. The most productive users are aware of sophisticated techniques, but use them sparingly.

In contrast to Figure 5.8, where the CDFs for exploits submitted by low and medium productivity users in OPENBUGBOUNTY overlap, the CDFs using only the maximum score of each author in Figure 5.10 have distinct curves for all three productivity classes. Collectively, low and medium productivity authors produce similarly sophisticated exploits, but medium productivity users are aware of more advanced techniques. High productivity users have the highest exploitation skills. For example, 38.0 % of them submit an exploit scored 40 or higher, but only 8.6 % of medium productivity users do the same. Figure 5.10 also shows the median score CDF of high productivity users, located between the maximum score CDFs of low and medium productivity users. A typical exploit of a high productivity user is more sophisticated than the skills of low productivity users, but inferior to the skills of medium productivity users. High productivity users appear to be aware of highly scored techniques, but use them sparingly.

Collectively, the users in all three productivity classes submit similar fractions of highly sophisticated exploits with scores over 30, as visible in Figure 5.8. Differences occur in the low and medium sophistication range, perhaps because highly productive submitters use automated tools with preset exploits that proactively include medium-sophistication filter bypasses. More sophisticated techniques and custom-tailored exploits appear to be limited to smaller-scale, manual efforts.

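A sketch of the productivity classification used throughout this section follows (the thresholds are those of Ruohonen and Allodi [62]; the data layout is illustrative):

    // Classify authors into low (least active half), medium (next quarter)
    // and high (most active quarter) productivity by submission count.
    function classifyAuthors(submissionCounts) {  // Map: author -> #submissions
      const ranked = [...submissionCounts.entries()]
        .sort((a, b) => a[1] - b[1]);             // least active first
      const n = ranked.length;
      const classes = new Map();
      ranked.forEach(([author], i) => {
        if (i < n / 2) classes.set(author, "low");
        else if (i < 0.75 * n) classes.set(author, "medium");
        else classes.set(author, "high");
      });
      return classes;
    }

    const counts = new Map([["a", 1], ["b", 2], ["c", 5], ["d", 40]]);
    console.log(classifyAuthors(counts));
    // Map { "a" => "low", "b" => "low", "c" => "medium", "d" => "high" }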

Summary

In this chapter, we presented a novel system to execute and extract features from archived proof-of-concept exploits with a unified static and dynamic approach.

Our longitudinal analysis of exploitation techniques in XSSED and OPENBUGBOUNTY submissions has shown that most reflected XSS exploits are surprisingly simple, with only a moderate increase in sophistication over ten years. Many exploit authors are aware of advanced techniques, but use them only in a small fraction of their submissions. For example, few exploits use obfuscation, possibly because users lack incentives to submit more complex exploits. The relative simplicity of exploits in the two databases, also anecdotally observed by Pelizzi and Sekar [60], has implications for researchers using them for model training or system evaluation. Ideally, a sample of exploits used for these purposes should cover a diverse set of technical conditions. In reality, however, random samples of exploits contain only a few complex examples that would challenge the system to be evaluated. For a more diverse sample, researchers could apply an approach similar to ours and select exploits based on different patterns and sophistication scores. Our data does not allow conclusions about the security posture of individual websites and how it evolves over time. Yet, standard, low-sophistication exploits appear to be effective on a large set of websites, demonstrating that shallow XSS vulnerabilities are still extremely widespread on the Web.

Chapter 6

Papers

The first two parts of this thesis are based on the author's previously peer-reviewed and published works [58, 10]; the third is currently under review for publication. Readers are encouraged to cite the related publications listed here instead of this thesis where applicable.

Thesis Publications

Chapter 3: A. S. Buyukkayhan, K. Onarlioglu, W. Robertson, E. Kirda. CrossFire: An Analysis of Firefox Extension-Reuse Vulnerabilities. In Proceedings of the Network and Distributed System Security Symposium (NDSS), San Diego, CA, USA, February 2016.

Chapter 4: K. Onarlioglu, A. S. Buyukkayhan, W. Robertson, E. Kirda. Sentinel: Securing Legacy Firefox Extensions. Computers & Security, 49, January 2015.

Chapter 5: Under review for publication.

Other Publications

This section lists the author's previously peer-reviewed and published work that is not in the scope of this thesis.

From Deletion to Re-Registration in Zero Seconds: Domain Registrar Behaviour During the Drop [41]. T. Lauinger, A. S. Buyukkayhan, A. Chaabane, W. Robertson, E. Kirda. In ACM Internet Measurement Conference (IMC), Boston, MA, USA, November 2018.


When desirable Internet domain names expire, they are often re-registered in the very moment the old registration is deleted, in a highly competitive and resource-intensive practice called domain drop-catching. To date, there has been little insight into the daily time period when expired domain names are deleted, and the race to re-registration that takes place. In this paper, we show that .com domains are deleted in a predictable order, and propose a model to infer the earliest possible time a domain could have been re-registered. We leverage this model to characterize at a precision of seconds how fast certain types of domain names are re-registered. We show that 9.5 % of deleted domains are re-registered with a delay of zero seconds. Domains not taken immediately by the drop-catch services are often re-registered later, with different behaviors over the following seconds, minutes and hours. Since these behaviors imply different effort and price points, our methodology can be useful for future work to explain the uses of re-registered domains.

Lens on the Endpoint: Hunting for Malicious Software through Endpoint Data Analysis [11]. A. S. Buyukkayhan, A. Oprea, Z. Li, W. Robertson. In International Symposium on Research in Attacks, Intrusions and Defenses (RAID), Atlanta, GA, USA, September 2017.

Organizations are facing an increasing number of criminal threats ranging from opportunistic malware to more advanced targeted attacks. While various security technologies are available to protect organizations' perimeters, many breaches still lead to undesired consequences such as loss of proprietary information, financial burden, and reputation defacing. Recently, endpoint monitoring agents that inspect system-level activities on user machines have started to gain traction and be deployed in the industry as an additional defense layer. Their application, though, in most cases is only for forensic investigation to determine the root cause of an incident. In this paper, we demonstrate how endpoint monitoring can be proactively used for detecting and prioritizing suspicious software modules overlooked by other defenses. Compared to other environments in which host-based detection proved successful, our setting of a large enterprise introduces unique challenges, including the heterogeneous environment (users installing software of their choice), limited ground truth (a small number of malicious software samples available for training), and coarse-grained data collection (strict requirements are imposed on agents' performance overhead). Through applications of clustering and outlier detection algorithms, we develop techniques to identify modules with known malicious behavior, as well as modules impersonating popular benign applications. We leverage a large number of static, behavioral and contextual features in our algorithms, and new feature weighting methods that are resilient against missing attributes. The large majority of our findings are confirmed as malicious by anti-virus tools and manual investigation by experienced security analysts.


Game of Registrars: An Empirical Analysis of Post-Expiration Domain Name Takeovers [42]. T. Lauinger, A. Chaabane, A. S. Buyukkayhan, K. Onarlioglu, W. Robertson. In USENIX Security Symposium, Vancouver, BC, Canada, August 2017.

Every day, hundreds of thousands of Internet domain names are abandoned by their owners and become available for re-registration. Yet, there appears to be enough residual value and demand from domain speculators to give rise to a highly competitive ecosystem of drop-catch services that race to be the first to re-register potentially desirable domain names in the very instant the old registration is deleted. To preempt the competitive (and uncertain) race to re-registration, some registrars sell their own customers' expired domains pre-release, that is, even before the names are returned to general availability. These practices are not without controversy, and can have serious security consequences. In this paper, we present an empirical analysis of these two kinds of post-expiration domain ownership changes. We find that 10 % of all .com domains are re-registered on the same day as their old registration is deleted. In the case of .org, over 50 % of re-registrations on the deletion day occur during only 30 s. Furthermore, drop-catch services control over 75 % of accredited domain registrars and cause more than 80 % of domain creation attempts, but represent at most 9.5 % of successful domain creations. These findings highlight a significant demand for expired domains, and hint at highly competitive re-registrations. This paper sheds light on various questionable practices in an opaque ecosystem. The implications go beyond the annoyance of websites turned into Internet graffiti, as domain ownership changes have the potential to circumvent established security mechanisms.

Chapter 7

Conclusion

Web browsers are often targeted by attackers to launch powerful attacks and to avoid detection by conventional security tools. In recent years, the complexity of web browser architectures and web applications has presented new opportunities to attackers. For example, the large number of easy-to-exploit vulnerabilities in browser extensions and web applications allows attackers to automate vulnerability detection and to perform low-cost, widespread attacks. In response to these evolving threats, the security community is continuously developing new defenses. However, new defenses that are implemented only at the server side are not sufficient to protect end users.

In this thesis, we argued that browser extensions and web applications have an abundant supply of easy-to-exploit vulnerabilities, but that one can reduce the attack surface and protect users from a large number of web browser attacks by implementing defenses inside the browser. We developed novel systems to measure these vulnerabilities in browser extensions and web applications, and we evaluated the effectiveness of in-browser defenses, including the one we proposed to defend against malicious or vulnerable extensions. Our research showed that in-browser defenses are feasible and effective in preventing the most prevalent types of web attacks.

In Chapter 3, we explored extension code-reuse vulnerabilities in popular legacy Firefox extensions. We presented CrossFire, a lightweight static analyzer for legacy Firefox extensions that automatically discovers instances of extension-reuse vulnerabilities and generates exploits confirming the presence of the vulnerabilities. Our analysis showed that there are thousands of extension-reuse vulnerabilities among the top 2,000 extensions.


In Chapter 4, we investigated the security threats posed by a malicious or vulnerable browser extension, and we developed a novel in-browser defense: a run-time policy enforcer that provides fine-grained control to the user over the actions of legacy Firefox extensions. We demonstrated that it can effectively defeat concrete attacks, and that it performs efficiently in real-world browsing scenarios without a significant detrimental impact on the user experience.

In Chapter 5, we conducted a longitudinal study of 134K reflected Cross-Site Scripting exploits submitted by independent security researchers, spanning a period of nearly ten years. We showed that most reflected XSS exploits are surprisingly simple, with a slight increase in sophistication over ten years. Many exploit authors are aware of advanced techniques but use them only in a small fraction of their submissions. Our results also indicate that a large number of reflected XSS exploits can be detected by existing in-browser filters.

To conclude, today's new browser extension frameworks benefit from improved security mechanisms such as fine-grained access control and isolation between different modules. However, they only check access permissions at installation time and cannot completely prevent malicious extensions from abusing interfaces provided by more privileged extensions, as described in Section 2.1.2. In addition, they do not allow end users to customize security controls for their particular needs. Existing in-browser filters can detect the majority of XSS attacks in our dataset, but there are still some attacks that can evade detection. Since it is not possible to completely trust web applications to apply the necessary security controls and to follow secure development practices, in-browser defenses are going to be the preferred way to protect web users, and a promising avenue for research and enhancements.

Bibliography

[1] Add-ons for Firefox. About Startup. https://addons.mozilla.org/en-us/firefox/addon/about-startup/.

[2] Ariya Hidayat. Esprima. http://esprima.org/.

[3] Rodolfo Assis. XSS cheat sheet. https://brutelogic.com.br/blog/cheat-sheet/, 2017.

[4] Sruthi Bandhakavi, Samuel T. King, P. Madhusudan, and Marianne Winslett. VEX: Vetting Browser Extensions for Security Vulnerabilities. In Proceedings of the USENIX Security Symposium, Berkeley, CA, USA, 2010. USENIX Association.

[5] Sruthi Bandhakavi, Nandit Tiku, Wyatt Pittman, Samuel T. King, P. Madhusudan, and Marianne Winslett. Vetting Browser Extensions for Security Vulnerabilities with VEX. Communications of the ACM, 54(9):91–99, 2011.

[6] Adam Barth, Adrienne Porter Felt, Prateek Saxena, and Aaron Boodman. Protecting Browsers from Extension Vulnerabilities. In Proceedings of the Network and Distributed Systems Security Symposium, 2010.

[7] Daniel Bates, Adam Barth, and Collin Jackson. Regular Expressions Considered Harmful in Client-Side XSS Filters. In WWW, 2010.

[8] Khalil Bijjou. Web Application Bypassing - how to defeat the blue team. In OWASP Open Web Application Security Project, 2015.

[9] Brian LePore. Local Load. http://www.getlocalload.com/.


[10] Ahmet Salih Buyukkayhan, Kaan Onarlioglu, William Robertson, and Engin Kirda. CrossFire: An Analysis of Firefox Extension-Reuse Vulnerabilities. In Proceedings of the Network and Distributed Systems Security Symposium, 2016.

[11] Ahmet Salih Buyukkayhan, Alina Oprea, Zhou Li, and William Robertson. Lens on the endpoint: Hunting for malicious software through endpoint data analysis. In International Symposium on Research in Attacks, Intrusions and Defenses (RAID), September 2017.

[12] Nicholas Carlini, Adrienne Porter Felt, and David Wagner. An Evaluation of the Google Chrome Extension Security Architecture. In Proceedings of the USENIX Security Symposium, Berkeley, CA, USA, 2012. USENIX Association.

[13] Chromium. XSS Auditor. https://www.chromium.org/developers/design-documents/xss-auditor.

[14] David Ross. IE 8 XSS Filter Architecture/Implementation. https://blogs.technet.microsoft.com/srd/2008/08/19/ie-8-xss-filter-architecture-implementation/.

[15] Rachna Dhamija, J. D. Tygar, and Marti Hearst. Why Phishing Works. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2006.

[16] Mohan Dhawan and Vinod Ganapathy. Analyzing Information Flow in JavaScript-Based Browser Extensions. In Proceedings of the Annual Computer Security Applications Conference, pages 382–391, 2009.

[17] Vladan Djeric and Ashvin Goel. Securing Script-Based Extensibility in Web Browsers. In Proceedings of the USENIX Security Symposium, Berkeley, CA, USA, 2010. USENIX Association.

[18] Julie S. Downs, Mandy B. Holbrook, and Lorrie Faith Cranor. Decision Strategies and Susceptibility to Phishing. In Proceedings of the Symposium on Usable Privacy and Security, 2006.

[19] K. Fernandez and D. Pagkalos. XSSed | Cross site scripting (XSS) attacks information and archive. http://xssed.com.


[20] Matthew Finifter, Devdatta Akhawe, and David Wagner. An empirical study of vulnerability rewards programs. In USENIX Security Symposium, 2013.

[21] FIRST. Common vulnerability scoring system v3.0: Specification document (attack complexity). https://www.first.org/cvss/specification-document#2-1-2-Attack-Complexity-AC, 2015.

[22] Nick Freeman and Roberto Suggi Liverani. Exploiting Cross Context Scripting Vulnerabilities in Firefox. http://www.security-assessment.com/files/whitepapers/Exploiting_Cross_Context_Scripting_vulnerabilities_in_Firefox.pdf, 2010.

[23] Mauro Gentile. Snuck payloads. https://github.com/mauro-g/snuck/tree/master/payloads, 2012.

[24] Ian Goldberg, David Wagner, Randi Thomas, and Eric A. Brewer. A Secure Environment for Untrusted Helper Applications (Confining the Wily Hacker). In Proceedings of the USENIX Security Symposium, Berkeley, CA, USA, 1996. USENIX Association.

[25] Google. Chrome Web Store. https://chrome.google.com/webstore/category/extensions.

[26] Chris Grier, Shuo Tang, and Samuel T. King. Secure Web Browsing with the OP Web Browser. In Proceedings of the IEEE Symposium on Security and Privacy, pages 402–416. IEEE Computer Society, 2008.

[27] Arjun Guha, Matthew Fredrikson, Benjamin Livshits, and Nikhil Swamy. Verified Security for Browser Extensions. In Proceedings of the IEEE Symposium on Security and Privacy, pages 115–130. IEEE Computer Society, 2011.

[28] HackerOne.com. Bug bounty - hacker powered security testing | HackerOne. https://hackerone.com/.

[29] Robert Hansen, Adam Lange, and Mishra Dhira. OWASP XSS filter evasion cheat sheet. https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet, 2017.


[30] Mario Heiderich. HTML5 security cheatsheet. https://html5sec.org/, 2011.

[31] Hossein Homaei and Hamid Reza Shahriari. Seven years of software vulnerabilities: The ebb and flow. IEEE Security and Privacy, 15:58–65, January 2017.

[32] InformAction. NoScript. http://noscript.net/.

[33] Vladimir Ivanov. Web Application Firewalls: Attacking detection logic mechanisms. In BlackHat, 2016.

[34] Nav Jagpal, Eric Dingle, Jean-Philippe Gravel, Panayiotis Mavrommatis, Niels Provos, Moheeb Abu Rajab, and Kurt Thomas. Trends and Lessons from Three Years Fighting Malicious Extensions. In USENIX Security Symposium, pages 579–593, 2015.

[35] Raj Jain. The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling. Wiley, April 1991.

[36] Alexandros Kapravelos, Chris Grier, Neha Chachra, Christopher Kruegel, Giovanni Vigna, and Vern Paxson. Hulk: Eliciting Malicious Behavior in Browser Extensions. In Proceedings of the USENIX Security Symposium, Berkeley, CA, USA, 2014. USENIX Association.

[37] Rezwana Karim, Mohan Dhawan, and Vinod Ganapathy. Retargetting Legacy Browser Extensions to Modern Extension Frameworks. In Proceedings of the European Conference on Object-Oriented Programming, Berlin, Heidelberg, 2014. Springer.

[38] Rezwana Karim, Mohan Dhawan, Vinod Ganapathy, and Chung-chieh Shan. An Analysis of the Mozilla Jetpack Extension Framework. In Proceedings of the European Conference on Object-Oriented Programming, pages 333–355, Berlin, Heidelberg, 2012. Springer.

[39] Engin Kirda, Christopher Kruegel, Greg Banks, Giovanni Vigna, and Richard A. Kemmerer. Behavior-Based Spyware Detection. In Proceedings of the USENIX Security Symposium, Berkeley, CA, USA, 2006. USENIX Association.

[40] Engin Kirda, Christopher Kruegel, Giovanni Vigna, and Nenad Jovanovic. Noxes: a client-side solution for mitigating cross-site scripting attacks. In ACM Symposium on Applied Computing, pages 330–337. ACM, 2006.

[41] Tobias Lauinger, Ahmet Salih Buyukkayhan, Abdelberi Chaabane, William Robertson, and Engin Kirda. From Deletion to Re-Registration in Zero Seconds: Domain Registrar Behaviour During the Drop. In ACM Internet Measurement Conference, October 2018.

[42] Tobias Lauinger, Abdelberi Chaabane, Ahmet Salih Buyukkayhan, Kaan Onarlioglu, and William Robertson. Game of Registrars: An Empirical Analysis of Post-Expiration Domain Name Takeovers. In USENIX Security Symposium, August 2017.

[43] Sebastian Lekies, Ben Stock, and Martin Johns. 25 Million Flows Later - Large-scale Detection of DOM-based XSS. In ACM CCS, 2013.

[44] Zhuowei Li, XiaoFeng Wang, and Jong Youl Choi. SpyShield: Preserving Privacy from Spy Add-ons. In Proceedings of the International Symposium on Recent Advances in Intrusion Detection, pages 296–316, Berlin, Heidelberg, 2007. Springer.

[45] Lei Liu, Xinwen Zhang, Guanhua Yan, and Songqing Chen. Chrome Extensions: Threat Analysis and Countermeasures. In Proceedings of the Network and Distributed Systems Security Symposium, 2012.

[46] Roberto Suggi Liverani. Cross Context Scripting with Firefox. http://www.security-assessment.com/files/whitepapers/Cross_Context_Scripting_with_Firefox.pdf, 2010.

[47] Josh Marston, Komminist Weldemariam, and Mohammad Zulkernine. On Evaluating and Securing Browser Extensions. In Proceedings of the International Conference on Mobile Software Engineering and Systems, MOBILESoft, New York, NY, USA, 2014. ACM.

[48] William Melicher, Anupam Das, Mahmood Sharif, Lujo Bauer, and Limin Jia. Riding out DOMsday: Towards Detecting and Preventing DOM Cross-Site Scripting. In NDSS, 2018.

[49] Mozilla. Add-on Documentation - Review Process. https://addons.mozilla.org/en-US/developers/docs/policies/reviews.

[50] Mozilla. Add-ons for Firefox. https://addons.mozilla.org/.


[51] Mozilla Add-ons Blog. Firefox Extensions: Global Namespace Pollution. http://blog.mozilla.org/addons/2009/01/16/firefox-extensions-global-namespace-pollution/, 2009.

[52] Mozilla Developer Network. JavaScript code modules. https://developer.mozilla.org/en-US/docs/Mozilla/JavaScript_code_modules.

[53] Mozilla Developer Network. Proxy. https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Proxy.

[54] Mozilla Developer Network. XPCOM. https://developer.mozilla.org/en-US/docs/XPCOM.

[55] Mozilla Developer Network. XUL. https://developer.mozilla.org/en-US/docs/XUL.

[56] Mozilla Wiki. Jetpack. https://wiki.mozilla.org/Jetpack.

[57] Kaan Onarlioglu, Mustafa Battal, William Robertson, and Engin Kirda. Securing Legacy Firefox Extensions with Sentinel. In Conference on Detection of Intrusions and Malware & Vulnerability Assessment. Springer, July 2013.

[58] Kaan Onarlioglu, Ahmet Salih Buyukkayhan, William Robertson, and Engin Kirda. Sentinel: Securing Legacy Firefox Extensions. Computers & Security, 49:147–161, March 2015.

[59] OpenBugBounty.org. Open Bug Bounty | Free bug bounty program & coordinated vulnerability disclosure. https://openbugbounty.org.

[60] Riccardo Pelizzi and R Sekar. Protection, usability and improvements in reflected XSS filters. In ASIACCS, 2012.

[61] Sebastian Poeplau, Yanick Fratantonio, Antonio Bianchi, Christopher Kruegel, and Giovanni Vigna. Execute This! Analyzing Unsafe and Malicious Dynamic Code Loading in Android Applications. In Proceedings of the Network and Distributed Systems Security Symposium, 2014.


[62] Jukka Ruohonen and Luca Allodi. A bug bounty perspective on the disclosure of Web vulnerabilities. In WEIS, 2018.

[63] Theodoor Scholte, Davide Balzarotti, and Engin Kirda. Quo Vadis? A Study of the Evolution of Input Validation Vulnerabilities in Web Applications. In Financial Crypto, 2011.

[64] Ben Stock, Sebastian Lekies, Tobias Mueller, Patrick Spiegel, and Martin Johns. Precise client-side protection against DOM-based cross-site scripting. In USENIX Security Symposium, pages 655–670, 2014.

[65] Ben Stock, Stephan Pfistner, Bernd Kaiser, Sebastian Lekies, and Martin Johns. From Facepalm to Brain Bender: Exploring Client-Side Cross-Site Scripting. In ACM CCS, 2015.

[66] Mike Ter Louw, Jin Soon Lim, and V. N. Venkatakrishnan. Extensible Web Browser Security. In Proceedings of the Conference on Detection of Intrusions and Malware & Vulnerability Assessment, pages 1–19, Berlin, Heidelberg, 2007. Springer.

[67] Mike Ter Louw, Jin Soon Lim, and V. N. Venkatakrishnan. Enhancing Web Browser Security against Malware Extensions. In Journal in Computer Virology, volume 4, pages 179–195. Springer-Verlag, 2008.

[68] Helen J. Wang, Chris Grier, Alexander Moshchuk, Samuel T. King, Piali Choudhury, and Herman Venter. The Multi-Principal OS Construction of the Gazelle Web Browser. In Proceedings of the USENIX Security Symposium, pages 417–432, Berkeley, CA, USA, 2009. USENIX Association.

[69] Jiangang Wang, Xiaohong Li, Xuhui Liu, Xinshu Dong, Junjie Wang, Zhenkai Liang, and Zhiyong Feng. An Empirical Study of Dangerous Behaviors in Firefox Extensions. In Proceedings of the Information Security Conference, pages 188–203, Berlin, Heidelberg, 2012. Springer.

[70] Lei Wang, Ji Xiang, Jiwu Jing, and Lingchen Zhang. Towards Fine-Grained Access Control on Browser Extensions. In Proceedings of the International Conference on Information Security Practice and Experience, pages 158–169, Berlin, Heidelberg, 2012. Springer.


[71] Tielei Wang, Kangjie Lu, Long Lu, Simon Chung, and Wenke Lee. Jekyll on iOS: When Benign Apps Become Evil. In Proceedings of the USENIX Security Symposium, Berkeley, CA, USA, 2013. USENIX Association.

[72] B. Yee, D. Sehr, G. Dardyk, J. B. Chen, R. Muth, T. Ormandy, S. Okasaka, N. Narula, and N. Fullagar. Native Client: A Sandbox for Portable, Untrusted Native Code. In Proceedings of the IEEE Symposium on Security and Privacy, pages 79–93. IEEE Computer Society, 2009.

[73] Mingyi Zhao, Jens Grossklags, and Peng Liu. An empirical study of Web vulnerability dis- covery ecosystems. In CCS, 2015.
