Elena Prodromou ([email protected])

WEB SECURITY 1. Robust Defenses for Cross-Site Request Forgery

Cross-Site Request Forgery Attack - Definition Cross-Site Request Forgery (CSRF) is among the twenty most-exploited security vulnerabilities of 2007. In a CSRF attack, a malicious site instructs a victim’s browser to send a request to an honest site, as if the request were part of the victim’s interaction with the honest site, leveraging the victim’s network connectivity and the browser’s state, such as cookies, to disrupt the integrity of the victim’s session with the honest site.

For example, in late 2007, Gmail had a CSRF vulnerability. When a Gmail user visited a malicious site, the malicious site could generate a request to Gmail that Gmail treated as part of its ongoing session with the victim. In November 2007, a web attacker exploited this CSRF vulnerability to inject a filter into David Airey’s Gmail account. This filter forwarded all of David Airey’s email to the attacker’s email address, which allowed the attacker to assume control of davidairey.com because Airey’s domain registrar used email-based password recovery, leading to significant inconvenience and financial loss.

CSRF Defined In a CSRF attack, the attacker disrupts the integrity of the user’s session with a web site by injecting network requests via the user’s browser. The browser’s security policy allows web sites to send HTTP requests to any network address. This policy allows an attacker that controls content rendered by the browser to use resources not otherwise under his or her control:
1. Network Connectivity: For example, if the user is behind a firewall, the attacker is able to leverage the user’s browser to send network requests to other machines behind the firewall that might not be directly reachable from the attacker’s machine. Even if the user is not behind a firewall, the requests carry the user’s IP address and might confuse services relying on IP address authentication.
2. Read Browser State: Requests sent via the browser’s network stack typically include browser state, such as cookies, client certificates, or basic authentication headers. Sites that rely on this authentication state might be confused by these requests.
3. Write Browser State: When the attacker causes the browser to issue a network request, the browser also parses and acts on the response.

There are three different threat models, varying by attacker capability.
1. Forum Poster: Many web sites, such as forums, let users supply limited kinds of content. For example, sites often permit users to submit passive content, such as images or hyperlinks. If an attacker chooses the “image’s” URL maliciously, the network request might lead to a CSRF attack. The forum poster can issue requests from the honest site’s origin, but these requests cannot have custom HTTP headers and must use the HTTP “GET” method. Although the HTTP specification requires GET


requests to be free of side effects, some sites do not comply with this requirement.
2. Web Attacker: A malicious principal who owns a domain (e.g. attacker.com), has a valid HTTPS certificate for attacker.com, and operates a web server. If the user visits attacker.com, the attacker can mount a CSRF attack by instructing the user's browser to issue cross-site requests using both the GET and POST methods.
3. Network Attacker: A malicious principal who controls the user's network connection.

Login CSRF In this paper, authors present a new variation on CSRF attacks, login CSRF attacks, which are currently widely possible and damaging. In a login CSRF attack, an attacker uses the victim's browser to forge a cross-site request to the honest site's login URL, supplying the attacker's user name and password. If the forgery succeeds, the honest server responds with a Set-Cookie header that instructs the browser to mutate its state by storing a session cookie, logging the user into the honest site as the attacker. This session cookie is used to bind subsequent requests to the user’s session and hence to the attacker’s authentication credentials.

Example of login CSRF attack Search History : Many search engines, including Yahoo! and Google, allow their users to opt-in to saving their search history and provide an interface for a user to review his or her personal search history. Search queries contain sensitive details about the user’s interests and activities and could be used by an attacker to embarrass the user, to steal the user’s identity, or to spy on the user. An attacker can spy on a user’s search history by logging the user into the search engine as the attacker. The user’s search queries are then stored in the attacker’s search history, and the attacker can retrieve the queries by logging into his or her own account.


Figure 1: Event trace diagram for a login CSRF attack. The victim visits the attacker's site, and the attacker forges a cross-site request to Google's login form, causing the victim to be logged into Google as the attacker. Later, the victim makes a web search, which is logged in the attacker's search history.

Existing CSRF defenses There are three mechanisms a site can use to defend itself against CSRF attacks:
1. Validating a secret token
2. Validating the HTTP Referrer header
3. Including additional headers with XMLHttpRequest

(1) Secret Validation Token
• Additional information in each HTTP request that can be used to determine whether the request came from an authorized source.
• If a request is missing a validation token or the token does not match the expected value, the server should reject the request.
• Can defend against login CSRF.
• Developers often forget to protect the login request itself, because before login there is no session to which the CSRF token can be bound. The site must:
o First create a “pre-session”.
o Implement token-based CSRF protection.
o Transition to a real session after successful authentication.
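
The pre-session flow above can be sketched as follows (a minimal sketch with hypothetical helper names; a real site would persist the binding in a signed cookie or database rather than an in-memory dict):

```python
import secrets

# Hypothetical in-memory store mapping pre-session ids to CSRF tokens.
pre_sessions = {}

def start_pre_session():
    """Steps 1-2: create a pre-session and bind a fresh CSRF token to it."""
    session_id = secrets.token_hex(16)   # sent to the browser as a cookie
    csrf_token = secrets.token_hex(16)   # embedded in the login form
    pre_sessions[session_id] = csrf_token
    return session_id, csrf_token

def validate_login(session_id, submitted_token):
    """Step 3: accept the login POST only if the token matches; the site
    would then discard the pre-session and start a real session."""
    expected = pre_sessions.get(session_id)
    return expected is not None and secrets.compare_digest(expected, submitted_token)
```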


There are a number of techniques for generating and validating tokens:
• Session Identifier: Use the user's session identifier as the secret validation token. On every request, the server validates that the token matches the user's session identifier.
o Disadvantage: Users sometimes reveal the contents of web pages they view to third parties (e.g. by email); with this scheme, doing so leaks the session identifier.
• Session-Independent Nonce: The server generates a random nonce and stores it as a cookie when the user first visits the site. On every request, the server validates that the token matches the value stored in the cookie.
o Disadvantage: An active network attacker can overwrite the session-independent nonce with his or her own CSRF token value and proceed to forge a cross-site request with a matching token.
• Session-Dependent Nonce: Store state on the server that binds the user's CSRF token value to the user's session identifier. On every request, the server validates that the supplied CSRF token is associated with the user's session identifier.
o Disadvantage: The site must maintain a large state table in order to validate the tokens.
• HMAC of Session Identifier: The site can use cryptography to bind the two values. Because all of the site's servers share the HMAC key, each server can validate that the CSRF token is correctly bound to the session identifier without storing per-user state.
o An attacker who learns a user's token cannot infer the user's session identifier.
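
The HMAC variant can be sketched in a few lines (key and function names are illustrative; the key must be shared by all of the site's servers and kept secret from clients):

```python
import hashlib
import hmac

SECRET_KEY = b"shared-by-all-front-end-servers"  # illustrative value

def csrf_token_for(session_id: str) -> str:
    # Bind the token to the session identifier without any per-user state.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid(session_id: str, token: str) -> bool:
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(csrf_token_for(session_id), token)
```

Because HMAC is one-way, an attacker who learns a token still cannot recover the session identifier from it.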

(2) The Referrer Header When the browser issues an HTTP request, it includes a Referrer header (spelled “Referer” in the HTTP specification) that indicates which URL initiated the request. It distinguishes a same-site request from a cross-site request because the header contains the URL of the site making the request.
- Disadvantage: the header is often suppressed for privacy reasons, and it can be spoofed via browser bugs.

The site's developers must decide whether to implement lenient or strict Referrer validation.
Lenient Referrer validation: The site blocks requests whose Referrer header has an incorrect value. If a request lacks the header, the site accepts the request.
- A web attacker can cause the browser to suppress the Referrer header.
Strict Referrer validation: The site blocks requests that lack a Referrer header.
- Protects against malicious Referrer suppression, but incurs a compatibility penalty because some browsers and network configurations suppress the Referrer header for legitimate requests.
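
The two policies differ only in how a missing header is treated; a minimal sketch (the header name is canonically misspelled “Referer” in HTTP, and the allowed hostname here is illustrative):

```python
from urllib.parse import urlparse

def referer_ok(referer, strict=True, allowed_host="example.com"):
    """referer: the value of the Referer header, or None if absent."""
    if referer is None:
        # Strict validation rejects requests lacking the header;
        # lenient validation accepts them.
        return not strict
    return urlparse(referer).hostname == allowed_host
```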

Experiment In order to evaluate the compatibility of strict Referrer validation, the authors conducted an experiment to measure how often, and under which circumstances, the Referrer header is suppressed during legitimate requests.


Design • They used two advertisement networks. • Used two servers with two domain names to host the advertisement. • Advertisement generates a unique id and randomly selects the primary server. • Primary server sends the client HTML that issues a sequence of GET and POST requests to their servers, both over HTTP and HTTPS. • Requests are generated by submitting forms, requesting images, and issuing XMLHttpRequests. • The advertisement generates both same-domain requests to the primary server and cross-domain requests to the secondary server. • Servers logged request parameters (Session identifier, Referrer etc). • Servers recorded the value of document.referrer DOM API.

Results

Figure 2: Requests with missing or incorrect Referrer header (283,945 observations). The 'x' and 'y' represent the domain names of the primary and secondary web servers, respectively.

• The Referrer header is suppressed more often for cross-domain requests than for same-domain requests over HTTP.
• The Referrer header is suppressed more often for HTTP requests than for HTTPS requests.
• The Referrer header is suppressed more often on Ad Network B than on Ad Network A for all types of request.
• Browsers that suppress the Referrer header also suppress the document.referrer value, but when the header is stripped in the network (e.g. by a proxy or firewall), the document.referrer value survives.


Figure 3 : Requests with a Missing or Incorrect Referrer Header on Ad Network A (241,483 observations). Opera blocks cross-site document.referrer for HTTPS. Firefox 1.0 and 1.5 do not send Referrer for XMLHttpRequest due to a bug. The PlayStation 3 (denoted PS) does not support document.referrer.

Conclusion
• Strict Referrer validation can be used as a CSRF defense for HTTPS requests (only 0.05-0.22% of browsers suppress the header over HTTPS).
• Strict Referrer validation is well-suited for preventing login CSRF because login requests are typically issued over HTTPS.
• Over HTTP, sites cannot afford to block requests that lack the Referrer header, because they would cease to be compatible with 3-11% of users.

(3) Custom HTTP headers The browser prevents sites from sending custom HTTP headers to another site but allows sites to send custom HTTP headers to themselves using XMLHttpRequest. To use custom headers as a CSRF defense, a site must:
• Issue all state-modifying requests using XMLHttpRequest.
• Attach the custom header (e.g. X-Request-By).
• Reject all state-modifying requests that are not accompanied by the header.
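
Server-side, this check reduces to rejecting state-modifying requests that lack the header; a sketch (the header name follows the example above, and treating GET/HEAD as side-effect free is an assumption about the site):

```python
def accept_request(method, headers):
    """Accept a request only if it cannot be a cross-site forgery.
    Browsers allow custom headers only on same-origin XMLHttpRequests,
    so a cross-site attacker cannot attach X-Request-By."""
    if method in ("GET", "HEAD"):
        return True  # assumed side-effect free, per the HTTP spec
    return "X-Request-By" in headers
```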

Authors' defense proposal They propose modifying browsers to send an Origin header with POST requests which identifies the origin that initiated the request. If the browser cannot determine the origin, the browser sends the value null.

The Origin header improves on the Referrer header by respecting the user’s privacy. • Origin header includes only the information required to identify the principal that initiated the request (port, host, scheme). • Origin header doesn’t contain the path or query portions of the URL.


• Origin header is sent only for POST requests. • Referrer header is sent for all requests.

Server Behavior • All state-modifying requests, including login requests, must be sent using the POST method. • Server must reject any requests whose Origin header contains an undesired value or null.
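
This server behavior amounts to a small check; a sketch (the allowed-origin set is illustrative, and accepting POSTs from legacy browsers that omit the header is a site policy choice, shown here as permissive):

```python
ALLOWED_ORIGINS = {"https://example.com"}  # hypothetical site

def origin_ok(method, origin):
    if method != "POST":
        return True      # the defense applies to POST requests
    if origin is None:
        return True      # legacy browser without Origin support (policy choice)
    if origin == "null":
        return False     # browser could not determine the origin
    return origin in ALLOWED_ORIGINS
```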

Security Analysis
• Rollback and Suppression: A supporting browser will always include the Origin header when making POST requests. The Origin header takes on the value null when suppressed by the browser.
• DNS Rebinding: Sites that rely only on network connectivity for authentication should use one of the DNS rebinding defenses, such as validating the Host header.
• Plug-ins: If a site opts into cross-site HTTP requests, an attacker can use Flash Player to set the Origin header in cross-site requests. Opting into cross-site HTTP requests also defeats secret-token validation CSRF defenses because the tokens leak during cross-site HTTP requests. Sites should therefore not opt into cross-site HTTP requests from untrusted origins.

Adoption The Origin header improves and unifies other proposals and has been adopted by several working groups.

Implementation The authors implemented both the browser and server components of the Origin header CSRF defense. On the browser side, they implemented the Origin header in an eight-line patch to WebKit, the open-source component of Safari, and in a 466-line extension to Firefox. On the server side, they used the Origin header to implement a web application firewall for CSRF in 3 lines of ModSecurity, a web application firewall language for Apache. These rules validate that, for POST requests, the Host header and the Origin header contain acceptable values.

Session Initialization There are two types of session initialization vulnerabilities, one in which the server associates the honest user's identity with the newly initialized session and another in which the server associates the attacker's identity with the session. • Authenticated as User : The attacker can force the site to use a predictable session identifier for a new session. After the user supplies their authentication credentials to the honest site, the site associates the user's authorization with the predictable session identifier. The attacker


can then access the honest site directly using the session identifier and act as the user. • Authenticated as Attacker : The attacker causes the honest site to begin a new session with the user's browser but forces the session to be associated with the attacker's authorization (e.g. login CSRF).

There are two common approaches to mount an attack on session initialization: HTTP requests and cookie overwriting.

HTTP Requests (1) OpenID The protocol includes a self-signed nonce to protect against replay attacks, but does not specify a mechanism to bind the OpenID session to the user's browser, letting a web attacker force the user's browser to initialize a session authenticated as the attacker. For example:
1. The web attacker visits the Relying Party (Blogger) and begins the authentication process with the Identity Provider (Yahoo!).
2. The Identity Provider redirects the attacker’s browser to the “return_to” URL of the Relying Party.
3. Instead of following the redirect, the attacker directs the user’s browser to the return_to URL.
4. The Relying Party completes the OpenID protocol and stores a session cookie in the user’s browser.
5. The user is now logged in as the attacker.

Defense The Relying Party should generate a fresh nonce at the start of the protocol, store it in the browser’s cookie store, and include it in the return_to parameter of the OpenID protocol. Upon receiving a positive identity assertion from the user's Identity Provider, the Relying Party should validate that the nonce included in the return_to URL matches the nonce stored in the cookie store.
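
A sketch of this nonce binding (the URL and the parameter name rp_nonce are hypothetical; the nonce would travel to the browser in a Set-Cookie header):

```python
import secrets
from urllib.parse import parse_qs, urlencode, urlparse

def begin_login():
    """Relying Party: mint a fresh nonce and embed it in the return_to URL."""
    nonce = secrets.token_hex(16)
    return_to = "https://rp.example/finish?" + urlencode({"rp_nonce": nonce})
    return nonce, return_to   # nonce is also stored in the browser's cookies

def finish_login(return_to_url, cookie_nonce):
    """Accept the identity assertion only if the nonce in return_to
    matches the nonce stored in the browser's cookie store."""
    supplied = parse_qs(urlparse(return_to_url).query).get("rp_nonce", [""])[0]
    return secrets.compare_digest(supplied, cookie_nonce)
```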

(2) PHP Cookieless Authentication
• Stores the user’s session identifier in a query parameter.
• Fails to bind the session to the user’s browser, letting a web attacker force the user’s browser to initialize a session authenticated as the attacker:
1. The web attacker logs into the honest web site.
2. The web attacker redirects the user’s browser to the URL currently displayed in the attacker’s location bar.
3. Because this URL contains the attacker’s session identifier, the user is now logged in as the attacker.
Defense The site should maintain a long-lived frame that contains the session identifier token. This frame binds the session to the user’s browser by storing the session identifier in memory.


Sites that use PHP cookieless authentication often contain a session initialization vulnerability that lets a web attacker impersonate an honest user:
1. Using his or her own machine, the web attacker visits the honest web site’s login page.
2. The web attacker redirects the user’s browser to the URL currently displayed in the attacker’s location bar.
3. The user reads the location bar, accurately determines that the displayed URL corresponds to the honest site, and logs into the site.
4. Because the URL supplied by the attacker contains the attacker’s session identifier, the attacker’s session is now authenticated as the user.

Cookie Overwriting
• A Set-Cookie header can contain a “Secure” flag, indicating that the cookie should only be sent over an HTTPS connection.
• An active network attacker can supply a Set-Cookie header over an HTTP connection to the same host name as the site and install either a Secure or a non-Secure cookie of the same name.
• The Secure flag therefore does not offer integrity protection in this cross-scheme threat model.
• If the Secure cookie contains the user’s session identifier, an attacker can overwrite the user’s session identifier with his or her own session identifier.
Defense Browsers could report the integrity state of cookies using a Cookie-Integrity header in HTTPS requests. The header identifies the indices of the cookies in the request's Cookie header that were set over HTTPS.
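
The Cookie-Integrity header is the paper's proposal, not a deployed standard; a hypothetical sketch of the server-side check it would enable:

```python
def session_cookie_trusted(cookie_names, cookie_integrity, session_name="SID"):
    """cookie_names: cookie names in the order they appear in the Cookie header.
    cookie_integrity: header value such as "0, 2" listing the indices of
    cookies that were set over HTTPS, or None if the header is absent."""
    if cookie_integrity is None:
        return False  # no integrity information: distrust the session cookie
    https_set = {int(i) for i in cookie_integrity.split(",")}
    return any(i in https_set
               for i, name in enumerate(cookie_names) if name == session_name)
```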

Conclusions and Advice
Login CSRF
• Strict Referrer validation (login forms typically submit over HTTPS, where the Referrer header is reliably present for legitimate requests).
• If a login request lacks a Referrer header, the site should reject the request to defend against malicious suppression.
HTTPS
• For sites served over HTTPS (e.g. banking sites), the authors recommend strict Referrer validation.
Third-party Content
• Images and hyperlinks should use a framework that implements secret token validation correctly.
Origin header
• Eliminates the privacy concerns that lead to Referrer blocking.
• Works for both HTTPS and non-HTTPS requests.


2. Clickjacking: Attacks and Defenses

Definition Clickjacking is an emerging threat on the web. It is a malicious technique that deceives a web user into interacting (in most cases by clicking) with something different from what the user believes they are interacting with.

Some possible risks caused by clickjacking are:
• Web surfing anonymity can be compromised.
• A user’s private data can be stolen.
• An attacker can spy on a user through his or her webcam.

The root cause of clickjacking is that an attacker application presents a sensitive UI element of a target application out of context to a user (for example, by making the sensitive UI transparent), and hence the user is tricked into acting out of context: an attacker uses multiple transparent or opaque layers to trick a user into clicking on a button or link on another page when they intended to click on the top-level page.

When multiple applications or web sites share a graphical display, they are subject to clickjacking. For example, in Likejacking attacks, an attacker's web page presents a false visual context to the user and tricks them into clicking on a Facebook "Like" button by transparently overlaying it on top of an innocuous UI element, such as a "Claim your free iPad" button. Hence, when the user "claims" a free iPad, a story appears in the user's Facebook friends' news feed stating that she "likes" the attacker's web site.

The context of a user's action consists of its visual context and temporal context. • Visual context is what a user should see right before their sensitive action. To ensure visual context integrity, both the sensitive UI element (target display integrity) and the pointer feedback (pointer integrity) need to be fully visible to a user. • Temporal context refers to the timing of a user action. Temporal integrity ensures that a user's action at a particular time is intended by the user.

Threat model A clickjacking attacker has all the capabilities of a web attacker: (1) they own a domain name and control content served from their web servers, and (2) they can make a victim visit their site, thereby rendering the attacker's content in the victim's browser. When a victim visits the attacker's page, the page hides a sensitive UI element either visually or temporally and lures the user into performing unintended UI actions on the sensitive element out of context.

Existing clickjacking attacks Three kinds of clickjacking attacks:


1. Compromising target display integrity 2. Compromising pointer integrity 3. Compromising temporal integrity

(1) Compromising target display integrity
• Hiding the target element: Modern browsers support HTML/CSS styling features that allow attackers to visually hide the target element but still route mouse events to it. For example, an attacker can make the target element transparent by wrapping it in a div container with a CSS opacity value of zero; to entice a victim to click on it, the attacker can draw a decoy under the target element by using a lower CSS z-index. Alternatively, the attacker may completely cover the target element with an opaque decoy made unclickable by setting the CSS property pointer-events: none. A victim's click would then fall through the decoy and land on the target element underneath.

• Partial overlays: Visually confuse a victim by obscuring only a part of the target element. For example, attackers could overlay their own information on top of a PayPal checkout iframe to cover the recipient and amount fields while leaving the "Pay" button intact: the victim will thus have incorrect context when clicking on "Pay".

• Cropping: The attacker may crop the target element to only show a piece of the target element, such as the "Pay" button, by wrapping the target element in a new iframe that uses carefully chosen negative CSS position offsets and the Pay button width and height.

(2) Compromising pointer integrity An attacker displays a fake cursor icon away from the real pointer (known as cursorjacking). This leads victims to misinterpret a click's target, since they have the wrong perception of the current cursor location. Another variant manipulates the blinking cursor that indicates keyboard focus, in so-called strokejacking attacks. For example, an attacker can embed the target element in a hidden frame while asking users to type some text into a fake attacker-controlled input field. As the victim is typing, the attacker can momentarily switch keyboard focus to the target element. The blinking cursor confuses victims into thinking that they are typing text into the attacker's input field, whereas they are actually interacting with the target element.

(3) Compromising temporal integrity Manipulate UI elements after the user has decided to click, but before the actual click occurs. For example, an attacker could move the target element on top of a decoy button shortly after the victim hovers the cursor over the decoy, in anticipation of the click. To predict clicks more effectively, the attacker could ask the victim to repetitively click objects in a malicious game or to double click on a decoy button, moving the target element over the decoy immediately after the first click.


Consequences
• Tweetbomb: An attacker tricks victims into clicking on Twitter Tweet buttons using hiding techniques, causing a link to the attacker's site to be reposted to the victim's friends and thus propagating the link virally.
• Facebook "Likejacking" attacks.
• Attackers steal users’ private data by hijacking a button on the approval pages of the OAuth (Open Authorization) protocol.
• Several attacks target the Flash Player webcam settings dialogs, allowing rogue sites to access the victim’s webcam and spy on the user.
• Other proofs of concept have forged votes in online polls, committed click fraud, uploaded private files via the HTML5 File API, stolen victims' location information, and injected content across domains by tricking the victim into performing a drag-and-drop action.

Existing anti-clickjacking defenses Protecting Visual Context
• User Confirmation: One straightforward mitigation for preventing out-of-context clicks is to present a confirmation prompt to users when the target element has been clicked.
o Degrades the user experience, especially on single-click buttons.
o Vulnerable to double-click timing attacks, which can trick the victim into clicking through both the target element and the confirmation popup.
• UI Randomization: Randomize the target element's UI layout. For example, PayPal could randomize the position of the Pay button on its express checkout dialog to make it harder for the attacker to cover it with a decoy button.
o The attacker may ask the victim to keep clicking until they successfully guess the Pay button's location.
• Opaque Overlay Policy: The Gazelle web browser forces all cross-origin frames to be rendered opaquely.
o Removes all transparency from all cross-origin elements, breaking legitimate sites.
• Framebusting: Disallowing the target element from being rendered in iframes. This can be done either with JavaScript code in the target element that makes sure it is the top-level document, or with newer browser support via X-Frame-Options and CSP’s frame-ancestors. A fundamental limitation of framebusting is its incompatibility with target elements that are intended to be framed by arbitrary third-party sites, such as Facebook Like buttons. In addition, previous research found JavaScript framebusting unreliable, and there are attacks that bypass framebusting protection on OAuth dialogs using popup windows. Zalewski has also demonstrated how to bypass framebusting by navigating browsing history with JavaScript.
• Visibility Detection on Click: Allow rendering transparent frames, but block mouse clicks if the browser detects that the clicked cross-origin frame is not fully visible.


(1) ClearClick: Compares the bitmap of the clicked object on a given page to the bitmap of that object rendered in isolation. Because it is on by default, it must assume that all cross-origin frames need clickjacking protection, which results in false positives on some sites. Due to these false positives, ClearClick prompts users to confirm their actions on suspected clickjacking attacks, posing a usability burden.
(2) ClickIDS: Proposed to reduce the false positives of ClearClick by alerting users only when the clicked element overlaps with other clickable elements. Unfortunately, it cannot detect attacks based on partial overlays or cropping, and it still yields false positives.

Protecting temporal context UI Delays: Give the user enough time to comprehend any UI changes by imposing a delay after displaying a dialog, so that the user cannot interact with the dialog until the delay expires. Unresponsive buttons during the UI delay have reportedly annoyed many users; the length of the UI delay is thus a tradeoff between the user-experience penalty and protection from timing attacks.

All the existing clickjacking defenses fall short in some way, with robustness and site compatibility being the main issues.

New Attack Variants The authors constructed and evaluated three attack variants using known clickjacking techniques to demonstrate the insufficiency of the state-of-the-art defenses described above.

1. Cursor spoofing attack to steal webcam access
2. Double-click attack to steal user private data
3. Whack-a-mole attack to compromise web surfing anonymity

Cursor spoofing attack to steal webcam access The user is presented with an attack page. A fake cursor is programmatically rendered to provide false feedback of pointer location to the user, in which the fake cursor gradually shifts away from the hidden real cursor while the pointer is moving. A loud video ad plays automatically, leading the user to click on a "skip this ad" link. If the user moves the fake cursor to click on the skip link, the real click actually lands on the Adobe Flash Player webcam settings dialog that grants the site permission to access the user's webcam. The cursor hiding is achieved by setting the CSS cursor property to none, or a custom cursor icon that is completely transparent, depending on browser support.

Double-click attack to steal user private data A bait-and-switch double-click attack (Section 3.1.3) against the OAuth dialog for Google accounts, which is protected with X-Frame-Options. First, the attack page baits the user to perform a double-click on a decoy button. After the first click, the attacker switches in the Google OAuth pop-up window under the cursor right before the second click (the second half of the double-click). This attack can steal a user’s emails and other private data from the user’s Google account.

Whack-a-mole attack to compromise web surfing anonymity The authors ask the user to play a whack-a-mole game and encourage them to score high and earn rewards by clicking on buttons shown at random screen locations as fast as possible. Throughout the game, a fake cursor is used to control where the user's attention should be. At a later point in the game, the authors switch in a Facebook Like button at the real cursor's location, tricking the user into clicking on it. Immediately after the click, they check the list of Likes to obtain the profile of the clicking victim.

InContext Defense The authors propose a defense, InContext, to enforce context integrity of user actions on sensitive UI elements. When target display integrity and pointer integrity are satisfied, the system activates sensitive UI elements and delivers user input to them.

Guaranteeing Target Display Integrity Not every web page of a site contains sensitive operations susceptible to clickjacking. Since only the applications know which UI elements require protection, the authors let web sites indicate which UI elements or web pages are sensitive.

Several design alternatives for providing target display integrity: Strawman 1: CSS Checking. Let the browser check the computed CSS styles of elements (e.g. position, size, opacity, and z-index), and make sure the sensitive element is not overlaid by cross-origin elements. Unfortunately, this is not reliable, since new CSS properties keep being introduced and the check can miss ways of obscuring the element.

Strawman 2: Static reference bitmap. Let a web site provide a static bitmap of its sensitive element as a reference, and let the browser make sure the rendered sensitive element matches the reference. Unfortunately, different browsers may produce slightly differently rendered bitmaps from the same HTML code, and it also fails when sensitive elements contain animated content.

Design InContext enforces target display integrity by comparing the OS-level screenshot of the area that contains the sensitive element (what the user sees), and the bitmap of the sensitive element rendered in isolation at the time of user action (this design works well with dynamic aspects such as animations or moves in a sensitive UI element, unlike Strawman 2 above). If these two bitmaps are not the same, then the user action is canceled and not delivered to the sensitive element, and the browser triggers a new oninvalidclick event.
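
The core of this design is a bitmap equality test at click time; a highly simplified sketch, with lists of pixels standing in for real OS-level bitmaps and hypothetical callback names:

```python
def deliver_click(screenshot, reference, on_click, on_invalid_click):
    """screenshot: what the user actually saw (OS-level capture of the
    element's screen area); reference: the element rendered in isolation.
    Deliver the click only if the two match; otherwise fire oninvalidclick."""
    if screenshot == reference:
        return on_click()
    return on_invalid_click()
```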

The authors also enforce that a host page cannot apply any CSS transforms, such as zooming or rotating, that affect embedded sensitive elements: any such transformations will be ignored by InContext-capable browsers. This prevents malicious zooming attacks, which change visual context via zoom. They also disallow any transparency inside the sensitive element itself.

Guaranteeing Pointer Integrity
• No cursor customization: When a sensitive element is present, InContext disables cursor customization on the host page (which embeds the sensitive element) and on all of the host's ancestors, so that the user always sees the system cursor in the areas surrounding the sensitive element.
• Screen freezing around sensitive element: InContext "freezes" the screen (i.e. ignores rendering updates) around a sensitive UI element when the cursor enters the element.
• Muting: Sound could draw a user's attention away from their actions. To stop sound distractions, the authors mute the speakers when a user interacts with sensitive elements.
• Lightbox around sensitive element: Use a randomly generated mask to overlay all rendered content around the sensitive UI element whenever the cursor is within that element's area.
• No programmatic cross-origin keyboard focus changes: Once the sensitive UI element acquires keyboard focus, programmatic changes of keyboard focus by other origins are disallowed.

Ensuring Temporal Integrity
• UI delay: A click on the sensitive UI element is not delivered unless the element has been fully visible and stationary long enough.
• UI delay after pointer entry: Impose the delay not after changes to the visual context, but each time the pointer enters the sensitive element.
• Pointer re-entry on a newly visible sensitive element: An InContext-capable browser invalidates input events until the user explicitly moves the pointer from outside the sensitive element to inside it.
• Padding area around sensitive element: Add padding around the sensitive element to let the user distinguish whether the pointer is on the sensitive element.
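The timing rules above can be modeled as a predicate over event timestamps. A hedged sketch, assuming millisecond timestamps; the function name and the default thresholds are illustrative (the paper's experiments evaluate delays in roughly the 250–1000 ms range, not these exact defaults):

```python
# Sketch of InContext's temporal-integrity checks. A click on the
# sensitive element is delivered only if:
#   * the element has been fully visible and unchanged for at least
#     ui_delay_ms (UI delay),
#   * the pointer has been inside the element for at least
#     entry_delay_ms (UI delay after pointer entry), and
#   * the pointer entered the element AFTER it last became visible
#     (pointer re-entry on a newly visible element).

def temporal_integrity_ok(click_ms, visible_since_ms, pointer_entered_ms,
                          ui_delay_ms=250, entry_delay_ms=250):
    if pointer_entered_ms is None or pointer_entered_ms < visible_since_ms:
        return False  # no re-entry since the element appeared
    if click_ms - visible_since_ms < ui_delay_ms:
        return False  # element not visible/stationary long enough
    if click_ms - pointer_entered_ms < entry_delay_ms:
        return False  # pointer entered too recently (bait-and-switch)
    return True

# Element appeared at t=0, pointer entered at t=300, click at t=600: OK.
print(temporal_integrity_ok(600, 0, 300))    # True
# Attack: element pops up under a pointer already in place (no re-entry).
print(temporal_integrity_ok(600, 500, 300))  # False
```

Modeling all three rules as one predicate reflects the paper's point that the defenses are complementary: the UI delay alone does not stop whack-a-mole-style attacks, which the pointer entry delay addresses.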

Experiments Experimental Design
• In February 2012, the authors posted a Human Interactive Task (HIT) on Amazon's Mechanical Turk to recruit prospective participants.
• Each task consisted of a unique combination of a simulated attack and, in some cases, a simulated defense.
• 3,521 participants were assigned uniformly at random to one of 27 treatment groups.
• 370 users who had previously participated and 1,087 users who were not logged in to Facebook were excluded, leaving 2,064 participants.


• 10 treatment groups for the cursor-spoofing attacks, 4 for the double-click attacks, and 13 for the whack-a-mole attacks.

Experiments on Cursor-spoofing attacks They tested the efficacy of the cursor-spoofing attack page and of the pointer integrity defenses.

Timeout: Wait for the full ad video to end
Skip: Click on the "Skip ad" link
Quit: Quit the task with no pay
Attack success: Click on the webcam "Allow" button

Table 1: Results of the cursor-spoofing attack.

The authors' attack tricked 43% of participants into clicking a button that would grant webcam access. Several of their proposed defenses reduced the click rate to the level expected if no attack had occurred.

Experiments on Double-click attacks They tested the efficacy of the double-click timing attack and the defenses.

Timeout: The OAuth "Allow" button is not clicked within 2 seconds
Quit: Quit the experiment
Attack success: Click the OAuth "Allow" button

Table 2: Results of double-click attack.

43 out of 90 (47%) participants fell for the attack, which would have granted access to their personal Google data. Two of the authors' defenses stopped the attack completely.


Experiments on Whack-a-mole attacks They tested the efficacy of the fast-paced button-clicking attack, whack-a-mole. In an attempt to increase attack success rates, as a real attacker would, they offered a $100 performance-based prize to keep users engaged in the game.

Timeout: Did not click on the target button within 10 seconds
Attack success: Clicked on the "Like" button knowingly
On 1st mouseover: Clicked on the "Like" button right after the first mouseover of it (unknowingly)
Filter by survey: Excludes users who stated that they noticed the "Like" button and clicked on it knowingly
Combined defense: Pointer re-entry, appearance delay of 500 ms, display freezing, and padding (M)

Table 3: Results of the Whack-a-mole attack.

• In the whack-a-mole attack with timing, the "Like" button is switched to cover one of the game buttons at a moment chosen to anticipate the user's click. 80 out of 84 (95%) participants fell for the attack.
• Combining the timing technique with cursor spoofing, so that the game is played with a fake cursor, raised the attack's success rate to 83 out of 84 (98%) participants.
• Larger padding areas provide noticeably better clickjacking protection.
• There is a large discrepancy between the first-mouseover attack success and the survey-filtered attack success (many users answered their survey questions inaccurately).
• With the entry-delay defense (TE = 1000 ms) plus the combined defense (M = 20 px), 1 out of 71 (1%) participants fell for the attack.
• The pointer entry delay was crucial in reducing the first-mouseover success rate, the part of the attack's efficacy that could potentially be addressed by a clickjacking defense. It is therefore an important technique that should be included in a browser's clickjacking protection suite, alongside freezing with a sufficiently large padding area and the pointer re-entry protection. The pointer entry delay subsumes, and may be used in place of, the appearance delay. The only exception would be devices that have no pointer feedback; there, an appearance delay could still prove useful against a whack-a-mole-like touch-based attack.

Conclusion
• The authors devised new clickjacking variants that bypass existing defenses.
• They proposed InContext, a web browser or OS mechanism that ensures a user's action on a sensitive UI element is in context, having visual integrity and temporal integrity.
• Experiments showed that their attacks are highly effective.
• The InContext defense can be very effective against clickjacking attacks.
