Clickjacking: Attacks and Defenses

University of Cyprus Department of Computer Science Advanced Security Topics

Name: Soteris Constantinou
Instructor: Dr. Elias Athanasopoulos
Authors: Lin-Shung Huang, Alex Moshchuk, Helen J. Wang, Stuart Schechter, Collin Jackson (Carnegie Mellon University, Microsoft Research)

The definition of Clickjacking

• The term “Clickjacking” was coined by Robert Hansen and Jeremiah Grossman in 2008
• A malicious technique that deceives a web user into interacting, most likely by clicking, with something different from what the user believes they are interacting with
• Can send unauthorized commands or reveal confidential information while the victim is interacting with seemingly harmless web sites

Clickjacking

• The root cause of clickjacking: an attacker application presents a sensitive UI element of a target application out of context to a user (for example, by making the sensitive UI transparent), and hence the user is tricked into acting out of context

• Possible risks caused by clickjacking:
 User’s private data can be stolen
 An attacker can spy on a user through the webcam
 Web surfing anonymity can be compromised

Clickjacking Examples

Likejacking

• A web page attacker tricks users into clicking on a Facebook “Like” button by transparently overlaying it on top of an innocuous UI element, for example a “claim your free iPad” button
• It essentially presents a false visual context to the user

Types of Context Integrity

The attacker compromises the context integrity of another application’s UI components:

Visual Integrity
– What the user should see right before his/her sensitive action
– The sensitive UI element and pointer feedback (such as cursors) should be fully visible; these properties are known as target display integrity and pointer integrity, respectively

Temporal Integrity
– Refers to the timing of a user action; ensures that a user action at a particular time is intended by the user

Existing Attacks

We classify existing attacks according to three ways of forcing the user into issuing input commands out of context:

[1] Compromising target display integrity
The guarantee that users can fully see and recognize the target element before an input action

[2] Compromising pointer integrity
The guarantee that users can rely on cursor feedback to select locations for their input events

[3] Compromising temporal integrity
The guarantee that users have enough time to comprehend where they are clicking

[1] Compromising target display integrity

• Hiding the target element

– An attacker can make the target element transparent by wrapping it in a div container with a CSS opacity value of zero; to entice a victim to click on it, the attacker can draw a decoy under the target element by using a lower CSS z-index.

– The attacker may also completely cover the target element with an opaque decoy, but make the decoy unclickable by setting the CSS property pointer-events: none
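
As an illustration of the hiding technique described above (not taken from the paper), a minimal TypeScript/DOM sketch; the target URL, sizes, and positions are hypothetical:

```typescript
// Hypothetical attacker page (illustration only): a decoy is drawn under a
// transparent copy of the sensitive target, so clicks meant for the decoy
// actually land on the target. URL and geometry are made up.
const decoy = document.createElement("button");
decoy.textContent = "Claim your free iPad";
decoy.style.cssText = "position:absolute; left:100px; top:100px; z-index:1;";

const target = document.createElement("iframe");
target.src = "https://victim.example/like-button"; // hypothetical sensitive target
target.style.cssText =
  "position:absolute; left:100px; top:100px; width:120px; height:30px;" +
  "opacity:0; z-index:2; border:none;"; // invisible but clickable, above the decoy

document.body.append(decoy, target);
// Variant: draw an opaque decoy on top instead, with pointer-events:none,
// so clicks pass through the decoy to the target underneath.
```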

• Partial Overlays
– Visually confuse a victim by obscuring only a part of the target element
– Overlay other elements onto an iframe using the CSS z-index property or the Flash Window Mode wmode=direct property

• Cropping
– Wrapping the target element in a new iframe that uses carefully chosen negative CSS position offsets and size properties

[2] Compromising pointer integrity

• Hiding the real cursor & creating a fake one
– An attacker can display a fake cursor icon away from the actual pointer, a technique known as cursorjacking (see the sketch below)
– Victims misinterpret a click’s target

• Keyboard focus stealing using a strokejacking attack
– An attacker can embed the target element in a hidden frame, while asking users to type some text into a fake attacker-controlled input field
 As the victim is typing, the attacker can momentarily switch keyboard focus to the target element
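
A minimal, hypothetical sketch of the cursorjacking idea in TypeScript/DOM; the cursor image and offset are made up for illustration:

```typescript
// Illustrative cursorjacking sketch (not the paper's code): hide the real
// pointer and draw a fake cursor at a fixed offset, so the victim watches the
// fake cursor while real clicks land elsewhere. Image path and offset are made up.
document.body.style.cursor = "none"; // hide the real pointer on the attacker page

const fakeCursor = document.createElement("img");
fakeCursor.src = "cursor.png"; // hypothetical cursor image
fakeCursor.style.cssText = "position:fixed; pointer-events:none; z-index:9999;";
document.body.appendChild(fakeCursor);

const OFFSET_X = -300; // distance between the fake and the real cursor
document.addEventListener("mousemove", (e) => {
  fakeCursor.style.left = `${e.clientX + OFFSET_X}px`;
  fakeCursor.style.top = `${e.clientY}px`;
});
```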

[3] Compromising temporal integrity

Manipulate UI elements after the user has decided to click, but before the actual click occurs

Examples:
• An attacker could move the target element on top of a decoy button shortly after the victim hovers the cursor over the decoy, in anticipation of the click.

• To predict clicks more effectively, the attacker could ask the victim to repetitively click objects in a malicious game or to double-click on a decoy button, moving the target element over the decoy immediately after the first click
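
For illustration only, a sketch of the bait-and-switch timing step in TypeScript/DOM, assuming the attacker's page already contains a decoy button and a framed target with the hypothetical ids "decoy" and "target-frame":

```typescript
// Illustrative bait-and-switch sketch: after the first click of an anticipated
// double-click on the decoy, move the framed sensitive target under the pointer
// before the second click lands. Element ids are hypothetical.
const decoy = document.getElementById("decoy") as HTMLButtonElement;
const target = document.getElementById("target-frame") as HTMLIFrameElement;

decoy.addEventListener("click", () => {
  const rect = decoy.getBoundingClientRect();
  target.style.position = "fixed";
  target.style.left = `${rect.left}px`;
  target.style.top = `${rect.top}px`;
  target.style.zIndex = "10"; // the second click now hits the target instead of the decoy
});
```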

Consequences

There are two kinds of widespread clickjacking attacks in the wild:

• Tweetbomb & Likejacking: trick victims into clicking on Twitter Tweet or Facebook Like buttons using hiding techniques, causing a link to the attacker’s site to be reposted to the victim’s friends and thus propagating the link virally.

Many proof-of-concept clickjacking techniques have also been published:
• Attackers steal a user’s private data by hijacking a button on the approval pages of the OAuth protocol (an open standard for authorization)

• Several attacks target the Flash Player webcam settings dialog, allowing rogue sites to access the victim’s webcam and spy on the user

Existing anti-clickjacking defenses

• Protecting Visual Context
 User Confirmation: presents a confirmation prompt to users when the target element has been clicked
Drawbacks:
– Vulnerable to double-click timing attacks, which could trick the victim into clicking through both the target element and the confirmation popup
– Degrades user experience on single-click buttons

 UI Randomization: randomizes the target element’s UI layout, for example the position of the Pay button
Drawback:
– The attacker may ask the victim to keep clicking until successfully guessing the Pay button’s location

 Opaque Overlay Policy: forces all cross-origin frames to be rendered opaquely
Drawback:
– Removes all transparency from all cross-origin elements, breaking benign sites

 Framebusting: disallows the target element from being rendered in iframes (uses features such as X-Frame-Options and CSP’s frame-ancestors)
Drawback:
– Not applicable to elements that must be embeddable in third-party pages, such as the Facebook “Like” button
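
A minimal sketch of declarative framebusting via response headers in TypeScript; the helper below is hypothetical, and in practice these headers are configured in the web server or application framework:

```typescript
// Minimal sketch: a site opts out of being framed by sending the appropriate
// response headers. The helper is illustrative, not a specific framework's API.
function setFramebustingHeaders(headers: Headers): void {
  headers.set("X-Frame-Options", "DENY"); // legacy header understood by most browsers
  // CSP equivalent; use 'self' or an explicit allow-list instead of 'none'
  // if framing by specific origins is intended.
  headers.set("Content-Security-Policy", "frame-ancestors 'none'");
}

const headers = new Headers();
setFramebustingHeaders(headers);
console.log([...headers.entries()]);
```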

 Visibility Detection on Click: blocks mouse clicks if the browser detects that the clicked cross-origin frame is not fully visible
1. ClearClick: compares the bitmap of the clicked object on a given page to the bitmap of that object rendered in isolation
- May produce false positives in some cases

2. ClickIDS: alerts users only when the clicked element overlaps with other clickable elements
- Cannot detect attacks based on partial overlays or cropping
- May produce false positives in some cases

None of the above-mentioned defenses guarantees pointer integrity!

Existing anti-clickjacking defenses

• Protecting Temporal Context
 UI Delays: give users enough time to comprehend any UI changes by imposing a delay after displaying a dialog, so that users cannot interact with the dialog until the delay expires
Drawback:
– The length of the UI delay is a tradeoff between the user-experience penalty and protection from timing attacks

• Access Control Gadgets: trusted UI elements through which users grant applications permission to access user-owned resources such as the camera or GPS

New Attack Variants

The authors constructed and evaluated three attack variants using known clickjacking techniques:

[1] Cursor spoofing attack to steal webcam access

[2] Double-click attack to steal user private data

[3] Whack-a-mole attack to compromise web surfing anonymity

[1] Cursor spoofing attack to steal webcam access

The target Flash Player webcam settings dialog is at the bottom right of the page, with a “skip this ad” bait link some distance away from it.

There are two cursors displayed on the page: a fake cursor is drawn over the “skip this ad” link while the actual pointer hovers over the webcam-access “Allow” button.

[2] Double-click attack to steal user private data

Bait-and-switch attack: the attack page baits the user into performing a double-click on a decoy button. After the first click, the attacker switches in the Google OAuth pop-up window under the cursor right before the second click.

[3] Whack-a-mole attack to compromise web surfing anonymity

• The user is asked to click, as fast as possible, a “CLICK ME” button that appears at random screen locations
• Suddenly the attacker displays the target “Like” button underneath the actual pointer, and the user is tricked into clicking it

InContext Defense

 The authors propose a defense, InContext, that enforces context integrity of user actions on sensitive UI elements

 When both target display integrity and pointer integrity are satisfied, the system activates sensitive UI elements and delivers user input to them

Guaranteeing Target Display Integrity

 Not all web pages of a web site contain sensitive operations or are susceptible to clickjacking
 Web sites are therefore allowed to indicate which UI elements or web pages are sensitive

 Strawman 1: CSS Checking. The browser checks the computed CSS styles of elements and makes sure the sensitive element is not overlaid by cross-origin elements
Drawback:
– Not reliable and insufficient, since various techniques exist to bypass CSS checks and steal the topmost display

 Strawman 2: Static Reference Bitmap. The web site provides a static bitmap of its sensitive element as a reference, and the browser makes sure the rendered sensitive element matches the reference
Drawbacks:
– Different browsers render HTML to bitmaps differently
– Fails when sensitive elements contain animated content

Guaranteeing Target Display Integrity

• InContext compares the OS-level screenshot of the area that contains the sensitive element (what the user actually sees) with the bitmap of the sensitive element rendered in isolation at the time of the user action
• If these two bitmaps are not the same, the user action is canceled and not delivered to the sensitive element
• The browser then triggers a new "oninvalidclick" event
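
A conceptual TypeScript sketch of this bitmap comparison (the actual prototype is an Internet Explorer/COM implementation, not this code); how the two RGBA buffers are captured is platform-specific and is not shown:

```typescript
// Conceptual sketch of the display-integrity check. Both bitmaps are assumed
// to be same-sized RGBA buffers obtained by platform-specific means.
function bitmapsMatch(seen: Uint8ClampedArray, isolated: Uint8ClampedArray): boolean {
  if (seen.length !== isolated.length) return false;
  for (let i = 0; i < seen.length; i++) {
    if (seen[i] !== isolated[i]) return false; // any differing pixel breaks integrity
  }
  return true;
}

// On a user action: if bitmapsMatch(...) is false, cancel delivery of the input
// and raise an "oninvalidclick"-style event instead.
```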

Guaranteeing Pointer Integrity

 No cursor customization: InContext disables cursor customization on the host page and on all of the host’s ancestors, so that a user will always see the system cursor in the areas surrounding the sensitive element.

 Screen freezing around the sensitive element: freezes the screen (disables animations) around the sensitive element to avoid distracting the user’s attention

 Muting: a loud noise could cause a user to quickly look for a way to stop the noise; this approach therefore mutes the speakers when a user interacts with sensitive elements

 Lightbox around sensitive element: It uses a randomly generated mask to overlay all rendered content around the sensitive UI element whenever the cursor is within that element’s area

 No programmatic cross-origin keyboard focus changes: disallows programmatic changes of keyboard focus by other origins, to stop strokejacking attacks that steal keyboard focus

Ensuring Temporal Integrity

The bait-and-switch attack is similar to time-of-check-to-time-of-use (TOCTTOU) race conditions

 UI delay: a click on the sensitive UI element is not delivered unless the sensitive element has been fully visible and stationary for long enough (see the sketch after this list)

 UI delay after pointer entry: imposes the delay each time the pointer enters the sensitive element

 Pointer re-entry on a newly visible sensitive element: an InContext-capable browser invalidates input events until the user explicitly moves the pointer from outside the sensitive element to the inside.

 Padding area around the sensitive element: adds padding around the sensitive element to let the user determine whether or not the pointer is on the sensitive element
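
A hypothetical TypeScript sketch combining the UI-delay and pointer re-entry checks described in this list; the 500 ms delay matches the value used later in the experiments, and all function names are made up:

```typescript
// Hypothetical sketch of two temporal-integrity checks for a sensitive element:
// an appearance delay after the element becomes fully visible and stationary,
// and pointer re-entry after it (re)appears.
const DELAY_MS = 500;

let visibleSince = Number.POSITIVE_INFINITY; // when the element became fully visible and stationary
let pointerReEntered = false;                // pointer entered the element from outside

function onElementFullyVisible(now: number): void {
  visibleSince = now;
  pointerReEntered = false; // require a fresh pointer entry after (re)appearance
}

function onPointerEnter(): void {
  pointerReEntered = true;
}

function shouldDeliverClick(now: number): boolean {
  return pointerReEntered && now - visibleSince >= DELAY_MS;
}
```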

Prototype Implementation

 A prototype of InContext was built using Internet Explorer 9’s public COM interfaces.

 “IHTMLElementRender” was used in order to get isolated rendering result

 Performance of InContext: (measurement details omitted in the slides)

Experiments

 A Human Intelligence Task (HIT) was posted on Amazon’s Mechanical Turk to recruit prospective participants for the experiments
 Each task consisted of a unique combination of a simulated attack and, in some cases, a simulated defense

 3,521 participants were assigned uniformly at random to one of 27 treatment groups
 10 treatment groups for the cursor-spoofing attacks
 13 for the whack-a-mole attacks
 4 for the double-click attacks

Experiments on Cursor-spoofing attacks

 The attack tricked 43% of participants into clicking on a button that would grant webcam access
 Several defenses reduced the rate of clicking
Outcome categories:
 Timeout – watched the full ad video and was forwarded to the end of the task with no clicks
 Skip – clicked on the “Skip ad” link
 Quit – quit the task with no pay
 Attack success – webcam permission was successfully granted

Experiments on Double-click attacks

 The attack attempts to trick the user into clicking on the “Allow Access” button of a Google OAuth window by moving it underneath the user’s cursor after the first click of a double-click on a decoy button

 47% of users were attacked successfully, granting the attacker access to their personal Google data

 Timeout – the OAuth “Allow” button was not clicked within 2 seconds after the pop-up window was displayed
 Quit – quit the experiment
 Attack success – the OAuth “Allow” button was clicked

Experiments on Whack-a-mole attacks

 Timeout – the target button was not clicked within 10 seconds
 Attack success – clicked on the “Like” button
 On 1st mouseover – clicked on the “Like” button right after the first mouseover of the “Like” button
 Filter by survey – noticed the “Like” button (per the exit survey) and clicked on it

 Combined defense: pointer re-entry, an appearance delay (display freezing) of T=500ms, and a padding area of M=20px

Conclusion

• New clickjacking variants were devised which bypass existing defenses

• Introduced "InContext": a web browser or OS mechanism to ensure that a user’s action on a sensitive UI element is in context, having visual integrity and temporal integrity at the same time

• The experimental results showed that the attacks were highly effective against users

• The InContext defense can be significantly effective against clickjacking attacks

Robust Defenses for Cross-Site Request Forgery

University of Cyprus Department of Computer Science Advanced Security Topics

Name: Soteris Constantinou
Instructor: Dr. Elias Athanasopoulos
Authors: Adam Barth, Collin Jackson, John C. Mitchell (Stanford University)

Cross-Site Request Forgery (CSRF)

• CSRF is among the twenty most exploited security vulnerabilities of 2007

• A malicious site instructs a victim’s browser to send a request to an honest site, as if the request were part of the victim’s interaction with the honest site

• The attacker leverages the victim’s network connectivity and the browser’s state, such as cookies, to disrupt the integrity of the victim’s session with the honest site

Example Cases

• CSRF vulnerabilities were known, and in some cases exploited, in 2007:

• When a user visited a malicious site, the malicious site could generate a request to Gmail that Gmail treated as part of its ongoing session with the victim

• A web attacker exploited this CSRF vulnerability to inject an email filter into the victim’s Gmail account that forwarded the victim’s messages to the attacker. Because the victim’s domain registrar used email for account recovery, the attacker was able to take control of the victim’s domain name, leading to significant inconvenience and financial loss for the victim.

CSRF Defined

The attacker takes advantage of the user’s:

• Network Connectivity
If the user is behind a firewall, the attacker is able to leverage the user’s browser to send network requests to other machines behind the firewall that might not be directly reachable from the attacker’s machine

• Read Browser State
Requests sent via the browser’s network stack typically include browser state, such as cookies, client certificates, or basic authentication headers.

• Write Browser State
When the attacker causes the browser to issue a network request, the browser parses and acts on the response, which can modify browser state, for example by setting cookies.

In-Scope Threats

• Forum Poster
Many sites permit users to submit passive content, such as images or hyperlinks. If an attacker chooses the image’s URL maliciously, the network request the browser makes to fetch it might lead to a CSRF attack.

• Web Attacker
A malicious principal who owns a domain name, e.g. attacker.com, has a valid HTTPS certificate for attacker.com, and operates a server. If the user visits attacker.com, the attacker can mount a CSRF attack by instructing the user’s browser to issue cross-site requests using both the GET and POST methods.

• Network Attacker
A malicious principal who controls the user’s network connection, for example via an “evil twin” wireless router or a compromised DNS server.

Out-of-Scope Threats

CSRF defenses are complementary to defenses against these threats:

• Cross-Site Scripting (XSS)
If the attacker is able to inject script into a site’s security origin, the attacker can disrupt both the integrity and confidentiality of the user’s session with the site.

• Malware
If the attacker is able to run malicious software on the user’s machine, the attacker can compromise the user’s browser and inject script into the honest web site’s security origin.

• DNS Rebinding
Can be used to obtain network connectivity to a server of the attacker’s choice using the browser’s IP address.

Out-of-Scope Threats

• Certificate Errors
If the user is willing to click through HTTPS certificate errors, much of the protection afforded by HTTPS against network attackers evaporates.

• Phishing
These attacks occur when an attacker’s web site solicits authentication credentials from the user.

• User Tracking
Cross-site requests can be used by cooperating web sites to build a combined profile of a user’s browsing activities.

Login Cross-Site Request Forgery

• The attacker uses the victim’s browser to forge a cross-site request to the honest site’s login URL, supplying the attacker’s user name and password

• If the forgery succeeds, the honest server responds with a Set-Cookie header that instructs the browser to mutate its state by storing a session cookie, logging the user into the honest site as the attacker

• This session cookie is used to bind subsequent requests to the user’s session, and hence to the attacker’s authentication credentials
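
For illustration (not from the paper), a TypeScript/DOM sketch of the forged login request an attacker's page could issue; the honest site's login URL and field names are hypothetical:

```typescript
// Illustrative login-CSRF sketch (attacker's page). The response's Set-Cookie
// header stores a session cookie in the victim's browser, logging the victim
// in as the attacker.
const form = document.createElement("form");
form.method = "POST";
form.action = "https://honest.example/login"; // hypothetical login URL

for (const [name, value] of [["username", "attacker"], ["password", "attacker-pass"]]) {
  const field = document.createElement("input");
  field.type = "hidden";
  field.name = name;
  field.value = value;
  form.appendChild(field);
}

document.body.appendChild(form);
form.submit(); // forged cross-site login request
```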

Example Cases of Login CSRF

• Search History

– Many search engines (e.g. Yahoo!, Google) allow their users to opt in to saving their search history and provide an interface for a user to review his/her personal search history

– Search queries contain sensitive details about the user’s interests and activities and could be used by an attacker to embarrass the user, to steal the user’s identity, or to spy on the user

– An attacker can spy on a user’s search history by logging the user into the search engine as the attacker

Search History – Example Case

The victim visits the attacker’s site, and the attacker forges a cross-site request to Google’s login form, causing the victim to be logged into Google as the attacker. Later, the victim makes a web search, which is logged in the attacker’s search history.

Examples of Login CSRF

• PayPal
 The victim visits a malicious merchant’s site and chooses to pay using PayPal.
 As usual, the victim is redirected to PayPal and logs into his/her account.
 The merchant then silently logs the victim into the merchant’s own PayPal account (a login CSRF).
 To fund the purchase, the victim enrolls his/her credit card, but the card is actually added to the merchant’s PayPal account.

• iGoogle
– Google mitigated the iGoogle login-CSRF vulnerability in two ways:
1) Deprecated the use of inline gadgets
2) Deployed the secret validation token defense

Existing CSRF Defenses

[1] Secret Validation Token

[2] The Referer Header

[3] Custom HTTP Headers

[1] Secret Validation Token

– Sends additional information in each HTTP request that can be used to determine whether the request came from an authorized source

– If a request is missing a validation token or the token does not match the expected value, the server should reject the request

– Can defend against login CSRF, but developers often forget to apply the defense to the login request because, before login, there is no session to which to bind the CSRF token

[1] Secret Validation Token: Token Designs

• Session Identifier
– Uses the user’s session identifier as the secret validation token
 Disadvantage: users may reveal the contents of web pages that contain session identifiers to third parties

• Session-Independent Nonce
– The server generates a random nonce and stores it as a cookie when the user first visits the site
– On every request, the server validates that the token matches the value stored in the cookie
 Disadvantage: an active network attacker can overwrite the session-independent nonce with his/her own matching CSRF token value

• Session-Dependent Nonce
– Stores state on the server that binds the user’s CSRF token value to the user’s session identifier
– On every request, the server validates that the supplied CSRF token is associated with the user’s session identifier
 Disadvantage: the site must maintain a large state table in order to validate the tokens

• HMAC of Session Identifier (HMAC = Hash Message Authentication Code)
– Uses cryptography to bind the CSRF token to the session identifier
– If all of the site’s servers share the HMAC key, each server can validate that the CSRF token is correctly bound to the session identifier
– An attacker who learns a user’s token cannot infer the user’s session identifier
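
A minimal sketch of the HMAC token design in TypeScript, assuming Node's crypto module; the key handling and encodings are illustrative, not the paper's implementation:

```typescript
// Sketch of the HMAC token design: the CSRF token is an HMAC of the session
// identifier under a key shared by the site's servers, so no per-session
// server state is needed to validate it.
import { createHmac, timingSafeEqual } from "node:crypto";

const HMAC_KEY = "replace-with-a-secret-key"; // shared secret (placeholder)

function csrfTokenFor(sessionId: string): string {
  return createHmac("sha256", HMAC_KEY).update(sessionId).digest("hex");
}

function isValidToken(sessionId: string, token: string): boolean {
  const expected = Buffer.from(csrfTokenFor(sessionId), "hex");
  const supplied = Buffer.from(token, "hex");
  // timingSafeEqual requires equal lengths; compare in constant time otherwise.
  return expected.length === supplied.length && timingSafeEqual(expected, supplied);
}
```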

[2] The Referer Header

– Indicates which URL initiated the request
– Distinguishes a same-site request from a cross-site request because the header contains the URL of the site making the request
Disadvantages:
– Privacy concerns: the header can leak confidential information (e.g. the full URL of the referring page)
– The header can be suppressed or, in some cases, spoofed due to browser bugs

Strict & Lenient Referer Validation as a CSRF defense
– In lenient Referer validation, the site blocks requests whose Referer header has an incorrect value; if a request lacks the header, the site accepts the request. A web attacker can cause the browser to suppress the Referer header, bypassing lenient validation.

– In strict Referer validation, the site also blocks requests that lack a Referer header. This protects against malicious Referer suppression but incurs a compatibility penalty, as some browsers and network configurations suppress the Referer header for legitimate requests. Both variants are sketched below.
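
A small TypeScript sketch contrasting the two validation policies; the function and the siteOrigin parameter are hypothetical helpers, not code from the paper:

```typescript
// Sketch of lenient vs. strict Referer validation.
// siteOrigin would be something like "https://honest.example".
function refererOk(referer: string | undefined, siteOrigin: string, strict: boolean): boolean {
  if (referer === undefined || referer === "") {
    // Lenient validation accepts requests that lack the header;
    // strict validation rejects them, since attackers can cause suppression.
    return !strict;
  }
  try {
    return new URL(referer).origin === siteOrigin; // a present header must be same-site
  } catch {
    return false; // malformed Referer value
  }
}
```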

Experiment (how often the Referer header is suppressed)

• Design
– The authors used two advertisement networks
– They used two servers with two domain names to host the advertisement
– The advertisement generates a unique identifier and randomly selects the primary server
– The primary server sends the client HTML that issues a sequence of GET and POST requests to their servers, both over HTTP and HTTPS
– Requests are generated by submitting forms, requesting images, and issuing XMLHttpRequests
– The advertisement generates both same-domain requests to the primary server and cross-domain requests to the secondary server
– The servers logged the request parameters and recorded the value of the document.referrer DOM API

Experiment Results
• Over HTTP, the Referer header is suppressed more often for cross-domain requests (POST and GET)
• The Referer header is suppressed more often for HTTP requests than for HTTPS requests
• The Referer header is suppressed more often on Ad Network B than on Ad Network A for all types of requests

Table: Requests with a Missing or Incorrect Referer Header (283,945 observations). “x” and “y” represent the domain names of the primary and secondary web servers, respectively.

Experiment

Table: Requests with a Missing or Incorrect Referer Header on Ad Network A (241,483 observations).
• Opera blocks cross-site document.referrer for HTTPS
• Firefox 1.0 and 1.5 do not send Referer for XMLHttpRequest
• The PlayStation 3 does not support document.referrer
• Browsers that suppress the Referer header also suppress the document.referrer value, but when the Referer header is suppressed in the network, the document.referrer value is not suppressed

Experiment

• Conclusions
– In order to use the Referer header as a CSRF defense, a site must reject requests that omit the header, because an attacker can cause the browser to suppress it.

– Strict Referer validation is well suited as a CSRF defense over HTTPS, because only 0.05-0.22% of browsers suppress the header over HTTPS, and it is simple to implement

– Over HTTP, sites cannot afford to block requests that lack a Referer header, because they would cease to be compatible with roughly 3-11% of users

– The poor privacy properties of the Referer header hamper attempts to use the header for security over HTTP

[3] Custom HTTP Headers

Can be used to prevent CSRF because the browser prevents sites from sending custom HTTP headers to another site, but allows sites to send custom HTTP headers to themselves using XMLHttpRequest

– To use custom headers as a CSRF defense, a site must:
a) Issue all state-modifying requests using XMLHttpRequest
b) Attach a custom header (e.g. X-Requested-By)
c) Reject all state-modifying requests that are not accompanied by the header

Authors’ Defense Proposal: The Origin Header

• Origin header: browsers are modified to send an Origin header with POST requests that identifies the origin that initiated the request. If the browser cannot determine the origin, it sends the value null.

– The Origin header improves on the Referer header by respecting the user’s privacy
 The Origin header includes only the information required to identify the principal that initiated the request (scheme, host, port)
 The Origin header does not contain the path or query portions of the URL
 The Origin header is sent only for POST requests, whereas the Referer header is sent for all requests

– Server Behavior
 All state-modifying requests, including login requests, must be sent using the POST method
 The server must reject any request whose Origin header contains an undesired value or null (a sketch follows below)
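
A hypothetical TypeScript sketch of this server-side check (the authors' actual server component was a ModSecurity rule for Apache); how to treat legacy browsers that send no Origin header at all is a policy decision and is only noted in a comment:

```typescript
// Sketch of the server-side Origin check for state-modifying requests.
const ALLOWED_ORIGINS = new Set(["https://honest.example"]); // hypothetical

function allowStateModifyingRequest(method: string, origin: string | undefined): boolean {
  if (method !== "POST") return false;   // state-modifying requests must use POST
  if (origin === undefined) return true; // legacy browser without Origin support: a policy decision
  if (origin === "null") return false;   // reject the null origin
  return ALLOWED_ORIGINS.has(origin);    // reject undesired origins
}
```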

Origin Header: Security Analysis

 Rollback and Suppression
A supporting browser will always include the Origin header when making POST requests.

 DNS Rebinding
Sites that rely only on network connectivity for authentication should use one of the DNS rebinding defenses, such as validating the Host header.

 Plug-ins
If a site opts into cross-site HTTP requests, an attacker can use Flash Player to set the Origin header in cross-site requests. Sites should not opt into cross-site HTTP requests from untrusted origins.

Origin Header

 Adoption
The Origin header improves on and unifies other similar proposals and has been adopted by several working groups

 Implementation
The authors implemented both the browser and server components of the Origin header CSRF defense
 Browser side: WebKit, Safari, Firefox
 Server side: ModSecurity (for Apache)

Vulnerabilities of Session Initialization

• Login CSRF is an example of a more general class of vulnerabilities in session initialization

 Authenticated as User

 Authenticated as Attacker – e.g. Login CSRF, PayPal

• Two common approaches for mounting an attack on session initialization:
– HTTP Requests
– Cookie Overwriting

Vulnerabilities of Session Initialization (HTTP Requests)

HTTP Requests – OpenID

• The OpenID protocol recommends that sites include a self-signed nonce to protect against replay attacks, but does not suggest a mechanism to bind the OpenID session to the user’s browser

1) The web attacker visits the Relying Party (e.g. Blogger) and begins the authentication process with the Identity Provider (e.g. Yahoo!)
2) In the final step, the Identity Provider redirects the attacker’s browser to the “return_to” URL of the Relying Party
3) Instead of following the redirect, the attacker directs the user’s browser to the “return_to” URL
4) The Relying Party completes the OpenID protocol and stores a session cookie in the user’s browser
5) The user is now logged in as the attacker

• Defense
The Relying Party should generate a fresh nonce at the start of the protocol, store the nonce in the browser’s cookie store, and include the nonce in the “return_to” parameter of the OpenID protocol.
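
A hypothetical TypeScript sketch of this nonce binding, assuming Node's crypto module; the cookie name, parameter name, and function names are made up for illustration:

```typescript
// Hypothetical sketch: the Relying Party stores a fresh nonce as a cookie and
// echoes it in return_to, so only the browser that started the login can finish it.
import { randomBytes } from "node:crypto";

function startLogin(returnToBase: string): { setCookie: string; returnTo: string } {
  const nonce = randomBytes(16).toString("hex");
  return {
    setCookie: `openid_nonce=${nonce}; Secure; HttpOnly; Path=/`,
    returnTo: `${returnToBase}?nonce=${nonce}`,
  };
}

function finishLogin(nonceFromUrl: string | undefined, nonceFromCookie: string | undefined): boolean {
  // Complete session initialization only if the nonce in return_to matches the cookie.
  return !!nonceFromUrl && nonceFromUrl === nonceFromCookie;
}
```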

HTTP Requests – PHP Cookieless Authentication

• Stores the user’s session identifier in a query parameter
• Fails to bind the session to the user’s browser, letting a web attacker force the user’s browser to initialize a session authenticated as the attacker:
1) The web attacker logs into the honest web site
2) The web attacker redirects the user’s browser to the URL currently displayed in the attacker’s location bar
3) Because this URL contains the attacker’s session identifier, the user is now logged in as the attacker

• Defense
A site could maintain a long-lived frame that contains the session identifier token. This frame binds the session to the user’s browser by storing the session identifier in memory.

Vulnerabilities of Session Initialization (Cookie Overwriting)

• Cookie Overwriting

 A server can include the Secure flag in the Set-Cookie header to instruct the browser that the cookie should be sent only over HTTPS connections.
 The Secure flag does not offer integrity protection in the cross-scheme threat model.
 An active network attacker can supply a Set-Cookie header over an HTTP connection to the same host name as the site and install either a Secure or a non-Secure cookie of the same name.
 If the Secure cookie contains the user’s session identifier, the attacker can mount an attack on session initialization simply by overwriting the user’s session identifier with his/her own session identifier.

• Defense
A proposed Cookie-Integrity header in HTTPS requests identifies which of the attached cookies were set over HTTPS, so the server can ignore cookies that may have been overwritten over HTTP.

Conclusions

• Login CSRF
– Use strict Referer validation (login forms typically submit over HTTPS, where the Referer header is reliably present for legitimate requests)
– If a login request lacks a Referer header, the site should reject the request to defend against malicious suppression
• HTTPS
– For sites served over HTTPS (e.g. banking sites), strict Referer validation is recommended
• Third-Party Content
– Sites that allow users to submit content such as images and hyperlinks should use a framework that implements secret token validation correctly
• Origin Header
– Eliminates the privacy concerns that led to Referer blocking
– Protects both HTTPS and non-HTTPS requests