Site Isolation: Process Separation for Web Sites Within the Browser
Charles Reis, Alexander Moshchuk, and Nasko Oskov, Google
https://www.usenix.org/conference/usenixsecurity19/presentation/reis

This paper is included in the Proceedings of the 28th USENIX Security Symposium, August 14–16, 2019, Santa Clara, CA, USA. ISBN 978-1-939133-06-9. Open access to the Proceedings of the 28th USENIX Security Symposium is sponsored by USENIX.

Abstract

Current production web browsers are multi-process but place different web sites in the same renderer process, which is not sufficient to mitigate threats present on the web today. With the prevalence of private user data stored on web sites, the risk posed by compromised renderer processes, and the advent of transient execution attacks like Spectre and Meltdown that can leak data via microarchitectural state, it is no longer safe to render documents from different web sites in the same process. In this paper, we describe our successful deployment of the Site Isolation architecture to all desktop users of Google Chrome as a mitigation for process-wide attacks. Site Isolation locks each renderer process to documents from a single site and filters certain cross-site data from each process. We overcame performance and compatibility challenges to adapt a production browser to this new architecture. We find that this architecture offers the best path to protection against compromised renderer processes and same-process transient execution attacks, despite current limitations. Our performance results indicate it is practical to deploy this level of isolation while sufficiently preserving compatibility with existing web content. Finally, we discuss future directions and how the current limitations of Site Isolation might be addressed.

1 Introduction

Ten years ago, web browsers went through a major architecture shift to adapt to changes in their workload. Web content had become much more active and complex, and monolithic browser implementations were not effective against the security threats of the time. Many browsers shifted to a multi-process architecture that renders untrusted web content within one or more low-privilege sandboxed processes, mitigating attacks that aimed to install malware by exploiting a rendering engine vulnerability [43, 51, 70, 76].

Given recent changes in the security landscape, that multi-process architecture no longer provides sufficient safety for visiting untrustworthy web content, because it does not provide similar mitigation for attacks between different web sites. Browsers load documents from multiple sites within the same renderer process, so many new types of attacks target rendering engines to access cross-site data [5, 10, 11, 33, 53]. This is increasingly common now that the most exploitable targets of older browsers are disappearing from the web (e.g., Java Applets [64], Flash [1], NPAPI plugins [55]).

As others have argued, it is clear that we need stronger isolation between security principals in the browser [23, 33, 53, 62, 63, 68], just as operating systems offer stronger isolation between their own principals. We achieve this in a production setting using Site Isolation in Google Chrome, introducing OS process boundaries between web site principals.

While Site Isolation was originally envisioned to mitigate exploits of bugs in the renderer process, the recent discovery of transient execution attacks [8] like Spectre [34] and Meltdown [36] raised its urgency. These attacks challenge a fundamental assumption made by prior web browser architectures: that software-based isolation can keep sensitive data protected within an operating system process, despite running untrustworthy code within that process. Transient execution attacks have been demonstrated to work from JavaScript code [25, 34, 37], violating the web security model without requiring any bugs in the browser. We show that our long-term investment in Site Isolation also provides a necessary mitigation for these unforeseen attacks, though it is not sufficient: complementary OS and hardware mitigations for such attacks are also required to prevent leaks of information from other processes or the OS kernel.

To deploy Site Isolation to users, we needed to overcome numerous performance and compatibility challenges not addressed by prior research prototypes [23, 62, 63, 68]. Locking each sandboxed renderer process to a single site greatly increases the number of processes; we present process consolidation optimizations that keep memory overhead low while preserving responsiveness. We reduce overhead and latency by consolidating painting and input surfaces of contiguous same-site frames, along with parallelizing process creation with network requests and carefully managing a spare process. Supporting the entirety of the web presented additional compatibility challenges. Full support for out-of-process iframes requires proxy objects and replicated state in frame trees, as well as updates to a vast number of browser features. Finally, a privileged process must filter sensitive cross-site data without breaking existing cross-site JavaScript files and other subresources. We show that such filtering requires a new type of confirmation sniffing and can protect not just HTML but also JSON and XML, beyond prior discussions of content filtering [23, 63, 68].
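The filtering idea described above — block a cross-site response only when its declared type is a protected document type and sniffing the body confirms that claim — can be illustrated with a toy model. This is a simplified sketch under assumed heuristics; the function names are hypothetical and the browser's real logic is considerably more involved.

```python
# Toy model of cross-site response filtering with confirmation
# sniffing. A response is blocked only when its declared Content-Type
# is a protected document type AND the body confirms that claim, so
# mislabeled scripts and other subresources still load.
import json
import re

PROTECTED_TYPES = {"text/html", "application/json",
                   "text/xml", "application/xml"}

def sniffs_as_html(body):
    # Confirm the body actually begins like an HTML document.
    return bool(re.match(r"\s*<(!doctype|html|head|body|!--)",
                         body, re.IGNORECASE))

def sniffs_as_json(body):
    try:
        json.loads(body)
        return True
    except ValueError:
        return False

def sniffs_as_xml(body):
    return body.lstrip().startswith("<?xml")

def should_block(content_type, body):
    """Decide whether to withhold a cross-site response body from a
    renderer process locked to a different site."""
    if content_type not in PROTECTED_TYPES:
        return False
    if content_type == "text/html":
        return sniffs_as_html(body)
    if content_type == "application/json":
        return sniffs_as_json(body)
    return sniffs_as_xml(body)
```

The confirmation step is what preserves compatibility: a JavaScript file mislabeled as text/html fails the sniff and is delivered, while a genuine HTML document with the same label is kept out of the cross-site process.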
With these changes, the privileged browser process can keep most cross-site sensitive data out of a malicious document's renderer process, making it inconsequential for a web attacker to access and exfiltrate data from its address space. While there are a set of limitations with its current implementation, we argue that Site Isolation offers the best path to mitigating the threats posed by compromised renderer processes and transient execution attacks.

In this paper, Section 2 introduces a new browser threat model covering renderer exploit attackers and memory disclosure attackers, and it discusses the current limitations of Site Isolation's protection. Section 3 presents the challenges we overcame in fundamentally re-architecting a production browser to adopt Site Isolation, beyond prior research browsers. Section 4 describes our implementation, consisting of almost 450k lines of code, along with critical optimizations that made it feasible to deploy to all desktop and laptop users of Chrome. Section 5 evaluates its effectiveness against compromised renderers as well as Spectre and Meltdown attacks. We also evaluate its practicality, finding that it incurs a total memory overhead of 9–13% in practice and increases page load latency by less than 2.25%, while sufficiently preserving compatibility with actual web content. Given the severity of the new threats, Google Chrome has enabled Site Isolation by default. Section 6 looks at the implications for the web's future and potential ways to address Site Isolation's current limitations. We compare to related work in Section 7 and conclude in Section 8.

Overall, we answer several new research questions:

• Which parts of a web browser's security model can be aligned with OS-level isolation mechanisms, while preserving compatibility with the web?

• What optimizations are needed to make process-level isolation of web sites feasible to deploy, and what is the

2 Threat Model

In this paper, we move to a stronger threat model emphasizing two different types of web attackers that each aim to steal data across web site boundaries. First, we consider a renderer exploit attacker who can discover and exploit vulnerabilities to bypass security checks or even achieve arbitrary code execution in the renderer process. This attacker can disclose any data in the renderer process's address space, as well as lie to the privileged browser process. For example, they might forge an IPC message to retrieve sensitive data associated with another web site (e.g., cookies, stored passwords). These attacks imply that the privileged browser process must validate access to all sensitive resources without trusting the renderer process. Prior work has shown that such attacks can be achieved by exploiting bugs in the browser's implementation of the Same-Origin Policy (SOP) [54] (known as universal cross-site scripting bugs, or UXSS), with memory corruption, or with techniques such as data-only attacks [5, 10, 11, 33, 53, 63, 68].

Second, we consider a memory disclosure attacker who cannot run arbitrary code or lie to the browser process, but who can disclose arbitrary data within a renderer process's address space, even when the SOP would disallow it. This can be achieved using transient execution attacks [8] like Spectre [34] and Meltdown [36]. Researchers have shown specifically that JavaScript code can manipulate microarchitectural state to leak data from within the renderer process [25, 34, 37]. While less powerful than renderer exploit attackers, memory disclosure attackers are not dependent on any bugs in web browser code. Indeed, some transient execution attacks rely on properties of the hardware that are unlikely to change, because speculation and other transient microarchitectural behaviors offer significant performance benefits. Because browser vendors cannot simply fix bugs to mitigate cases of these attacks, memory disclosure attackers
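The renderer exploit attacker above motivates a specific discipline: the browser process must check every request for site-keyed data against the requesting process's site lock, rather than trusting anything the renderer claims. A minimal sketch of that discipline follows; the class and method names are hypothetical and not Chrome's actual API.

```python
# Toy model of browser-process-side validation: each renderer process
# is locked to one site, and site-keyed data (here, cookies) is
# released only when the requester's lock matches the requested site.
class BrowserProcess:
    def __init__(self):
        self._locks = {}    # renderer pid -> site the process is locked to
        self._cookies = {}  # site -> cookie jar contents

    def lock_renderer(self, pid, site):
        self._locks[pid] = site

    def set_cookie(self, site, value):
        self._cookies[site] = value

    def get_cookies(self, pid, requested_site):
        # A compromised renderer can forge this request (an IPC in a
        # real browser), so we enforce the process lock instead of
        # trusting any claim the renderer makes about its own site.
        if self._locks.get(pid) != requested_site:
            raise PermissionError("process %d is not locked to %s"
                                  % (pid, requested_site))
        return self._cookies.get(requested_site, "")
```

Under this model, a process locked to one site that forges a request for another site's cookies is refused by the lock check alone, with no reliance on the renderer being honest.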