Memory Protection at Option

Total Pages: 16

File Type: PDF, Size: 1020 KB

Memory Protection at Option: Application-Tailored Memory Safety in Safety-Critical Embedded Systems

(German title: Speicherschutz nach Wahl: Auf die Anwendung zugeschnittene Speichersicherheit in sicherheitskritischen eingebetteten Systemen)

Submitted to the Faculty of Engineering of the University of Erlangen-Nürnberg for the degree of Doktor-Ingenieur by Michael Stilkerich, Erlangen, 2012. Accepted as a dissertation by the Faculty of Engineering, University of Erlangen-Nürnberg. Date of submission: 09.07.2012. Date of doctoral defense: 30.11.2012. Dean: Prof. Dr.-Ing. Marion Merklein. Reviewers: Prof. Dr.-Ing. Wolfgang Schröder-Preikschat, Prof. Dr. Michael Philippsen.

Abstract

With the increasing capabilities and resources available on microcontrollers, there is a trend in the embedded industry to integrate multiple software functions on a single system to save cost, size, weight, and power. This integration raises new requirements, among them the need for spatial isolation, which is commonly established with a memory protection unit (MPU) that can constrain access to the physical address space to a fixed set of address regions. MPU-based protection is limited in terms of available hardware, flexibility, granularity, and ease of use. Software-based memory protection can provide an alternative or a complement to MPU-based protection, but has received little attention in the embedded domain.

In this thesis, I evaluate the qualitative and quantitative advantages and limitations of MPU-based memory protection and of software-based protection built on a multi-JVM. I developed a framework composed of the AUTOSAR-OS-like operating system CiAO and KESO, a Java implementation for deeply embedded systems. The framework allows choosing among no memory protection, MPU-based protection, software-based protection, and a combination of the two; this decision can be made individually for each protection realm in the system. For both MPU-based and software-based protection, the framework provides different trade-offs between cost and the provided level of protection.

To achieve the configurability of MPU-based protection, I use aspect-oriented techniques to integrate the necessary changes into the operating system and the application. The configurability of software-based protection is based on static analyses in the Java compiler. The results of these analyses are also leveraged to improve the effectiveness of MPU-based protection by helping to determine private code and data items at a fine granularity, showing significant improvements over the mostly manual existing approach in CiAO. The framework is completed by an extension that offers a soft-migration approach for existing applications.

Using the control software of a quadrotor helicopter as an example, I evaluate the cost of using Java instead of the widespread C and C++ languages, as well as the qualitative and quantitative pros and cons of the different memory protection mechanisms. The evaluation shows that a Java implementation tailored to the application domain can provide performance competitive with C and C++, while offering the added value of comprehensive software-based memory safety. The comparison of MPU-based and software-based protection shows that either can be more efficient, depending on the type of application. The existence of both approaches is justified, and the two approaches complement each other in many respects, which makes a combination of the two feasible as well.
Zusammenfassung (Summary)

In the domain of deeply embedded, statically configured systems, a memory protection unit (MPU) is commonly used to spatially isolate different applications on a microcontroller. It restricts memory accesses to a limited number of address regions. MPU-based memory protection, however, shows limitations with respect to the choice of hardware, the flexibility and granularity of the protection it provides, and its ease of use. Software-based approaches offer an alternative or a complement, but have so far received little attention in the embedded domain.

This dissertation compares the qualitative and quantitative advantages and limitations of MPU-based memory protection and of memory protection based on a multi-JVM. Building on CiAO, an operating system with an AUTOSAR OS interface, and KESO, a Java runtime environment for deeply embedded systems, a framework was developed that allows a free choice between the two approaches as well as their combination. This decision can be made individually for each protection realm in the system. For both approaches, the framework supports different degrees of tuning to balance cost against the level of protection provided.

To realize configurable MPU-based memory protection, aspect-oriented techniques were used to introduce the necessary adaptations into the operating system and the application. The configurability of software-based protection rests on static program analyses in the Java compiler, whose results were also used to identify private program parts and data structures. This significantly increased the effectiveness of MPU-based memory protection compared with the existing, mostly manual approach in CiAO. An extension supporting a soft migration approach for existing applications completes the framework.

Using the control application of a quadrotor as an example, the costs of using Java as a replacement for the widespread languages C and C++ were evaluated, along with the qualitative and quantitative advantages and disadvantages of the different memory protection approaches. The results show that a Java runtime environment tailored to the application domain need not incur extra costs compared with C and C++, while offering the added value of comprehensive software-based memory safety. The comparison of the two protection approaches showed that either can be the more efficient one, depending on the application's characteristics. Both approaches have their justification and complement each other in many areas, so that their combination can also be worthwhile.

Acknowledgments

My years as a research assistant at FAU's system software group were some of the most fun yet. The product of those years, this thesis, would not have been possible in this form without the support and help of my great colleagues, to whom I want to express my gratitude. My professor Wolfgang Schröder-Preikschat enables and supports his doctoral students in pursuing their own research interests. It is the freedom and self-responsibility he grants to his students that make his group such a great environment to work in.
Together with Jürgen Kleinöder, I organized the exercises accompanying the systems programming lectures for some years; he enabled me to incorporate my own ideas into the course contents, and I enjoyed those years of teaching a lot. Some of my colleagues and former students contributed to parts of this thesis: Back when I was an undergraduate student, Christian Wawersich raised my interest in embedded systems and safety and brought me into the KESO project. Daniel Lohmann helped me throughout my research with invaluable discussions that often provided me with new perspectives on my research topics. Peter Ulbrich is the head of the I4Copter project; he not only provided me with the ideal evaluation scenario but also spent many hours helping me adapt and port it, and, unforgettably, he was very forgiving when I accidentally crashed the I4Copter into a building. Jens Schedel ported the I4Copter software to the AUTOSAR OS interface and thereby made it usable for my setup. Michael Strotz implemented my ideas on gradual software-based memory protection in the KESO project. Christoph Erhardt did outstanding work on the compiler core; his work provided an important piece of infrastructure for my thesis. My wife Isabella supported and motivated me in the challenging phases of thesis writing, and proofread and discussed the contents of this thesis with me. Finally, I would like to thank all my other colleagues who made working in the group such a great time.

Erlangen, December 2012

Contents

1 Introduction
  1.1 Motivation
    1.1.1 Electronic Control Units in the Automotive Industry
    1.1.2 Paradigm Shift Towards an Integrated Architecture
    1.1.3 New System-Software Requirements: Need for Isolation
  1.2 Problem Statement and Proposed Solution
  1.3 Broader Scope of this Work
  1.4 Structure of this Thesis
  1.5 Own Publications Related to this Thesis

2 State of the Art
  2.1 Levels of Memory Protection
    2.1.1 Sandboxing
    2.1.2 Memory Safety
    2.1.3 Type Safety
  2.2 Comparison Criteria
  2.3 Hardware-Based Memory Protection Approaches
    2.3.1 Coarse-Grained Approaches Without In-Memory Data Structures
    2.3.2 Caching Approaches with In-Memory Data Structures
    2.3.3 Discussion
  2.4 Software-Based Memory Protection Approaches
    2.4.1 Instruction-Set-Architecture-Level Approaches
    2.4.2 Compiler-Level Approaches
    2.4.3 Language-Level Approaches
    2.4.4 Discussion
Recommended publications
  • Virtual Memory
    Chapter 4: Virtual Memory. Linux processes execute in a virtual environment that makes it appear as if each process had the entire address space of the CPU available to itself. This virtual address space extends from address 0 all the way to the maximum address. On a 32-bit platform, such as IA-32, the maximum address is 2^32 - 1, or 0xffffffff. On a 64-bit platform, such as IA-64, it is 2^64 - 1, or 0xffffffffffffffff. While it is obviously convenient for a process to be able to access such a huge address space, there are really three distinct, but equally important, reasons for using virtual memory.
    1. Resource virtualization. On a system with virtual memory, a process does not have to concern itself with the details of how much physical memory is available or which physical memory locations are already in use by some other process. In other words, virtual memory takes a limited physical resource (physical memory) and turns it into an infinite, or at least an abundant, resource (virtual memory).
    2. Information isolation. Because each process runs in its own address space, it is not possible for one process to read data that belongs to another process. This improves security because it reduces the risk of one process being able to spy on another process and, e.g., steal a password.
    3. Fault isolation. Processes with their own virtual address spaces cannot overwrite each other's memory. This greatly reduces the risk of a failure in one process triggering a failure in another process. That is, when a process crashes, the problem is generally limited to that process alone and does not cause the entire machine to go down.
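As a quick illustration of the address-space sizes mentioned in the excerpt above (a generic C sketch, independent of the Linux specifics of the chapter), the maximum 32-bit and 64-bit addresses can be printed directly:

```c
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint32_t max32 = UINT32_MAX;   /* 2^32 - 1 = 0xffffffff         */
    uint64_t max64 = UINT64_MAX;   /* 2^64 - 1 = 0xffffffffffffffff */

    printf("32-bit maximum address: 0x%08" PRIx32 "\n", max32);
    printf("64-bit maximum address: 0x%016" PRIx64 "\n", max64);
    return 0;
}
```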
  • The Case for Compressed Caching in Virtual Memory Systems
    THE ADVANCED COMPUTING SYSTEMS ASSOCIATION. The following paper was originally published in the Proceedings of the USENIX Annual Technical Conference, Monterey, California, USA, June 6-11, 1999. The Case for Compressed Caching in Virtual Memory Systems, Paul R. Wilson, Scott F. Kaplan, and Yannis Smaragdakis, University of Texas at Austin. © 1999 by The USENIX Association. All Rights Reserved. Rights to individual papers remain with the author or the author's employer. Permission is granted for noncommercial reproduction of the work for educational or research purposes. This copyright notice must be included in the reproduced paper. USENIX acknowledges all trademarks herein. For more information about the USENIX Association: Phone: 1 510 528 8649, FAX: 1 510 548 5738, Email: [email protected], WWW: http://www.usenix.org
    The Case for Compressed Caching in Virtual Memory Systems. Paul R. Wilson, Scott F. Kaplan, and Yannis Smaragdakis, Dept. of Computer Sciences, University of Texas at Austin, Austin, Texas 78751-1182, {wilson|sfkaplan|smaragd}@cs.utexas.edu, http://www.cs.utexas.edu/users/oops/
    Abstract: Compressed caching uses part of the available RAM to hold pages in compressed form, effectively adding a new level to the virtual memory hierarchy. This level attempts to bridge the huge performance gap between normal (uncompressed) RAM and disk.
    In [Wil90, Wil91b] we proposed compressed caching for virtual memory: storing pages in compressed form in a main memory compression cache to reduce disk paging. Appel also promoted this idea [AL91], and it was evaluated empirically by Douglis [Dou93] and by Russinovich and Cogswell [RC96]. Unfortunately, Douglis's
  • Virtual Memory
    Virtual Memory. Copyright ©: University of Illinois CS 241 Staff.
    Address Space Abstraction
    • Program address space: all memory data, i.e., program code, stack, and data segment.
    • Hardware interface (physical reality): the computer has one small, shared memory.
    • Application interface (illusion): each process wants its own private, large memory. How can we close this gap?
    Address Space Illusions
    • Address independence: the same address can be used in different address spaces yet remain logically distinct.
    • Protection: one address space cannot access data in another address space.
    • Virtual memory: an address space can be larger than the amount of physical memory on the machine.
    Address Space Illusions: Virtual Memory
    • Illusion: a giant (virtual) address space, protected from others (unless you want to share it), with more memory whenever you want it.
    • Reality: many processes sharing one (physical) address space, with limited memory.
    • Today: an introduction to virtual memory, the memory space seen by your programs.
    Multi-Programming
    • Multiple processes are in memory at the same time.
    • Goals: (1) lay out processes in memory as needed, (2) protect each process's memory from accesses by other processes, (3) minimize performance overheads, (4) maximize memory utilization.
    OS and Memory Management
    • Main memory is normally divided into two parts: kernel memory and user memory.
    • In a multiprogramming system, the "user" part of main memory needs to be divided to accommodate multiple processes.
    • The user memory is dynamically managed by the OS (known as memory management) to satisfy the following requirements: protection, relocation, sharing, and memory organization (main mem.
  • Extracting Compressed Pages from the Windows 10 Virtual Store
    White paper: Extracting Compressed Pages from the Windows 10 Virtual Store.
    Abstract: Windows 8.1 introduced memory compression in August 2013. By the end of 2013, Linux 3.11 and OS X Mavericks leveraged compressed memory as well. Disk I/O continues to be orders of magnitude slower than RAM, whereas reading and decompressing data in RAM is fast and highly parallelizable across the system's CPU cores, yielding a significant performance increase. However, this came at the cost of increased complexity of process memory reconstruction and thus reduced the power of popular tools such as Volatility, Rekall, and Redline. In this document we introduce a method to retrieve compressed pages from the Windows 10 Memory Manager Virtual Store, thus providing forensics and auditing tools with a way to retrieve, examine, and reconstruct memory artifacts regardless of their storage location.
    Introduction: Windows 10 moves pages between physical memory and the hard disk or the Store Manager's virtual store when memory is constrained. Universal Windows Platform (UWP) applications leverage the Virtual Store any time they are suspended (as is the case when minimized). When a given page is no longer in the process's working set, the corresponding Page Table Entry (PTE) is used by the OS to specify the storage location as well as additional data that allows it to start the retrieval process. In the case of a page file, the retrieval is straightforward because both the page file index and the location of the page within the page file can be directly retrieved.
  • Using an MPU to Enforce Spatial Separation
    HighIntegritySystems. Using an MPU to Enforce Spatial Separation. Issue 1.3 - November 26, 2020. Copyright WITTENSTEIN aerospace & simulation ltd, date as document, all rights reserved. WITTENSTEIN high integrity systems. Americas: +1 408 625 4712, ROTW: +44 1275 395 600, Email: [email protected], Web: www.highintegritysystems.com
    Contents
    • Contents
    • List of Figures
    • List of Notation
    • Chapter 1: Introduction
      1.1 Introduction
      1.2 Use Case - An Embedded System
    • Chapter 2: System Architecture and its Effect on Spatial Separation
      2.1 Spatial Separation with a Multi-Processor System
      2.2 Spatial Separation with a Multi-Core System
      2.3 Spatial Separation
  • Strict Memory Protection for Microcontrollers
    Master Thesis, Spring 2019. Strict Memory Protection for Microcontrollers. Erlend Sveen. Supervisor: Jingyue Li. Co-supervisor: Magnus Själander. Sunday 17th February, 2019.
    Abstract: Modern desktop systems protect processes from each other with memory protection. Microcontroller systems, which lack support for memory virtualization, typically use memory protection only for areas that the programmer deems necessary and do not separate processes completely. As a result, the application still appears monolithic and only a handful of errors may be detected. This thesis presents a set of solutions for complete separation of processes, unleashing the full potential of the memory protection unit embedded in modern ARM-based microcontrollers. The operating system loads multiple programs from disk into independently protected portions of memory. These programs may then be started, stopped, modified, crashed, etc. without affecting other running programs. When loading programs, a new allocation algorithm is used that automatically aligns the memories for use with the protection hardware. A pager is written to satisfy additional run-time demands of applications, and communication primitives for inter-process communication are made available. Since every running process is unable to access other running processes, not only reliability but also security is improved. An application may be split so that unsafe or error-prone code is separated from mission-critical code, allowing it to be independently restarted when an error occurs. With executable and writeable memory access rights being mutually exclusive, code injection is made harder to perform. The solution is fully transparent to the programmer; all that is required is to split an application into sub-programs that operate largely independently.
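The "allocation algorithm that automatically aligns the memories for use with the protection hardware" mentioned above responds to a well-known constraint of ARMv7-M-style MPUs: a region's size must be a power of two (at least 32 bytes) and its base address must be aligned to that size. The C sketch below is not the thesis's algorithm, only a minimal illustration of those alignment rules:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Round a requested size up to a legal ARMv7-M MPU region size
 * (power of two, minimum 32 bytes). */
static size_t mpu_region_size(size_t requested)
{
    size_t size = 32;
    while (size < requested)
        size <<= 1;
    return size;
}

/* Align a candidate base address up to the region size, since an MPU
 * region's base address must be a multiple of its size. */
static uintptr_t mpu_region_base(uintptr_t candidate, size_t size)
{
    return (candidate + size - 1) & ~(uintptr_t)(size - 1);
}

int main(void)
{
    size_t    size = mpu_region_size(5000);              /* -> 8192 bytes   */
    uintptr_t base = mpu_region_base(0x20001234u, size); /* -> 0x20002000   */
    printf("region: base 0x%lx, size %zu bytes\n", (unsigned long)base, size);
    return 0;
}
```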
  • A Minimal PowerPC™ Boot Sequence for Executing Compiled C Programs
    Order Number: AN1809/D, Rev. 0, 3/2000. Semiconductor Products Sector Application Note. A Minimal PowerPC™ Boot Sequence for Executing Compiled C Programs. PowerPC Systems Architecture & Performance, [email protected].
    This document describes the procedures necessary to successfully initialize a PowerPC processor and begin executing programs compiled using the PowerPC embedded application binary interface (EABI). The items discussed in this document have been tested for MPC603e™, MPC750, and MPC7400 microprocessors. The methods and source code presented in this document may work unmodified on similar PowerPC platforms as well. This document contains the following topics:
    • Part I, "Overview," provides an overview of the conditions and exceptions for the procedures described in this document.
    • Part II, "PowerPC Processor Initialization," provides information on the general setup of the processor registers, caches, and MMU.
    • Part III, "PowerPC EABI Compliance," discusses aspects of the EABI that apply directly to preparing to jump into a compiled C program.
    • Part IV, "Sample Boot Sequence," describes the basic operation of the boot sequence and the many options of configuration, explains in detail a sample configurable boot and how the code may be modified for use in different environments, and discusses the compilation procedure using the supporting GNU build environment.
    • Part V, "Source Files," contains the complete source code for the files ppcinit.S, ppcinit.h, reg_defs.h, ld.script, and Makefile.
    This document contains information on a new product under development by Motorola. Motorola reserves the right to change or discontinue this product without notice. © Motorola, Inc., 2000. All rights reserved.
    Part I, Overview: The procedures discussed in this document perform only the minimum amount of work necessary to execute a user program.
  • HALO: Post-Link Heap-Layout Optimisation
    HALO: Post-Link Heap-Layout Optimisation. Joe Savage and Timothy M. Jones, University of Cambridge, UK. [email protected], [email protected]
    Abstract: Today, general-purpose memory allocators dominate the landscape of dynamic memory management. While these solutions can provide reasonably good behaviour across a wide range of workloads, it is an unfortunate reality that their behaviour for any particular workload can be highly suboptimal. By catering primarily to average and worst-case usage patterns, these allocators deny programs the advantages of domain-specific optimisations, and thus may inadvertently place data in a manner that hinders performance, generating unnecessary cache misses and load stalls. To help alleviate these issues, we propose HALO: a post-link profile-guided optimisation tool that can improve the
    1 Introduction: As the gap between memory and processor speeds continues to widen, efficient cache utilisation is more important than ever. While compilers have long employed techniques like basic-block reordering, loop fission and tiling, and intelligent register allocation to improve the cache behaviour of programs, the layout of dynamically allocated memory remains largely beyond the reach of static tools. Today, when a C++ program calls new, or a C program malloc, its request is satisfied by a general-purpose allocator with no intimate knowledge of what the program does or how its data objects are used. Allocations are made through fixed, lifeless interfaces, and fulfilled by
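The excerpt above contrasts general-purpose allocation (malloc/new) with layout-aware placement. As a rough illustration only, and not HALO's actual post-link, profile-guided mechanism, the following C sketch shows how a simple bump ("arena") allocator keeps objects that are allocated together contiguous in memory, which is one way co-accessed data can end up sharing cache lines:

```c
#include <stdio.h>
#include <stddef.h>

/* Toy bump allocator: objects carved back-to-back from one backing buffer
 * end up adjacent, unlike objects returned by a general-purpose malloc. */
typedef struct {
    char  *base;
    size_t used;
    size_t capacity;
} arena_t;

static void *arena_alloc(arena_t *a, size_t n)
{
    n = (n + 15u) & ~(size_t)15u;          /* keep 16-byte alignment */
    if (a->used + n > a->capacity)
        return NULL;                       /* out of arena space */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

int main(void)
{
    static char backing[1024];
    arena_t arena = { backing, 0, sizeof backing };

    /* Two nodes of a hypothetical hot data structure, placed adjacently. */
    int *hot_a = arena_alloc(&arena, sizeof *hot_a);
    int *hot_b = arena_alloc(&arena, sizeof *hot_b);
    printf("distance between objects: %td bytes\n",
           (char *)hot_b - (char *)hot_a);  /* 16: same or nearby cache line */
    return 0;
}
```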
  • Memory Protection in Embedded Systems (Lanfranco Lopriore)
    Memory protection in embedded systems. Lanfranco Lopriore, Dipartimento di Ingegneria dell'Informazione, Università di Pisa, via G. Caruso 16, 56126 Pisa, Italy. E-mail: [email protected]
    Abstract: With reference to an embedded system featuring no support for memory management, we present a model of a protection system based on passwords and keys. At the hardware level, our model takes advantage of a memory protection unit (MPU) interposed between the processor and the complex of the main memory and the input-output devices. The MPU supports both concepts of a protection context and a protection domain. A protection context is a set of access rights for the memory pages; a protection domain is a set of one or more protection contexts. Passwords are associated with protection domains. A process that holds a key matching a given password can take advantage of this key to activate the corresponding domain. A small set of protection primitives makes it possible to modify the composition of the domains in a strictly controlled fashion. The proposed protection model is evaluated from a number of salient viewpoints, which include key distribution, review and revocation, the memory requirements for storage of the information concerning protection, and the time necessary for key validation. Keywords: access right; embedded system; protection; revocation.
    1. Introduction: We shall refer to a typical embedded system architecture featuring a microprocessor interfacing both volatile and non-volatile primary memory devices, as well as a variety of input/output devices including sensors and actuators.
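To make the context/domain/password terminology above concrete, here is a minimal C data-layout sketch. All names, field widths, and the trivial password check are illustrative assumptions, not taken from Lopriore's paper (where key validation and domain management are considerably more involved):

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_PAGES   8            /* assumed page count, for illustration */
#define RIGHT_READ  0x1u
#define RIGHT_WRITE 0x2u

/* A protection context: a set of access rights for the memory pages. */
typedef struct {
    uint8_t rights[NUM_PAGES];
} context_t;

/* A protection domain: one or more contexts, guarded by a password. */
typedef struct {
    const context_t *contexts;
    unsigned         num_contexts;
    uint64_t         password;
} domain_t;

/* A process holding a key may activate a domain only if the key matches
 * the domain's password. */
static bool activate_domain(const domain_t *d, uint64_t key)
{
    return key == d->password;
}

int main(void)
{
    context_t io_ctx = { .rights = { [0] = RIGHT_READ | RIGHT_WRITE } };
    domain_t  io_dom = { &io_ctx, 1, 0xC0FFEE1234ull };

    printf("wrong key accepted: %d\n", activate_domain(&io_dom, 42));
    printf("right key accepted: %d\n", activate_domain(&io_dom, 0xC0FFEE1234ull));
    return 0;
}
```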
  • Chapter 4 Memory Management and Virtual Memory
    Chapter 4: Memory Management and Virtual Memory. Asst. Prof. Dr. Supakit Nootyaskool, IT-KMITL.
    Objectives
    • To discuss the principles of memory management.
    • To understand why memory partitions are important for system organization.
    • To describe the concepts of paging and segmentation.
    4.1 Differences in main memory in the system
    • The uniprogramming system (for example, DOS) allows only one program to be present in memory at a time, and all resources are provided to that program. Example: memory holds (1) the operating system (in kernel space) and (2) a single running program (in user space).
    • The multiprogramming system differs in that several programs may be present in memory at a time, and all resources must be organized and shared among them. Example: memory holds (1) the operating system (in kernel space), (2) running programs (in user space), and (3) programs waiting for signals to execute on the CPU (in user space). The multiprogramming system uses memory management to assign locations to the programs.
    4.2 Memory management terms
    • Frame: a fixed-length block of main memory.
    • Page: a fixed-length block of data that resides in secondary memory. A page of data may temporarily be copied into a frame of main memory.
    • Segment: a variable-length block of data that resides in secondary memory. A segment may temporarily be copied into an available region of main memory, or the segment may be divided into pages which can be individually copied into main memory.
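As a small worked example of the page/frame terminology above (the 4 KiB page size and the frame number are assumed values for illustration, not something the slides specify), a virtual address splits into a page number, which the OS maps to some frame, plus an offset reused unchanged within that frame:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u   /* assumed 4 KiB pages */

int main(void)
{
    uint32_t vaddr  = 0x00012ABCu;
    uint32_t page   = vaddr / PAGE_SIZE;   /* page number 0x12            */
    uint32_t offset = vaddr % PAGE_SIZE;   /* offset 0xABC within the page */

    /* If the OS placed page 0x12 in frame 0x55, the physical address is
     * frame * PAGE_SIZE + offset. */
    uint32_t frame = 0x55u;
    uint32_t paddr = frame * PAGE_SIZE + offset;

    printf("page 0x%x, offset 0x%x -> physical 0x%x\n", page, offset, paddr);
    return 0;
}
```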
  • Memory Management
    Memory Management. These slides are created by Dr. Huang of George Mason University. Students registered in Dr. Huang's courses at GMU can make a single machine-readable copy and print a single copy of each slide for their own reference as long as the slide contains the copyright statement, and the GMU facilities are not used to produce the paper copies. Permission for any other use, either in machine-readable or printed form, must be obtained from the author in writing. (CS471)
    Memory
    • A set of data entries indexed by addresses. (Figure: memory cells at addresses 0000 through 000F.)
    • Typically the basic data unit is the byte; in 32-bit machines, 4 bytes are grouped into words.
    • Have you seen those DRAM chips in your PC?
    Logical vs. Physical Address Space
    • The addresses used by the RAM chips are called physical addresses. In primitive computing devices, the address a programmer/processor uses is the actual address: when the process fetches byte 000A, the content of 000A is provided.
    • In advanced computers, the processor operates in a separate address space, called the logical address, or virtual address. A Memory Management Unit (MMU) is used to map logical addresses to physical addresses. Various mapping technologies are discussed; the MMU is a hardware component, and modern processors have their MMU on the chip (Pentium, Athlon, ...).
    Continuous Mapping: Dynamic Relocation
    • With a relocation value of 4000, when the processor wants byte 0010, the 4010th byte is fetched. (Figures: virtual memory space mapped into physical memory; MMU for dynamic relocation.)
    Segmented Mapping
    • Virtual memory space is mapped into physical memory in segments; obviously, a more sophisticated MMU is needed to implement this.
    Swapping
    • A process can be swapped temporarily out of memory to a backing store (a hard drive), and then brought back into memory for continued execution.
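The dynamic-relocation example above boils down to one addition plus a bounds check. The sketch below reproduces the slides' numbers (relocation base 4000, logical byte 0010 fetched from 4010); the limit value is an assumption added for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* Base-and-limit translation as performed by a simple relocation MMU. */
static int translate(uint32_t logical, uint32_t base, uint32_t limit,
                     uint32_t *physical)
{
    if (logical >= limit)
        return -1;                 /* out of bounds: protection fault */
    *physical = base + logical;
    return 0;
}

int main(void)
{
    uint32_t phys;
    /* Slide example: relocation base 4000, processor asks for byte 0010. */
    if (translate(0x0010, 0x4000, 0x1000, &phys) == 0)
        printf("fetched physical byte 0x%04X\n", phys);   /* 0x4010 */
    return 0;
}
```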
  • Arm System Memory Management Unit Architecture Specification
    Arm® System Memory Management Unit Architecture Specification. SMMU architecture version 3. Document number: ARM IHI 0070. Document version: D.a. Document confidentiality: Non-confidential. Copyright © 2016-2020 Arm Limited or its affiliates. All rights reserved.
    Release information:
    • 2020/Aug/31, version D.a: update with SMMUv3.3 architecture; amendments and clarifications.
    • 2019/Jul/18, version C.a: amendments and clarifications.
    • 2018/Mar/16, version C: update with SMMUv3.2 architecture; further amendments and clarifications.
    • 2017/Jun/15, version B: amendments and clarifications.
    • 2016/Oct/15, version A: first release.
    Proprietary Notice: This document is protected by copyright and other related rights and the practice or implementation of the information contained in this document may be protected by one or more patents or pending patent applications. No part of this document may be reproduced in any form by any means without the express prior written permission of Arm. No license, express or implied, by estoppel or otherwise to any intellectual property rights is granted by this document unless specifically stated. Your access to the information in this document is conditional upon your acceptance that you will not use or permit others to use the information for the purposes of determining whether implementations infringe any third party patents. THIS DOCUMENT IS PROVIDED "AS IS". ARM PROVIDES NO REPRESENTATIONS AND NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY, SATISFACTORY QUALITY, NON-INFRINGEMENT OR FITNESS FOR A PARTICULAR PURPOSE WITH RESPECT TO THE DOCUMENT. For the avoidance of doubt, Arm makes no representation with respect to, and has undertaken no analysis to identify or understand the scope and content of, patents, copyrights, trade secrets, or other rights.