The UVM Virtual Memory System


THE ADVANCED COMPUTING SYSTEMS ASSOCIATION

The following paper was originally published in the Proceedings of the USENIX Annual Technical Conference, Monterey, California, USA, June 6-11, 1999.

The UVM Virtual Memory System
Charles D. Cranor and Gurudatta M. Parulkar, Washington University

© 1999 by The USENIX Association. All Rights Reserved. Rights to individual papers remain with the author or the author's employer. Permission is granted for noncommercial reproduction of the work for educational or research purposes. This copyright notice must be included in the reproduced paper. USENIX acknowledges all trademarks herein.

For more information about the USENIX Association: Phone: 1 510 528 8649; FAX: 1 510 548 5738; Email: [email protected]; WWW: http://www.usenix.org

The UVM Virtual Memory System

Charles D. Cranor and Gurudatta M. Parulkar
Department of Computer Science
Washington University
St. Louis, MO 63130
{chuck,guru}@arl.wustl.edu

Abstract

We introduce UVM, a new virtual memory system for the BSD kernel that has an improved design that increases system performance over the old Mach-based 4.4BSD VM system. In this paper we present an overview of both UVM and the BSD VM system. We focus our discussion on the design decisions made when creating UVM and contrast the UVM design with the less efficient BSD VM design. Topics covered include mapping, memory object management, anonymous memory and copy-on-write mechanisms, and pager design. We also present an overview of virtual memory based data movement mechanisms that have been introduced in BSD by UVM. We believe that the lessons we learned from designing and implementing UVM can be applied to other kernels and large software systems. Implemented in the NetBSD operating system, UVM will completely replace BSD VM in NetBSD 1.4.

1 Introduction

In this paper we introduce UVM¹, a new virtual memory system that replaces the 4.4BSD virtual memory system in BSD-based operating systems. UVM is the third generation BSD virtual memory system that improves the performance and reduces the complexity of the BSD kernel. UVM's improved performance is particularly useful to applications that make heavy use of VM features such as memory-mapped files and copy-on-write memory.

Versions of BSD prior to 4.4BSD used the old BSD-VAX virtual memory system, which was tightly bound to the VAX architecture and lacked support for memory-mapped files (mmap) and fine-grain data structure locking for multiprocessors. To address these issues, the old VAX-based VM system was replaced with a new VM system for 4.4BSD [12]. The 4.4BSD virtual memory system (BSD VM) is a modified version of the virtual memory system that was written for Carnegie Mellon University's Mach operating system [18]. The BSD VM system features a clean separation of machine-dependent functions, support for mmap, and fine-grain data structure locking suitable for multiprocessor systems.

¹ Note that "UVM" is a name, not an acronym.

1.1 Why Replace BSD VM?

The BSD VM system has four main drawbacks that contributed to our decision to replace it: complex data structures, poor performance, no virtual memory based data movement mechanisms, and poor documentation.

The data structures and functions used by BSD to manage memory are complex and thus difficult to maintain. This is especially true of the structures used to represent copy-on-write mappings of memory objects. As memory objects are copied using the copy-on-write mechanism [2] (e.g., during a fork) they are linked together in lists called object chains. If left unchecked, use of the copy-on-write mechanism can cause object chains to grow quite long. Long object chains are a problem for two reasons. First, long object chains slow memory search times by increasing the number of objects that need to be checked to locate a requested page. Second, long object chains are likely to contain inaccessible redundant copies of the same page of data, thus wasting memory. If the BSD VM system allows too many pages of memory to be wasted this way, the system's swap area will eventually become filled with redundant data and the system will deadlock. This condition is known as a swap memory leak deadlock. To avoid problems associated with long object chains, the BSD VM system attempts to reduce their length by using a complex collapse operation. To successfully collapse an object chain, the VM system must search for an object that contains redundant data and is no longer needed in the chain. If a redundant object is found, then it is either bypassed or discarded. Note that even a successfully collapsed object chain can still contain inaccessible redundant pages. The collapse operation can only repair swap memory leaks after they occur; it cannot prevent them from happening.
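To make the copy-on-write scenario concrete, here is a minimal user-level sketch of the two operations discussed above: a private (copy-on-write) file mapping and a fork. It uses only standard POSIX calls and does not depend on BSD VM internals; the file path is an arbitrary choice for the example, and any readable file would do.

```c
/*
 * Illustrative sketch: two common sources of copy-on-write memory.
 * Uses only standard POSIX calls; it does not touch BSD VM internals.
 */
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/services", O_RDONLY);   /* arbitrary readable file */
    if (fd < 0) { perror("open"); return 1; }

    /* MAP_PRIVATE asks for copy-on-write: modifications (if the mapping
     * were writable) would go to anonymous memory, not back to the file. */
    size_t len = 4096;
    char *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* fork() marks the parent's writable memory copy-on-write as well;
     * in BSD VM each such copy can add another link to an object chain. */
    pid_t pid = fork();
    if (pid == 0) {
        printf("child sees: %.10s\n", p);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    munmap(p, len);
    close(fd);
    return 0;
}
```

Each individual private mapping or fork is cheap on its own; the problem described above arises when repeated copy-on-write copies of the same objects accumulate into long object chains.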
BSD VM exhibits poor performance due to several factors. First, the overhead of object chain management slows down common operations such as page faults and memory mapping. Second, I/O operations in BSD VM are performed one page at a time rather than in more efficient multi-page clusters, thus slowing paging response time. Third, BSD VM's integration into the BSD kernel has not been optimized. For example, unreferenced memory-mapped file objects are cached at the I/O system (vnode) layer and redundantly at the virtual memory layer as well. Finally, several virtual memory operations are unnecessarily performed multiple times at different layers of the BSD kernel.

Another drawback of the BSD VM system is its lack of virtual memory based data movement mechanisms. While BSD VM does support the copy-on-write mechanism, it is not possible in BSD VM for the virtual memory system to safely share memory it controls with other kernel subsystems such as I/O and IPC without performing a costly data copy. It is also not possible for processes to easily exchange, copy, or share chunks of their virtual address space between themselves.

Finally, the BSD VM system is poorly documented. While some parts of the BSD kernel such as the networking and IPC system are well documented [20], the BSD VM system lacks such detailed documentation and is poorly commented. This has made it difficult for developers of free operating system projects such as NetBSD [17] to understand and maintain the 4.4BSD VM code.

1.2 The UVM Approach

One way to address the shortcomings of the BSD virtual memory system is to try to evolve the data structures and functions BSD inherited from Mach into a more efficient VM system. This technique has been successfully applied by the FreeBSD project to the BSD VM system. However, the improved VM system in FreeBSD still suffers from the object chaining model it inherited from BSD VM. An alternative approach is to reimplement the virtual memory system, reusing the positive aspects of the BSD VM design, replacing the parts of the design that do not work well, and adding new features on top of the resulting VM system. This is the approach we used for UVM.

UVM also introduces new virtual memory based data movement mechanisms into BSD; combined with I/O and IPC system changes currently under development, these mechanisms can reduce the kernel's data copying overhead.

UVM's source code is freely available under the standard BSD license. UVM was merged into the NetBSD operating system's master source repository in early 1998 and has proven stable on the architectures supported by NetBSD, including the Alpha, I386, M68K, MIPS, Sparc, and VAX. UVM will appear as the official virtual memory system of NetBSD in NetBSD release 1.4. A port of UVM to OpenBSD is also in progress.

In this paper we present the design and implementation of the UVM virtual memory system, highlighting its improvements over the BSD VM design. In Section 2 we present an overview of BSD VM and UVM. In Sections 3, 4, 5, 6, and 7 we present the design of UVM's map, object, anonymous memory, pager, and data movement facilities, respectively. In Section 8 we present the overall performance of UVM. Section 9 contains related work, and Section 10 contains our concluding remarks and directions for future research.
2 VM Overview

Both BSD VM and UVM can be divided into two layers: a small machine-dependent layer, and a larger machine-independent layer. The machine-dependent layer used by both BSD and UVM is called the pmap layer. The pmap layer handles the low-level details of programming a processor's MMU. This task consists of adding, removing, modifying, and querying the mappings of a virtual address or of a page of physical memory. The pmap layer has no knowledge of higher-level operating system abstractions such as files. Each architecture supported by the BSD kernel must have its own pmap module. Note that UVM was designed to use the same machine-dependent layer that BSD VM and Mach use. This allows pmap modules from those systems to be reused with UVM, thus reducing the overhead of porting a UVM-based kernel to a new architecture.

The machine-independent code is shared by all BSD-supported processors and contains functions that perform the high-level operations of the VM system. Such functions include managing a process' file mappings, requesting data from backing store, paging out memory when it becomes scarce, managing the allocation of physical memory, and managing copy-on-write memory. Figure 1 shows the five main abstractions that […]
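The pmap interface can be summarized by the four operations just listed: add, remove, modify, and query mappings. The following is a simplified, illustrative C sketch of what such a machine-dependent interface looks like; the function names echo the BSD pmap layer, but the exact signatures and type definitions here are assumptions made for this example and do not match the real 4.4BSD/NetBSD pmap interface in detail.

```c
/*
 * Simplified, illustrative sketch of a pmap-style machine-dependent
 * interface.  The names echo the BSD pmap layer, but the signatures and
 * types are invented for illustration; the real pmap interface differs.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uintptr_t vaddr_t;      /* virtual address                       */
typedef uintptr_t paddr_t;      /* physical address                      */
typedef unsigned  vm_prot_t;    /* protection bits (read/write/execute)  */

struct pmap;                    /* per-address-space MMU state (opaque)  */

/* Add: enter a virtual-to-physical mapping with the given protection. */
int  pmap_enter(struct pmap *pm, vaddr_t va, paddr_t pa, vm_prot_t prot);

/* Remove: drop all mappings in the virtual address range [sva, eva). */
void pmap_remove(struct pmap *pm, vaddr_t sva, vaddr_t eva);

/* Modify: lower the protection on an existing range of mappings. */
void pmap_protect(struct pmap *pm, vaddr_t sva, vaddr_t eva, vm_prot_t prot);

/* Query: look up the physical address backing a virtual address. */
bool pmap_extract(struct pmap *pm, vaddr_t va, paddr_t *pap);
```

Because the machine-independent layer calls the MMU only through an interface of this shape, a pmap module written for BSD VM or Mach can be reused under UVM when porting the kernel to a new architecture, as noted above.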