Chapter 9. Address Translation and Virtual Memory

You step in the stream
But the water has moved on.
Page not found.
    — Computer Haiku error message

The Intel386 and later models include an on-chip paging memory-management unit (MMU). Paging occurs after the logical address has been resolved to a linear address. If paging is enabled, the linear address is split into a page frame number and an offset, run through the page tables, and the result is sent to the bus interface unit of the CPU. This process creates a virtual address space.

The first section describes how a logical address (selector and offset) is converted to a physical address. It starts with an overview of the whole process, then explains the two steps in detail. The next section describes the paging system. When a page fault occurs, the offending linear address is stored in CR2. The chapter ends with information about setting up virtual address spaces. Many OS textbooks describe the Intel segmentation-and-paging system; you might want to reference those texts as well.

The Address Conversion Process

All addresses on a Pentium CPU begin as logical addresses consisting of a selector and an offset. The CPU sends a physical address to memory. When paging is enabled, each address goes through two conversions; a protected-mode, virtual-memory OS uses both.

First, the selector is used to index into a descriptor table, usually the Global Descriptor Table (GDT). The descriptor provides a base address, which is added to the offset from the original logical address. This sum is the linear address. The offset is also compared against the segment's limit value to ensure it is within bounds. See Figure 9-1 for a picture of this.

With paging enabled (bit 31 in CR0 set), the linear address is really two values: a page frame number (PFN) and an offset within that page. The Intel Pentium uses a two-level scheme for page numbers, so the PFN is actually a directory index and a page table index. This scheme reduces the number of page tables required when there are "holes" (unmapped areas) in the address space. This way a very large address space can be supported with a small amount of physical RAM. Each page entry is 4 bytes.

The descriptor table and the page tables are all located in system memory. To complete one memory access for a program, the CPU must actually read the descriptor from memory, a page directory entry, and a page table entry. So for every program memory access, the CPU must perform three additional accesses, which would really slow down any program. For that reason, the CPU caches as much of this information as it can on chip.

In normal operation, only three or so selectors are in use. When a selector register is first loaded, the CPU checks that the descriptor is valid and, if so, loads its contents into the selector's cache storage (these registers are hidden from the programmer). When the CPU adds the segment's base address to the offset in the logical address, both values are already inside the CPU.

Even though each page table is 4K, the CPU doesn't need to read the whole thing to translate a linear address. It needs only one page entry from the page directory (top level) and one entry from the page table (second level). A translation lookaside buffer (TLB) is used to cache these entries; it remembers recent page entries. Each time a new set of page tables is used (e.g., each address space has its own set), this cache must be flushed (i.e., emptied). This is done automatically by the CPU when CR3 (the page directory base register) is loaded. After the TLB is flushed (i.e., "cold"), the next few memory accesses will incur many extra memory cycles.

[Figure 9-1: Overview of Segmentation and Paging — a logical address (selector, offset) passes through the Global Descriptor Table (from GDTR) to form a linear address, then through the page directory and page table entries to form a physical address.]
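To make the field split concrete, here is a small, hypothetical C fragment (not part of SPEDE or FLAMES) that pulls the directory index, page table index, and page offset out of a 32-bit linear address, using the 10/10/12 bit layout described later in this chapter. The macro names are invented for illustration; nothing here touches real page tables, it only shows which bits go where.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative field split of a 32-bit linear address under 4K paging:
 * bits 31-22 = page directory index, bits 21-12 = page table index,
 * bits 11-0  = offset within the page frame. */
#define DIR_INDEX(la)    (((uint32_t)(la) >> 22) & 0x3FFu)
#define TABLE_INDEX(la)  (((uint32_t)(la) >> 12) & 0x3FFu)
#define PAGE_OFFSET(la)  ((uint32_t)(la) & 0xFFFu)

int main(void)
{
    uint32_t linear = 0x00403ABCu;   /* an arbitrary example address */
    printf("dir=%u table=%u offset=0x%03X\n",
           DIR_INDEX(linear), TABLE_INDEX(linear), PAGE_OFFSET(linear));
    return 0;
}
```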
♦ Segmentation

Figure 9-2 below shows how a segmented protected-mode logical address becomes a linear address. Open arrows in the figure indicate base addresses.

Each descriptor has four fields of primary interest. First is the type information defining it as a code or data segment. Second are the access (permission) bits, which state whether the whole segment can be written (if data) or executed (if code). Third is the base linear address of the segment, and last is the size, or limit, of the segment. For 159, all the segments are set up with a base address of zero, so all addresses point to the same place in the address space. The limit is set to 4GB, so it won't get in your way. All of this is done by the boot loader, before FLAMES runs.

The CPU register GDTR (global descriptor table register) supplies a base address and limit for the descriptor table. Using the selector's upper 13 bits, a descriptor is selected, and its base and limit fields are examined. If the limit is exceeded, a general protection fault occurs. This aborts the memory access and terminates the instruction, but the saved EIP will point to the faulting instruction so it can be retried once the OS has recovered from the general protection fault. Note that the LDTR holds a selector, not a pointer value; its base and limit come from the descriptor it indicates.

There are a couple of ways an incorrect segment reference can be caught. First, the descriptor index must lie within the descriptor table. Bit 2 of the selector is the table indicator, and determines whether the GDT (zero) or the LDT (one) is used. The segment might also be accessed in an invalid manner, e.g., writing to a code segment. All of these conditions will generate a general protection fault.

[Figure 9-2. Logical to Linear Address Translation (first part) — the selector's 13-bit index picks a code or data descriptor from the GDT or LDT (8,192 entries each, chosen by the TI bit); the segment's base is added to the 32-bit offset to form the linear address, and the offset is compared against the segment's limit (offset >= limit raises a fault).]
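The segmentation step can be sketched in C roughly as follows. This is a model, not the real mechanism: the descriptor structure, the gdt/ldt arrays, and logical_to_linear are invented names, the limit is treated as a plain byte count, and real GDT/LDT entries are packed 8-byte structures that the CPU decodes in hardware when a selector register is loaded.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, already-decoded view of a descriptor: just the fields
 * this chapter cares about. */
struct descriptor {
    uint32_t base;    /* linear base address of the segment */
    uint32_t limit;   /* segment size in bytes (simplified)  */
};

static struct descriptor gdt[8192];   /* global descriptor table (model) */
static struct descriptor ldt[8192];   /* local descriptor table (model)  */

/* Translate a logical address (selector:offset) to a linear address.
 * Returns 0 on success, or -1 where the CPU would raise a general
 * protection fault. */
static int logical_to_linear(uint16_t selector, uint32_t offset,
                             uint32_t *linear)
{
    uint16_t index = selector >> 3;         /* upper 13 bits: descriptor index */
    int use_ldt    = (selector >> 2) & 1;   /* bit 2: table indicator (0 = GDT) */
    struct descriptor *d = use_ldt ? &ldt[index] : &gdt[index];

    if (offset >= d->limit)                 /* limit check: out of bounds */
        return -1;

    *linear = d->base + offset;             /* add segment base and offset */
    return 0;
}

int main(void)
{
    uint32_t linear;
    gdt[1] = (struct descriptor){ .base = 0, .limit = 0xFFFFFFFFu }; /* flat segment */
    if (logical_to_linear(0x08, 0x1234, &linear) == 0)  /* selector 0x08 = GDT entry 1 */
        printf("linear = 0x%08X\n", linear);
    return 0;
}
```

With a base of zero and a 4GB limit, as set up by the boot loader here, the linear address simply equals the offset.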
♦ How the Page Tables Work

This section describes how a linear address is translated through the page tables to generate a physical address. The page directory and all the page tables are stored in main memory. If paging is disabled, the linear address is emitted from the CPU as the physical address. If paging is enabled, the two-level page tables are referenced.

As shown in Figure 9-3 below, the linear address is chopped into three fields (described next). Two of those fields index into page tables holding 1024 page table entries (PTEs) each. Each PTE contains a physical base address and some status bits. Twenty bits form the base address used at the next level down: the entry from the page directory locates a page table, and the entry from the page table provides the upper 20 address bits of the page frame. The CPU caches portions of these tables in the translation lookaside buffer (TLB); once it has cached the two entries for a translation, it can access a 4K chunk of linear memory without reading those entries again.

GENERATING A PHYSICAL ADDRESS

The base address of the segment is added to the offset from the memory reference to generate a linear address. If paging is enabled, the CPU's memory interface unit gets a chance to change this address. The linear address is split into three pieces. The top two fields are used as index values into the page tables for the current address space. The page tables form a sparse, two-level, 1024-ary tree, anchored by the CPU's CR3 register (the page directory base register). The upper 10 bits are combined with CR3 to find the appropriate page directory entry: address bits 31 to 22 index into the directory to get a page table pointer. Address bits 21 to 12 then select the page table entry holding the frame's base address. This base is combined with the lower 12 bits (the page offset) to get a physical address inside the page frame.

[Figure 9-3. Logical to Physical Address Translation (second part) — the linear address is split into a page directory index (10 bits), a page table index (10 bits), and a page frame offset (12 bits); the page directory is located through PDBR (CR3), and the frame's base address is combined with the offset to form the physical address.]

Each index is 10 bits, so it can select among 1024 page entries. Each page entry is 4 bytes, so each page table is 4K bytes in size — which is also the size of a page frame!

To summarize: when paging is enabled, the two-level page tables are referenced. The upper ten bits index into the page directory. Each page table entry (PTE) contains a physical base address and some status bits. Twenty bits form this physical base address, and they are combined with the lower twelve bits of the linear address (a perfect match) to finally generate the physical address. (The status bits are masked out when forming an address.) If either the page directory entry or the PTE is marked not present, a page fault will occur, and register CR2 will contain the virtual address that caused the fault.
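To tie the pieces together, the sketch below walks a hypothetical in-memory copy of the two-level tables. It is a software model under stated assumptions, not the hardware mechanism: physical memory is modeled as an array of 32-bit words (mem), the upper 20 bits of each entry are treated as physical byte addresses into that array, and the function returns an error where the real CPU would load CR2 and raise a page-fault exception. translate, mem, and fake_cr2 are invented names.

```c
#include <stdint.h>

#define PTE_PRESENT  0x001u         /* bit 0 of a directory or table entry */
#define BASE_MASK    0xFFFFF000u    /* upper 20 bits: frame/table base     */

static uint32_t fake_cr2;           /* models CR2 holding the fault address */

/* Walk the two-level tables rooted at 'cr3' (page directory base) and
 * translate 'linear' to a physical address. 'mem' models physical memory
 * as 32-bit words, so byte address A lives at mem[A / 4]. Returns 0 on
 * success, -1 where the CPU would take a page fault. */
int translate(const uint32_t *mem, uint32_t cr3, uint32_t linear,
              uint32_t *physical)
{
    uint32_t dir_index   = (linear >> 22) & 0x3FFu;  /* bits 31-22 */
    uint32_t table_index = (linear >> 12) & 0x3FFu;  /* bits 21-12 */
    uint32_t offset      =  linear        & 0xFFFu;  /* bits 11-0  */

    const uint32_t *dir = &mem[(cr3 & BASE_MASK) / 4];
    uint32_t pde = dir[dir_index];                   /* page directory entry */
    if (!(pde & PTE_PRESENT)) {
        fake_cr2 = linear;                           /* CR2 gets the faulting address */
        return -1;
    }

    const uint32_t *table = &mem[(pde & BASE_MASK) / 4];
    uint32_t pte = table[table_index];               /* page table entry */
    if (!(pte & PTE_PRESENT)) {
        fake_cr2 = linear;
        return -1;
    }

    *physical = (pte & BASE_MASK) | offset;          /* frame base + page offset */
    return 0;
}
```

On the real CPU this walk is performed by the hardware, and its results are cached in the TLB as described above, so most accesses never re-read the directory or table entries.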