Virtual Address Space and Physical Memory Allocation
Memory Management
  5A. Memory Management and Address Spaces
  5B. Simple Memory Allocation
  5C. Dynamic Allocation Algorithms
  5D. Advanced Allocation Techniques
  5G. Errors and Diagnostic Free Lists
  5F. Garbage Collection

Memory Management
1. allocate/assign physical memory to processes
   – explicit requests: malloc (sbrk)
   – implicit: program loading, stack extension
2. manage the virtual address space
   – instantiate virtual address space on context switch
   – extend or reduce it on demand
3. manage migration to/from secondary storage
   – optimize use of main storage
   – minimize overhead (waste, migrations)

Memory Management Goals
1. transparency
   – process sees only its own virtual address space
   – process is unaware memory is being shared
2. efficiency
   – high effective memory utilization
   – low run-time cost for allocation/relocation
3. protection and isolation
   – private data will not be corrupted
   – private data cannot be seen by other processes

Linux Process – virtual address space
[Figure: a process's virtual address space from 0x00000000 to 0xFFFFFFFF: shared code, private data (near 0x0100000), DLL 1, DLL 2, DLL 3 (near 0x0110000–0x0120000), and a private stack at the top.]
All of these segments appear to be present in memory whenever the process runs.

Physical Memory Allocation
[Figure: physical memory divided among the OS kernel, the private data and stacks of processes 1–3, shared library X, and shared code segments A and B.]
Physical memory is divided between the OS kernel, process private data, and shared code segments.

(code segments)
• program code
  – allocated when program loaded
  – initialized with contents of load module
• shared and Dynamically Loadable Libraries
  – mapped in at exec time or when needed
  – somehow shared by multiple processes
  – shared code must be read only
• all are read-only and fixed size

(implementing: code segments)
• program loader
  – ask for memory (size and virtual location)
  – copy code from load module into memory
• run-time loader
  – request DLL be mapped (location and size)
  – edit PLT pointers from program to DLL
• memory manager
  – allocates memory, maps into process

(data/stack segments)
• they are process-private, read/write
• initialized data
  – allocated when program loaded
  – initialized from load module
• data segment expansion/contraction
  – requested via system calls (e.g. sbrk)
  – only added/truncated part is affected
• process stack
  – allocated and grown automatically on demand
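The data-segment expansion just described can be driven directly from user code. Below is a minimal sketch, assuming a Unix-like system where sbrk(2) is available: sbrk(0) reports the current end of the data segment (the "break"), and a positive increment asks the kernel to extend it. The 4 KB increment is an arbitrary illustrative value.

```c
/* Minimal sketch (Unix-like system assumed): growing the private
 * data segment with sbrk(2). Real allocators call sbrk rarely and
 * manage the returned space themselves. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    void *old_brk = sbrk(0);          /* current end of the data segment */
    if (sbrk(4096) == (void *)-1)     /* ask the kernel for 4 KB more */
        return 1;
    void *new_brk = sbrk(0);
    printf("data segment grew from %p to %p\n", old_brk, new_brk);
    return 0;
}
```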
(implementing: data/stack)
• program loader
  – ask for memory (location and size)
  – copy data from load module into memory
  – zero the uninitialized data
• memory manager
  – invoked for allocations and stack extensions
  – allocates and deallocates memory
  – adjusts process address space accordingly

Fixed Partition Memory Allocation
• pre-allocate partitions for n processes
  – reserving space for largest possible process
• very easy to implement
  – common in old batch processing systems
• well suited to a well-known job mix
  – must reconfigure system for larger processes
• likely to use memory inefficiently
  – large internal fragmentation losses
  – swapping results in convoys on partitions

Fragmentation
• the curse of spatially partitioned resources
  – wasted or unusable free space
    • internal fragmentation – not needed by its owner
    • external fragmentation – uselessly small pieces
  – all such resources, not merely memory
    • inheritance of farm land
    • calendar appointment packing
• choose your poison
  – static allocation … constant internal waste
  – dynamic allocation … progressive external loss

Internal Fragmentation
[Figure: three fixed partitions – partn #1 (8MB) holding process A (6 MB) with 2MB waste; partn #2 (4MB) holding process B (3 MB) with 1MB waste; partn #3 (4MB) holding process C (2 MB) with 2MB waste.]
Total waste = 2MB + 1MB + 2MB = 5MB of 16MB ≈ 31%

(Internal Fragmentation)
• wasted space in fixed sized blocks
• caused by a mis-match between
  – the chosen sizes of the fixed-sized blocks
  – the actual sizes that programs request
• average waste: 50% of each block
• overall waste reduced by multiple sizes
  – suppose blocks come in sizes S1 and S2
  – average waste = ((S1/2) + (S2 – S1)/2)/2

Variable Partition Allocation
• start with one large "heap" of memory
• when a process requests more memory
  – find a large enough chunk of memory
  – carve off a piece of the requested size
  – put the remainder back on the free list
• when a process frees memory
  – put it back on the free list
• eliminates internal fragmentation losses

External Fragmentation
• each allocation creates left-over fragments
  – over time these become smaller and smaller
• tiny left-over fragments are useless
  – they are too small to satisfy any request
  – but their combined size may be significant
• there are three obvious approaches:
  – try to avoid creating tiny fragments
  – try to recombine adjacent fragments
  – re-pack the allocated space more densely

(External/Global Fragmentation)
[Figure: a region of memory over successive allocations and frees for processes PA–PF; the free space is progressively split into smaller, scattered fragments.]

Fixed vs Variable Partition
• Fixed partition allocation
  – allocation and free lists are trivial
  – internal fragmentation is inevitable
    • average 50% (unless we have multiple sizes)
• Variable partition allocation
  – allocation is complex and expensive
    • long searches of complex free lists
  – eliminates internal fragmentation
  – external fragmentation is inevitable
    • can be managed by (complex) coalescing

Stack vs Heap Allocation
• stack allocation
  – compiler manages space (locals, call info)
  – data is valid until stack frame is popped
  – OS automatically extends/shrinks stack segment
• heap allocation
  – explicitly allocated by application (malloc/new)
  – data is valid until free/delete (or G.C.)
  – heap space managed by user-mode library
  – data segment size adjusted by system call
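A minimal C sketch of the lifetime difference just described: stack data lives only until its frame is popped, while heap data remains valid until it is explicitly freed. The function names are illustrative, not from the slides.

```c
/* Minimal sketch of stack vs heap allocation lifetimes. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *heap_copy(const char *s) {
    char *p = malloc(strlen(s) + 1);   /* heap: valid until free() */
    if (p) strcpy(p, s);
    return p;                          /* safe to hand to the caller */
}

static void stack_copy(const char *s) {
    char buf[64];                      /* stack: reclaimed when this frame pops */
    snprintf(buf, sizeof buf, "%s", s);
    printf("%s\n", buf);               /* must be used before the function returns */
}

int main(void) {
    stack_copy("on the stack");
    char *p = heap_copy("on the heap");
    if (p) {
        printf("%s\n", p);
        free(p);                       /* explicit release back to the heap */
    }
    return 0;
}
```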
sbrk(2) vs. malloc(3)
• sbrk(2) … managing the size of the data segment
  – each address space has a private data segment
  – process can request that it be grown/shrunk
  – sbrk(2) specifies the desired ending address
  – this is a coarse and expensive operation
• malloc(3) … dynamic heap allocation
  – sbrk(2) is called to extend/shrink the heap
  – malloc(3) is called to carve off small pieces
  – free(3) is called to return them to the heap

managing process private data
[Figure: the private data segment – initialized data copied from the load module, followed by zero-filled uninitialized data.]
1. loader allocates space for, and copies, initialized data from the load module.
2. loader allocates space for, and zeroes, uninitialized data from the load module.
3. after it starts, the program uses sbrk to extend the process data segment and then puts the newly created chunk on the free list.
4. when the free space "heap" is eventually consumed, the program uses sbrk to further extend the process data segment and then puts the newly created chunk on the free list.

(Free lists: keeping track of it all)
• fixed sized blocks are easy to track
  – a bit map indicating which blocks are free
• variable chunks require more information
  – a linked list of descriptors, one per chunk
  – each lists the size of the chunk and whether it is free
  – each has a pointer to the next chunk on the list
  – descriptors are often at the front of each chunk
• allocated memory may have descriptors too

Variable Partition Free List
[Figure: a head pointer leads to a chain of chunk descriptors, each recording FREE/USED status, length, and a next pointer. The list might contain all memory fragments … or only descriptors for fragments that are free.]

Free Chunk Carving
1. find a large enough free chunk
2. reduce its len to the requested size
3. make a new header for the residual chunk
4. insert the new chunk into the list
5. mark the carved piece as in use

Avoid Creating Small Fragments
• choose carefully which piece to carve
• "There Ain't No Such Thing As A Free Lunch"
  – careful choices mean longer searches
  – optimization means more complex data structures
  – the cost of reduced fragmentation is more cycles
• a few obvious choices for "smart" choices …
  – Best fit
  – Worst fit
  – First fit
  – Next fit

Which chunk: best fit
• search for the "best fit" chunk
  – smallest size greater/equal to requested size
• advantages:
  – might find a perfect fit
• disadvantages:
  – have to search the entire list every time
  – quickly creates very small fragments

Which chunk: worst fit
• search for the "worst fit" chunk
  – largest size greater/equal to requested size
• advantages:
  – tends to create very large fragments … for a while at least
• disadvantages:
  – still have to search the entire list every time
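Putting the free-list descriptors, the chunk-carving steps, and the best-fit policy above together, here is a minimal sketch. The struct layout, field names, the 16-byte minimum residue, and the assumption that the request size already includes the header are illustrative choices, not the slides' exact design; alignment is ignored for brevity.

```c
/* Minimal sketch: a variable-partition free list with best-fit
 * search and chunk carving. Illustrative only. */
#include <stddef.h>

struct chunk {               /* descriptor at the front of each chunk */
    size_t        len;       /* size of the chunk, header included */
    int           free;      /* non-zero if this chunk is unallocated */
    struct chunk *next;      /* next chunk on the list */
};

static struct chunk *head;   /* start of the (all-chunks) list */

/* Best fit: smallest free chunk whose size is >= the request
 * (need counts the header as well as the usable bytes). */
static struct chunk *best_fit(size_t need) {
    struct chunk *best = NULL;
    for (struct chunk *c = head; c != NULL; c = c->next)
        if (c->free && c->len >= need &&
            (best == NULL || c->len < best->len))
            best = c;
    return best;
}

/* Carve the requested size off a free chunk; the residue gets its own
 * header and is linked in right after the carved piece. */
static void *carve(struct chunk *c, size_t need) {
    if (c->len >= need + sizeof(struct chunk) + 16) {   /* worth splitting */
        struct chunk *rest = (struct chunk *)((char *)c + need);
        rest->len  = c->len - need;
        rest->free = 1;
        rest->next = c->next;
        c->len  = need;
        c->next = rest;
    }
    c->free = 0;                                         /* mark in use */
    return (char *)c + sizeof(struct chunk);             /* usable space */
}
```

Note that only the search function embodies the policy: swapping best_fit for a first-fit loop (stop at the first adequate chunk) or a next-fit loop (start from a saved guess pointer) leaves carve() unchanged.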
Which chunk: first fit
• take the first chunk that is big enough
• advantages:
  – very short searches
  – creates random sized fragments
• disadvantages:
  – the first chunks quickly fragment
  – searches become longer
  – ultimately it fragments as badly as best fit

Which chunk: next fit
[Figure: a free list with a head pointer and a guess pointer.] After each search, set the guess pointer to the chunk after the one we chose. That is the point at which we will begin our next search.

(next-fit … guess pointers)
• the best of both worlds
  – short searches (maybe shorter than first fit)
  – spreads out fragmentation (like worst fit)
• guess pointers are a general technique
  – think of them as a lazy (non-coherent) cache
  – if they are right, they save a lot of time
  – if they are wrong, the algorithm still works

Coalescing – de-fragmentation
• all dynamic algorithms have external fragmentation
  – some get it faster, some spread it out more evenly
• we need a way to reassemble fragments
  – check the neighbors whenever a chunk is freed
  – recombine with free neighbors whenever possible
  – free list can be designed to make this easier
    • e.g. …
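A minimal sketch of coalescing on free, assuming the same descriptor-at-the-front chunk layout as the earlier sketch and a free list kept in address order (one way the list "can be designed to make this easier"). Names are illustrative.

```c
/* Minimal sketch: free a chunk and coalesce it with a free successor. */
#include <stddef.h>

struct chunk {
    size_t        len;       /* size of the chunk, header included */
    int           free;      /* non-zero if unallocated */
    struct chunk *next;      /* next chunk, kept in address order */
};

/* Mark a chunk free and, if its physical successor is also free,
 * absorb it so two small fragments become one larger one. */
static void free_and_coalesce(struct chunk *c) {
    c->free = 1;
    struct chunk *next = c->next;
    if (next != NULL && next->free &&
        (char *)c + c->len == (char *)next) {  /* physically adjacent? */
        c->len += next->len;                   /* grow over the neighbor */
        c->next = next->next;                  /* unlink the absorbed chunk */
    }
    /* A fuller version would also merge with the predecessor, e.g.
     * via a doubly linked list or boundary tags. */
}
```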