Contiguous Memory Management and External Fragmentation
Summary of Contiguous Memory Management and the Free Space List (FSL)

• Contiguous memory management means that the physical address space of a process in main memory must be one contiguous block
• The key OS data structure involved in contiguous memory management is the free space list, the list of free (available, unused) blocks of physical memory
• The FSL must be managed according to some policy
• All FSL policies (e.g., first-fit, best-fit, worst-fit) eventually lead to external fragmentation (a minimal best-fit sketch follows below)
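To make the FSL concrete, here is a minimal sketch in C of a free-space-list node and a best-fit search. The names (fsl_node, best_fit) and the unsorted singly linked representation are illustrative assumptions, not code from the textbook or from any particular OS.

```c
/* Hypothetical sketch of a free space list (FSL) and a best-fit search. */
#include <stddef.h>

typedef struct fsl_node {
    size_t start;            /* physical start address of the hole */
    size_t size;             /* size of the hole in bytes          */
    struct fsl_node *next;   /* singly linked, unsorted list       */
} fsl_node;

/* Best-fit: scan the whole FSL and return the smallest hole that is
   still large enough for the request; NULL means no hole fits. */
fsl_node *best_fit(fsl_node *fsl, size_t request)
{
    fsl_node *best = NULL;
    for (fsl_node *h = fsl; h != NULL; h = h->next) {
        if (h->size >= request && (best == NULL || h->size < best->size))
            best = h;
    }
    return best;   /* the caller carves the allocation out of this hole */
}
```

First-fit would instead return the first hole whose size is at least the request, stopping the scan as soon as one is found.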
External Fragmentation

• External fragmentation refers to memory external to any process that is unusable because it’s in fragments too small to be useful
• Note: The distinction between a small but possibly usable hole in memory and an unusable fragment is not intended to be technically precise; we just say that memory has become too fragmented when the OS cannot admit some new process despite the fact that there is plenty of unused memory available, just scattered all over in fragments too small for typical new processes
External Fragmentation with Best Fit

(Figure: animated diagram of main memory, with the OS processes at low memory and processes #1–#10 arriving and terminating over time)

• OS processes, being the first ones created, are usually placed at one end of memory, with low memory being the most common choice
• Our textbook shows low memory at the top of its figures, with addresses increasing as you move down the picture, so this diagram does the same
• Let’s see how best-fit works on a random series of arrivals and terminations of processes of random sizes
• As new processes are created, they must be admitted by the long-term scheduler, which must check with the memory manager to see whether there is sufficient space for the new process
• The memory manager searches the free space list (FSL) for a hole of sufficient size; this animation illustrates “best-fit” FSL logic
• Because we’re doing best-fit, process #4 went into the smallest available hole that was large enough
• When process #1 terminates, its memory is reclaimed: first the FSL is searched to see whether there are any adjacent holes the freed block can be merged with (in this case, no), then the new hole is inserted in the FSL (see the reclaim sketch after this slide)
• When process #2 terminates, its memory is reclaimed and merged with the hole above it; processes #3 and #4 terminate later as well
• When process #8 applies for admission, memory has become too fragmented: the sum total of all free space is more than process #8’s demand, but a process needs its physical address space to be contiguous and there is no single (contiguous) hole big enough to admit process #8
• Note that just because process #8 can’t be admitted doesn’t mean that another (smaller) process can’t still be admitted, provided there’s a hole big enough
• But there’s no guarantee that there will ever again be a hole big enough for process #8; starvation is possible
• When process #6 terminates, process #8 can finally be admitted
• But memory is still fragmented, meaning that some new processes may not be admitted, despite the fact that there is, in total, enough unused memory for them
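The “reclaim, merge with adjacent holes, reinsert” step described above can be sketched as follows, reusing the hypothetical fsl_node type from the earlier sketch; this is an illustration of the idea, not the textbook’s algorithm.

```c
/* Sketch of reclaiming a terminated process's memory: look for holes
   adjacent to the freed block, merge with them, then put the (possibly
   enlarged) hole back on the FSL.  Error handling omitted. */
#include <stdlib.h>

void reclaim(fsl_node **fsl, size_t start, size_t size)
{
    /* Merge with any hole that ends exactly where the freed block
       starts, or starts exactly where the freed block ends. */
    fsl_node **p = fsl;
    while (*p != NULL) {
        fsl_node *h = *p;
        if (h->start + h->size == start) {       /* hole just below */
            start = h->start;
            size += h->size;
            *p = h->next; free(h);
            p = fsl;                             /* the other neighbor may still exist */
        } else if (start + size == h->start) {   /* hole just above */
            size += h->size;
            *p = h->next; free(h);
            p = fsl;
        } else {
            p = &h->next;
        }
    }
    /* Insert the merged hole at the head of the FSL. */
    fsl_node *merged = malloc(sizeof *merged);
    merged->start = start;
    merged->size  = size;
    merged->next  = *fsl;
    *fsl = merged;
}
```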
External Fragmentation with Worst-Fit

(Figure: the same main-memory animation, now placing processes #1–#9 with worst-fit)

• Let’s look at the same sequence of arrivals and terminations as in the previous best-fit example, but see what happens if we use worst-fit for our FSL (a worst-fit variant of the earlier search sketch follows below)
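Continuing the same hypothetical sketch, worst-fit is the best-fit scan with the comparison reversed: always take the largest hole that fits, on the theory that the leftover piece will still be big enough to be useful.

```c
/* Worst-fit: return the LARGEST hole that satisfies the request. */
fsl_node *worst_fit(fsl_node *fsl, size_t request)
{
    fsl_node *worst = NULL;
    for (fsl_node *h = fsl; h != NULL; h = h->next) {
        if (h->size >= request && (worst == NULL || h->size > worst->size))
            worst = h;
    }
    return worst;
}
```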
Summary of FSL Policies for Contiguous Memory Management

• Regardless of what your textbook seems to imply, all FSL policies (including worst-fit) are subject to external fragmentation
• Some delay serious problems longer than others, but everybody eventually succumbs
• It has nothing to do, really, with the specific FSL policy; the problem is intrinsic to contiguous memory management
So What’s the Answer?

• Palliative: Quantized allocation
• Curative:
• Compaction
• Paged memory management
Internal Fragmentation (and Then Quantized Allocation)

• Suppose we’re doing best-fit, a process needs an allocation of 15,998 bytes, and the smallest hole on the FSL bigger than 15,998 bytes is 16,003 bytes
• There’s really no point in giving the process the 15,998 bytes it wants and then creating a new 5-byte hole and inserting it into the FSL
• Instead, let’s just give the process the whole 16,003-byte hole (sketched below)
• The 5 extra bytes are now referred to as internal fragmentation: “wasted” memory that is internal to the allocated physical address space of some process, rather than sitting pointlessly on the list of available memory (the FSL), which by definition must be external to all processes
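A sketch of the “just give them the whole hole” rule, assuming a hypothetical threshold MIN_LEFTOVER below which a leftover sliver is not worth putting back on the FSL:

```c
/* If splitting the hole would leave a sliver smaller than MIN_LEFTOVER,
   hand the process the entire hole and accept a little internal
   fragmentation.  The threshold value is purely illustrative. */
#include <stddef.h>

#define MIN_LEFTOVER 64   /* bytes */

/* Assumes the caller already found a hole with hole_size >= request. */
size_t allocation_size(size_t request, size_t hole_size)
{
    if (hole_size - request < MIN_LEFTOVER)
        return hole_size;   /* whole hole: leftover becomes internal fragmentation */
    return request;         /* split: leftover goes back on the FSL */
}
```

With the numbers above (a 15,998-byte request and a 16,003-byte hole), the 5-byte leftover falls below the threshold, so the process gets all 16,003 bytes and the 5 bytes become internal fragmentation.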
Quantized Allocation

• Next, let’s “quantize” our memory allocation policy
• All memory will be allocated in “chunks,” or quanta, of some fixed size
• If our quantum is 1K bytes, for example, every process’s physical address space will be some integer multiple of 1K and every hole on the FSL will also be some integer multiple of 1K, the smallest possible hole being simply 1K
• So, for example, a process that needed 15,002 bytes would be given 16K of memory from the smallest hole ≥ 16K in size (see the rounding sketch below)
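The rounding itself is one line. Note that these slides treat 1K as 1,000 bytes, so a 15,002-byte request rounds up to 16,000 bytes (16K), matching the example; the QUANTUM value and function name are illustrative.

```c
/* Quantized allocation: round every request up to a multiple of the quantum. */
#include <stddef.h>

#define QUANTUM 1000u   /* 1K quantum, with 1K = 1,000 bytes as in the slides */

size_t quantize(size_t request)
{
    return ((request + QUANTUM - 1) / QUANTUM) * QUANTUM;
}
```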
Quantized Allocation (cont’d)

• Quantization doesn’t solve external fragmentation, but it lessens some of its ill effects and can thus postpone “the day of reckoning”: e.g., we won’t fill up the FSL with absurdly small holes that still take time to step through when we search the FSL
• So we can avoid the slowdown that comes with searching an overly long FSL, but we still can’t stop external fragmentation from eventually accumulating too many small holes
• Even if the quantum is, for example, 1K, so that all holes are a multiple of 1K, how useful is a 1K fragment?
Internal Fragmentation Again

• Can we avoid external fragmentation by making our quantum really large, say 128K, so no hole will ever be smaller than 128K?
• Maybe, but …
• Now our internal fragmentation will start to really hurt
Average Internal Fragmentation

• The average internal fragmentation is ½ quantum per process
• Suppose our quantum is 128K bytes:
• A process that needs 128,003 bytes will be given 256,000 bytes, “wasting” 127,997 bytes to internal fragmentation
• A process that needs 383,995 bytes will be given 384,000 bytes ( = 3 × 128,000 bytes), wasting only 5 bytes
• On average, half the processes will waste more than ½ a quantum and half will waste less; overall, the average internal fragmentation will be ½ quantum per process
• If there are 500 processes and the quantum is 128K, internal fragmentation will eat up (on average) 500 × 64K = 32 MBytes
• So a quantum big enough to avoid external fragmentation wastes too much memory to internal fragmentation (the arithmetic is checked in the sketch below)
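A quick sanity check of the arithmetic above (again treating 1K as 1,000 bytes, so the 128K quantum is 128,000 bytes); this little program is only a worked example, not part of any allocator.

```c
#include <stdio.h>

int main(void)
{
    const unsigned long Q = 128000UL;                 /* 128K quantum */
    unsigned long requests[] = { 128003UL, 383995UL };

    for (int i = 0; i < 2; i++) {
        unsigned long r = requests[i];
        unsigned long given = ((r + Q - 1) / Q) * Q;  /* round up to a quantum multiple */
        printf("need %lu, given %lu, waste %lu\n", r, given, given - r);
    }
    /* Expected waste is Q/2 per process, so 500 processes waste about
       500 * 64,000 = 32,000,000 bytes (roughly 32 MB). */
    printf("500 processes * Q/2 = %lu bytes\n", 500UL * (Q / 2));
    return 0;
}
```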
So What’s the Answer?

• Palliative: Quantized allocation
• Curative:
• Compaction
• Full compaction
• Partial compaction
• Paged memory management
Compaction

• Compaction is the only complete cure for external fragmentation short of giving up on contiguous memory management altogether (which we will eventually do, but first you have to suffer through compaction)
• Why? Academic tradition (I suffered, so you have to suffer ;-)
Compaction

(Figure: main memory with OS processes and processes #5–#8, before and after consolidating the holes)

• To compact memory is to relocate processes so as to consolidate all the holes into one big hole, making room for a new process that otherwise couldn’t be admitted because of external fragmentation
• Compaction can be total, as we just saw, where every process not already “snug at the end” gets relocated, or …
Compaction (cont’d)

(Figure: the same memory layout after a partial compaction)

• … it can be “partial”: just relocate enough processes to make a hole big enough for the new process; don’t consolidate all the holes (unless necessary)
• There are several issues to consider in either case (a sketch of a full compaction pass follows this slide)
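As referenced above, here is a sketch of a full compaction pass over a simulated physical memory. The proc descriptor (base and size) and the assumption that procs[] is sorted by base address are illustrative; the idea is simply to slide every process down toward the end of the OS area, leaving one big hole at the top of memory.

```c
/* Full compaction over a simulated physical memory array. */
#include <stddef.h>
#include <string.h>

typedef struct {
    size_t base;    /* current physical start address            */
    size_t size;    /* size of the process's contiguous block    */
} proc;

void compact(unsigned char *physmem, proc procs[], int nprocs, size_t os_end)
{
    size_t next_free = os_end;                 /* first byte after the OS processes */
    for (int i = 0; i < nprocs; i++) {
        if (procs[i].base != next_free) {
            /* memmove handles overlapping source/destination regions */
            memmove(physmem + next_free, physmem + procs[i].base, procs[i].size);
            procs[i].base = next_free;         /* rebind: the process's new base */
        }
        next_free += procs[i].size;
    }
    /* Everything from next_free to the end of memory is now one hole. */
}
```

Because processes only move toward lower addresses, source and destination regions can overlap, which is why memmove rather than memcpy is used.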
Compaction Requires Execution-Time Binding, Which Requires an MMU (Extra Hardware)

(Figure: main memory showing process #7 containing a jump/transfer instruction whose operand is the address 0x2a00f32)

• Here’s our old friend, some sort of jump or transfer instruction; the address in it can’t be a physical address, or it would be incorrect after the relocation of process #7 and the OS would have no way to correct it
• If it were a physical address, it would have to have been bound earlier, i.e., at compile or load time, and we can’t rebind it now:
• We obviously can’t recompile a process in the middle of its execution
• And the execution environment doesn’t include the relocation flags used by the loader at load time; they’re left behind in the load module on the disk
• So if the OS is going to have to dynamically relocate a process during its execution, which is what compaction requires, the addresses in the programs in memory have to be logical addresses and we must be doing execution-time binding, which requires an MMU (see the translation sketch after this slide)
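A sketch of why execution-time binding makes compaction possible: if every logical address the CPU issues is translated through a base (relocation) register at run time, then relocating a process only means copying its memory and updating its base value; nothing inside the process image, such as the jump operand in the figure, ever has to be patched. The register pair and function below are an illustrative model, not any specific MMU’s interface.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t base;    /* relocation register: where the process starts in physical memory */
    uint32_t limit;   /* size of the process's logical address space                      */
} mmu_regs;

/* Returns true and fills *phys on success; false models a protection trap. */
bool translate(const mmu_regs *mmu, uint32_t logical, uint32_t *phys)
{
    if (logical >= mmu->limit)
        return false;                 /* address outside the process: trap */
    *phys = mmu->base + logical;      /* execution-time binding */
    return true;
}
```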
Other Issues with Compaction

• Physically copying all the processes in memory to new locations is generally going to be too time-consuming for a real-time system
Other Issues with Compaction (cont’d)

• Partial compaction can reduce the number of processes relocated and thus the time required
• But:
• Since we are still left with some degree of external fragmentation, we’ll have to run the compactor again sooner than if we had done a full compaction
• The algorithm that decides which process(es) to relocate can get fairly complex, and hence time-consuming in its own right; the tradeoff, obviously, is between clever but time-consuming algorithms that efficiently reduce the external fragmentation and simpler algorithms that don’t clean up the fragmentation as much or as quickly
Giving Up the Requirement for a Process’s Physical Address Space to be Contiguous

• Palliative: Quantized allocation
• Curative:
• Compaction
• Full compaction
• Partial compaction
• Paged memory management