Fixed Partitions • Divide memory into n (possibly unequal) partitions • Problem: fragmentation [Diagram: memory partitioned at 0k, 4k, 16k, 64k, 128k; free space left inside a partition is internal fragmentation and cannot be reallocated]
Fixed Partition Allocation Implementation Issues • Separate input queue for each partition • Inefficient utilization of memory • When the queue for a large partition is empty but the queue for a small partition is full, small jobs have to wait to get into memory even though plenty of memory is free • One single input queue for all partitions • Allocate the job to a partition where it fits • Best fit • Available fit
Relocation • A program must get a correct starting address wherever it is loaded in memory • Different jobs will run at different addresses • Logical (virtual) addresses: logical address space with range (0 to max) • Physical addresses: physical address space with range (R+0 to R+max) for base value R • Memory-management unit (MMU): maps virtual to physical addresses • Relocation register: the mapping requires hardware (MMU) with a base register
Relocation Register [Diagram: the CPU issues an instruction with logical address MA; the MMU adds the base address BA held in the relocation (base) register, producing physical address MA+BA in memory]
Question 1 - Protection • Problem: • How to prevent a malicious process from writing or jumping into other users' or OS partitions • Solution: • Base and bounds registers
Base Bounds Registers [Diagram: the CPU issues logical address MA; the MMU compares MA against the bounds (limit) register. If MA is below the limit, the base address BA is added to give physical address PA = MA+BA; otherwise a fault is raised]
Question 2 • What if there are more processes than can fit into memory?
Swapping [Diagram sequence: memory holds the monitor and a single user partition, with a disk alongside • User 1 runs in the user partition • User 1's image is copied (swapped out) to disk • User 2 is swapped in from disk into the user partition, while User 1's image remains on disk]
Solve Fragmentation w. Compaction [Diagram sequence: memory holds the monitor and Jobs 7, 5, 3, 8, 6 with free holes scattered between them; compaction slides the jobs together step by step until all the free space is collected into one contiguous block]
Storage Management Problems • Fixed partitions suffer from • internal fragmentation • Variable partitions suffer from • external fragmentation • Compaction suffers from • overhead
Paging • Provide the user with virtual memory that is as big as the user needs • Store virtual memory on disk • Cache the parts of virtual memory being used in real memory • Load and store cached virtual memory without user program intervention
Benefits of Virtual Memory • Use secondary storage ($) • Extend DRAM ($$$) with reasonable performance • Protection • Programs do not step over each other • Convenience • Flat address space • Programs have the same view of the world • Load and store cached virtual memory without user program intervention • Reduce fragmentation: • make cacheable units all the same size (page)
Paging [Diagram sequence: virtual memory pages 1-8 are stored on disk; real memory is a four-frame cache, and the page table maps virtual pages to frames • Request an address within virtual page 3: already cached, so the page table supplies its frame • Request pages 1, 6, and 2 in turn: each is loaded from disk into a free frame and entered in the page table • Request page 8: all four frames are full, so virtual page 1 is stored back to disk (evicted), page 8 is loaded into the freed frame, and the page table is updated]
Virtual Memory [Diagram: the page mapping hardware splits virtual address (P,D) into page number P and displacement D; the page table maps P to frame F; the physical address is (F,D), so Contents(P,D) = Contents(F,D)]
Virtual Memory [Diagram: concrete example with page size 1000 and 1000 possible virtual pages. Virtual address 004006 splits into page 004 and displacement 006; the page table maps page 004 to frame 005, giving physical address 005006, so Contents(004006) = Contents(005006)]
Paging Issues • Page size is 2^n, usually 512, 1k, 2k, 4k, or 8k bytes • E.g. a 32-bit VM address may have 2^20 (1 meg) pages with 4k (2^12) bytes per page
Paging Issues • Question • Physical memory size: 32 MB (2^25 bytes) • Page size: 4K (2^12) bytes • How many page frames? • 2^13 • Page table base register must be changed on a context switch • Why? • Each process has a different page table • NO external fragmentation • Internal fragmentation on the last page ONLY
Paging - Caching the Page Table • Can cache page table in registers and keep page table in memory at location given by a page table base register • Page table base register changed at context switch time
Paging Implementation Issues • The caching scheme can use associative registers, look-aside memory, or content-addressable memory • Translation look-aside buffer (TLB) • TLB hit ratio: percentage of time a page is found in the associative memory • If not found in the associative memory, the mapping must be loaded from the page tables: this requires an additional memory reference
Page Mapping Hardware [Diagram sequence: the virtual address (P,D) goes first to the associative look-up (TLB), which holds a few page-to-frame entries organized by LRU. On a hit the frame comes straight from the TLB; on a miss the page table is consulted and the entry is installed in the TLB. In the example, page 004 maps to frame 009, giving physical address (009,006)]
TLB Function • When a virtual address is presented to the MMU, the hardware checks the TLB by comparing all entries simultaneously (in parallel) • If a valid match is found, the frame is taken from the TLB without going through the page table • If no valid match is found • The MMU detects the miss and does an ordinary page table lookup • It then evicts one entry from the TLB and replaces it with the new one, so that next time that page is found in the TLB
Multilevel Page Tables • Since the page table can be very large, one solution is to page the page table itself • Divide the page number into • An index into a directory (first-level page table) of second-level page tables • A page number within a second-level page table • Advantage • No need to keep all the page tables in memory all the time • Only the mappings for recently accessed memory need to be kept in memory; the rest can be fetched on demand
Multilevel Page Tables [Diagram: the virtual address is split into a directory index (dir), a second-level table index (table), and an offset; the directory entry selects a second-level page table, whose entry (pte) gives the frame]
Example Addressing on a Multilevel Page Table System • A logical address (on 32-bit x86 with 4k page size) is divided into • A page number consisting of 20 bits • A page offset consisting of 12 bits • The page number is further divided into • A 10-bit directory index • A 10-bit second-level page table index