Explore the concept of Virtual Memory, leveraging the combination of RAM and Disk to simulate a larger, faster memory capacity. Learn about the use of cache, memory hierarchy, page mapping, transparency, and isolation in managing data storage efficiently.
Virtual Memory
Pretending that your PC has a terabyte of memory
Extending the Memory Hierarchy
CPU ↔ STATIC RAM "CACHE" ↔ DYNAMIC RAM "MAIN MEMORY" (10x-100x slower) ↔ DISK "Secondary Storage" (10^5x-10^6x slower)
So, we've used SMALL fast memory + BIG slow memory to fake BIG FAST memory. Can we combine RAM and DISK to fake DISK size at RAM speeds?
VIRTUAL MEMORY
• use of RAM as a cache to a much larger storage pool on slower devices
• TRANSPARENCY: VM locations "look" the same to a program whether on DISK or in RAM
• ISOLATION of RAM size from software
Virtual Memory: the Memory Management Unit (MMU)
The CPU issues a Virtual Address (VA); the MMU translates it into a Physical Address (PA) directed at Main Memory (RAM) or DISK.
• ILLUSION: huge memory (2^32 bytes? 2^64 bytes?)
• ACTIVE USAGE: small fraction (2^24 bytes?)
• HARDWARE:
• • 2^26 (64M) bytes of RAM
• • 2^32 (4G) bytes of DISK... maybe more, maybe less!
• ELEMENTS OF DECEIT:
• • Partition memory into "pages" (2KB-4KB-8KB)
• • MAP a few to RAM, the others to DISK
• • Keep "HOT" pages in RAM
Page Mapping in the MMU
Example parameters: Page Size = 4KB ⇒ P = 12; Main Memory = 512MB ⇒ M + P = 29; virtual addresses are N + P = 32 bits.
A Virtual Address is split into a Virtual Page Number (N bits) and a Page Offset (P bits). The Virtual-to-Physical Page Map translates the VPN into an M-bit Physical Page Number; the Physical Address is the PPN concatenated with the unchanged Page Offset, M + P bits in all.
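The VPN/offset split is just shifting and masking. A minimal Python sketch, using the slide's parameters (the address values and PPN below are made-up examples):

```python
# Slide parameters: 4KB pages, 32-bit VAs, 29-bit PAs (512MB of RAM).
P = 12                      # page offset bits: page size = 2**12 = 4KB
N = 32 - P                  # virtual page number bits (20)
M = 29 - P                  # physical page number bits (17)

def split_va(va):
    """Split a virtual address into (VPN, page offset)."""
    return va >> P, va & ((1 << P) - 1)

def make_pa(ppn, offset):
    """Concatenate a physical page number with the unchanged offset."""
    return (ppn << P) | offset

vpn, off = split_va(0x004012AB)     # hypothetical virtual address
pa = make_pa(0x1F, off)             # assume the page map sends this VPN to PPN 0x1F
print(hex(vpn), hex(off), hex(pa))  # → 0x401 0x2ab 0x1f2ab
```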
Virtual Memory and Page Mapping
The Page Map/Table in the MMU records, for each Virtual Page Number, whether the page lives in a RAM page (PPN) or on DISK:
VPN 0 → PP0 (RAM)
VPN 1 → PP1 (RAM)
VPN 2 → DP1 (DISK)
VPN 3 → PP2 (RAM)
VPN 4 → DP0 (DISK)
VPN 5 → PP3 (RAM)
Main Memory (RAM): PP0-PP3 in use, PP4 free. DISK: DP0, DP1 in use, DP2 free.
Virtual Memory and Page Mapping (continued)
The program accesses VPN 2, which currently maps to disk page DP1. DP1 is brought into the free RAM page PP4, and the Page Map/Table in the MMU is updated so that VPN 2 → PP4. RAM is now full.
Virtual Memory and Page Mapping (continued)
The program accesses VPN 4, which maps to disk page DP0, but RAM is now full. PP0 is written to the free disk page DP2 (swapped out), so VPN 0 → DP2; then DP0 is read into PP0 and the Page Map is updated so that VPN 4 → PP0.
Demand Paging • Start with all of VM on DISK (the "swap area") • Begin running the program… each VA is "mapped" to a PA by the MMU • Reference to a RAM-resident page: RAM is accessed by hardware • Reference to a non-resident page: traps to a software handler, which • Fetches the missing page from DISK into RAM • Adjusts the MMU to map the newly loaded virtual page directly in RAM • If RAM is full, may have to replace ("swap out") some little-used page to free up RAM for the new page.
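The demand-paging loop above can be sketched as a toy simulator (all names and sizes are hypothetical; an LRU victim is assumed, though the slide only says "some little-used page"):

```python
# Toy demand-paging sketch: pages start on "disk", a fixed-size RAM fills
# on demand, and a full RAM swaps out the least-recently-used page.
from collections import OrderedDict

RAM_PAGES = 2                       # tiny RAM: room for 2 resident pages

ram = OrderedDict()                 # VPN -> PPN, kept in LRU order
free_ppns = list(range(RAM_PAGES))  # physical pages not yet in use
faults = 0

def access(vpn):
    """Translate vpn; trap to the 'handler' below on a non-resident page."""
    global faults
    if vpn in ram:                  # resident: handled by hardware
        ram.move_to_end(vpn)
        return ram[vpn]
    faults += 1                     # non-resident: software handler runs
    if free_ppns:
        ppn = free_ppns.pop()
    else:                           # RAM full: swap out the LRU page
        _, ppn = ram.popitem(last=False)
    ram[vpn] = ppn                  # "fetch" from disk, adjust the map
    return ppn

for vpn in [0, 1, 0, 2, 1]:         # reference string
    access(vpn)
print(faults)                       # → 4 (only the third access hits)
```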
Virtual-to-Physical Page Map or Table
Each entry, indexed by Virtual Page Number (VPN), holds:
• R (resident) bit: set when the page is in RAM, not set if it is on disk
• D (dirty) bit: set when the page in RAM has been modified
• a Physical Page Number (if resident) or a Disk Address (if not)
Number of entries in the table = 2^N
Size of the table = 2^N × (3 + M) bits
Where does the Page Table reside?
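Plugging the slide's earlier parameters (N = 20, M = 17) into the size formula gives a back-of-envelope check:

```python
# Page-table size for the slide's parameters: 2**N entries, each holding
# an M-bit PPN plus roughly 3 status bits (R, D, ...).
N, M = 20, 17                       # 32-bit VA, 29-bit PA, P = 12

entries = 2 ** N                    # 1,048,576 entries
bits = entries * (3 + M)            # total table size in bits
print(entries, bits // 8)           # → 1048576 2621440 (a 2.5MB table)
```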
Virtual Memory (putting it together) We wish to read a location in Main Memory. The steps are: • Present a virtual address to Main Memory (MM). • Determine whether the corresponding page is in MM by accessing the Page Table (which is itself in MM). • If the page is in Main Memory, access Main Memory again to obtain the contents of the location in the page. • If the page is not in Main Memory, interrupt the program and call a Page Fault Handler to bring in the page from disk. • Repeat the read; this time the page is in Main Memory. Even assuming a 100% HIT rate in Main Memory, the penalty of using virtual memory is 2x: every read costs one MM access for the Page Table entry plus one for the data itself.
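The 2x penalty is simple arithmetic; a sketch with a hypothetical memory latency makes the steps explicit:

```python
# Why the penalty is 2x even with a 100% hit rate: every read costs one
# Main Memory access for the Page Table entry plus one for the data.
MM_ACCESS_NS = 100                  # hypothetical Main Memory latency

def vm_read_cost(page_in_mm=True, fault_penalty_ns=10_000_000):
    table_lookup = MM_ACCESS_NS     # step 2: read the Page Table (in MM)
    if page_in_mm:
        return table_lookup + MM_ACCESS_NS  # step 3: read the data itself
    # steps 4-5: fault handler brings the page in from disk, then retry
    return table_lookup + fault_penalty_ns + vm_read_cost(True)

print(vm_read_cost())               # → 200, i.e. 2x a single MM access
```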
Making the Page Table Faster
Cache a small number of Page Table entries in a fully associative Translation Lookaside Buffer (TLB) inside the MMU: the CPU's (N + P)-bit Virtual Address is translated through the TLB, falling back to the full Virtual-to-Physical Page Map on a miss, into an (M + P)-bit Physical Address for Main Memory/RAM or DISK.
Translation Lookaside Buffer (TLB)
The TLB caches mappings of some VPNs to RAM pages. Each TLB entry holds a Tag (the VPN), a D (dirty) bit, and the Physical Address of the page. On a TLB miss, the full Page Map is consulted in Physical Memory; each of its entries carries R and D bits and a Physical Page or Disk Address, pointing into RAM or DISK.
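A minimal sketch of the TLB's role (sizes, mappings, and the eviction choice are all hypothetical; the slide does not specify a replacement policy):

```python
# Fully associative TLB sketch: look the VPN up in a small map of cached
# entries; on a miss, fall back to the full in-memory page table.
TLB_SIZE = 4

tlb = {}                            # VPN -> (PPN, dirty bit)
page_table = {0: 5, 1: 7, 2: 3}     # VPN -> PPN for resident pages
tlb_misses = 0

def translate(vpn):
    global tlb_misses
    if vpn in tlb:                  # TLB hit: no page-table access needed
        return tlb[vpn][0]
    tlb_misses += 1                 # TLB miss: walk the in-memory table
    ppn = page_table[vpn]           # (a missing entry would page-fault)
    if len(tlb) >= TLB_SIZE:        # evict an arbitrary entry if full
        tlb.pop(next(iter(tlb)))
    tlb[vpn] = (ppn, False)
    return ppn

print([translate(v) for v in [0, 1, 0, 2]], tlb_misses)  # → [5, 7, 5, 3] 3
```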
Integrating the MMU and Cache – Take 1
Place a physically addressed L1 cache after the MMU: the Virtual Address goes through the TLB (backed by the Virtual-to-Physical Page Map) first, and only then does the Physical Address index the L1 cache. The problem with a physically addressed cache is the increased latency to return data (even on a cache hit), since translation sits on the critical path.
Integrating the MMU and Cache – Take 2
Index the L1 cache by the Page Offset bits (which need no translation) and store physical address tags. The TLB translation and the cache lookup then proceed in parallel; when the PPN from the TLB equals the cache line's PPN tag, it is an L1 HIT. The index is limited to P bits, so for larger caches, increase associativity.
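The parallel lookup can be sketched as follows (line size, cache contents, and the TLB entry are hypothetical; a direct-mapped cache is assumed for simplicity):

```python
# Virtually indexed, physically tagged lookup: the cache index comes from
# the untranslated page-offset bits, so it can be read in parallel with
# the TLB; the hit test compares the stored PPN tag with the translation.
P = 12                              # page offset bits
LINE = 16                           # bytes per cache line
NLINES = (1 << P) // LINE           # index limited to the P offset bits

cache = [None] * NLINES             # each entry: (ppn_tag, data)
tlb = {0x401: 0x1F}                 # VPN -> PPN

def l1_lookup(va):
    vpn, offset = va >> P, va & ((1 << P) - 1)
    index = offset // LINE          # uses only untranslated bits
    ppn = tlb[vpn]                  # proceeds in parallel in hardware
    line = cache[index]
    if line is not None and line[0] == ppn:
        return "HIT", line[1]
    cache[index] = (ppn, f"mem[{ppn:#x}:{offset:#x}]")  # fill on a miss
    return "MISS", cache[index][1]

print(l1_lookup(0x004012AB))        # first access misses...
print(l1_lookup(0x004012AB))        # ...the second one hits
```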
Virtual Memory and the β
Putting the pieces into the β datapath: the Virtual Address is translated through the TLB (backed by the Virtual-to-Physical Page Map) in parallel with the L1 cache lookup (PPN tag = ⇒ L1 HIT). Two events need handling: a TLB miss (consult the Page Map in memory) and a Page Fault (the address is not present in Main Memory).
Pipelined β: TLB Miss
[Five-stage β pipeline diagram: IF, RF, ALU, MEM, WB, with NOP-injection/annul muxes at each stage]
Stall (freeze) the pipeline on a TLB miss until the Main Memory Page Table is accessed: when the MEM stage detects a TLB miss on the virtual memory address, a BNE(r31, 0, XP) is selected in place of the instruction, transferring control to the handler.
Pipelined β: Page Faults
[Same five-stage β pipeline diagram]
When the MEM stage detects a Page Fault on the virtual memory address, a BNE(r31, 0, XP) is injected in place of the faulting instruction, transferring control to the fault handler with the return address in XP.
Page Fault Handler
Program:
    …
    ADD(r1, r2, r3)
    LD(r3, 1000, r4)    ← suppose this address elicits a fault, i.e., the page is not present in Main Memory
    MUL(r4, r5, r6)
    …
The fault jumps to the fault handler, saving <PC> + 4 in XP:
    f_h:  [fault handler body: move an old RAM page to Disk, bring the needed page from Disk to RAM]
          SUBC(XP, 4, XP)   ← back XP up to the faulting instruction
          JMP(XP, r31)      ← return to the program, retry the instruction
Next Time: Virtual Machines and Kernels
[Dilbert cartoon: S. Adams]