Virtual Memory Lecturer: Dr. Pavle Mogin
Plan for Virtual Memory Topic • What is a virtual memory (VM) • Virtual Memory: benefits and costs • How is Virtual Memory provided? • The page table • Address translation – from virtual to physical addresses • Writes and the dirty bit • Page replacement policies • Translation lookaside buffer • Reading 7.4
Virtual Memory: What Is It? • ‘Real’ or ‘physical’ computer memory is made out of • Processor registers, e.g. $0, $1, … $31 • Cache slots, e.g. slot 0, slot 1, … slot 1023 • DRAM cells, e.g. 0, 1, … 33,554,431 (32M) • Hard disk locations, e.g. sector 108 of track 2 of cylinder 3,294 • Using more hardware and system software we can hide most of this nastiness from the programmer • ‘Virtual’ computer memory is made out of • Addresses, e.g. 0, 1, … 4,294,967,295 (2^32) • That’s it!
The Benefits of Virtual Memory • A single, ‘flat’ address space • No need to worry about where data is in the hierarchy • (actually, you can still access the registers directly, if you want) • A large address space that is limited only by the size of your registers • A separate address space for each program (!) • No need to worry about one program corrupting another’s data • Possibility of ‘segments’ for text, data, etc • With read/write/execute permission per segment
Before And After Virtual Memory • Before virtual memory: a programmer has to divide a program into segments and load and unload those segments by hand • After virtual memory: paging and swapping is done by the OS • [Figure: programs A and B, each with code, data, and stack, placed in roughly 8 Mbytes of DRAM backed by a 1.2 Gbytes disk, shown before and after virtual memory]
Virtual Memory Versus Physical Memory • [Figure: the programmer’s view shows programs A and B, each with its own code, data, and stack, in one large flat address space of many Mbytes; the computer’s view shows the same contents spread across 1 Gbyte of DRAM and a 100 Gbytes disk]
The Costs of Virtual Memory • Every address used in your HLL program… • Be it the address of the next program instruction • Or the address of some data you want to read or write • …must be translated on the fly into a physical address • This physical address is then satisfied from • Cache • Or DRAM if the cache misses • Or hard disk if the DRAM misses • But the translation and ‘satisfaction’ are invisible to the programmer
How Is VM Provided? (1) • The computer treats each VM space as a series of ‘pages’ • Similar to blocks in the cache • Typically between 512 and 4096 words per page • With a large VM many pages are, in fact, empty and unused • Pages that contain some data, text, or stack are stored • In DRAM as a unit (of 512 to 4096 words, etc) • On disk as a block (of 512 to 4096 words, etc) • ‘Paging’ between DRAM and disk is done by OS • Pages are used to exploit spatial locality • And because disk bandwidth is better for large blocks
Pages Move Between DRAM and Disk • [Figure: programs A, B, and C each have virtual pages 0–2 holding code and data; all pages live on the hard disk, and the pages currently in use (e.g. A page 0, A page 2, C page 2) also occupy physical addresses in DRAM]
How Is VM Provided? (2) • When the processor generates a (virtual) address • How do we know if the required page is in DRAM? • If it is, how do we know what physical address to use? • If it isn’t, how do we know where on the disk it is? • The computer maintains • A ‘page table’ for each program in the system • A page table register that points at the page table of the currently executing program • (parts of the current table may be cached in hardware to make lookup faster) • Page table must be consulted for every address generated by the program
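The per-program state just described can be sketched in a few lines of C. This is only a sketch under the lecture’s model (one table per program, a page table register repointed by the OS); the type and variable names, the 1024-page space, and the 16-program limit are assumptions for illustration.

```c
/* Sketch of per-program page table state, assuming one table per program and
 * a "page table register" repointed by the OS on a context switch.
 * All names and sizes here are illustrative, not taken from the lecture. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_VIRTUAL_PAGES 1024        /* assumed small virtual space for the sketch */
#define NUM_PROGRAMS      16          /* assumed number of programs */

typedef struct {
    bool     valid;                   /* true: page is in DRAM; false: on disk or unused */
    bool     dirty;                   /* set when the page has been written */
    uint32_t physical_page;           /* physical page number, meaningful when valid */
    uint32_t disk_block;              /* where the page lives on disk otherwise */
} pte_t;

typedef struct { pte_t entries[NUM_VIRTUAL_PAGES]; } page_table_t;

static page_table_t tables[NUM_PROGRAMS];              /* one table per program */
static page_table_t *page_table_register = &tables[0]; /* table of the running program */

/* A context switch just repoints the page table register. */
void context_switch(int program_id) {
    page_table_register = &tables[program_id];
}
```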
Example Page Tables • [Figure: the virtual-memory view of programs A, B, and C (code and data pages 0–2) alongside the physical view of DRAM addresses 0–3070 and the hard disk] • PAGE TABLE FOR PROGRAM A • Page 0 is in DRAM at address 0 • Page 1 is not used • Page 2 is on disk at cylinder C1, track T1, sector S1 • PAGE TABLE FOR PROGRAM B • Page 0 is in DRAM at address 3070 • Page 1 is on disk at cylinder C2, track T2, sector S2 • Page 2 is on disk at cylinder C3, track T3, sector S3
Address Translation • Suppose the current instruction is lw $4, 200($10) • $10 + 200 is a virtual address • It must be translated to a physical address so the read can happen • [Figure: the address generated by the processor splits into a virtual page number (bits 31..12) and an offset inside the page (bits 11..0); the page table maps the virtual page number to a physical page number (bits 29..12 of the address presented to DRAM), while the offset passes through with no translation]
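To make the split concrete, here is a small C program that pulls a 32-bit virtual address apart into VPN and offset for the 4 KB pages implied by the figure (12 offset bits). The example address and the identity “page table” are placeholders; only the bit manipulation is the point.

```c
/* Splitting a virtual address into VPN and offset, assuming 4 KB pages
 * (12 offset bits) as in the figure above. The page-table lookup itself is
 * stubbed out; the example address is arbitrary. */
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12u
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1u)

int main(void) {
    uint32_t virtual_address = 0x0040312Cu;              /* arbitrary example address */

    uint32_t vpn    = virtual_address >> OFFSET_BITS;    /* bits 31..12 */
    uint32_t offset = virtual_address &  OFFSET_MASK;    /* bits 11..0, passed through unchanged */

    uint32_t ppn = vpn;                                  /* placeholder for the real page-table lookup */
    uint32_t physical_address = (ppn << OFFSET_BITS) | offset;

    printf("VPN = 0x%05X, offset = 0x%03X, physical = 0x%08X\n",
           (unsigned)vpn, (unsigned)offset, (unsigned)physical_address);
    return 0;
}
```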
Example • Suppose pages are 100 bytes in size • Suppose the currently executing program has this page table: • Page 0: maps to DRAM address 500 • Page 1: not in use • Page 2: maps to DRAM address 300 • Page 4: … • Then virtual addresses 0, 1, 2, … 99 • Map to physical addresses 500, 501, 502, … 599 respectively • and virtual addresses 200, 201, 202, …, 299 • Map to physical addresses 300, 301, 302, … 399 respectively
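The same example can be checked mechanically. A minimal sketch, assuming the 100-byte pages and the mapping on this slide (page 0 at DRAM 500, page 1 unused, page 2 at DRAM 300):

```c
/* Worked version of the 100-byte-page example: 0..99 -> 500..599 and
 * 200..299 -> 300..399, exactly as stated above. -1 marks an unused page. */
#include <stdio.h>

#define PAGE_SIZE 100

int main(void) {
    int page_base[] = { 500, -1, 300 };          /* page 0, page 1 (unused), page 2 */
    int examples[]  = { 0, 99, 200, 299 };

    for (int i = 0; i < 4; i++) {
        int va     = examples[i];
        int vpn    = va / PAGE_SIZE;             /* which page */
        int offset = va % PAGE_SIZE;             /* where inside the page */
        if (page_base[vpn] < 0)
            printf("virtual %3d: page %d not in use\n", va, vpn);
        else
            printf("virtual %3d -> physical %3d\n", va, page_base[vpn] + offset);
    }
    return 0;
}
```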
Example Page Table • All numbers in decimal, page size is 100 bytes • [Figure: program A’s virtual address space, with code and data pages 0–2 starting at virtual addresses 0, 100, and 200, shown next to program A in main memory, where DRAM physical pages 0–6 start at physical addresses 0, 100, …, 600]
Page Table Lookups • [Figure: the page table register points at the current page table; the virtual page number indexes the table while the offset within the page passes through unchanged. Each entry holds a valid field (0 means on disk, 1 means in DRAM, 2 means page not used) and a physical page number or disk address; the physical page number is combined with the offset to form the DRAM address]
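The lookup the figure describes might look like this in C. It simplifies the three-valued field to a plain valid bit (in DRAM or not) and assumes 4 KB pages; a miss here is where the OS would start a disk read.

```c
/* Sketch of the page-table lookup: index by VPN, check the valid bit, and
 * either form the DRAM address or report a page fault. Simplified to a
 * two-valued valid bit; names and sizes are illustrative. */
#include <stdint.h>

#define OFFSET_BITS 12u
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1u)

typedef struct {
    int      valid;          /* 1: page is in DRAM; 0: on disk or unused */
    uint32_t physical_page;  /* used when valid */
    uint32_t disk_address;   /* used when not valid */
} pte_t;

/* Returns 1 and fills *pa on success; returns 0 on a page fault. */
int translate(const pte_t *page_table, uint32_t va, uint32_t *pa) {
    uint32_t vpn    = va >> OFFSET_BITS;
    uint32_t offset = va &  OFFSET_MASK;
    const pte_t *pte = &page_table[vpn];

    if (!pte->valid)
        return 0;            /* the OS must bring the page in from pte->disk_address */

    *pa = (pte->physical_page << OFFSET_BITS) | offset;
    return 1;
}
```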
Handling Writes • Many caches use write-through, as we saw • Virtual memory uses write-back: • Since writing pages to disk is so inefficient • Writes are accumulated in the page (the ‘dirty’ bit is set) • On replacement, the page is written back to disk • Nothing needs to be done if the page was never written
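A sketch of the write-back idea above: a store only marks the page dirty, and the page is flushed to disk only when a dirty page is evicted. The disk-write helper is a stand-in for whatever routine the OS actually uses.

```c
/* Write-back at page granularity: writes accumulate in DRAM (dirty bit set);
 * a page is written to disk only when a dirty page is replaced. */
#include <stdint.h>

typedef struct {
    int      valid;
    int      dirty;
    uint32_t physical_page;
    uint32_t disk_address;
} pte_t;

/* Stand-in for the real disk driver; a real OS would issue a disk write here. */
static void write_page_to_disk(uint32_t physical_page, uint32_t disk_address) {
    (void)physical_page;
    (void)disk_address;
}

/* Called on every store that hits a resident page. */
void note_write(pte_t *pte) {
    pte->dirty = 1;                   /* disk is not touched yet */
}

/* Called when the OS evicts this page to make room. */
void evict(pte_t *pte) {
    if (pte->dirty)                   /* clean pages need no write-back */
        write_page_to_disk(pte->physical_page, pte->disk_address);
    pte->valid = 0;                   /* the page now lives only on disk */
    pte->dirty = 0;
}
```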
Page Replacement Policy • A ‘page miss’ means page is on disk • The operating system initiates a disk read • (this usually takes so long that it’s worth running another program in the meantime) • An important design consideration: how to make space in DRAM • i.e. what page replacement policy to use? • Possible solutions • Keep a ‘dirty’ bit in the page table, overwrite a clean page • (saves having to flush dirty page to disk) • Keep an ‘age since last access’ count in the page table • ‘least recently used (LRU)’ policy • Other policies discussed in COMP305
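The LRU idea can be sketched with the ‘age since last access’ counter the slide mentions: scan the resident pages and evict the one touched longest ago. Real systems only approximate this, and the struct layout here is an assumption.

```c
/* Least-recently-used victim selection, assuming each resident entry keeps a
 * last-access counter that is updated on every reference to the page. */
#include <stdint.h>

typedef struct {
    int      valid;
    int      dirty;
    uint32_t last_access;    /* larger = touched more recently */
} pte_t;

/* Return the index of the resident page touched longest ago, or -1. */
int choose_victim(const pte_t *table, int num_entries) {
    int victim = -1;
    uint32_t oldest = UINT32_MAX;
    for (int i = 0; i < num_entries; i++) {
        if (table[i].valid && table[i].last_access < oldest) {
            oldest = table[i].last_access;
            victim = i;
        }
    }
    return victim;           /* a clean victim could be preferred to avoid a write-back */
}
```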
Where Are the Page Tables Stored? • How big are they? • Assume virtual memory space is 2^32 • Assume pages are 4 K in size • Each page table entry requires approx 1 word • Each page table is 2^32 / 2^12 = 2^20 = 1 Mwords in size • How many page tables are there? • One per program (10s of programs) • In fact, page tables are stored in DRAM • (they even get paged out to disk - but don’t think about the implications of this) • So to translate a VM address you must first search the page table in DRAM?!?
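The size arithmetic can be checked directly; a throwaway calculation, assuming byte addressing and 4 bytes per entry:

```c
/* The slide's calculation made explicit: a 2^32-byte virtual space with
 * 4 KB (2^12-byte) pages gives 2^20 page table entries per program. */
#include <stdio.h>

int main(void) {
    unsigned long long virtual_space = 1ULL << 32;    /* bytes of virtual address space */
    unsigned long long page_size     = 1ULL << 12;    /* 4 KB pages */
    unsigned long long entries       = virtual_space / page_size;

    printf("entries per page table: %llu (= 2^20 = 1 M)\n", entries);
    printf("size at one 4-byte word per entry: %llu bytes\n", entries * 4);
    return 0;
}
```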
Translation Lookaside Buffer • Since page tables are stored in main memory, each memory reference from a program requires at least one memory access to translate the virtual address into a physical one, and then an attempt to satisfy the reference from the cache • On a cache miss, there will be two memory accesses • The key to improving access performance is to rely on locality of references to the page table: when a translation for a virtual page is used, it will probably be needed again in the near future, because references to the words on that page have both temporal and spatial locality • Translation lookaside buffer - a special cache used to keep track of recently used translations
Introducing the TLB • [Figure: the CPU presents a virtual page number to the translation lookaside buffer, whose entries hold a valid bit, a tag, and a PPN; on a TLB miss the page table in DRAM is consulted, and on a page fault a new page is brought in from disk]
Using the TLB • Look up the VPN in the TLB • On a hit, concatenate the PPN from the TLB with the offset • On a miss, check the page table • On a hit, copy the page table entry into the TLB • Use a replacement policy to make space in the TLB • On a miss, take a page fault • The TLB is direct mapped (hence the valid bit and tag field) • For the purpose of acting as a page table, the TLB also has: • A use bit (to monitor usage for replacement) • A dirty bit (to monitor alterations) • Typical performance figures • 32 - 4K entries • Block size: 1 or 2 PPNs • 1 cycle hit time, 50 cycles miss time
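The direct-mapped lookup described above might look like this in C: the low bits of the VPN pick the slot, the remaining bits are compared against the stored tag, and a hit yields the PPN. The TLB size and the 12-bit offset are assumptions for the sketch.

```c
/* Direct-mapped TLB lookup: one candidate slot per VPN, a tag compare, and
 * on a hit the PPN is concatenated with the page offset. */
#include <stdint.h>

#define TLB_ENTRIES 64u          /* assumed; real TLBs range from 32 to a few K entries */
#define OFFSET_BITS 12u
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1u)

typedef struct {
    int      valid;
    uint32_t tag;                /* high bits of the VPN */
    uint32_t ppn;                /* physical page number */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Returns 1 and fills *pa on a TLB hit; 0 means go to the page table. */
int tlb_lookup(uint32_t va, uint32_t *pa) {
    uint32_t vpn    = va >> OFFSET_BITS;
    uint32_t index  = vpn % TLB_ENTRIES;          /* direct mapped: one possible slot */
    uint32_t tag    = vpn / TLB_ENTRIES;          /* the rest of the VPN is the tag */
    uint32_t offset = va & OFFSET_MASK;

    if (tlb[index].valid && tlb[index].tag == tag) {
        *pa = (tlb[index].ppn << OFFSET_BITS) | offset;
        return 1;
    }
    return 0;                    /* miss: consult the page table, then refill this slot */
}
```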
Virtual Memory Summary • [Figure: the CPU issues a virtual address; the VPN is looked up in the TLB, and on a TLB miss in the page table in main memory; the resulting PPN forms a physical address presented to the cache; a cache miss fetches the block from main memory, and a page fault brings the whole page in from disk using its disk address]
Summary (1) • Virtual memory • hides the memory hierarchy • protects programs from each other • involves swapping pages between DRAM and disk • The operating system controls the virtual memory: • Executes paging between DRAM and disk • Maintains a page table for every program in execution • Controls the page table register of the active program • Page table must be consulted for every address issued by the program
Summary (2) • Page table • Valid, dirty, last-accessed-time bits • Physical address or disk address • Address translation • Virtual address = VPN + offset • Physical address = PPN + offset • Page replacement policies • TLB: a cache for the page table