Virtual Memory
Kashyap Sheth, Kishore Putta, Bijal Shah, Kshama Desai
Overview
• Virtual Memory
• Paging
• Paging Algorithms
• Segmentation
• Combined Segmentation and Paging
• Bibliography
Terms & Notions
• Virtual memory (VM) is
  • Not a physical device but an abstract concept
  • Comprised of the virtual address spaces (of all processes)
• Virtual address space (VAS) (of one process)
  • Set of visible virtual addresses
  • (Some systems may use a single VAS for all processes)
• Resident set
  • Pieces of a process currently in physical memory
• Working set
  • Set of pieces a process is currently working on
Why Virtual Memory (VM)?
• Shortage of memory → efficient memory management needed
  • Process may be too big for physical memory
  • More active processes than physical memory can hold
• Requirements of multiprogramming
  • Efficient protection scheme
  • Simple way of sharing
(Diagram: processes 1–4 competing for space in physical memory alongside the OS)
Virtual memory can be implemented using:
1. Paging
2. Segmentation
3. Paging and segmentation combined
Paging
• Page: the virtual address space is divided into equal-size units called pages.
• Page frame: the physical address space is divided into units of the same size, called page frames.
• Memory Management Unit (MMU): maps virtual addresses onto physical addresses.
So… how does it work?
• Program:
  ....
  MOV AX, 0xA0F4
  ....
• The virtual address 0xA0F4 (in a "piece" of virtual memory) is translated by the mapping unit (MMU), using a mapping table (one per process), into the physical address 0xC0F4 (in a "piece" of physical memory).
• Note: it does not matter at which physical address a "piece" of VM is placed, since the corresponding addresses are mapped by the mapping unit.
The Mapping Process
• Usually every process has its own mapping table → own virtual address space (assumed from now on)
• Not every "piece" of VM has to be present in PM
• "Pieces" may be loaded from the HDD as they are referenced
(Flowchart: a virtual address is checked against the mapping table; if the "piece" is in physical memory, the MMU translates the address into a physical address; otherwise a memory access fault occurs, the OS brings the "piece" in from the HDD and adjusts the mapping table)
Paging
(Diagram: virtual memory is divided into equal-size pages 0–7; a page table (one per process, one entry per page, maintained by the OS) maps the valid pages to frames 0–3 of physical memory, which is divided into equal-size page frames)
Paging: Typical Page Table Entry
Fields: page frame #, flag bits, other
Flag bits:
• v valid • r read • w write • x execute
• re referenced • m modified • s shared • c caching disabled
• su super-page • pid process id • g (extended) guard • gd guard data
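The entry layout above can be modeled as a bitfield. A minimal Python sketch, assuming a hypothetical 32-bit entry with the frame number in the upper bits and a few of the flag bits in the low bits (real layouts are architecture-specific):

```python
# Hypothetical page-table-entry layout for illustration only:
# frame # in bits 12..31, flag bits below. Real layouts differ per architecture.
V, R, W, X, RE, M = (1 << i for i in range(6))  # valid, read, write, execute, referenced, modified

def make_pte(frame: int, flags: int) -> int:
    """Pack a frame number and flag bits into one entry."""
    return (frame << 12) | flags

def frame_of(pte: int) -> int:
    """Extract the frame number from an entry."""
    return pte >> 12

def has(pte: int, flag: int) -> bool:
    """Test a single flag bit."""
    return bool(pte & flag)

pte = make_pte(0x8, V | R | W)
assert frame_of(pte) == 0x8
assert has(pte, V) and has(pte, W) and not has(pte, X)
```

The MMU tests bits like these on every access: a clear v bit triggers a fault, a clear w bit blocks writes.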
Paging: Single-level Page Tables
• One table per process, one entry per page
• Virtual address = page # + offset, e.g. 0x2 + 0x14
• The page # indexes the page table: the entry lies at PTBR + page # * L (PTBR = Page Table Base Register, L = size of one entry) and holds the frame #, here 0x8
• Physical address = frame # + offset, e.g. 0x8 + 0x14
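The translation above can be sketched in Python. The 8-bit offset width and the table contents are illustrative assumptions, chosen so that page 0x2 maps to frame 0x8 as in the slide's example:

```python
# Single-level page-table lookup: split the virtual address into
# (page #, offset), index the table with the page #, recombine with the frame #.
OFFSET_BITS = 8                 # assumed page size of 256 bytes, for illustration
PAGE_SIZE = 1 << OFFSET_BITS

page_table = {0x0: 0x3, 0x1: 0x5, 0x2: 0x8}   # page # -> frame # (valid entries only)

def translate(vaddr: int) -> int:
    page, offset = vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)
    if page not in page_table:
        raise MemoryError(f"page fault at page {page:#x}")
    return (page_table[page] << OFFSET_BITS) | offset

# Page 0x2, offset 0x14 -> frame 0x8, offset 0x14
assert translate(0x214) == 0x814
```

In hardware the table is a contiguous array located via the PTBR rather than a dictionary, but the arithmetic is the same.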
VM Features: Protection
• Each process has its own virtual address space
  • Processes invisible to each other
  • A process cannot access another process's memory
• MMU checks protection bits on memory access (during address mapping)
  • "Pieces" can be protected from being written to, being executed, or even being read
  • System can distinguish different protection levels (user / kernel mode)
  • Write protection can be used to implement copy on write (→ sharing)
VM Features: Sharing
• "Pieces" of different processes mapped to one single "piece" of physical memory
• Allows sharing of code (saves memory), e.g. libraries
• Copy on write: a "piece" may be used by several processes until one writes to it (then that process gets its own copy)
• Simplifies interprocess communication (IPC) → shared memory
(Diagram: pieces 0–2 of two processes' virtual address spaces mapped onto shared pieces of physical memory)
VM: Advantages (1)
• VM supports
  • Swapping
    • Rarely used "pieces" can be discarded or swapped out
    • A "piece" can be swapped back in to any free piece of physical memory large enough; the mapping unit translates addresses
  • Protection
  • Sharing
    • Common data or code may be shared to save memory
• Process need not be in memory as a whole
  • No need for complicated overlay techniques (the OS does the job)
  • Process may even be larger than all of physical memory
  • Data / code can be read from disk as needed
VM: Advantages (2)
• Code can be placed anywhere in physical memory without relocation (addresses are mapped!)
• Increased CPU utilization: more processes can be held (in part) in memory → more processes in ready state
VM: Disadvantages
• Memory requirements (mapping tables)
• Longer memory access times (mapping table lookup)
  • Can be improved using a TLB
Page Reference Stream
• For a given process, the page reference stream is the list of virtual page numbers ordered according to when they are referenced
• Example: process with 5 pages: 1, 2, 3, 1, 2, 3, 4, 5
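A reference stream can be derived from raw byte addresses by integer division with the page size (the 4 KiB page size here is an assumed example value):

```python
# Map each referenced byte address to its virtual page number:
# page # = address // page size.
PAGE_SIZE = 4096   # assumed 4 KiB pages

def reference_stream(addresses):
    """Return the page reference stream for a list of byte addresses."""
    return [addr // PAGE_SIZE for addr in addresses]

# 0x1FF0 is still in page 1; 0x2000 starts page 2; 0x0008 revisits page 0.
assert reference_stream([0x0000, 0x1FF0, 0x2000, 0x0008]) == [0, 1, 2, 0]
```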
Static Paging Algorithms
• When a process starts, it is allocated a fixed number of frames in RAM
• Paging policies: define how pages are loaded into and unloaded from the frames allocated to the process
3 Basic Paging Policies
• Fetch policy: determines when a page should be loaded into RAM
• Replacement policy: if all frames are full, determines which page should be replaced
• Placement policy: determines where a fetched page should be placed in RAM
Demand Paging
• Since the page reference stream is not known ahead of time, we can't "pre-fetch" pages
• We only learn which page is needed at run time, so we load pages on "demand"
Demand Paging Algorithms
• Random
• Optimal
• LRU
• FIFO
Random Replacement
• Choose a page at random to be the victim page
• Simple to implement, but the worst algorithm:
  • No knowledge is taken into account!
  • Each page thus has the same probability of being the victim
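A minimal sketch of random replacement in Python (the seed is only there to make runs repeatable; the fault count still depends on the random choices):

```python
import random

def simulate_random(stream, n_frames, seed=0):
    """Count page faults under random replacement."""
    rng = random.Random(seed)
    frames, faults = [], 0
    for page in stream:
        if page in frames:
            continue                            # hit
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)                 # free frame available
        else:
            frames[rng.randrange(n_frames)] = page  # victim chosen blindly
    return faults

stream = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7]
# On this stream with 3 frames no algorithm can do better than Optimal (10 faults)
# or worse than faulting on every reference (16 faults).
assert 10 <= simulate_random(stream, 3) <= 16
```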
Optimal Algorithm
• Best algorithm; however, it requires knowledge of future page references (!)
• Always results in the least possible number of page faults
• Replace the page whose next reference lies farthest in the future
Optimal Algorithm Example
• The process's reference stream is: 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
• The process is allocated 3 frames

Reference: 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
fr 0:      0 0 0 0 0 0 0 0 0 1 1 1 4 4 4 7
fr 1:      - 1 1 1 1 1 2 2 2 2 2 2 2 5 5 5
fr 2:      - - 2 3 3 3 3 3 3 3 3 3 3 3 6 6
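The table above can be reproduced with a short simulation. A sketch in Python (pages never referenced again are treated as infinitely far in the future):

```python
# Optimal (Belady) replacement: on a fault with no free frame, evict the
# resident page whose next reference lies farthest ahead (or never occurs).
def simulate_optimal(stream, n_frames):
    frames, faults = [], 0
    for i, page in enumerate(stream):
        if page in frames:
            continue                            # hit
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)                 # free frame available
        else:
            future = stream[i + 1:]
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else len(future))
            frames[frames.index(victim)] = page
    return faults

stream = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7]
assert simulate_optimal(stream, 3) == 10   # matches the example above
```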
Least Recently Used (LRU)
• Replace the page whose last reference is longest ago

Reference: 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
fr 0:      0 0 0 3 3 3 2 2 2 1 1 1 4 4 4 7
fr 1:      - 1 1 1 0 0 0 3 3 3 2 2 2 5 5 5
fr 2:      - - 2 2 2 1 1 1 0 0 0 3 3 3 6 6
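A sketch of LRU in Python, keeping the resident pages ordered from least to most recently used:

```python
# LRU replacement: evict the resident page whose most recent use is oldest.
def simulate_lru(stream, n_frames):
    frames, faults = [], 0          # ordered: frames[0] is least recently used
    for page in stream:
        if page in frames:
            frames.remove(page)     # hit: move to the most-recently-used end
            frames.append(page)
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.pop(0)           # evict the least recently used page
        frames.append(page)
    return faults

stream = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7]
assert simulate_lru(stream, 3) == 16   # this cyclic stream defeats LRU completely
```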
First In First Out (FIFO)
• Replace the page that has been in memory longest

Reference: 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
fr 0:      0 0 0 3 3 3 2 2 2 1 1 1 4 4 4 7
fr 1:      - 1 1 1 0 0 0 3 3 3 2 2 2 5 5 5
fr 2:      - - 2 2 2 1 1 1 0 0 0 3 3 3 6 6
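A sketch of FIFO in Python; unlike LRU, a hit does not change the eviction order:

```python
from collections import deque

# FIFO replacement: evict the page that was loaded earliest,
# regardless of how recently it was used.
def simulate_fifo(stream, n_frames):
    frames, faults = deque(), 0
    for page in stream:
        if page in frames:
            continue                # hit: FIFO order is NOT refreshed
        faults += 1
        if len(frames) == n_frames:
            frames.popleft()        # oldest loaded page leaves
        frames.append(page)
    return faults

stream = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7]
assert simulate_fifo(stream, 3) == 16
```

On this particular cyclic stream FIFO happens to evict the same pages as LRU, which is why the two tables are identical.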
Comparison of the Examples
• For the same reference stream and number of available frames:
  • Optimal caused 10 page faults
  • LRU caused 16 page faults
  • FIFO caused 16 page faults
Topics for Next Presentation
• Translation Lookaside Buffer (TLB)
• Working set model
• Segmentation
• Combination of paging and segmentation
References
• Gary Nutt, Operating Systems, 3rd edition
• www.ira.uka.de/teaching/coursedocuments
• www.crucial.com/library/glossary.asp