Virtual Memory: Part 2 Kashyap Sheth Kishore Putta Bijal Shah Kshama Desai
Index • Recap • Translation lookaside buffer • Segmentation • Segmentation with paging • Working set model • References
Terms & Notions • Virtual memory (VM) is • Not a physical device but an abstract concept • Comprised of the virtual address spaces (of all processes) • Virtual address space (VAS) (of one process) • Set of visible virtual addresses • (Some systems may use a single VAS for all processes)
Paging • Page: the virtual address space is divided into fixed-size units called pages; every page is the same size. • Page frame: the physical address space is divided into units of that same size, called page frames. • Memory Management Unit (MMU): the hardware that maps virtual addresses onto physical addresses.
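A minimal sketch of the mapping the MMU performs, assuming 4 KB pages and a simple one-level page table; PAGE_SIZE, NUM_PAGES, page_table and translate are illustrative names, not from the slides:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u     /* assumed 4 KB pages */
#define NUM_PAGES 16u       /* toy virtual address space of 16 pages */

/* one entry per virtual page: frame number, or -1 if the page is unmapped */
static int page_table[NUM_PAGES];

/* Split the virtual address into page number and offset, look the page
 * number up in the page table, and rebuild the physical address from the
 * frame number and the unchanged offset. */
static int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;

    if (page >= NUM_PAGES || page_table[page] < 0)
        return -1;                                  /* page fault */

    *paddr = (uint32_t)page_table[page] * PAGE_SIZE + offset;
    return 0;
}

int main(void)
{
    for (unsigned i = 0; i < NUM_PAGES; i++)
        page_table[i] = -1;                         /* start with nothing mapped */
    page_table[0] = 3;                              /* page 0 -> frame 3 */
    page_table[1] = 7;                              /* page 1 -> frame 7 */

    uint32_t pa;
    if (translate(0x1ABC, &pa) == 0)                /* page 1, offset 0xABC */
        printf("virtual 0x1ABC -> physical 0x%X\n", pa);   /* prints 0x7ABC */
    return 0;
}
```

Note that only the page number is translated; the offset within the page is carried over unchanged.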
Translation Lookaside Buffer • Each virtual memory reference can cause two physical memory accesses • one to fetch the page table entry • one to fetch the data • To overcome this problem, a high-speed cache is set up for page table entries • called the TLB (Translation Lookaside Buffer)
Translation Lookaside Buffer • Contains the page table entries that have been most recently used • Functions the same way as a memory cache
Translation Lookaside Buffer • Given a virtual address, the processor examines the TLB • If the page table entry is present (a hit), the frame number is retrieved and the real address is formed • If the page table entry is not found in the TLB (a miss), the page number is used to index the process page table
Translation Lookaside Buffer • On a miss, first check whether the page is already in main memory • if it is not in main memory, a page fault is issued • The TLB is then updated to include the new page table entry
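The lookup order described above, as a small sketch in C; the TLB size, the FIFO replacement policy and all structure names are illustrative assumptions:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define NUM_PAGES 16u
#define TLB_SIZE  8            /* assumed: tiny, fully associative */

struct tlb_entry { int valid; uint32_t page; uint32_t frame; };

static struct tlb_entry tlb[TLB_SIZE];
static int next_victim;                   /* simple FIFO replacement            */
static int page_table[NUM_PAGES];         /* frame number, or -1 if not resident */

static int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t page = vaddr / PAGE_SIZE, offset = vaddr % PAGE_SIZE;

    /* 1. examine the TLB first */
    for (int i = 0; i < TLB_SIZE; i++) {
        if (tlb[i].valid && tlb[i].page == page) {           /* TLB hit */
            *paddr = tlb[i].frame * PAGE_SIZE + offset;
            return 0;
        }
    }

    /* 2. TLB miss: index the process page table in memory */
    if (page >= NUM_PAGES || page_table[page] < 0)
        return -1;                         /* page not resident: page fault */

    /* 3. update the TLB with the new entry, then complete the access */
    tlb[next_victim].valid = 1;
    tlb[next_victim].page  = page;
    tlb[next_victim].frame = (uint32_t)page_table[page];
    next_victim = (next_victim + 1) % TLB_SIZE;

    *paddr = (uint32_t)page_table[page] * PAGE_SIZE + offset;
    return 0;
}

int main(void)
{
    for (unsigned i = 0; i < NUM_PAGES; i++) page_table[i] = -1;
    page_table[2] = 5;                                /* page 2 -> frame 5 */

    uint32_t pa;
    translate(0x2010, &pa);                           /* miss: fills the TLB      */
    translate(0x2010, &pa);                           /* hit on the second access */
    printf("virtual 0x2010 -> physical 0x%X\n", pa);  /* prints 0x5010            */
    return 0;
}
```

A real TLB is searched by hardware in parallel; the linear loop here only models the hit/miss logic.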
Operation of Paging and Translation Lookaside Buffer (Stallings, Fig. 8.8)
Segmentation • What is Segmentation? • Segmentation: Advantages • Segmentation: Disadvantages
[Figure: a single one-dimensional virtual address space holding a compiler's tables (call stack, parse tree, constant table, source text, symbol table). The symbol table has grown until it bumped into the source text table, even though free space is still allocated to the parse tree elsewhere in the address space.]
What is Segmentation? [Figure: segmented address translation. A virtual address is a (segment #, offset) pair. The MMU uses the segment table (located via the STBR and STLR registers) to fetch the segment's base and limit, plus the usual bits as in paging: valid, modified, protection, etc. If offset < limit, the physical address is base + offset; otherwise a memory-access fault is raised. The variable-sized segments (code, data, stack) placed in physical memory lead to external fragmentation.]
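A minimal sketch of the limit check and base + offset calculation shown in the figure, with made-up segment values; it omits the protection bits and the STBR/STLR registers:

```c
#include <stdint.h>
#include <stdio.h>

/* one segment-table entry: base and limit (a real entry would also carry
 * the valid/modified/protection bits mentioned in the figure)             */
struct segment { uint32_t base; uint32_t limit; };

#define NUM_SEGS 3
static const struct segment seg_table[NUM_SEGS] = {
    { 0x08000, 0x2000 },    /* seg 0: code  */
    { 0x20000, 0x4000 },    /* seg 1: data  */
    { 0x30000, 0x1000 },    /* seg 2: stack */
};

/* virtual address = (segment #, offset); the offset must stay below the limit */
static int seg_translate(uint32_t seg, uint32_t offset, uint32_t *paddr)
{
    if (seg >= NUM_SEGS || offset >= seg_table[seg].limit)
        return -1;                           /* memory-access fault */
    *paddr = seg_table[seg].base + offset;
    return 0;
}

int main(void)
{
    uint32_t pa;
    if (seg_translate(1, 0x0123, &pa) == 0)
        printf("(seg 1, offset 0x0123) -> physical 0x%X\n", pa);   /* 0x20123 */
    return 0;
}
```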
Segmentation: Advantages • As opposed to paging: • No internal fragmentation (but: external fragmentation) • May save memory if segments are very small and should not be combined into one page (e.g. for reasons of protection) • Segment tables: only one entry per actual segment, as opposed to one per page in VM • Average segment size >> average page size ⇒ less overhead (smaller tables)
Segmentation: Disadvantages • External fragmentation • Costly memory management algorithms • Segmentation: find a free memory area big enough (search!) • Paging: keep a list of free pages, any page is ok (take first!) • Segments of unequal size are not as well suited for swapping
Combined Segmentation and Paging (CoSP) • What is CoSP? • CoSP: Advantages • CoSP: Disadvantages
Architecture for Segmentation with Paging [Figure: the logical address is a pair (s, so) of segment number and segment offset. The per-process segment table entry for s holds the segment limit and the page table base. If so ≥ limit, the MMU raises a trap; otherwise so is split into a page number p and a page offset po, the per-segment page table maps p to a frame f, and the physical address is (f, po).]
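A sketch of the two-step translation in the figure, assuming toy sizes and a per-segment page table embedded directly in the segment-table entry; all names and constants are illustrative:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE     4096u
#define PAGES_PER_SEG 4u               /* toy sizes, for illustration only */
#define NUM_SEGS      2

struct seg_entry {
    uint32_t limit;                      /* segment length in bytes                    */
    int      page_table[PAGES_PER_SEG];  /* per-segment page table: frame #, -1 absent */
};

static struct seg_entry seg_table[NUM_SEGS] = {
    { 3 * PAGE_SIZE, { 5, 9, 2, -1 } },  /* segment 0: three resident pages */
    { 1 * PAGE_SIZE, { 7, -1, -1, -1 } } /* segment 1: one resident page    */
};

/* logical address = (segment s, segment offset so); so is further split
 * into a page number p and a page offset po                               */
static int cosp_translate(uint32_t s, uint32_t so, uint32_t *paddr)
{
    if (s >= NUM_SEGS || so >= seg_table[s].limit)
        return -1;                                  /* trap: beyond the segment */

    uint32_t p  = so / PAGE_SIZE;
    uint32_t po = so % PAGE_SIZE;
    int frame   = seg_table[s].page_table[p];

    if (frame < 0)
        return -2;                                  /* page fault */
    *paddr = (uint32_t)frame * PAGE_SIZE + po;
    return 0;
}

int main(void)
{
    uint32_t pa;
    if (cosp_translate(0, PAGE_SIZE + 0x10, &pa) == 0)   /* seg 0, page 1 */
        printf("-> physical 0x%X\n", pa);                /* frame 9: 0x9010 */
    return 0;
}
```

Splitting the segment offset further into (p, po) is what lets a page, rather than a whole segment, be the unit that is brought into memory.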
CoSP: Advantages • Reduces memory usage as opposed to pure paging • Page table size limited by segment size • Segment table has only one entry per actual segment • Simplifies handling protection and sharing of larger modules (define them as segments) • Most advantages of paging still hold • Simplifies memory allocation • Eliminates external fragmentation • Supports swapping, demand paging, prepaging etc.
CoSP: Disadvantages • Internal fragmentation [Figure: a process requests a 6 KB address range with 4 KB pages; it is given two pages (8 KB), so the unused part of page 2 is internal fragmentation] • Yet only about ½ page is wasted on average per contiguous address range
Working Sets • Working set of pages: the minimum collection of pages that must be loaded in main memory for a process to operate efficiently, without unnecessary page faults. • "Smallest collection of information that must be present in main memory to assure efficient execution of the program." • Process/Working set: two manifestations of the same ongoing computational activity.
Working Set Strategy • W(t, D) = the set of pages of the process that have been referenced in the last D virtual time units, as of time t. • Virtual time = the time that elapses while the process is in execution, measured in instruction steps. • Working set size = the number of pages in W(t, D).
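A small sketch of computing |W(t, D)| from a page-reference string indexed by virtual time (one reference per instruction step); the trace and all names are made up for illustration:

```c
#include <stdio.h>

#define MAX_PAGE 64

/* |W(t, D)|: number of distinct pages referenced in the window of the
 * last D virtual time units ending at time t (indices t-D+1 .. t)      */
static int working_set_size(const int refs[], int t, int D)
{
    int seen[MAX_PAGE] = { 0 };
    int size  = 0;
    int start = (t - D + 1 > 0) ? t - D + 1 : 0;

    for (int i = start; i <= t; i++) {
        if (!seen[refs[i]]) {
            seen[refs[i]] = 1;
            size++;
        }
    }
    return size;
}

int main(void)
{
    /* page-reference string indexed by virtual time */
    int refs[] = { 1, 2, 1, 3, 2, 2, 4, 1 };
    printf("|W(7, 4)| = %d\n", working_set_size(refs, 7, 4));  /* pages {2,4,1}: 3 */
    return 0;
}
```

Enlarging D can only add references to the window, which is the non-decreasing property noted on the next slide.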
Characteristics of Working Sets • Size: the working set is a non-decreasing function of the window size D; specifically, W(t, D+1) contains W(t, D). • Prediction: we intuitively expect the immediate past page-reference behavior to be a good predictor of the immediate future behavior.
Detecting/Measuring W(t,D) • Hardware: a mechanism that records whether a page was referenced in the last D time units. • Software: • sample the page table entries at intervals of D/K • any page that was referenced in one of these intervals is in the working set.
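A sketch of the software approach above: sample and clear per-page reference bits every D/K time units, and treat any page whose bit was set in one of the samples as part of the working set. The data structures are illustrative, not an actual OS interface:

```c
#include <stdio.h>
#include <string.h>

#define NUM_PAGES 16
#define K         4          /* samples taken per window of length D */

static int referenced[NUM_PAGES];      /* hardware-set reference bits (simulated) */
static int in_working_set[NUM_PAGES];  /* software approximation of W(t, D)       */

/* Call this every D/K time units: any page referenced since the last sample
 * is marked as belonging to the working set, then the bits are cleared.      */
static void sample_reference_bits(void)
{
    for (int p = 0; p < NUM_PAGES; p++) {
        if (referenced[p])
            in_working_set[p] = 1;
        referenced[p] = 0;
    }
}

/* Call this once per window D, after the K samples, to start a new window. */
static void start_new_window(void)
{
    memset(in_working_set, 0, sizeof in_working_set);
}

int main(void)
{
    start_new_window();
    referenced[3] = referenced[7] = 1;     /* pretend pages 3 and 7 were touched */
    sample_reference_bits();
    for (int p = 0; p < NUM_PAGES; p++)
        if (in_working_set[p])
            printf("page %d is in the working set\n", p);
    return 0;
}
```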
Memory Allocation • A program will not be run unless there is space in memory for its working set.
Using the Working Set Concept • A strategy for resident set size: • Monitor the working set of each process • Periodically remove from the resident set of a process those pages that are not in its working set. • A process may execute only if its working set is in main memory (if resident set includes its working set).
Issues With this Strategy • The past does not necessarily predict the future: the size and membership of the working set change over time. • A true measurement of the working set for each process is impractical: it would require time-stamping every page reference and keeping a time-ordered queue. • The optimal value of D is unknown and would vary.
Alternatively • Look at the page fault rate, not exact page references. • The page fault rate falls as the resident set size increases. • If the page fault rate is below some threshold, decrease the resident set size. • If it is above some threshold, increase the resident set size.
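A sketch of that threshold policy; the threshold values, step size and function name are assumptions chosen only for illustration:

```c
#include <stdio.h>

#define LOW_THRESHOLD  0.02   /* faults per reference: shrink below this */
#define HIGH_THRESHOLD 0.10   /* faults per reference: grow above this   */
#define STEP           2      /* frames added or removed per adjustment  */
#define MIN_FRAMES     4

/* Adjust a process's resident set size based on its observed fault rate. */
static int adjust_resident_set(int current_frames, long faults, long references)
{
    double rate = (references > 0) ? (double)faults / (double)references : 0.0;

    if (rate < LOW_THRESHOLD && current_frames > MIN_FRAMES)
        return current_frames - STEP;     /* few faults: give some frames back */
    if (rate > HIGH_THRESHOLD)
        return current_frames + STEP;     /* thrashing risk: allocate more     */
    return current_frames;                /* fault rate acceptable: no change  */
}

int main(void)
{
    printf("%d\n", adjust_resident_set(16, 5, 1000));    /* low rate:  prints 14 */
    printf("%d\n", adjust_resident_set(16, 150, 1000));  /* high rate: prints 18 */
    return 0;
}
```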
References • Gary Nutt, Operating Systems, 3rd edition • Andrew S. Tanenbaum, Modern Operating Systems, 2nd edition • William Stallings, Operating Systems • World Wide Web