
Virtual Memory



  1. Virtual Memory • In Chapter 8, we saw that programs can be moved into and out of memory as needed • To efficiently move a program, it is often broken into pieces (pages or segments) • Do we really need to load all of a program’s pages/segments into memory to run the program? • No - we will use virtual memory where only those parts (pages/segments) that are needed are loaded into memory

  2. Overlays • Recall overlays, a way to share a block of memory between two parts of a process • This was a form of virtual memory • However, the difference here is that virtual memory will be implemented wholly by the OS (and invisible to the user/programmer) whereas overlays were implemented by the programmer and not the concern of the OS

  3. Reasons for Virtual Memory • A program would no longer be limited in size to the amount of physical memory • Because a program can take up less memory space, more programs can be loaded at a time thus increasing CPU utilization through multiprogramming • Less I/O would be needed to load or swap each user program • Since most programs have routines that are not commonly used, these advantages will be available for most programs

  4. Demand Paging • Rather than loading all pages of a process, load only those that are needed, on demand • When a process first begins, guess which pages are initially needed • Lazy pager • only swap into memory those pages that will be used • Demand paging • never bring a page into memory until it is needed (i.e. actually referenced by a CPU-generated address)

  5. Paging Process • CPU generates a memory address • which is a page number and a page offset • Look up the memory location of the page in the page table • If found • perform the memory access • If not found • generate a page-fault trap • Page-fault traps require the OS to • locate the page on secondary storage • load it into memory • See figure 9.1 p. 291
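The lookup path above can be sketched in a few lines of Python (a toy model, not real MMU code; the 256-byte page size and the page table contents are made-up illustrative values):

```python
PAGE_SIZE = 256                      # illustrative page size, not from the slide

class PageFault(Exception):
    """Models the page-fault trap: carries the faulting page number."""
    def __init__(self, page):
        self.page = page

def translate(page_table, vaddr):
    """Split a virtual address into page number and offset, then look
    up the page's frame; an unmapped page raises the fault."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table.get(page)     # None plays the role of the invalid bit
    if frame is None:
        raise PageFault(page)        # trap: the OS must fetch the page
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}            # page number -> frame number
print(translate(page_table, 300))    # page 1, offset 44 -> 2*256 + 44 = 556
```

On a fault, the OS handler would load the page into a free frame, add the mapping to `page_table`, and restart the access.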

  6. Hardware Support • The page table will include a valid-invalid bit • as described in chapter 8 • Here, the bit represents two things: • if valid • then the page is in memory • if invalid • then the page is either not valid (not in the process’s logical address space) or not in memory • if invalid • OS must determine which is the case • see figure 9.3, page 293 for an example

  7. Handling Page Faults • Check internal table of this process to determine whether reference was valid or invalid • If invalid, terminate process • if valid, find a free frame in memory • schedule disk operation to read desired page into free frame • modify page table • restart interrupted process (repeat interrupted instruction) • See figure 9.4 p. 294

  8. Restarting an Instruction • This is more difficult than it might seem • If part of the instruction was completed, did it already alter memory • e.g. a move string command which is interrupted? • If so, how do we know this and reset it? • One solution is to use registers to store what is being overwritten in memory • Another solution is to check ahead in the instruction to make sure that all pages are valid before proceeding with instruction execution

  9. Where a page-fault might occur • Consider an instruction, • ADD A, B, C • add value stored in memory location A to value stored in location B storing result in location C • Page fault could occur when instruction is fetched • Page fault could occur when either A or B are fetched • Page fault could occur when storing in C • Therefore, a page fault could occur after any portion of the instruction has been executed

  10. Demand Paging Performance • Access time = (1-p)*ma + p*pft • p = probability of a page fault occurring • ma = memory access time • pft = page fault time (including interrupt handling, finding free frame space, disk access and transfer time) • As shown on page 299, most of the time is taken by the disk access, causing a large degradation in access time • To achieve less than 10% degradation, p must be less than 1 in 2.5 million (i.e. less than 1 page fault per 2.5 million memory accesses) -- very unrealistic!
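The degradation is easy to check with a little arithmetic; the 100 ns memory access time and 25 ms fault-service time below are illustrative assumptions consistent with the 1-in-2.5-million figure, not values stated on this slide:

```python
def effective_access_time(p, ma_ns=100, pft_ns=25_000_000):
    """Expected memory access time in ns: (1 - p)*ma + p*pft,
    where p is the probability that an access page-faults."""
    return (1 - p) * ma_ns + p * pft_ns

# With no faults, the effective access time is just the memory access time.
print(effective_access_time(0.0))      # 100.0 ns

# Even one fault per 1,000 accesses makes memory roughly 250x slower.
print(effective_access_time(0.001))    # about 25,000 ns

# For less than 10% degradation we need EAT < 1.1 * ma:
#   100 + p * (25_000_000 - 100) < 110   =>   p < 10 / 24_999_900
p_max = 10 / (25_000_000 - 100)
print(p_max)                           # about 4e-07: 1 fault in ~2.5 million
```

The disk-service term dominates so completely that p must be tiny before demand paging is transparent to performance.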

  11. Page Replacement • What happens if there are no available free frames? • One or more present pages must be swapped out of memory to free up new frames • What pages are swapped out? How many? Should swap out pages be of the current process or any process? Should the OS always keep a few free frames available?

  12. Page Replacement Algorithms • First-in First-Out • Optimal • Least Recently Used • LRU Approximation • Second-Chance or Enhanced Second-Chance • Counting • Page Buffering

  13. FIFO • Store page numbers in a FIFO queue • When a page is swapped in, it is placed at the rear of the queue • When a page is to be swapped out, use the page at the front of the queue • Simple to implement but usually gives inefficient results (often, a swapped-out page will be needed again shortly) • Using the example reference string (bottom of page 303), FIFO causes 15 faults out of 20 page references

  14. Belady’s Anomaly • An interesting phenomenon can arise using FIFO • Consider the string 1,2,3,4,1,2,5,1,2,3,4,5 • If there are 3 frames available, the number of page faults is 9 • If there are 4 frames available, the number of page faults increases to 10! Why? • We would expect that as the number of available frames increases, page faults would decrease
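The anomaly can be reproduced with a few lines of Python; this is a sketch (the helper name `fifo_faults` is ours) that counts faults for the reference string given above:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement with nframes frames."""
    frames = deque()           # front = oldest page (first in)
    faults = 0
    for page in refs:
        if page in frames:
            continue           # hit: FIFO order is not updated on a hit
        faults += 1
        if len(frames) == nframes:
            frames.popleft()   # evict the page that has been resident longest
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))    # 9 faults
print(fifo_faults(refs, 4))    # 10 faults: Belady's anomaly
```

Adding a fourth frame really does make FIFO fault more often on this string.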

  15. Optimal Algorithm • The best solution is to replace the page that will not be used for the longest period of time • Using the example reference string, after the third page fault (next page is 2) • looking ahead, 7 is not referred to for a long time, and therefore 7 is the best page to replace • This strategy always results in the fewest page faults (9 in the example string) but it is difficult to accurately predict when a page will be needed again
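A brute-force version of the optimal algorithm is feasible only because the whole reference string is known in advance; the sketch below (our own helper, reusing the string from the Belady example since it is given in full here) evicts the resident page whose next use lies farthest in the future:

```python
def opt_faults(refs, nframes):
    """Count faults for the optimal (farthest-future-use) algorithm."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            # Evict the resident page whose next use is farthest away
            # (a page never used again counts as infinitely far).
            def next_use(q):
                future = refs[i + 1:]
                return future.index(q) if q in future else len(refs)
            frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(refs, 3))     # 7 faults: no algorithm can do better
```

FIFO took 9 faults on this string with 3 frames; the optimal algorithm's 7 is the lower bound every practical algorithm is measured against.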

  16. Least Recently Used • Since predicting when a page will next be used is difficult, instead use the idea that currently used pages will be used in the near future, or the least recently used page will probably not be used in the near future • Therefore, replace the page which has not been used in the longest amount of time • How do we keep track of time?

  17. Keeping Track of Time for LRU • Use a counter • each page-table entry has a time-of-use field which is updated every time that page is used (read from or written to) • Two problems: this requires searching every time field to find a victim, and requires a mechanism to handle clock overflow • Use a stack • every time a page is used, remove it and push it on top of the stack, so the bottom of the stack holds the least recently used page • since entries are pulled from the middle, this is not strictly a stack and is best implemented as a doubly linked list
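The stack scheme can be sketched with Python's `OrderedDict`, which plays the role of the doubly linked list: most recently used at one end, least recently used at the other (the helper name `lru_faults` and the reuse of the Belady string are our choices):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count faults for LRU replacement with nframes frames."""
    frames = OrderedDict()                  # order = recency of use
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # re-push on top on every use
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))     # 10 faults
print(lru_faults(refs, 4))     # 8 faults
```

Note that faults fall from 10 to 8 as frames increase: LRU is a stack algorithm, so it cannot exhibit Belady's anomaly.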

  18. LRU Approximation • Add a reference bit to each page table entry • If the page is accessed, the hardware sets the bit • To replace a page, find a page whose reference bit is 0 • Reset all reference bits on occasion • This idea can be expanded to many bits -- for instance, keep an 8-bit history byte per page and, at regular intervals, shift each byte right by one, moving the current reference bit into the high-order position • Replace the page with the smallest history byte (00000000 would represent a page not used in the last 8 shift periods)
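One shift step of this aging scheme fits in a single line; the two-page scenario below is a made-up illustration:

```python
def age(history, referenced):
    """One aging step: shift the 8-bit history right by one and put the
    current reference bit into the high-order position."""
    return (history >> 1) | (0x80 if referenced else 0)

# Page A is referenced in every period, page B only in the first.
a = age(0, True)
b = age(0, True)
for _ in range(3):
    a = age(a, True)
    b = age(b, False)

print(f"{a:08b}")    # 11110000 - used recently and often
print(f"{b:08b}")    # 00010000 - its one use is aging away
```

The page with the numerically smallest history byte is the replacement victim, so recent references (high-order bits) outweigh old ones.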

  19. Second Chance • In addition to the reference bit, add a FIFO queue • Combine the FIFO and reference bit strategy with the addition of a second chance • If the first page in the queue has a set reference bit, clear it and place the page back in the queue • If the first page in the queue has a cleared reference bit, select it for replacement • Thus we are giving pages which have been used in the recent past a second chance
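The victim-selection loop can be sketched as follows (the function name and the three-page example are ours):

```python
def second_chance_victim(queue, ref_bits):
    """Pick a victim from the front of the FIFO queue; a page with a set
    reference bit has the bit cleared and moves to the rear instead
    (its second chance). Mutates queue/ref_bits; returns the victim."""
    while True:
        page = queue.pop(0)
        if ref_bits[page]:
            ref_bits[page] = 0     # use up the second chance
            queue.append(page)     # back to the rear of the queue
        else:
            return page

queue = ['A', 'B', 'C']            # A is the oldest page
ref_bits = {'A': 1, 'B': 0, 'C': 1}
victim = second_chance_victim(queue, ref_bits)
print(victim)                      # B: A was recently used, so it survives
```

If every page's bit is set, the loop clears them all and finally evicts the original front page, degenerating to plain FIFO.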

  20. Favoring Read Pages over Written Pages • If a page has been written to, it must be written back to disk before a new page can be loaded into its frame; however, if the page has not been written to, it can simply be discarded without first being written to disk • Swapping out a modified page therefore requires two disk accesses: one to write the old page, one to read the new page

  21. Enhanced Second Chance • Page tables already store a modify bit, use this with the reference bit as follows • If the reference, modify bits are • (0, 0) - best to replace, not used recently and not modified • (0, 1) - not used recently but has been modified • (1, 0) - used recently but not modified (slightly better to replace than 0,1) • (1, 1) - used recently and modified (worst to replace)

  22. Counting Algorithms • By counting the number of references made to each page, we could replace the • least frequently used page • if a page has not been used often, chances are it won’t be used much again • most frequently used page • the reasoning being that a least frequently used page might have just been swapped into memory and therefore hasn’t had a chance to be accessed much yet

  23. Page Buffering • By always keeping a pool of free frames, a page being swapped into memory does not have to wait for the OS to choose and write out a victim page • always keep some small number of frames available; after one is used, the OS selects a victim and writes it out independently of the running process • by keeping a copy of a swapped-out page in memory, if the OS mistakenly swaps the wrong page out, it is still there until its frame is overwritten (a strategy used by VAX/VMS)

  24. Allocation of Frames • How many frames should be allocated to each process? • Single-user system - easy, the OS gets a set number of frames and the user process gets the rest • Multi-user system - different strategies • Minimum number - the architecture (size and type of instructions) will dictate a minimum -- e.g. if instructions are 2 bytes long with up to 3 2-byte operands, a single instruction can make 4 memory references (the fetch plus 3 operands), each of which might straddle a page boundary, so one process may need at least 8 frames!

  25. Frame Allocation Algorithms • Even split among processes • n processes and m frames means m/n frames each • Proportional allocation is more reasonable • based on the size of each program • Priority allocation might be used • if some jobs (such as real-time jobs) are more important than others • Global vs local allocation • the replaced page might be from the current process (local) or any process (global)
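Proportional allocation is a one-liner plus rounding; the sketch below (our helper; the process sizes and frame count are made-up illustrative numbers) gives each process a share of the m frames proportional to its size s_i, a_i = (s_i / S) * m:

```python
def proportional_allocation(sizes, m):
    """Allocate m frames in proportion to process size: a_i = s_i * m // S,
    with frames lost to rounding handed out to the largest processes."""
    S = sum(sizes)
    alloc = [s * m // S for s in sizes]
    leftover = m - sum(alloc)
    for i in sorted(range(len(sizes)), key=lambda i: -sizes[i])[:leftover]:
        alloc[i] += 1
    return alloc

# A 10-page process and a 127-page process sharing 62 frames.
print(proportional_allocation([10, 127], 62))   # [4, 58]
```

An even split would give each process 31 frames, wildly over-serving the small process; the proportional split matches each process's appetite.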

  26. Thrashing • What happens if a process does not have enough frames allocated to it? • The process might cause a page fault • The OS loads in a new page, possibly replacing one of the pages of the current process • If the process has too few frames, then it might find that it needs the page swapped out in addition to the page swapped in • A process is thrashing if it is spending more time paging than executing

  27. System Thrashing • Consider the following scenario • CPU utilization is down • OS decides to increase multiprogramming by running more processes • In loading more process pages into memory • the OS must remove something from memory (pages from other processes) • More processes require paging, which requires more disk accesses (a bottleneck) • More processes are waiting on disk access, so fewer are running • Fewer processes running means CPU utilization drops further, and the OS decides to increase multiprogramming even more ... • See figure 9.14 p. 318

  28. Preventing Thrashing • Use only local replacement of pages • Provide a process with as many frames as it will require to run without thrashing • Working-Set Strategy can be used to predict how many frames a process needs to execute
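The working set is simply the set of distinct pages touched in the last Δ references; a minimal sketch (the window size Δ=4 and the reference string are made-up values):

```python
def working_set(refs, t, delta):
    """Pages referenced in the most recent delta references ending at
    time t; its size estimates how many frames the process needs now."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 5, 7, 7, 7, 5, 1, 1]
print(working_set(refs, t=6, delta=4))   # {5, 7}

# If the working-set sizes of all processes together exceed the number
# of available frames, thrashing is likely and a process should be
# suspended until demand drops.
```

Choosing Δ is the hard part: too small and the working set misses the current locality; too large and it spans several localities at once.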

  29. Other Considerations • Prepaging • load several initial pages when a process first begins or is restarted to limit the number of page faults • Page size • determine an optimal page size • there really isn’t one, see the discussion in the book • Program structure • by rewriting a 2-d array access to match the order the array is stored in memory, paging behavior improves greatly (see page 325) • I/O interlocks to disallow certain pages (such as I/O buffers) from being swapped out

  30. Demand Segmentation • Similar to Demand Paging except that segments are used instead of pages • Use similar strategies for segment swapping and similar hardware • One problem is fragmentation • Memory compaction can be used whenever there is insufficient free memory for a new segment • Segmentation requires more overhead than paging (because of fragmentation) but this is a better approach than overlays or not having virtual memory
