Operating System Concepts and Techniques
Lecture 11: Memory Management-4
M. Naghibzadeh
Reference: M. Naghibzadeh, Operating System Concepts and Techniques, First ed., iUniverse Inc., 2011. To order: www.iUniverse.com, www.barnesandnoble.com, or www.amazon.com
Optimal page replacement policy • Although not practical, there is an optimal algorithm • It removes the page whose next reference lies farthest in the future • Example: with a main memory of three frames and an 11-reference string, the optimal policy gives H=4 hits and M=7 misses, so h=4/11 and m=7/11
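To make the policy concrete, here is a minimal C sketch that simulates optimal (farthest-future) replacement; the reference string, frame count, and names are illustrative choices, not the exact example from the slide.

/* A minimal sketch of the optimal (farthest-future) page replacement
 * policy. The reference string and frame count are made-up values for
 * illustration. */
#include <stdio.h>

#define FRAMES 3
#define REFS   11

int main(void) {
    int refs[REFS] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4};
    int frames[FRAMES];
    int used = 0, hits = 0, misses = 0;

    for (int i = 0; i < REFS; i++) {
        int page = refs[i], found = 0;

        /* Hit: the page is already resident. */
        for (int f = 0; f < used; f++)
            if (frames[f] == page) { found = 1; break; }
        if (found) { hits++; continue; }

        misses++;
        if (used < FRAMES) {            /* a free frame is still available */
            frames[used++] = page;
            continue;
        }

        /* Miss with full memory: evict the resident page whose next
         * reference lies farthest in the future (or never occurs). */
        int victim = 0, farthest = -1;
        for (int f = 0; f < FRAMES; f++) {
            int next = REFS;            /* "never referenced again"        */
            for (int j = i + 1; j < REFS; j++)
                if (refs[j] == frames[f]) { next = j; break; }
            if (next > farthest) { farthest = next; victim = f; }
        }
        frames[victim] = page;
    }

    printf("hits=%d misses=%d hit ratio=%.2f\n",
           hits, misses, (double)hits / REFS);
    return 0;
}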
Multilevel page table • So far, the page table itself was exempt from page removal • With large page tables, memory efficiency suffers • We need a way of knowing which part (page) of the page table is in main memory and which is not • A two-level page table serves this purpose • Details are discussed in the Windows case study
[Figure: memory hierarchy — CPU (registers), cache memory, main memory, secondary storage] Cache memory management • Cache is a read/write random-access memory, similar to main memory but faster • It sits between the CPU and main memory • It is a temporary place for some main memory locations • It can be write-through or non-write-through
Cache memory management… • Most main memory management policies can be adapted to the cache • However, there are special policies too • Information is transferred to the cache in block units; a block is much smaller than a page • Assume 1 MB of cache and a block size of 128 bytes • Main memory is then divided into blocks of 128 bytes • A physical address can then be shown as: row number (12 bits) | column number (13 bits) | offset (7 bits)
Cache management • With this policy, only one block of each column of main memory is in the cache [Figure: the structure used to check whether a physical address is in the cache — a register array holds, for each of the 2^13 cache slots (columns), the row number (0 … 2^12−1) of the block currently stored there; the row number of the physical address is compared with the stored entry for its column: equal means the address is in the cache, otherwise it is not]
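As an illustration of this mapping, here is a minimal C sketch assuming the figures above (1 MB direct-mapped cache, 128-byte blocks, 32-bit physical addresses): it splits an address into row number, column number, and offset, and compares the stored row number for that column, standing in for the register-array comparison in the figure. The names (in_cache, row, valid) and the sample address are made up for illustration.

#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 7            /* 128-byte blocks            */
#define COLUMN_BITS 13           /* 2^13 = 8192 cache slots    */

static uint16_t row[1 << COLUMN_BITS];   /* row number stored per slot  */
static uint8_t  valid[1 << COLUMN_BITS]; /* does the slot hold a block? */

int in_cache(uint32_t paddr) {
    uint32_t column = (paddr >> OFFSET_BITS) & ((1u << COLUMN_BITS) - 1);
    uint32_t rownum = paddr >> (OFFSET_BITS + COLUMN_BITS);
    /* The low 7 bits (offset) only select the byte inside the block. */
    return valid[column] && row[column] == rownum;
}

int main(void) {
    uint32_t a = 0x12345680;
    /* Pretend the block containing address a was loaded earlier. */
    uint32_t col = (a >> OFFSET_BITS) & ((1u << COLUMN_BITS) - 1);
    row[col]   = (uint16_t)(a >> (OFFSET_BITS + COLUMN_BITS));
    valid[col] = 1;

    printf("a+4 in cache?    %s\n", in_cache(a + 4) ? "yes" : "no");    /* same block: hit  */
    printf("a+4096 in cache? %s\n", in_cache(a + 4096) ? "yes" : "no"); /* other block: miss */
    return 0;
}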
Effective access time • Suppose the cache success (hit) rate is 0.9 • Therefore, only 10% of the time does an actual access to main memory take place • If cache access time is, say, 20 nanoseconds and main memory access time is 50 nanoseconds, then on average it takes 0.9*20 + 0.1*(x + 50) nanoseconds to access what we need • x is the time it takes to find out whether what we need is in the cache or not • For x = 5 nanoseconds, the effective memory access time becomes 0.9*20 + 0.1*(5 + 50) = 23.5 nanoseconds; much better than 50 nanoseconds
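The same arithmetic as a tiny C program, using only the numbers from the slide (0.9 hit rate, 20 ns cache access, 50 ns main memory access, x = 5 ns):

#include <stdio.h>

int main(void) {
    double hit_ratio = 0.9;
    double t_cache   = 20.0;   /* ns */
    double t_main    = 50.0;   /* ns */
    double x         = 5.0;    /* ns, time to decide whether the address is cached */

    /* On a hit we pay the cache access; on a miss we pay the decision
     * time plus a main memory access (the slide's formula). */
    double eat = hit_ratio * t_cache + (1.0 - hit_ratio) * (x + t_main);
    printf("effective access time = %.1f ns\n", eat);   /* prints 23.5 ns */
    return 0;
}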
Windows case study • Two-level page table • Page size of 4 KB • An address looks like: partial page table number (bits 31–22) | page number (bits 21–12) | offset (bits 11–0) • It uses a dynamic working set, with a default value at the beginning • It uses per-process page removal • The page removal algorithm is FIFO • It prefetches pages into main memory
[Figure: Address translation in Windows — the directory is indexed by the partial page table number (the leftmost 10 bits of the logical address); if the corresponding partial page table is not present, an interrupt (page fault) occurs; otherwise that partial page table is indexed by the page number; if the page is not present, an interrupt occurs; otherwise the page frame number is combined with the offset to form the physical address]
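A minimal C sketch of this two-level lookup, assuming the 10/10/12-bit split described above; the structures (DirEntry, PTE) and the "present" handling are simplified illustrations, not the actual Windows page-table format.

#include <stdint.h>
#include <stdio.h>

typedef struct {
    int      present;
    uint32_t frame;              /* page frame number */
} PTE;

typedef struct {
    int  present;
    PTE *table;                  /* partial page table: 1024 entries */
} DirEntry;

static DirEntry directory[1024];

/* Returns 1 and fills *paddr on success; 0 means a page fault
 * (missing partial page table or missing page). */
int translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t dir_idx = vaddr >> 22;            /* bits 31..22 */
    uint32_t pt_idx  = (vaddr >> 12) & 0x3FF;  /* bits 21..12 */
    uint32_t offset  = vaddr & 0xFFF;          /* bits 11..0  */

    if (!directory[dir_idx].present)           /* partial page table absent */
        return 0;
    PTE *pte = &directory[dir_idx].table[pt_idx];
    if (!pte->present)                         /* page absent */
        return 0;
    *paddr = (pte->frame << 12) | offset;
    return 1;
}

int main(void) {
    static PTE pt[1024];
    pt[5].present = 1;
    pt[5].frame   = 0x123;                     /* arbitrary frame number */
    directory[1].present = 1;
    directory[1].table   = pt;

    uint32_t va = (1u << 22) | (5u << 12) | 0x34;
    uint32_t pa;
    if (translate(va, &pa))
        printf("0x%08X -> 0x%08X\n", (unsigned)va, (unsigned)pa);
    return 0;
}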
Unix case study • One-level page table • Page size of usually 1 KB • An address consists of a page number and an offset (with 1 KB pages, the offset is the low-order 10 bits of a 32-bit address) • It uses a dynamic working set, with a default value at the beginning • It uses global page removal • The page removal algorithm is a modified version of the clock algorithm
Unix page removal algorithm • UNIX likes to keep a certain minimum number of free page frames at all times, minfree (or low water level) • If the number of free frames falls below it, the page daemon tries to free page frames until the count reaches lotsfree (or high water level) • Think of refilling a water tank when the water level is low • It uses a modified version of the clock algorithm to decide which pages to remove (see the sketch below) • More than one page may be removed each time the page daemon is activated
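Below is a simplified C sketch of a clock-style (second-chance) scan such as a page daemon might run; it illustrates the basic clock idea with made-up frame counts and thresholds, not UNIX's actual modified (two-handed) implementation.

#include <stdio.h>

#define NFRAMES  16
#define MINFREE   2
#define LOTSFREE  5

struct frame { int in_use; int referenced; };

static struct frame frames[NFRAMES];
static int hand = 0;                 /* clock hand position */

static int count_free(void) {
    int n = 0;
    for (int i = 0; i < NFRAMES; i++)
        if (!frames[i].in_use) n++;
    return n;
}

/* One activation of the daemon: scan at most NFRAMES frames,
 * clearing reference bits and freeing unreferenced frames,
 * until the free count reaches lotsfree. */
void page_daemon(void) {
    int scanned = 0;
    while (count_free() < LOTSFREE && scanned < NFRAMES) {
        struct frame *f = &frames[hand];
        if (f->in_use) {
            if (f->referenced)
                f->referenced = 0;   /* give it a second chance */
            else
                f->in_use = 0;       /* not referenced lately: free it */
        }
        hand = (hand + 1) % NFRAMES;
        scanned++;
    }
}

int main(void) {
    for (int i = 0; i < NFRAMES; i++) {
        frames[i].in_use = 1;
        frames[i].referenced = (i % 3 != 0);   /* some pages recently used */
    }
    if (count_free() < MINFREE)      /* low water level reached */
        page_daemon();
    printf("free frames after daemon run: %d\n", count_free());
    return 0;
}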
Unix page removal algorithm… • If many frames are scanned by the page daemon but the number of free frames has not reached lotsfree, the process swapper complements the page removal process • The process swapper swaps out some processes • These processes are swapped back in when the situation improves • With this in mind, take one more look at the UNIX state transition diagram
Summary • Cache memory is part of virtually all computers; memory management would be incomplete without discussing the cache and the way it is managed • It improves effective memory access time, depending on the cache hit ratio • Two case studies were discussed • UNIX uses special concepts such as process swapping, lotsfree, and minfree • Windows, on the other hand, uses two-level page-table memory management, which allows virtual memory for page tables
Find out • A page replacement policy that always gives the highest number of page faults • The total size of the directory and partial page tables for a 4 GB program in Windows • The effective memory access time in Windows memory management with a two-level page table and cache memory for address translation • In UNIX, the rationale for stopping the page daemon when "enough" frames have been scanned but not enough pages have been freed • The criteria for selecting a process to swap out in UNIX, and for swapping it back in