Virtual Memory (Ch. 9)
Look at two logically identical Java demo programs: workset1 and workset2. • The same programs can be run in Eclipse. • Run each and explain the difference.
Array accesses use the paging techniques described previously: • However, when the page table is accessed there are 2 possibilities: • valid bit set to 1 and the entry contains a frame #: • proceed as before
valid bit set to 0: • page not in memory • page fault • the OS must do an I/O to fetch the page and store it into a frame. • Fig. 9.5
Page fault causes the following: • OS trap • save registers and process state • determine the cause of the trap
find the page's location on disk • initiate a read from the disk • the request sits in a queue until acted on • wait for the disk's seek and rotational latency • transfer the data from disk to an OS memory buffer
meanwhile, allocate the CPU to another process • get an interrupt from the controller (I/O done) • save registers and process state for the currently running process (context switch) • determine the cause of the interrupt • update the page table
wait for the CPU to be allocated to this process • restore registers and process state. • Fig. 9.6
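The valid-bit check and the fault path can be sketched in user space. This is only an illustrative simulation, not OS code: the page table, backing store, and frame pool here are all just arrays.

```c
#include <stdio.h>
#include <string.h>

#define NPAGES   4
#define NFRAMES  4
#define PAGESIZE 256

struct pte { int valid; int frame; };               /* one page-table entry */

static struct pte page_table[NPAGES];
static char memory[NFRAMES][PAGESIZE];              /* simulated physical frames */
static char backing_store[NPAGES][PAGESIZE];        /* simulated disk image */
static int  next_free_frame = 0;

/* Translate a page number to a frame, simulating a page fault on first touch. */
static int get_frame(int page) {
    if (page_table[page].valid)
        return page_table[page].frame;              /* valid bit = 1: proceed as before */

    printf("page fault on page %d\n", page);        /* valid bit = 0: "read" from backing store */
    int frame = next_free_frame++;                  /* assumes a free frame is available */
    memcpy(memory[frame], backing_store[page], PAGESIZE);
    page_table[page].valid = 1;                     /* update the page table ... */
    page_table[page].frame = frame;                 /* ... then let the access resume */
    return frame;
}

int main(void) {
    strcpy(backing_store[2], "hello from page 2");
    int frame = get_frame(2);                       /* first access: page fault */
    printf("%s (frame %d)\n", memory[frame], frame);
    get_frame(2);                                   /* second access: no fault */
    return 0;
}
```

Running it prints one "page fault" line for the first access to page 2 and none for the second.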
Demand paging: • pages fetched as needed (most common) • Anticipatory paging: • requested page and nearby ones all fetched.
Effective access time (EAT): • p = probability of a page fault • EAT = (1 - p) * memory access time + p * page fault time • Some numbers on page 357.
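A quick calculation with illustrative numbers in the spirit of those on page 357 (a 200 ns memory access, an 8 ms page-fault service time, one fault per 1,000 accesses):

```c
#include <stdio.h>

int main(void) {
    double mem_ns   = 200.0;          /* memory access time (illustrative) */
    double fault_ns = 8000000.0;      /* page-fault service time: 8 ms     */
    double p        = 0.001;          /* one fault per 1,000 accesses      */

    double eat = (1 - p) * mem_ns + p * fault_ns;
    printf("EAT = %.1f ns\n", eat);   /* about 8200 ns: faults dominate    */
    return 0;
}
```

Even at p = 0.001 the effective access time is roughly 8.2 microseconds, about 40 times slower than a plain memory access, which is why the fault rate must be kept very low.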
fork() command • traditionally copies all of the parent's pages to the child • copy on write: initially parent and child share pages; if a shared page is written to, the page is copied and mapped into the writer's virtual space • vfork() avoids even that: parent and child share the address space outright, and the parent is suspended until the child calls exec() • look at the demo fork.c: • replace fork() with vfork() and note the difference.
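The fork.c demo itself is not reproduced here; the following is a minimal sketch in the same spirit. With fork(), the child writes to its own (copy-on-write) copy of the page, so the parent still sees 0; swapping in vfork() puts the write in the shared address space, so the parent sees 42. (Strictly, POSIX only sanctions calling exec or _exit after vfork(), so this is for demonstration only.)

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 0;                      /* data page duplicated (copy-on-write) by fork() */

int main(void) {
    pid_t pid = fork();             /* try vfork() here and compare the output */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                 /* child: write to the page */
        value = 42;
        _exit(0);
    }
    wait(NULL);                     /* parent: with fork(), the child's write stays in its copy */
    printf("parent sees value = %d\n", value);
    return 0;
}
```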
Page replacement: • During a page fault, an incoming page may have to replace a page currently in memory. • Which one? • Note: a replaced page must be written back to disk only if it was modified. • The page table has a "dirty bit" to indicate an updated page • (Figs 9.9-9.10)
Replacement strategies • FIFO: Figure 9.12 • does not always perform well (the replaced page could be one that is used throughout the process) • more frames => fewer page faults in general: • empirical data demonstrates this (Fig 9.11)
However, this cannot be proved for FIFO • it CAN be disproven: • Belady's anomaly: • Figures 9.12-9.13 • Another example: assume 3 or 4 frames and examine the page reference string 1-2-3-4-1-2-5-1-2-3-4-5. More faults with 4 frames! (See the simulation sketch below.)
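A small FIFO simulation (a sketch, not from the book) reproduces the anomaly for this reference string: 9 faults with 3 frames but 10 faults with 4.

```c
#include <stdio.h>

/* Count FIFO page faults for a reference string with a given number of frames. */
static int fifo_faults(const int *refs, int n, int nframes) {
    int frames[16];                            /* resident pages (nframes <= 16 assumed) */
    int used = 0, next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;

        faults++;
        if (used < nframes)
            frames[used++] = refs[i];          /* free frame available */
        else {
            frames[next] = refs[i];            /* evict the oldest resident page */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof refs / sizeof refs[0];
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
    return 0;
}
```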
optimal replacement (replace page not needed for the longest time) • Fig 9.14 • difficult to implement • don’t care replacement (replace any page) • easy to implement. • No attempt at achieving better performance.
LRU replacement (replace page least recently used) • Fig. 9.15 • somewhat common • Can implement by storing a logical clock value (really just a counter) in page table entry. • Can implement by maintaining a stack of page numbers.
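A sketch of the logical-clock implementation: each resident page records the time of its last use, and the victim is the page with the smallest time stamp.

```c
#include <stdio.h>

/* LRU via a logical clock: on a fault with no free frame,
   evict the resident page with the smallest last-use time. */
static int lru_faults(const int *refs, int n, int nframes) {
    int page[16], last[16];          /* resident page numbers and last-use times */
    int used = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int j, victim = 0;
        for (j = 0; j < used; j++)
            if (page[j] == refs[t]) { last[j] = t; break; }   /* hit: update clock */
        if (j < used) continue;

        faults++;
        if (used < nframes) { page[used] = refs[t]; last[used++] = t; continue; }
        for (j = 1; j < nframes; j++)
            if (last[j] < last[victim]) victim = j;           /* least recently used */
        page[victim] = refs[t];
        last[victim] = t;
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof refs / sizeof refs[0];
    printf("LRU, 3 frames: %d faults\n", lru_faults(refs, n, 3));
    return 0;
}
```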
NUR (not used recently) approximation • Keep track of a reference (R) bit and a dirty (D) bit for each page. Replace in this order of preference (best victim first): • R=0, D=0 • R=0, D=1 • R=1, D=0 • R=1, D=1 • Periodically reset the R bits
Multiple reference bits • Can keep an 8-bit byte of R-bits, one for each of the previous 8 time intervals. At each new interval, shift the bits right and put the current R bit in the high-order position. • The page with the lowest 8-bit value is the one used least recently.
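A sketch of this aging idea with hypothetical R-bit samples: each interval the history byte is shifted right and the current R bit enters at the high-order end, so recent use outweighs older use.

```c
#include <stdint.h>
#include <stdio.h>

#define NPAGES 4

int main(void) {
    uint8_t history[NPAGES] = {0};
    /* hypothetical R bits observed over 3 timer intervals */
    int r_bits[3][NPAGES] = { {1,0,1,1}, {1,0,0,1}, {0,0,1,1} };

    for (int t = 0; t < 3; t++)
        for (int p = 0; p < NPAGES; p++)        /* shift right, insert current R bit at bit 7 */
            history[p] = (uint8_t)((history[p] >> 1) | (r_bits[t][p] << 7));

    /* the page with the smallest history value was used least recently */
    for (int p = 0; p < NPAGES; p++)
        printf("page %d: history = 0x%02x\n", p, history[p]);
    return 0;
}
```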
Second chance • Use FIFO, but if a page's R bit is 1, clear it and go to the next page in the queue (the page gets a second chance). • Linux and XP use variations of this (depending on the processor)
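A sketch of the victim search: a clock hand sweeps the frames' R bits, clearing 1s until it finds a 0. The R-bit values here are hypothetical.

```c
#include <stdio.h>

#define NFRAMES 4

/* Second chance (clock): the first frame found with R == 0 is the victim;
   frames with R == 1 have the bit cleared and are skipped this pass. */
static int choose_victim(int ref_bit[], int *hand) {
    for (;;) {
        if (ref_bit[*hand] == 0) {
            int victim = *hand;
            *hand = (*hand + 1) % NFRAMES;
            return victim;
        }
        ref_bit[*hand] = 0;                     /* second chance: clear and move on */
        *hand = (*hand + 1) % NFRAMES;
    }
}

int main(void) {
    int ref_bit[NFRAMES] = {1, 1, 0, 1};        /* hypothetical R bits */
    int hand = 0;
    printf("victim frame: %d\n", choose_victim(ref_bit, &hand));  /* frame 2 */
    return 0;
}
```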
Mention least frequently used (LFU) and most frequently used (MFU): neither is common. • Another variation: FIFO, but the released page goes into a free-page queue and may be reclaimed before its frame is allocated to another page; no physical I/O is needed.
Allocation of frames • How many frames per process? • Too few: frequent page faults and poor performance. • May lead to thrashing ("walking the disk drives" - a term from when drives were big and heavy and excessive I/O caused them to vibrate and actually start moving). • Too many: other processes have less room.
Equal allocation: each of n processes gets 1/n of the free frames. • Proportional allocation: let si be the size requirement of process i and S = s1 + s2 + ... + sn. If m is the number of available frames, allocate (si/S) * m frames to process i. • Example: if s1 = 10, s2 = 127, and m = 62, we allocate (10/137) * 62 ≈ 4.5 (4 or 5 frames) to the first process and (127/137) * 62 ≈ 57.5 (57 or 58 frames) to the second.
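The same arithmetic as a tiny program, using the numbers above:

```c
#include <stdio.h>

/* Proportional allocation: process i gets roughly (s_i / S) * m frames. */
int main(void) {
    int s[] = {10, 127};                      /* size requirements from the example */
    int m = 62, S = 0;
    for (int i = 0; i < 2; i++) S += s[i];
    for (int i = 0; i < 2; i++)
        printf("process %d: %.1f frames\n", i + 1, (double)s[i] / S * m);
    return 0;                                 /* prints about 4.5 and 57.5 */
}
```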
Working Set model • spatial and temporal locality patterns • (Figure contrasting a program with good use of locality against one with poor use of locality)
TLBs should accommodate the working set • See Fig 9.22 on p. 380
Page size? • larger page size means fewer frames, a smaller page table, but larger internal fragmentation. • smaller page size means more frames, a larger page table, but less internal fragmentation.
Common sizes range from ½ KB to about 4 KB (modern systems typically use 4 KB or larger). Larger memories make very small pages problematic (the page table itself becomes huge), and internal fragmentation matters less as memory grows.
Memory-mapped files: • Map a disk block to a page in memory • First access results in page fault as usual. • Fault causes a page-sized portion of the file to be brought into physical memory.
Subsequent reads/writes are ordinary memory accesses. • (Figure 9.23, the Java program in Figure 9.25, and the demo)
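The book's example is the Java program in Figure 9.25; an equivalent POSIX sketch in C using mmap() looks like this. It assumes a file named data.txt already exists and is writable.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map a file into memory and modify it through ordinary memory accesses. */
int main(void) {
    int fd = open("data.txt", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat sb;
    if (fstat(fd, &sb) < 0) { perror("fstat"); return 1; }

    char *p = mmap(NULL, sb.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = '#';                    /* first touch may page-fault; later accesses are plain loads/stores */
    printf("%.*s\n", (int)sb.st_size, p);

    munmap(p, sb.st_size);
    close(fd);
    return 0;
}
```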
Kernel memory: • Available frames (for most processes) are kept in a list maintained by the kernel.
The kernel's free-memory pool is separate. Reasons: • kernel code is often not subject to the paging system; the allocator strives for efficiency and minimal internal fragmentation • some hardware must interact directly with physical memory (i.e., no virtual memory) and may need physically contiguous pages • see page 384
Buddy system: • Allocate from a segment only chunks of size 2^p for some p. • A segment is split into two buddies, each half the segment's size. • Each buddy can in turn be split into two smaller buddies, until a chunk just large enough for the request is reached.
Figure 9.26. • Facilitates coalescing unused buddies into larger segments • Internal fragmentation is a problem. • Used in early Linux
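A sketch of the bookkeeping: a request is rounded up to the next power of two, and, because blocks are aligned to their size, a block's buddy address is found by flipping the size bit. The 21 KB request and the offsets here are illustrative.

```c
#include <stdio.h>

/* Round a request size up to the next power of two. */
static unsigned round_up_pow2(unsigned n) {
    unsigned p = 1;
    while (p < n) p <<= 1;
    return p;
}

int main(void) {
    unsigned request = 21 * 1024;                 /* e.g., a 21 KB kernel request */
    unsigned chunk = round_up_pow2(request);      /* satisfied from a 32 KB chunk */
    printf("request %u -> chunk %u\n", request, chunk);

    unsigned offset = 64 * 1024;                  /* a chunk-sized block at offset 64 KB */
    printf("buddy of block at %u is at %u\n", offset, offset ^ chunk);  /* 96 KB */
    return 0;
}
```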
Slab allocation • Slab: one or more physically contiguous pages (usually one). • Cache: one or more slabs • one cache for each OS kernel structure (process descriptors, file objects, semaphores, shared memory, message queues, etc.)
populated by objects of the associated type. • The slab allocator allocates a portion of a slab for a specified object • Slabs are divided into chunks the size of the associated object, so there is no fragmentation • Effective when objects of the same type are allocated and freed frequently.
Windows XP • Demand paging with clustering (retrieves faulting page AND a few pages after it) • Processes initially given a working set minimum and maximum (50 and 345 are cited) • Some book refs: p. 841, figs 22.3 and 22.4, p. 844
Linux • Some book refs: Figs 21.6 and 21.7, p. 805 • Look at vmstat command. Also, top, slabtop, free commands. Also, the Linux file /proc/slabinfo. Note: look for task_struct in the slabtop command results. • While the display is active, run a workset program and watch the values change.