Ch. 4 Memory Management Parkinson’s law: “Programs expand to fill the memory available to hold them.”
Memory hierarchy • Registers • Cache • RAM • How can the OS help processes share RAM? • Hard disk • CD, DVD • Tape
Basic memory management • Types of memory management systems: • Those that move processes back and forth between main memory and disk (swapping and paging). • Those that do not (processes stay in memory until they finish).
Basic memory management • Monoprogramming w/out swapping or paging • only 1 program runs at a time • OS is also in memory • load a program, execute it to completion, load the next • Examples: embedded systems, palmtop computers, early mainframes, MS-DOS
Basic memory management • Multiprogramming w/ fixed partitions • Divide memory into n fixed size partitions • All the same size or some larger, some smaller? • Single job queue or multiple job queues?
Basic memory management • Multiprogramming w/ fixed partitions issues: • Unused partition space is wasted. • Multiple queues • some partitions (often the larger ones) may sit idle while jobs wait in other queues • Single queue • Large partitions get wasted on small jobs • Or, if we favor larger jobs, smaller (often interactive) jobs may be starved.
Modeling multiprogramming Let p be the probability that a process is waiting on I/O. Given n such processes, what is the probability that all n processes are waiting on I/O at the same time? p × p × … × p = p^n Therefore, for a given p and n, the CPU utilization = 1 − p^n (This assumes that the n processes are independent, which is not really the case with 1 CPU or when I/O must be exclusive, but we’ll employ the “ostrich algorithm” and live with it!)
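To make the formula concrete, here is a minimal Python sketch (my own illustration, not part of the original slides; the function name is made up) that evaluates CPU utilization = 1 − p^n:

```python
def cpu_utilization(p, n):
    # p = fraction of time a process waits on I/O, n = number of processes in memory.
    # Utilization = probability that NOT all n processes are waiting on I/O at once.
    return 1 - p ** n

# Example from the next slide: 10 processes, each waiting on I/O 80% of the time.
print(cpu_utilization(0.80, 10))   # ~0.89
```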
CPU utilization • Given that we have enough memory to support 10 processes and each process spends 80% of its time doing I/O, what’s CPU utilization?
CPU utilization • Given that we have enough memory to support 10 processes and each process spends 80% of its time doing I/O, what’s CPU utilization? • Given n=10, p=0.80 • So CPU utilization = 1 − 0.80^10 • (about 0.89, or roughly 90%)
CPU utilization • Suppose we have 32MB. The OS uses 16MB. Each user program uses 4MB and has an 80% I/O wait. • How many users, n, can we support in memory at once? • Given the above n, what is our CPU utilization?
CPU utilization • Suppose we have 32MB. The OS uses 16MB. Each user program uses 4MB and has an 80% I/O wait. • How many users, n, can we support in memory at once? • 4 = (32-16)/4 • Given the above n, what is our CPU utilization? • (1 − 0.80^4) ≈ 0.59, or about 60% • What is our CPU utilization if we add 16MB?
CPU utilization • Now we have 48MB. The OS uses 16MB. Each user program uses 4MB and has an 80% I/O wait. • How many users, n, can we support in memory at once? • 8 = (48-16)/4 • Given the above n, what is our CPU utilization? • (1 − 0.80^8) = 0.83 or 83% • So we went from about 60% to 83% with 16MB more. • What is our CPU utilization if we add another 16MB?
CPU utilization • Now we have 64MB. The OS uses 16MB. Each user program uses 4MB and has an 80% I/O wait. • How many users, n, can we support in memory at once? • 12 = (64-16)/4 • Given the above n, what is our CPU utilization? • (1 − 0.80^12) = 0.93 or 93% • So we went from 83% to 93% with 16MB more.
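The three worked examples above can be reproduced with a short loop. This is just an illustrative sketch; the figures (16MB for the OS, 4MB per process, 80% I/O wait) come from the slides, while the helper name and output format are my own:

```python
def degree_and_utilization(total_mb, os_mb=16, proc_mb=4, p=0.80):
    # Number of user processes that fit in the remaining memory,
    # and the resulting CPU utilization 1 - p^n.
    n = (total_mb - os_mb) // proc_mb
    return n, 1 - p ** n

for total in (32, 48, 64):
    n, u = degree_and_utilization(total)
    print(f"{total}MB -> {n} processes, utilization {u:.2f}")

# Output:
# 32MB -> 4 processes, utilization 0.59
# 48MB -> 8 processes, utilization 0.83
# 64MB -> 12 processes, utilization 0.93
```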
Relocation and protection • Relocation – a program should be able to execute in any partition of memory (starting at any physical address) • Protection – a process should have read/write access to data memory, read access to its own code memory, read access to some parts of the OS, and no access to other parts • Base/limit registers = early method
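A minimal sketch of how base/limit registers provide both properties (illustrative only; the class and names are mine, not from the slides): every address the program generates is checked against the limit register and then offset by the base register.

```python
class BaseLimitMMU:
    # Toy model of base/limit relocation and protection.
    def __init__(self, base, limit):
        self.base = base      # physical address where the partition starts
        self.limit = limit    # size of the partition

    def translate(self, addr):
        # Protection: refuse any access outside this process's partition.
        if not 0 <= addr < self.limit:
            raise MemoryError("protection fault: address outside partition")
        # Relocation: the same program runs correctly at any base address.
        return self.base + addr

mmu = BaseLimitMMU(base=0x40000, limit=0x10000)
print(hex(mmu.translate(0x1234)))   # 0x41234
```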
Swapping • We want to run more processes than memory can hold! 2 solutions: • Swapping • bring in each process in its entirety • run it for a while • put it back on disk • Virtual memory (paging)
Swapping • Memory compaction (analogous to disk defragmentation) • memory may become fragmented into many small holes, so we may have to move all processes down into the lowest memory addresses.
Swapping • What if the memory needs of a process change over time?
Swapping • Memory management – How do we keep track of what memory is being used and what memory is available? • Bitmaps • Linked lists
Swapping: memory management w/ bitmaps • Divide memory into equal-size allocation units (e.g., 1K “chunks”). • Bit = 0 means the chunk is free; bit = 1 means that chunk is in use. • Small chunks -> large bitmap • Large chunks -> small bitmap • Large chunks -> wasted space in the last chunk of each process
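As a sketch of the bitmap approach (assuming 1K allocation units and a plain Python list standing in for the bitmap; the function name is mine), allocating k chunks means scanning for a run of k zero bits, which is what makes bitmap allocation slow:

```python
def allocate(bitmap, units_needed):
    # Find a run of `units_needed` free (0) chunks, mark them used (1),
    # and return the index of the first chunk; return None if no run is found.
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == units_needed:
                for j in range(run_start, run_start + units_needed):
                    bitmap[j] = 1
                return run_start
        else:
            run_len = 0
    return None

bitmap = [1, 1, 0, 0, 0, 1, 0, 0]   # 8 chunks, 1K each
print(allocate(bitmap, 3))           # 2  (chunks 2-4 are now marked in use)
```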
Swapping: memory management w/ linked lists • Linked list of allocated and free memory segments. • Segment = memory used by a process, or a hole (free memory) between processes • Usually sorted by address • May be implemented as one list (of both used and free segments) or as two separate lists
Swapping: memory management w/ linked lists • Allocation methods: • First fit • Next fit • Best fit • Worst fit • Quick fit
Swapping: memory management w/ linked lists • Allocation methods: • First fit • start at beginning and use the first one that fits. • simple • fast • leaves large holes • Next fit • Best fit • Worst fit • Quick fit
Swapping: memory management w/ linked lists • Allocation methods: • First fit • Next fit • continue searching from where FF last left off • slightly worse than FF • Best fit • Worst fit • Quick fit
Swapping: memory management w/ linked lists • Allocation methods: • First fit • Next fit • Best fit • find the closest match (leave the smallest hole) • slower than FF • wastes memory by leaving many small, useless holes • Worst fit • Quick fit
Swapping: memory management w/ linked lists • Allocation methods: • First fit • Next fit • Best fit • Worst fit • always take the largest available hole (so the leftover hole stays large) • not very good in practice • Quick fit
Swapping: memory management w/ linked lists • Allocation methods: • First fit • Next fit • Best fit • Worst fit • Quick fit • keeps separate lists of common hole sizes • search is extremely fast • But when a process terminates (or is swapped out), merging neighboring holes is expensive. • If merging is not done, then fragmentation occurs.
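To make two of the strategies above concrete, here is a sketch of first fit and best fit over a free-hole list (holes represented as (start, length) pairs sorted by address; the names and representation are my own, not from the slides):

```python
def first_fit(holes, size):
    # Take the first hole that is big enough.
    for i, (start, length) in enumerate(holes):
        if length >= size:
            if length == size:
                holes.pop(i)                              # hole used up entirely
            else:
                holes[i] = (start + size, length - size)  # shrink the hole
            return start
    return None

def best_fit(holes, size):
    # Take the smallest hole that still fits (leaves the smallest leftover hole).
    fits = [(length, i) for i, (_, length) in enumerate(holes) if length >= size]
    if not fits:
        return None
    _, i = min(fits)
    start, length = holes[i]
    if length == size:
        holes.pop(i)
    else:
        holes[i] = (start + size, length - size)
    return start

holes = [(0, 5), (10, 2), (20, 8)]        # free segments, in allocation units
print(first_fit(list(holes), 2))          # 0   (carves 2 units out of the 5-unit hole)
print(best_fit(list(holes), 2))           # 10  (exact match, leaves no leftover hole)
```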