EC503 - OPERATING SYSTEMS TOPIC 3 - MEMORY MANAGEMENT
Routines • A routine is any sequence of code that is intended to be called and used repeatedly during the execution of a program / process. • Routines can be divided into two types, namely: • Resident Routines. • Transient Routines. • A resident routine is a library routine (function) that is linked with an application, which includes initialization routines and callable service stubs. This routine resides in memory. • A transient routine is a library routine that is loaded at run time, such as Dynamic Link Library (DLL) routines for interacting with I/O devices. This routine is only loaded when needed.
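To make the idea of a transient routine concrete, here is a minimal C sketch that loads a shared library only when it is needed, using the POSIX dynamic loader. The library name libmylib.so and the function name do_work are hypothetical placeholders, not part of the original slides.

/* Minimal sketch: loading a transient routine at run time with the
 * POSIX dynamic loader. "libmylib.so" and "do_work" are hypothetical. */
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    void *handle = dlopen("libmylib.so", RTLD_LAZY);  /* load only when needed */
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    void (*do_work)(void) = (void (*)(void))dlsym(handle, "do_work");
    if (do_work)
        do_work();            /* call the transient routine */
    dlclose(handle);          /* unload it when no longer needed */
    return 0;
}

On Windows the equivalent calls are LoadLibrary and GetProcAddress.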
FIXED PARTITION MM • Equal-size partitions: • Any process whose size is less than or equal to the partition size can be loaded into an available partition. • If all partitions are full, the operating system can swap a process out of a partition. • A program may not fit in a partition. • The programmer must design the program with overlays. • Main memory use is inefficient. • Any program, no matter how small, occupies an entire partition. • This is called internal fragmentation.
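A small C sketch of equal-size fixed partitions follows; the partition size and process sizes are invented for illustration. Each process occupies a whole partition, so whatever space is left over inside the partition is internal fragmentation.

/* Sketch: equal-size fixed partitions. A process takes a whole
 * partition even if it is smaller, leaving internal fragmentation.
 * Sizes are in KB and are illustrative only. */
#include <stdio.h>

#define NUM_PARTITIONS 4
#define PARTITION_SIZE 512                   /* every partition is 512 KB */

int main(void) {
    int occupied_by[NUM_PARTITIONS] = {0};   /* 0 = free, otherwise process size */
    int process_sizes[] = {100, 480, 300};   /* requests, in KB */
    int n = sizeof(process_sizes) / sizeof(process_sizes[0]);

    for (int i = 0; i < n; i++) {
        if (process_sizes[i] > PARTITION_SIZE) {
            printf("process of %d KB does not fit; overlays needed\n", process_sizes[i]);
            continue;
        }
        for (int p = 0; p < NUM_PARTITIONS; p++) {
            if (occupied_by[p] == 0) {
                occupied_by[p] = process_sizes[i];
                printf("process %d KB -> partition %d, internal fragmentation %d KB\n",
                       process_sizes[i], p, PARTITION_SIZE - process_sizes[i]);
                break;
            }
        }
    }
    return 0;
}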
DYNAMIC MM • Partitions are of variable length and number. • A process is allocated exactly as much memory as it requires. • Eventually, holes appear in memory; this is called external fragmentation. • Compaction must be used to shift processes so they are contiguous and all free memory is in one block.
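The following C sketch (with invented block sizes) shows first-fit allocation over variable-size partitions: after some frees, memory contains holes, and a request can fail even when the total free space is large enough, which is exactly the situation compaction fixes.

/* Sketch: variable-size (dynamic) partitioning with first-fit.
 * Freeing blocks leaves holes (external fragmentation); compaction
 * would slide the live blocks together. Values are illustrative. */
#include <stdio.h>

typedef struct { int start, size, free; } Block;

int main(void) {
    /* 1000 KB of memory after some allocations and frees:
       [0..199 used][200..349 hole][350..699 used][700..999 hole] */
    Block mem[] = { {0, 200, 0}, {200, 150, 1}, {350, 350, 0}, {700, 300, 1} };
    int request = 250;                      /* new process needs 250 KB */

    for (int i = 0; i < 4; i++) {
        if (mem[i].free && mem[i].size >= request) {
            printf("first fit: placed %d KB at %d, hole of %d KB remains\n",
                   request, mem[i].start, mem[i].size - request);
            return 0;
        }
    }
    /* total free space may exceed the request, yet no single hole fits:
       that is external fragmentation, and compaction is required */
    printf("no single hole large enough: compaction needed\n");
    return 0;
}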
SEGMENTATION MM • All segments of all programs do not have to be of the same length. • There is a maximum segment length. • Addressing consists of two parts - a segment number and an offset. • Since segments are not of equal length, segmentation is similar to dynamic partitioning.
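A minimal C sketch of segmented address translation, using a made-up segment table: the offset is checked against the segment's length (limit) before being added to its base.

/* Sketch: translating a (segment, offset) pair with a segment table.
 * Base and limit values are invented for illustration. */
#include <stdio.h>

typedef struct { unsigned base, limit; } SegEntry;

int main(void) {
    SegEntry seg_table[] = { {0x1000, 0x400}, {0x6000, 0x200} };
    unsigned segment = 1, offset = 0x1A0;

    if (offset >= seg_table[segment].limit) {
        printf("segmentation fault: offset beyond segment length\n");
        return 1;
    }
    unsigned physical = seg_table[segment].base + offset;
    printf("physical address = 0x%X\n", physical);   /* 0x6000 + 0x1A0 = 0x61A0 */
    return 0;
}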
PAGING MM • Partition memory into small, equal, fixed-size chunks and divide each process into chunks of the same size. • The chunks of a process are called pages and the chunks of memory are called frames. • The operating system maintains a page table for each process: • Contains the frame location for each page in the process. • A memory address consists of a page number and an offset within the page.
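The page/offset split can be sketched in a few lines of C; the 4 KB page size and the page-to-frame mapping below are assumptions for illustration.

/* Sketch: splitting a virtual address into page number and offset,
 * then looking the page up in a per-process page table. */
#include <stdio.h>

#define PAGE_SIZE 4096                     /* 4 KB pages and frames */

int main(void) {
    unsigned page_table[] = {5, 9, 2, 7};  /* page -> frame (invented) */
    unsigned vaddr = 0x1A34;

    unsigned page   = vaddr / PAGE_SIZE;   /* 0x1A34 / 0x1000 = 1 */
    unsigned offset = vaddr % PAGE_SIZE;   /* 0xA34 */
    unsigned frame  = page_table[page];    /* frame 9 */
    unsigned paddr  = frame * PAGE_SIZE + offset;

    printf("virtual 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           vaddr, page, offset, paddr);    /* physical 0x9A34 */
    return 0;
}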
VM MODELS • VM models can be divided into three: • Demand Paging. • Swapping. • Shared VM.
Demand Paging • Bring a page into memory only when it is needed: • Less I/O needed. • Less memory needed. • Faster response. • More users. • The first reference to a page traps to the OS with a page fault. • The OS looks at an internal table to decide: • Invalid reference - abort the process. • Valid, but just not in memory - bring the page in.
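A hedged C sketch of that decision: the page table entry layout and the load_from_disk() helper are invented for illustration, not an actual OS interface.

/* Sketch of the decision an OS makes on a page fault under demand
 * paging: abort on an invalid reference, otherwise bring the page in. */
#include <stdio.h>
#include <stdbool.h>

typedef struct { bool valid_region; bool in_memory; int frame; } PTE;

static int load_from_disk(int page) {      /* placeholder for real disk I/O */
    printf("  paging in page %d from backing store\n", page);
    return 3;                              /* pretend it landed in frame 3 */
}

void handle_page_fault(PTE *pt, int page) {
    if (!pt[page].valid_region) {
        printf("invalid reference: abort process\n");
        return;
    }
    if (!pt[page].in_memory) {             /* valid, just not resident yet */
        pt[page].frame = load_from_disk(page);
        pt[page].in_memory = true;
    }
    printf("page %d now in frame %d; restart the faulting instruction\n",
           page, pt[page].frame);
}

int main(void) {
    PTE pt[4] = { {true, true, 0}, {true, false, -1}, {false, false, -1} };
    handle_page_fault(pt, 1);              /* valid but not in memory */
    handle_page_fault(pt, 2);              /* invalid reference */
    return 0;
}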
Swapping • Processes can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution: • Backing Store - a fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images. • Roll out, roll in - a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed. • The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped. • Modified versions of swapping are found on many systems, e.g. UNIX and Microsoft Windows.
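Because transfer time dominates, swap time scales with the amount of memory moved; the tiny C sketch below works through one assumed example (a 100 MB memory image on a 50 MB/s disk; both figures are invented).

/* Sketch: why swap time is dominated by transfer time. */
#include <stdio.h>

int main(void) {
    double process_mb   = 100.0;   /* memory image to move, in MB (assumed) */
    double disk_mb_s    = 50.0;    /* sustained transfer rate, MB/s (assumed) */
    double one_way_s    = process_mb / disk_mb_s;   /* 2 s to swap out */
    double round_trip_s = 2.0 * one_way_s;          /* 4 s to swap out and back in */
    printf("swap out + swap in ~= %.1f s\n", round_trip_s);
    return 0;
}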
Swapping • Memory allocation changes as processes come into memory and leave memory. • Shaded regions in the figure are unused memory.
Swapping • Allocating space for a growing data segment. • Allocating space for a growing stack and data segment.
Shared Memory • Virtual memory makes it easy for several processes to share memory. • All memory accesses are made via page tables, and each process has its own separate page table. • For two processes sharing a physical page of memory, its physical page frame number must appear in a page table entry in both of their page tables. • The shared physical page does not have to exist at the same place in virtual memory for any or all of the processes sharing it.
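A minimal C sketch of the idea: the same physical frame number appears in two page tables at different virtual page numbers. All values are invented.

/* Sketch: one physical frame shared by two processes, mapped at
 * different virtual page numbers in their separate page tables. */
#include <stdio.h>

#define SHARED_FRAME 42

int main(void) {
    int page_table_A[8] = {0};
    int page_table_B[8] = {0};

    page_table_A[3] = SHARED_FRAME;   /* process A sees it at virtual page 3 */
    page_table_B[6] = SHARED_FRAME;   /* process B sees it at virtual page 6 */

    printf("A: virtual page 3 -> frame %d\n", page_table_A[3]);
    printf("B: virtual page 6 -> frame %d\n", page_table_B[6]);
    printf("same frame, different virtual addresses\n");
    return 0;
}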
PAGED VM • Paged VM can be explained by: • Page Tables. • Dynamic Address Translation. • Paging Supervisor.
Page Tables • Internal operation of the MMU with 16 4-KB pages.
Page Tables • A 32-bit address with two page table fields. • Two-level page tables.
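A small C sketch of how such a 32-bit virtual address is split for a two-level page table with 4 KB pages: two 10-bit indices plus a 12-bit offset. The address value itself is arbitrary.

/* Sketch: splitting a 32-bit virtual address into two 10-bit page
 * table fields (PT1, PT2) and a 12-bit offset. */
#include <stdio.h>

int main(void) {
    unsigned vaddr = 0x00403004u;

    unsigned pt1    = (vaddr >> 22) & 0x3FF;   /* top-level index    */
    unsigned pt2    = (vaddr >> 12) & 0x3FF;   /* second-level index */
    unsigned offset =  vaddr        & 0xFFF;   /* offset within page */

    printf("PT1 = %u, PT2 = %u, offset = 0x%X\n", pt1, pt2, offset);
    /* the top-level entry points to a second-level table, whose entry
       holds the frame number; frame * 4096 + offset is the physical address */
    return 0;
}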
Page Tables • Typical page table entry
Dynamic Address Translation • The relation between virtual addresses and physical memory addresses is given by the page table; translating one into the other at run time is called address translation.
Paging Supervisor • Is the part of the OS that creates and manages page tables. • If the hardware raises a page fault exception, the paging supervisor: • Accesses the secondary storage. • Reads in the page that contains the virtual address that caused the page fault. • Updates the page tables to reflect the physical location of the virtual address. • Tells the translation mechanism to restart the request. • When all physical memory is already in use, the paging supervisor must free a page in primary storage to hold the swapped-in page, using a page replacement algorithm.
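One replacement policy the paging supervisor can use is FIFO; the C sketch below (with invented frame contents) shows how the victim is chosen on a fault when every frame is full.

/* Sketch: a FIFO page-replacement choice, one possible replacement
 * algorithm when all frames are in use. */
#include <stdio.h>

#define NUM_FRAMES 3

int main(void) {
    int frames[NUM_FRAMES] = {7, 0, 1};   /* pages currently resident (invented) */
    int oldest = 0;                       /* FIFO pointer: longest-resident page */
    int incoming_page = 2;                /* page just faulted on */

    printf("evict page %d from frame %d\n", frames[oldest], oldest);
    frames[oldest] = incoming_page;       /* load the new page there */
    oldest = (oldest + 1) % NUM_FRAMES;   /* next victim on the next fault */

    printf("frames now: %d %d %d\n", frames[0], frames[1], frames[2]);
    return 0;
}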
Cache Operation • CPU requests contents of memory location • Check cache for this data • If present, get from cache (fast) • If not present, read required block from main memory to cache • Then deliver from cache to CPU • Cache includes tags to identify which block of main memory is in each cache slot.
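The tag check can be sketched for a direct-mapped cache in C; the cache geometry and the address below are illustrative assumptions, not taken from the slides.

/* Sketch: a direct-mapped cache lookup. The tag check decides hit or
 * miss; on a miss the block would be read from main memory first. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_SLOTS  8          /* cache slots (lines)    */
#define BLOCK_SIZE 16         /* bytes per memory block */

typedef struct { bool valid; unsigned tag; } CacheLine;

int main(void) {
    CacheLine cache[NUM_SLOTS] = {{false, 0}};
    unsigned addr = 0x2A4;

    unsigned block = addr / BLOCK_SIZE;       /* which memory block          */
    unsigned slot  = block % NUM_SLOTS;       /* which cache slot it maps to */
    unsigned tag   = block / NUM_SLOTS;       /* identifies the block        */

    if (cache[slot].valid && cache[slot].tag == tag) {
        printf("hit: deliver from cache\n");
    } else {
        printf("miss: read block %u into slot %u, then deliver\n", block, slot);
        cache[slot].valid = true;             /* cache keeps the tag...       */
        cache[slot].tag   = tag;              /* ...to identify the block held */
    }
    return 0;
}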
Types of CM • CM can be divided into: • CPU cache. • Disk cache. • Web cache. • CPU cache stores copies of data from the most frequently used main memory locations. • Disk cache is used as a buffer between the hard disk and the rest of the computer system.
Types of CM • Web cache is a mechanism for the temporary storage (caching) of web documents, such as HTML pages and images, to reduce bandwidth usage, server load, and perceived lag. • A web cache stores copies of documents passing through it; subsequent requests may be satisfied from the cache if certain conditions are met. • For example, Google's cache link in its search results provides a way of retrieving information from websites that have recently gone down, and of retrieving data more quickly than by clicking the direct link.