
Operating Systems


Presentation Transcript


  1. Operating Systems
     Dr. Jerry Shiao, Silicon Valley University

  2. Memory Management
     • For Multiprogramming to improve CPU utilization and user response time, the Operating System must keep several processes in memory.
     • Memory Management Strategies
     • Hardware Support
       • Bind Process Logical Address to Physical Address
       • Memory Management Unit ( MMU )
       • Dynamic / Static Linking
     • Swapping: RR Scheduling Algorithm / Constraints
     • Sharing Operating System Space and User Space: Contiguous Memory Allocation Method
       • Problems: External and Internal Fragmentation
     • Paging: Non-Contiguous Address Space / Fixed-Size Frames
       • Structure of the Page Table
     • Segmentation: Non-Contiguous Address Space / Variable-Size Segments
       • Structure of the Segment Table
     • Intel Pentium Segmentation / Paging Architecture
     • Linux

  3. Memory Management
     • Basic Hardware
       • CPU Access
         • Hardware Registers
           • One clock cycle: decode and operate on registers.
         • Main Memory
           • Many clock cycles: transaction on the memory bus.
           • Cache ( fast memory ) sits between the CPU and main memory.
     • Separation of Memory Space for Processes
       • Base Register: lowest physical memory address.
       • Limit Register: size of the range.
       • Every address generated in user mode is compared against the Base and Limit Registers ( sketch below ).
     [Figure: physical memory 0 – 1024000 with the Operating System in low memory and a process at Base = 300040 with Limit = 120900. The registers are loaded by the Operating System in kernel mode; a user-mode access outside the range traps.]
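     [Sketch] A minimal illustration of the base/limit comparison described above; the register struct and function name are hypothetical, not from any real architecture.

        #include <stdint.h>
        #include <stdbool.h>

        /* Illustrative base/limit check: the hardware performs this comparison
         * on every user-mode memory reference; names are hypothetical. */
        typedef struct {
            uint32_t base;   /* lowest legal physical address, e.g. 300040 */
            uint32_t limit;  /* size of the legal range,       e.g. 120900 */
        } relocation_regs_t;

        bool access_is_legal(const relocation_regs_t *r, uint32_t addr)
        {
            /* legal iff base <= addr < base + limit; otherwise trap */
            return addr >= r->base && addr < r->base + r->limit;
        }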

  4. Memory Management
     • Address Binding
       • Process in the Input Queue; the Operating System loads the process into memory. When does binding occur?
       • Compile Time: physical memory location known. Absolute code ( MS-DOS ).
       • Load Time: physical memory location NOT known until the Loader runs. Relocatable code; the Loader performs the final binding.
       • Execution Time: physical memory location not known until run time. Common method. NOTE: requires hardware assist.
     [Figure: Source Program → Compiler or Assembler ( compile time ) → Object Module → Linkage Editor with Other Object Modules → Load Module → Loader with System Library and Dynamically Loaded System Library ( load time ) → In-Memory Binary Image ( execution time ).]

  5. Memory Management
     • Logical Versus Physical Address Space
       • Logical Address Space: all logical addresses generated by a process.
         • AKA Virtual Address.
       • Physical Address Space: all physical addresses corresponding to the logical addresses ( loaded into the Memory-Address Register ).
     • Memory Management Unit ( MMU ): run-time mapping from virtual to physical addresses.
       • Relocation Register ( i.e. Base Register ).
       • Added to every memory address generated by a process.
       • Logical Address: Base to Max.  Physical Address: ( Relocation + Base ) to ( Relocation + Max ).

  6. Memory Management
     • Memory Management Unit ( MMU )
       • Divides the Virtual Address Space into Pages ( 512 Bytes to 16 MBytes ).
       • Paging: memory-management scheme that permits the physical address space to be noncontiguous.
       • Translation Look-Aside Buffer: cache for logical-to-physical address translation.
     [Figure: the CPU issues Logical Address <346>; the MMU adds the Relocation Register value 14000 to form Physical Address <14346> in memory; a Page Table maps Page #1 – Page #4. A relocation sketch follows below.]
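     [Sketch] A minimal illustration of dynamic relocation, using the relocation value from the slide's example ( 14000 ); the function name is illustrative, not a real MMU interface.

        #include <stdint.h>

        /* Illustrative dynamic relocation: every logical address the process
         * generates is mapped by adding the relocation (base) register.
         * Value follows the lecture example: relocation register = 14000. */
        static uint32_t relocation_register = 14000;

        uint32_t mmu_map(uint32_t logical_addr)
        {
            return relocation_register + logical_addr;  /* 346 -> 14346 */
        }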

  7. Memory Management
     • Dynamic Loading
       • Program routines do NOT reside in memory until referenced.
       • A relocatable routine is loaded when referenced.
       • The Relocatable Linking Loader loads it into memory.
     • Dynamic Linking and Shared Libraries
       • Static Linking
         • Modules from system libraries are handled like process modules.
         • Combined by the loader into the binary program image.
         • Large image, but portable ( all modules self-contained ).
       • Dynamic Linking
         • Linking occurs during execution time.
         • Stub code placed in the program code resolves the library reference or loads the library containing the referenced code ( sketch below ).
         • The stub is replaced with the address of the referenced routine.
         • A shared routine can be accessed by multiple processes.
       • Library Updates
         • Once a shared library is replaced, processes use the updated routine.
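     [Sketch] A minimal illustration of run-time loading and symbol resolution using the POSIX dlopen/dlsym interface; the library name libdemo.so and symbol demo_fn are hypothetical ( link with -ldl on Linux ).

        #include <dlfcn.h>   /* dlopen, dlsym, dlclose (POSIX) */
        #include <stdio.h>

        int main(void)
        {
            /* Load the shared library only when it is actually needed
             * ("libdemo.so" and "demo_fn" are hypothetical names). */
            void *handle = dlopen("libdemo.so", RTLD_LAZY);
            if (!handle) {
                fprintf(stderr, "dlopen failed: %s\n", dlerror());
                return 1;
            }

            /* Resolve the routine's address, much as a linking stub would,
             * then call it through the returned pointer. */
            int (*demo_fn)(void) = (int (*)(void))dlsym(handle, "demo_fn");
            if (demo_fn)
                printf("demo_fn() returned %d\n", demo_fn());

            dlclose(handle);
            return 0;
        }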

  8. Memory Management
     • Swapping
       • A process is swapped from memory to disk ( backing store ).
       • The Round-Robin scheduling algorithm swaps out a process when its quantum expires and swaps in a higher-priority process from the Operating System Ready Queue.
     [Figure: Operating System and User Space in memory; Process P1 is swapped out to the Backing Store ( disk ) and Process P2 is swapped in.]

  9. Memory Management
     • Swapping
       • Swap Considerations:
         • Address Binding Method
           • Load time or compile time: swap the process back into the same memory location.
           • Execution time: the process can be swapped into a different memory location.
         • Backing Store ( Disk )
           • Holds copies of all memory images.
           • Operating System Ready Queue: processes with memory images in the Backing Store or in memory.
         • Context-Switch Time
           • Major latency is transfer time, proportional to the amount of memory swapped.
           • To reduce swap time, with dynamic memory requirements a process requests only the memory it needs and releases memory it no longer uses.
         • I/O Operations into Operating System buffers.
           • A process waiting for I/O operations is swapped out.
           • Transfers between Operating System buffers and process memory occur when the process is swapped back in.

  10. Memory Management
     • Contiguous Memory Allocation
       • Memory Shared by the Operating System and User Processes
       • Memory Divided into Two Partitions:
         • Resident Operating System
           • Low memory ( resides with the Interrupt Vector ).
         • User Processes
           • Each process in a single contiguous section of memory.
       • Memory Mapping and Protection
         • The MMU maps the logical address dynamically using the relocation register and validates the address range with the limit register.
     [Figure: the CPU logical address is compared with the Limit Register; if within range it is added to the Relocation Register to form the physical address in memory, otherwise a trap ( addressing error ) occurs.]

  11. Memory Management
     • Contiguous Memory Allocation
       • Memory Allocation: Loading a Process into Memory
         • The Operating System evaluates the memory requirements of the process and the amount of available memory space.
         • The Operating System initially considers all available memory one large memory block ( i.e. a hole ) for user processes.
         • Eventually, memory contains holes of various sizes.
       • Dynamic Storage Allocation Problem:
         • Satisfy a request of size "n" from the list of free memory blocks ( sketch below ).
         • First Fit: allocate the first memory block large enough for the process.
         • Best Fit: allocate the smallest memory block that is large enough.
         • Worst Fit: allocate the largest memory block.
         • First Fit and Best Fit are faster and manage fragmentation better than Worst Fit.
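     [Sketch] A minimal illustration of First Fit and Best Fit over a free-hole list; the hole structure and list representation are invented for illustration.

        #include <stddef.h>

        /* Free-hole list node (illustrative layout). */
        typedef struct hole {
            size_t       start;   /* starting address of the hole */
            size_t       size;    /* size of the hole in bytes    */
            struct hole *next;
        } hole_t;

        /* First Fit: return the first hole large enough for the request. */
        hole_t *first_fit(hole_t *list, size_t request)
        {
            for (hole_t *h = list; h != NULL; h = h->next)
                if (h->size >= request)
                    return h;
            return NULL;                    /* no hole big enough */
        }

        /* Best Fit: return the smallest hole that is still large enough. */
        hole_t *best_fit(hole_t *list, size_t request)
        {
            hole_t *best = NULL;
            for (hole_t *h = list; h != NULL; h = h->next)
                if (h->size >= request && (best == NULL || h->size < best->size))
                    best = h;
            return best;
        }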

  12. Memory Management
     • Contiguous Memory Allocation
       • Fragmentation
         • As processes are loaded into and removed from memory, memory space is broken into non-contiguous memory blocks.
         • External Fragmentation:
           • Occurs when TOTAL memory space exists to satisfy a request, but the available memory is non-contiguous.
           • 50-Percent Rule: statistically, given N allocated blocks, another 0.5 N blocks are lost to fragmentation.
         • Compaction for External Fragmentation ( expensive )
           • Shuffle memory contents to place free memory in one large block.
           • Only possible if relocation is dynamic and done at execution time.
           • Change the Relocation Register after compaction.
           • Another solution: permit the logical address space of a process to be non-contiguous ( Paging and Segmentation ).
         • Internal Fragmentation
           • Physical memory divided into fixed-size blocks.
           • The memory block allocated to a process is larger than the requested memory; the unused memory is internal to a memory partition.

  13. Memory Management
     • Paging
       • Permits the physical address space of a process to be non-contiguous.
       • Avoids external fragmentation of memory and the need for compaction.
       • Avoids fragmentation of the backing store.
       • Frames: physical memory partitioned into fixed-size blocks ( frames ).
       • Pages: logical memory partitioned into fixed-size blocks ( pages ).
     [Figure: Logical Memory ( Page 0 – Page 3 ) maps into Physical Memory ( Frame 0 – Frame 3 ); the Backing Store is likewise divided into Blocks 0 – 3.]

  14. Memory Management
     • Paging
       • Paging Hardware ( sketch below )
         1) Every logical address generated by the CPU contains two parts: Page Number and Page Offset.
         2) The Page Number indexes into the Page Table, which holds the base address ( frame ) of each page in physical memory.
         3) The base address is combined with the Page Offset to generate the physical memory address.
     [Figure: CPU logical address ( page, offset ) → Page Table entry F → physical address ( frame F, offset ), i.e. addresses F 00…00 through F 11…11 in physical memory.]
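     [Sketch] A minimal illustration of the three translation steps above, assuming 4 KByte pages ( 12-bit offset ) and a simple one-level page table array; names and layout are illustrative.

        #include <stdint.h>

        /* Paging translation sketch, assuming 4 KByte pages (12-bit offset).
         * The page_table array stands in for the per-process page table the
         * hardware walks; contents are illustrative. */
        #define PAGE_SHIFT 12
        #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

        uint32_t translate(const uint32_t *page_table, uint32_t logical)
        {
            uint32_t page   = logical >> PAGE_SHIFT;   /* 1) page number       */
            uint32_t offset = logical & PAGE_MASK;     /*    page offset       */
            uint32_t frame  = page_table[page];        /* 2) page-table lookup */
            return (frame << PAGE_SHIFT) | offset;     /* 3) physical address  */
        }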

  15. Memory Management
     • Paging
       • The Page Table Maps Logical Memory to Physical Memory.
     [Figure: logical memory Pages 0 – 3 map through the Page Table ( page 0 → frame 1, page 1 → frame 4, page 2 → frame 3, page 3 → frame 7 ) into an 8-frame physical memory.]

  16. Memory Management
     • Paging
       • Page Size Defined by Hardware
         • Power of 2: typically 4 KBytes to 8 KBytes.
         • Logical address space = 2^m, page size = 2^n: the high-order m − n bits give the page number and the low-order n bits give the page offset, so there are 2^(m−n) pages.
         • Example: m = 4 ( logical address space = 2^4 = 16 ), n = 2 ( page size = 2^2 = 4 ), number of pages = 2^(4−2) = 4; physical memory = 32 with frame size 4.
       • Separation of the user's view of logically contiguous memory from physically non-contiguous memory.
     [Figure: logical memory, Page Table, and physical memory for the example above.]

  17. Memory Management
     • Paging
       • Address Translation Hardware: uses the Page Table to reconcile the user process view of memory ( contiguous logical address space ) with physical memory.
         • Prevents a user process from accessing memory outside its logical address space.
       • Operating System Manages Physical Memory
         • Frame Table: allocation details of physical memory.
           • Frames allocated, frames available, total frames.
           • Entry for each physical page frame: allocated or free, and the page of the process using the frame.
       • Operating System maintains a copy of the Page Table for each process.
         • Used when the Operating System translates a process logical address to a physical address.
         • Used to update the Frame Table when a process is context-switched into memory.

  18. Memory Management
     • Paging
       • Hardware Support
         • The Page Table must be fast; every access to memory goes through the Page Table.
         • The Page Table can be implemented as a set of registers.
           • Small Page Table ( 256 entries ).
         • The Page Table can be implemented in memory with a Page-Table Base Register ( PTBR ).
           • Large Page Table: going through the PTBR to memory is slow.
         • Page Table entries cached in fast-lookup hardware.
           • Translation Look-Aside Buffer ( TLB ).
           • High-speed memory, holds a few Page Table entries ( 64 to 1024 entries ).
           • TLB Miss: access the Page Table in memory.
           • TLB replacement policy ( LRU, random ).
           • Some TLB entries are "wired down" ( kernel code ).
           • Address-Space Identifiers ( ASIDs ) stored in each TLB entry.
             • Allow multiple processes to share the TLB.
           • Hit Ratio: percentage of times that a page is found in the TLB ( worked example below ).
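     [Worked example] The hit ratio determines the effective access time ( EAT ). Assuming illustrative timings not taken from the slides — a 20 ns TLB lookup, a 100 ns memory access, and an 80% hit ratio:

        % illustrative timings: t_TLB = 20 ns, t_mem = 100 ns, h = 0.80
        \[
        \text{EAT} = h\,(t_{\text{TLB}} + t_{\text{mem}}) + (1-h)\,(t_{\text{TLB}} + 2\,t_{\text{mem}})
        \]
        \[
        \text{EAT} = 0.80 \times 120\,\text{ns} + 0.20 \times 220\,\text{ns} = 140\,\text{ns}.
        \]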

  19. Memory Management
     • Paging
       • TLB Hardware Support
     [Figure: the CPU generates a logical address ( page number, offset ); on a TLB hit the Translation Look-Aside Buffer supplies the frame number directly; on a TLB miss the Page Table in memory is consulted; the frame number plus the offset forms the physical address into physical memory.]

  20. Memory Management
     • Paging
       • Memory Protection with Protection Bits
         • Read-Write, Read-Only, Execute-Only.
         • Valid-Invalid:
           • Tests whether the associated page is in the process's logical address space.
           • The OS allows or disallows access to the page.
         • Hardware trap to the Operating System.

  21. Memory Management
     • Paging
       • Shared Pages: Sharing Common Code
         • One copy of read-only ( reentrant ) code shared among processes ( i.e., text editors, compilers, window systems, run-time libraries ).
         • Shared code must appear in the same location in the logical address space of all processes.
         • Each process page table maps onto the same physical copy of the shared code.
         • Sharing memory among processes using shared pages.
       • Private Code and Data
         • Each process has its own copy of registers.
         • Each process keeps a separate copy of the code and data.
         • The pages for the private code and data can appear anywhere in the logical address space.

  22. Memory Management
     • Paging
       • Shared Pages: Sharing Common Code
     [Figure: three processes map the same editor code pages ( ed 1, ed 2, ed 3 ) to a single set of physical frames, while each keeps a private data page.]

  23. Memory Management
     • Structure of the Page Table
       • Hierarchical Page Tables
         • Problem:
           • Logical Address: 32 bits.
           • Page Size: 4 KBytes ( 2^12 = 4096 ).
           • Page Table Entries: 2^(32 − 12) = 2^20 = 1M entries.
           • Page Table Size: 1M entries × 4 bytes/entry = 4 MBytes of Page Table per process.
         • Two-Level Page Table Scheme ( index extraction sketched below )
           • p1 = index into the Outer Page Table.
           • p2 = displacement within the page of the Outer Page Table.
           • d = physical page offset.
           • 32-bit logical address: | p1 ( 10 bits ) | p2 ( 10 bits ) | d ( 12 bits ) |.
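     [Sketch] A minimal illustration of extracting p1, p2, and d from a 32-bit logical address with the 10/10/12 split above; macro and function names are illustrative.

        #include <stdint.h>

        /* Field extraction for the 10/10/12 two-level split described above
         * (names are illustrative). */
        #define OFFSET_BITS 12
        #define INNER_BITS  10

        static inline uint32_t outer_index(uint32_t va)  /* p1 */
        {
            return va >> (OFFSET_BITS + INNER_BITS);
        }
        static inline uint32_t inner_index(uint32_t va)  /* p2 */
        {
            return (va >> OFFSET_BITS) & ((1u << INNER_BITS) - 1);
        }
        static inline uint32_t page_offset(uint32_t va)  /* d */
        {
            return va & ((1u << OFFSET_BITS) - 1);
        }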

  24. Memory Management
     • Structure of the Page Table
       • Hierarchical Page Tables
         • Two-Level Page Table Scheme
     [Figure: Outer Page Table ( 2^10 = 1024 entries ) → page of the Page Table ( 2^10 = 1024 entries ) → 2^12 = 4096-byte pages in memory.]

  25. Memory Management
     • Hierarchical Page Tables
       • N-Level Page Tables
         • A 64-bit logical address with a 12-bit offset and a 10-bit inner index would leave 2^42 entries for the Outer Page Table, so a two-level page table CANNOT be used.
         • The Outer Page Table would have to be partitioned further.
         • 64-bit logical address: | p1 ( 42 bits ) | p2 ( 10 bits ) | d ( 12 bits ) |.

  26. Memory Management
     • Structure of the Page Table
       • Hashed Page Tables
         • Handle logical address spaces greater than 32 bits ( sketch below ).
           1) The virtual page number ( p ) is hashed into the Hash Page Table.
           2) Each Hash Page Table entry is a chain of elements; the virtual page number ( p ) is compared with each element.
         • Each Hash Page Table entry contains a linked list of all virtual pages that hash to the same index.
         • The matching element contains the frame number, which forms the physical address.
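     [Sketch] A minimal illustration of the hashed lookup described above; the bucket count, structures, and hash function are invented for illustration.

        #include <stdint.h>
        #include <stddef.h>

        /* Hashed page table sketch: each bucket holds a chain of (virtual page,
         * frame) pairs that hash to the same index. Structures are illustrative. */
        #define HASH_BUCKETS 1024

        typedef struct hpt_entry {
            uint64_t          vpage;   /* virtual page number   */
            uint64_t          frame;   /* physical frame number */
            struct hpt_entry *next;    /* chain for collisions  */
        } hpt_entry_t;

        hpt_entry_t *hash_table[HASH_BUCKETS];

        /* Returns the frame for vpage, or -1 if it is not mapped. */
        int64_t hashed_lookup(uint64_t vpage)
        {
            size_t idx = vpage % HASH_BUCKETS;            /* 1) hash the virtual page   */
            for (hpt_entry_t *e = hash_table[idx]; e; e = e->next)
                if (e->vpage == vpage)                    /* 2) walk the chain, compare */
                    return (int64_t)e->frame;             /*    match -> frame number   */
            return -1;
        }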

  27. Memory Management
     • Structure of the Page Table
       • Inverted Page Tables
         • A conventional Page Table has an entry for each page that the process uses, since the process references pages through virtual addresses.
           • The Page Table is large: millions of entries.
         • An Inverted Page Table has one entry for each real page ( frame ) of memory.
           • Only one table for the system, NOT one Page Table per process.
         • Virtual Address: < process-id, page-number, offset >
           • The process-id is the address-space identifier.
           • < process-id, page-number > is used to search the Inverted Page Table ( sketch below ).
           • When a match is found at entry < i >, the physical address is < i, offset >, where i is the frame number.
         • Problems:
           • A search for < pid, page > could scan the whole table. Use a hash table to limit the entries searched.
           • Cannot easily share memory between processes, because there is only one virtual page entry for every physical page.
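     [Sketch] A minimal illustration of the inverted-page-table search; the table size and structures are invented for illustration, and a real system would hash rather than scan linearly.

        #include <stdint.h>

        /* Inverted page table sketch: one entry per physical frame, holding the
         * <process-id, page-number> that currently occupies it. Structures are
         * illustrative. */
        #define NUM_FRAMES 4096

        typedef struct {
            uint32_t pid;    /* address-space identifier          */
            uint32_t vpage;  /* virtual page stored in this frame */
            int      valid;
        } ipt_entry_t;

        ipt_entry_t ipt[NUM_FRAMES];

        /* Returns the frame number i whose entry matches <pid, vpage>, or -1. */
        int ipt_lookup(uint32_t pid, uint32_t vpage)
        {
            for (int i = 0; i < NUM_FRAMES; i++)
                if (ipt[i].valid && ipt[i].pid == pid && ipt[i].vpage == vpage)
                    return i;    /* physical address = <i, offset> */
            return -1;           /* not resident */
        }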

  28. Memory Management
     • Segmentation
       • Memory viewed as a collection of variable-sized segments, with no ordering among segments.
       • Each segment has a specific purpose:
         • Main program, subroutine, stack, symbol table, library function.
       • Logical Address Space: a collection of segments.
       • A segment logical address has two parts: < segment-number, offset >
         • Segment-number: identifies the segment.
         • Offset: offset within the segment.
       • The compiler creates segments:
         • Code, global variables, heap, stacks, standard C library.
       • The loader takes the segments and assigns segment numbers.

  29. Memory Management
     • Segmentation
       • Segments do not have to be contiguous in memory.
       • Code sharing at the segment level ( same base in two segment tables ).
       • Each segment has protection bits:
         • Read-Only segment ( code ).
         • Read-Write segment ( data, heap, stack ).
       • Problems:
         • Complex memory allocation ( i.e. First-Fit ).
         • External fragmentation.
     [Figure: segments 1 – 4 of the user space placed non-contiguously in the physical memory space.]

  30. Memory Management
     • Segmentation
       • Segment Table ( translation sketched below )
         • Segment Base: starting physical address where the segment resides in memory.
         • Segment Limit: length of the segment.
         • < segment-number, offset >
           • The segment-number is the index into the Segment Table.
           • The offset is the offset into the segment.
           • The offset must be between 0 and the Segment Limit.
           • The offset is added to the Segment Base to produce the physical address in memory.
       • Segment Table Base Register ( STBR )
         • Points to the Segment Table's location in physical memory; saved in the Process Control Block.
       • Segment Table Length Register ( STLR )
         • Number of segments used by a program.
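     [Sketch] A minimal illustration of the segment-table translation above; the structure layout and function name are illustrative.

        #include <stdint.h>
        #include <stdbool.h>

        /* Segment-table translation sketch (illustrative structures). */
        typedef struct {
            uint32_t base;   /* starting physical address of the segment */
            uint32_t limit;  /* length of the segment                    */
        } segment_entry_t;

        /* Translate <segment-number, offset>; returns false (trap) if the
         * offset is outside the segment. */
        bool seg_translate(const segment_entry_t *seg_table, uint32_t seg,
                           uint32_t offset, uint32_t *physical)
        {
            const segment_entry_t *e = &seg_table[seg];
            if (offset >= e->limit)
                return false;              /* addressing-error trap */
            *physical = e->base + offset;  /* e.g. <2, 53>: 4300 + 53 = 4353 */
            return true;
        }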

  31. Memory Management
     • Segmentation

  32. Memory Management
     • Segmentation
       • Example:
         • Logical Address = < 2, 53 >; Base = 4300: Physical Address = 4300 + 53 = 4353.
         • Logical Address = < 3, 852 >; Base = 3200: Physical Address = 3200 + 852 = 4052.

  33. Memory Management
     • Segmentation
       • Intel Pentium
         • Supports pure segmentation and segmentation with paging.
         • Logical Address to Physical Address Translation:
           1) The CPU generates a logical address.
           2) The logical address is passed to the Segmentation Unit, which generates a linear address.
           3) The linear address is passed to the Paging Unit, which generates the physical address in memory.
         • One Segment Table per process.
         • One Page Table per segment.

  34. Memory Management
     • Segmentation
       • Intel Pentium
         • Segment maximum size: 4 GBytes.
         • Segments per process: 16K.
         • Process Logical Address Space:
           • First partition ( private ): 8K segments.
             • Local Descriptor Table ( LDT ) describes this partition.
           • Second partition ( shared ): 8K segments.
             • Global Descriptor Table ( GDT ) describes this partition.
         • LDT and GDT: Segment Descriptors with Base location and Limit.
         • Logical Address: < Selector, Offset >, where the selector is | s ( 13 bits ) | g ( 1 bit ) | p ( 2 bits ) |
           • s : segment number,  g : GDT or LDT segment,  p : protection ( decoding sketched below ).
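     [Sketch] A minimal illustration of decoding the selector layout shown above ( 13-bit segment number, 1-bit GDT/LDT indicator, 2-bit protection field ); the struct and function names are illustrative.

        #include <stdint.h>

        /* Decode a segment selector laid out as on the slide:
         * s = 13-bit segment number, g = 1 bit (GDT or LDT), p = 2-bit protection.
         * Names are illustrative. */
        typedef struct {
            uint16_t s;  /* segment number (index into GDT or LDT) */
            uint16_t g;  /* GDT or LDT indicator                   */
            uint16_t p;  /* protection field                       */
        } selector_t;

        selector_t decode_selector(uint16_t sel)
        {
            selector_t out;
            out.s = sel >> 3;          /* upper 13 bits   */
            out.g = (sel >> 2) & 0x1;  /* table indicator */
            out.p = sel & 0x3;         /* low 2 bits      */
            return out;
        }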

  35. Memory Management
     • Segmentation
       • Intel Pentium

  36. Memory Management
     • Segmentation
       • Intel Pentium Paging
         • Page size of 4 KBytes or 4 MBytes.
         • 4 KByte Pages: two-level paging scheme.
           • p1: Outermost Page Table ( Page Directory ) index.
           • p2: Inner Page Table index.
           • d: offset within the 4 KByte page.
         • Page Directory Table
           • Page Size flag: set = 4 MByte page frame, not set = 4 KByte page frame.
           • A 4 MByte page frame bypasses the Inner Page Table, and the offset is 22 bits.

  37. Memory Management
     • Segmentation
       • Intel Pentium Paging

  38. Memory Management
     • Segmentation
       • Intel Pentium Segmentation with Paging
         • Two levels of mapping: Segment Table and Page Tables.
           • A process has variable-size segments.
           • Each segment is divided into small fixed-size pages.
           • Eliminates the external fragmentation problem of segmentation.
         • Logical Address: < Segment, Page, Offset >
           • One Segment Table per process.
           • One Page Table per segment.
         • Two levels of sharing: segment and page.
           • Share a physical frame by having the same frame referenced in two page tables.
           • Share a segment by having the same base in two segment tables.
           • Segment protection bits ( Sharing, ReadOnly, ReadWrite ).

  39. Memory Management
     • Segmentation
       • Linux on the Intel Pentium
         • Linux uses 6 segments:
           • Kernel Code Segment
           • Kernel Data Segment
           • User Code Segment
             • Shared by all processes in user mode.
             • All processes use the same logical address space.
             • Segment Descriptors are in the Global Descriptor Table ( GDT ).
           • User Data Segment
           • Task-State Segment ( TSS )
             • Stores the hardware context of each process during context switches.
           • Default Local Descriptor Table ( LDT )
         • Linux uses three-level paging for both 32-bit and 64-bit architectures.

  40. Memory Management
     • Segmentation
       • Linux on the Intel Pentium
         • Each task has its own set of Page Tables.
         • The CR3 register points to the Global Directory of the task currently executing.
         • The CR3 register is saved in the TSS segment of the task during a context switch.

  41. Memory Management
     • Summary
       • Different Memory-Management Algorithms:
         • Contiguous Allocation: each process in a single contiguous section of memory.
         • Paging.
         • Segmentation.
         • Segmentation with Paging.
       • Comparison Considerations:
         • Hardware Support: Base-Limit register pair; MMU for Paging and Segmentation.
         • Performance: the more complex the scheme, the longer mapping a logical address to a physical address takes. The Translation Look-Aside Buffer ( TLB ) is a fast-lookup hardware cache that maps a page to its frame number.
         • Fragmentation: fixed-size allocation ( Paging ) has internal fragmentation; variable-size allocation ( Segmentation ) has external fragmentation.
         • Relocation: compaction for external fragmentation.

  42. Memory Management
     • Summary
       • Comparison Considerations ( continued ):
         • Swapping: essential for allowing more processes to run than can fit into memory at one time.
         • Sharing: code and data shared among different users through shared segments or shared pages.
         • Protection: with paging or segmentation, different sections of a user program can be declared execute-only, read-only, or read-write.
       • Linux uses segmentation only minimally:
         • Segments for Kernel Code and Kernel Data.
         • Segments for User Code and User Data.
         • Task-State Segment ( TSS ).
         • Default LDT Segment.
       • Linux mainly uses paging:
         • The User Code and User Data Segments are shared by all processes.
         • All Segment Descriptors are in the Global Descriptor Table.
         • The TSS stores the hardware context of each process during a context switch.
         • The default LDT Segment is normally not used.
