
CSCI 4334 Operating Systems Review: Final Exam


Presentation Transcript


  1. CSCI 4334 Operating Systems Review: Final Exam

  2. Review • Chapters 1 ~ 12 in your textbook • Lecture slides • In-class exercises 1~10 (on the course website) • Review slides • Review for 2 mid-term exams • Review for final exam

  3. Review • 5 questions (100 points) + 1 bonus question (20 points) • Process scheduling, concurrent processing • Memory allocation, memory management policy • I/O device management • Question types • Q/A

  4. Time & Place & Event • 1:15pm ~ 3pm, May 8 (Tuesday) • ENGR 1.272 • Closed-book exam

  5. Basic OS Organization • [Figure: the file manager, process/thread & resource manager, memory manager, and device manager layered above the processor(s), main memory, and devices]

  6. Chapter 5: Device Management • Abstract all devices (and files) to a few interfaces • Device management strategies • Buffering & Spooling • Access time of disk device (see lecture slides and exercises) • FCFS, SSTF, Scan/Look, Circular Scan/Look • See examples in Exercise (2)

  7. Randomly Accessed Devices • [Figure: (a) multi-surface disk, (b) disk surface, (c) cylinders. A cylinder is the set of tracks at one arm position; each track is divided into sectors]

  8. Optimize Access Times • Several techniques used to try to optimize the time required to service an I/O request • FCFS – first come, first served • Requests are serviced in order received • No optimization • SSTF – shortest seek time first • Find track “closest” to current track • Could prevent distant requests from being serviced

  9. Optimize Access Times – cont. • Scan/Look • Starts at track 0, moving towards last track, servicing requests for tracks as it passes them • Reverses direction when it reaches last track • Look is a variant: it ceases the scan in a direction once the last track request in that direction has been handled • Circular Scan/Look • Similar to scan/look but always moves in same direction; doesn’t reverse • Relies on existence of special homing command in device hardware

  10. Chapter 6: Implementing Processes, Threads, and Resources • Modern process • Concurrent processing • Fully utilize the CPU and I/O devices • Address space • Context switching • State diagram

  11. Abstract Computing Environment • [Figure: the process manager (scheduler, synchronization, deadlock, protection, process descriptions) together with the file manager, device manager, memory manager, and other resource managers presents programs/processes with an abstract computing environment over the CPU, memory, devices, and other hardware]

  12. Chapter 7: Scheduling • Scheduling methods (see lecture slides and exercises) • First-come, first served (FCFS) • Shortest job first (SJF), also called shortest job next (SJN) • Higher-priority jobs first • Job with the closest deadline first

  13. Process Model and Metrics • P will be a set of processes, p0, p1, ..., pn-1 • S(pi) is the state of pi {running, ready, blocked} • τ(pi), the service time • The amount of time pi needs to be in the running state before it is completed • W(pi), the waiting time • The time pi spends in the ready state before its first transition to the running state • TTRnd(pi), turnaround time • The amount of time between the moment pi first enters the ready state and the moment the process exits the running state for the last time

  14. Chapter 8: Basic Synchronization • Critical sections • Mutual exclusion: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section • Requirements for Critical-Section Solutions • Mutual Exclusion • Progress • Bounded Waiting

  15. Chapter 8: Basic Synchronization (cont'd) • Semaphore and its P() and V() functions • Definition • Usage

  16. The Critical-Section Problem – cont. • Structure of process Pi:
      repeat
          entry section
          critical section
          exit section
          remainder section
      until false;

  17. Requirements for Critical-Section Solutions • Mutual Exclusion. • If process Pi is executing in its critical section, then no other processes can be executing in their critical sections. • Progress • If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely. • Bounded Waiting • A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

  19. Dijkstra Semaphore • A semaphore, s, is a nonnegative integer variable that can only be changed or tested by these two indivisible (atomic) functions:
      P(s): [while (s == 0) {wait}; s = s - 1]
      V(s): [s = s + 1]

  20. Solving the Canonical Problem
      semaphore mutex = 1;
      fork(proc_0, 0);
      fork(proc_1, 0);

      proc_0() {
          while (TRUE) {
              <compute section>;
              P(mutex);
              <critical section>;
              V(mutex);
          }
      }

      proc_1() {
          while (TRUE) {
              <compute section>;
              P(mutex);
              <critical section>;
              V(mutex);
          }
      }

  21. Shared Account Balance Problem
      semaphore mutex = 1;
      fork(proc_0, 0);
      fork(proc_1, 0);

      proc_0() {
          . . .
          P(mutex);              /* enter the CS */
          balance += amount;
          V(mutex);
          . . .
      }

      proc_1() {
          . . .
          P(mutex);              /* enter the CS */
          balance -= amount;
          V(mutex);
          . . .
      }

  22. Chapter 9: High-Level Synchronization • Simultaneous semaphore • Psimultaneous(S1, ...., Sn) • Event • wait() • signal() • Monitor • What is a condition variable? How is it used? • What are the 3 operations on a condition variable?

  23. Chapter 10: Deadlock • 4 necessary conditions of deadlock • Mutual exclusion • Hold and wait • No preemption • Circular wait • How to deal with deadlock in OS • Prevention • Avoidance • Recovery

  24. Chapter 11: Memory Management • Functions of memory manager • Abstraction • Allocation • Address translation • Virtual memory • Memory allocation (see lecture slides and exercises) • Fixed partition method • First-fit • Next-fit • Best-fit • Worst-fit • Variable partition method

  25. Chapter 11: Memory Management (cont'd) • Dynamic address space binding • Static binding • Dynamic binding

  26. Primary & Secondary Memory • Primary memory (executable memory, e.g. RAM): the CPU can load/store directly and the control unit executes code from it; transient storage • Secondary memory (e.g. disk or tape): accessed using I/O operations; persistent storage • Information can be loaded statically or dynamically

  27. Functions of Memory Manager • Allocate primary memory space to processes • Map the process address space into the allocated portion of the primary memory • Minimize access times using a cost-effective amount of primary memory • May use static or dynamic techniques

  28. Memory Manager • Only a small number of interface functions provided – usually calls to: • Request/release primary memory space • Load programs • Share blocks of memory • Provides the following • Memory abstraction • Allocation/deallocation of memory • Memory isolation • Memory sharing

  29. Memory Abstraction • Process address space • Allows a process to use an abstract set of addresses to reference physical primary memory • [Figure: the process address space maps onto hardware primary memory, or onto objects other than memory]

  30. Issues in a memory allocation algorithm • Memory layout / organization • How to divide the memory into blocks for allocation? • Fixed partition method: divide the memory once, before any bytes are allocated. • Variable partition method: divide it up as you are allocating the memory. • Memory allocation • Select which piece of memory to allocate to a request • Memory organization and memory allocation are closely related

  31. Fixed-Partition Memory Mechanism • [Figure: memory is divided once into the operating system area plus Regions 0–3 of fixed sizes N0–N3; a process pi needing ni units is placed in a region whose size is at least ni]

  32. Which free block to allocate • How to satisfy a request of size n from a list of free blocks • First-fit: Allocate the first hole that is big enough • Next-fit: Like first-fit, but resume the search from where the previous search left off • Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size. Produces the smallest leftover hole. • Worst-fit: Allocate the largest hole; must also search entire list. Produces the largest leftover hole.

  33. Dynamic Address Relocation • Performed automagically by the processor: the CPU adds the relocation register to each relative address to form the physical address placed in the MAR • Example: load R1, 0x02010 with relocation register 0x10000 references physical address 0x12010 • Program loaded at 0x10000 → Relocation Register = 0x10000 • Program loaded at 0x04000 → Relocation Register = 0x04000 • We never have to change the load module addresses!

  34. Swapping – cont. • [Figure: the memory image for pi is swapped out of primary memory while the image for pj is swapped in]

  35. Virtual Memory • A characteristic of programs that is very important to the strategy used by virtual memory systems is spatial reference locality • Refers to the implicit partitioning of code and data segments due to the functioning of the program (portion for initializing data, another for reading input, others for computation, etc.) • Can be used to select which parts of the process should be loaded into primary memory

  36. Chapter 12: Virtual Memory • Address translation between virtual memory and physical memory • Terms • Page • Page frame • Paging algorithms • Static allocation • Dynamic allocation

  37. Chapter 12: Virtual Memory (cont'd) • Fetch policy • When a page should be loaded • Demand paging • Replacement policy (see lecture slides and exercises) • Which page is unloaded • Policies • Random • Belady’s optimal algorithm* • LRU* • LFU • FIFO • LRU approximation • Placement policy • Where a page should be loaded

  38. Chapter 12: Virtual Memory (cont'd) • Thrashing • Load control

  39. Virtual Memory Organization • [Figure: the complete memory image for pi resides in secondary memory; portions are loaded into primary memory as needed]

  40. Virtual Memory – cont • Since binding changes with time, use a dynamic virtual address map, Yt • [Figure: Yt maps the virtual address space into primary memory]

  41. Windows NT Paging System • Reference to Address k in Page i (user space) • Lookup (Page i, Addr k) • Translate (Page i, Addr k) to (Page Frame j, Addr k) • Reference (Page Frame j, Addr k) • [Figure: the virtual address space (supervisor space and user space), primary memory, and the paging disk (secondary memory)]

  42. Size of Blocks of Memory • Virtual memory system transfers “blocks” of the address space to/from primary memory • Fixed size blocks: System-defined pages are moved back and forth between primary and secondary memory • Variable size blocks: Programmer-defined segments – corresponding to logical fragments – are the unit of movement • Paging is the commercially dominant form of virtual memory today

  43. Paging • A page is a fixed size, 2h, block of virtual addresses • A page frame is a fixed size, 2h, block of physical memory (the same size as a page) • When a virtual address, x, in page i is referenced by the CPU • If page i is loaded at page frame j, the virtual address is relocated to page frame j • If page is not loaded, the OS interrupts the process and loads the page into a page frame

  44. Address Translation (cont) • [Figure: the CPU issues a virtual address split into a g-bit page # and an h-bit line #; the page table (the map Yt) translates the page # into a j-bit frame #, raising a missing-page fault if the page is not loaded; the frame # and line # together form the physical address sent to the memory MAR]

  45. Paging Algorithms • Two basic types of paging algorithms • Static allocation • Dynamic allocation • Three basic policies in defining any paging algorithm • Fetch policy – when a page should be loaded • Replacement policy –which page is unloaded • Placement policy – where page should be loaded

  46. Fetch Policy • Determines when a page should be brought into primary memory • Usually don’t have prior knowledge about what pages will be needed • Majority of paging mechanisms use a demand fetch policy • Page is loaded only when process references it

  47. Replacement Policy • When there is no empty page frame in memory, we need to find one to replace • Write it out to the swap area if it has been changed since it was read in from the swap area • Dirty bit or modified bit • pages that have been changed are referred to as “dirty” • these pages must be written out to disk because the disk version is out of date • this is called “cleaning” the page • Which page to remove from memory to make room for a new page • We need a page replacement algorithm

  48. Thrashing Diagram • There are too many processes in memory and no process has enough memory to run. As a result, the page fault rate is very high and the system spends all of its time handling page fault interrupts.

  49. Good Luck! Q/A
