
CS 414 Review

Understand the basics of operating systems, including virtual memory, file systems, process management, and device drivers. Dive into OS structure, process states, scheduling, threads, and CPU scheduling techniques. Explore race conditions, atomicity, and solutions like semaphores.


Presentation Transcript


  1. CS 414 Review

  2. Operating System: Definition • An Operating System (OS) provides a virtual machine on top of the real hardware, whose interface is more convenient than the raw hardware interface • [Layered figure: Applications sit on the OS interface; the Operating System sits on the physical machine interface; Hardware is at the bottom] • Advantages: easy to use, simpler to code, more reliable, more secure, … • You can say: “I want to write XYZ into file ABC”

  3. What is in an OS? [Logical OS structure figure: applications such as Quake and SQL Server, system utilities, shells, and windowing & graphics sit above the OS interface; the Operating System itself provides naming, windowing & graphics services, networking, virtual memory, access control, generic I/O, the file system, process management, device drivers, and memory management; below the physical machine interface lie interrupts, caches, physical memory, the TLB, and hardware devices]

  4. Crossing Protection Boundaries • The user calls an OS procedure for “privileged” operations • A user-mode program invokes a kernel-mode service by using a system call • A system call switches execution to kernel mode • [Trap figure: in user mode (mode bit = 1) the user process issues a system call, which traps to kernel mode (mode bit = 0); the kernel saves the caller’s state, executes the system call, restores the state, and returns, resuming the process in user mode (mode bit = 1)]
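
A concrete illustration (not from the slides, assuming a POSIX system): the write(2) call below is a system call, so the program crosses from user mode to kernel mode and back for that one line.

  /* Minimal sketch: a user-mode program requesting a privileged operation
     (writing to a file descriptor) via the write(2) system call. The call
     traps into the kernel, which does the I/O in kernel mode and returns. */
  #include <string.h>
  #include <unistd.h>

  int main(void) {
      const char *msg = "I want to write XYZ into file ABC\n";
      write(STDOUT_FILENO, msg, strlen(msg));   /* user mode -> kernel mode -> user mode */
      return 0;
  }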

  5. What is a process? • The unit of execution • The unit of scheduling • Thread of execution + address space • A program in execution: sequential, instruction-at-a-time execution of a program • The same as “job”, “task”, or “sequential process”

  6. Process State Transitions [State diagram: New →(admitted)→ Ready →(dispatch)→ Running →(done)→ Exit; Running →(I/O or event wait)→ Waiting →(I/O or event completion)→ Ready; Running →(interrupt)→ Ready] • Processes hop across states as a result of: • Actions they perform, e.g. system calls • Actions performed by the OS, e.g. rescheduling • External actions, e.g. I/O

  7. Context Switch • For a running process • All registers are loaded in the CPU and modified • E.g. program counter, stack pointer, general-purpose registers • When the process relinquishes the CPU, the OS • Saves register values to the PCB of that process • To execute another process, the OS • Loads register values from the PCB of that process • Context switch • The act of switching the CPU from one process to another • Which registers must be saved is very machine dependent
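
A minimal sketch, with hypothetical field names, of the kind of per-process record (PCB) the OS saves registers into on a context switch:

  /* Hypothetical, simplified process control block (PCB). On a context
     switch the kernel saves the running process's registers here, then
     loads the next process's saved values. Field names are invented. */
  #include <stdint.h>

  typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

  typedef struct pcb {
      int          pid;        /* process identifier              */
      proc_state_t state;      /* current scheduling state        */
      uint64_t     pc;         /* saved program counter           */
      uint64_t     sp;         /* saved stack pointer             */
      uint64_t     regs[16];   /* saved general-purpose registers */
      struct pcb  *next;       /* link for a ready/device queue   */
  } pcb_t;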

  8. Threads and Processes • Most operating systems support two entities: • the process, • which defines the address space and general process attributes • the thread, • which defines a sequential execution stream within a process • A thread is bound to a single process • For each process, however, there may be many threads • Threads are the unit of scheduling • Processes are containers in which threads execute
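
A small POSIX-threads sketch (the thread count and message are arbitrary) showing several threads executing inside one process and therefore seeing the same global data:

  /* Sketch: one process, several threads. Each thread is its own stream of
     execution, but all of them see the same globals (one address space). */
  #include <pthread.h>
  #include <stdio.h>

  static const char *shared_msg = "visible to every thread in the process";

  static void *worker(void *arg) {
      long id = (long)arg;
      /* All threads read the same global variable at the same address. */
      printf("thread %ld sees \"%s\" at %p\n", id, shared_msg, (void *)&shared_msg);
      return NULL;
  }

  int main(void) {
      pthread_t t[3];
      for (long i = 0; i < 3; i++)
          pthread_create(&t[i], NULL, worker, (void *)i);
      for (int i = 0; i < 3; i++)
          pthread_join(t[i], NULL);
      return 0;
  }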

  9. Schedulers • A process migrates among several queues • Device queues, job queue, ready queue • A scheduler selects a process to run from these queues • Long-term scheduler: • Loads a job into memory • Runs infrequently • Short-term scheduler: • Selects a ready process to run on the CPU • Should be fast • Medium-term scheduler: • Reduces the degree of multiprogramming or memory consumption

  10. CPU Scheduling • FCFS (First-Come First-Served) • LIFO • SJF (Shortest Job First) • SRTF (Shortest Remaining Time First) • Priority Scheduling • Round Robin • Multi-level Queue • Multi-level Feedback Queue
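
As a quick, hedged illustration of why the policy matters, this self-contained sketch (the burst times 24, 3, 3 are made up) compares average waiting time under FCFS order and SJF order when all jobs arrive at time 0:

  /* Sketch: average waiting time under FCFS vs. SJF for three jobs that all
     arrive at time 0. Burst times are invented for the example. */
  #include <stdio.h>
  #include <stdlib.h>

  static double avg_wait(const int *burst, int n) {
      double wait = 0, t = 0;
      for (int i = 0; i < n; i++) { wait += t; t += burst[i]; }
      return wait / n;
  }

  static int by_length(const void *a, const void *b) {
      return *(const int *)a - *(const int *)b;
  }

  int main(void) {
      int fcfs[] = {24, 3, 3};                      /* arrival order      */
      int sjf[]  = {24, 3, 3};
      qsort(sjf, 3, sizeof(int), by_length);        /* shortest job first */
      printf("FCFS average wait = %.1f\n", avg_wait(fcfs, 3));   /* 17.0 */
      printf("SJF  average wait = %.1f\n", avg_wait(sjf, 3));    /*  3.0 */
      return 0;
  }

With these numbers FCFS averages 17 time units of waiting while SJF averages 3, the usual convoy-effect comparison.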

  11. Race conditions • Definition: a timing-dependent error involving shared state • Whether it happens depends on how the threads are scheduled • Hard to detect: • All possible schedules have to be safe • The number of possible schedule permutations is huge • Some bad schedules? Some that will work sometimes? • They are intermittent • Timing dependent = small changes can hide the bug
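
A classic demonstration (not from the slides): two threads increment a shared counter without synchronization; because counter++ is a load, an add, and a store, updates can be lost and the printed total varies from run to run:

  /* Sketch of a race: counter++ is a load, an add, and a store, so two
     threads can interleave and lose updates. The printed total is often
     less than 2000000 and changes from run to run. */
  #include <pthread.h>
  #include <stdio.h>

  static long counter = 0;

  static void *inc(void *arg) {
      (void)arg;
      for (int i = 0; i < 1000000; i++)
          counter++;                     /* unsynchronized shared update */
      return NULL;
  }

  int main(void) {
      pthread_t a, b;
      pthread_create(&a, NULL, inc, NULL);
      pthread_create(&b, NULL, inc, NULL);
      pthread_join(a, NULL);
      pthread_join(b, NULL);
      printf("counter = %ld (expected 2000000)\n", counter);
      return 0;
  }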

  12. The Fundamental Issue: Atomicity • An operation we intend to be atomic is not executed atomically by the machine • Atomic unit: an instruction sequence guaranteed to execute indivisibly • Also called a “critical section” (CS) • When 2 processes want to execute their critical sections, • one process must finish its CS before the other is allowed to enter

  13. Critical Section Problem • Problem: design a protocol for processes to cooperate, such that only one process is in its critical section at a time • How to make multiple instructions seem like one? [Timeline figure: Process 1 executes CS1 and Process 2 executes CS2 in non-overlapping intervals] • Processes progress with non-zero speed; no assumption about clock speeds • Used extensively in operating systems: queues, shared variables, interrupt handlers, etc.

  14. Solution Structure
  Shared vars: . . .
  Initialization: . . .
  Process:
      . . .
      Entry Section
      Critical Section
      Exit Section
      . . .
  The Entry and Exit Sections are what we add to solve the CS problem

  15. Solution Requirements • Mutual Exclusion • Only one process can be in the critical section at any time • Progress • The decision on who enters the CS cannot be indefinitely postponed • No deadlock • Bounded Waiting • A bound on the number of times others can enter the CS while I am waiting • No livelock • Also efficient (no extra resources), fair, simple, …

  16. Semaphores • A non-negative integer with atomic increment and decrement • An integer ‘S’ that (besides initialization) can only be modified by: • P(S) or S.wait(): decrement, or block if already 0 • V(S) or S.signal(): increment, and wake up a waiting process if any • These operations are atomic
  semaphore S;
  P(S) { while (S ≤ 0) ; S--; }
  V(S) { S++; }
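
The busy-waiting definition above only conveys the semantics. A blocking version can be sketched with a pthread mutex and condition variable; the names sem_t_sketch, P and V here are illustrative, not a real library API:

  /* Sketch: a blocking counting semaphore built from a pthread mutex and a
     condition variable. P() sleeps while the count is 0; V() wakes a waiter. */
  #include <pthread.h>

  typedef struct {
      int             value;
      pthread_mutex_t lock;
      pthread_cond_t  nonzero;
  } sem_t_sketch;

  void sem_init_sketch(sem_t_sketch *s, int value) {
      s->value = value;
      pthread_mutex_init(&s->lock, NULL);
      pthread_cond_init(&s->nonzero, NULL);
  }

  void P(sem_t_sketch *s) {                 /* wait / down */
      pthread_mutex_lock(&s->lock);
      while (s->value == 0)
          pthread_cond_wait(&s->nonzero, &s->lock);
      s->value--;
      pthread_mutex_unlock(&s->lock);
  }

  void V(sem_t_sketch *s) {                 /* signal / up */
      pthread_mutex_lock(&s->lock);
      s->value++;
      pthread_cond_signal(&s->nonzero);
      pthread_mutex_unlock(&s->lock);
  }

  int main(void) {
      sem_t_sketch s;
      sem_init_sketch(&s, 1);               /* S = 1: a mutex         */
      P(&s);                                /* enter critical section */
      V(&s);                                /* leave critical section */
      return 0;
  }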

  17. Semaphore Types • Counting semaphores: • Any non-negative integer value • Used for synchronization • Binary semaphores: • Value 0 or 1 • Used for mutual exclusion (mutex)
  Shared: semaphore S;  Init: S = 1;
  Process i: P(S); Critical Section; V(S);

  18. Mutexes and Synchronization
  semaphore S;
  P(S) { while (S ≤ 0) ; S--; }
  V(S) { S++; }
  Process i: P(S); Code XYZ; V(S);
  Process j: P(S); Code ABC; V(S);
  Init: S = 1 → mutual exclusion: code XYZ and code ABC never run at the same time
  Init: S = 0 → synchronization: a process blocks in P(S) until some other process performs V(S)
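
One common reading of the S = 0 case, sketched with POSIX semaphores (the thread names and the XYZ/ABC bodies are placeholders): the semaphore enforces ordering, so code ABC runs only after code XYZ has signalled, whereas S = 1 with the same P/V pair gives plain mutual exclusion.

  /* Sketch: a semaphore initialized to 0 used for ordering. Thread j blocks
     in sem_wait() until thread i finishes "XYZ" and signals. */
  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  static sem_t S;

  static void *thread_i(void *arg) {
      (void)arg;
      printf("XYZ runs first\n");       /* placeholder for code XYZ       */
      sem_post(&S);                     /* V(S): allow ABC to proceed     */
      return NULL;
  }

  static void *thread_j(void *arg) {
      (void)arg;
      sem_wait(&S);                     /* P(S): blocks until XYZ is done */
      printf("ABC runs second\n");      /* placeholder for code ABC       */
      return NULL;
  }

  int main(void) {
      pthread_t ti, tj;
      sem_init(&S, 0, 0);               /* Init: S = 0 for ordering */
      pthread_create(&tj, NULL, thread_j, NULL);
      pthread_create(&ti, NULL, thread_i, NULL);
      pthread_join(ti, NULL);
      pthread_join(tj, NULL);
      return 0;
  }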

  19. Classical Synchronization Problems using Semaphores

  20. Producer-Consumer Problem
  Shared: Semaphores mutex, empty, full;
  Init: mutex = 1;  /* for mutual exclusion */
        empty = N;  /* number of empty buffer entries */
        full  = 0;  /* number of full buffer entries */
  Producer:
  do { . . . // produce an item in nextp
       P(empty); P(mutex);
       . . . // add nextp to buffer
       V(mutex); V(full);
  } while (true);
  Consumer:
  do { P(full); P(mutex);
       . . . // remove item to nextc
       V(mutex); V(empty);
       . . . // consume item in nextc
  } while (true);
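
A runnable sketch of the same structure using POSIX semaphores and a small ring buffer (the buffer size, item type, and iteration count are made up for illustration):

  /* Sketch: bounded-buffer producer/consumer with POSIX semaphores.
     empty counts free slots, full counts filled slots, mutex guards the
     ring buffer. */
  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  #define N 8
  static int buf[N], in = 0, out = 0;
  static sem_t empty, full, mutex;

  static void *producer(void *arg) {
      (void)arg;
      for (int item = 0; item < 20; item++) {
          sem_wait(&empty);             /* P(empty): wait for a free slot   */
          sem_wait(&mutex);             /* P(mutex): enter critical section */
          buf[in] = item; in = (in + 1) % N;
          sem_post(&mutex);             /* V(mutex)                         */
          sem_post(&full);              /* V(full): one more filled slot    */
      }
      return NULL;
  }

  static void *consumer(void *arg) {
      (void)arg;
      for (int i = 0; i < 20; i++) {
          sem_wait(&full);              /* P(full): wait for an item        */
          sem_wait(&mutex);
          int item = buf[out]; out = (out + 1) % N;
          sem_post(&mutex);
          sem_post(&empty);             /* V(empty): one more free slot     */
          printf("consumed %d\n", item);
      }
      return NULL;
  }

  int main(void) {
      pthread_t p, c;
      sem_init(&empty, 0, N); sem_init(&full, 0, 0); sem_init(&mutex, 0, 1);
      pthread_create(&p, NULL, producer, NULL);
      pthread_create(&c, NULL, consumer, NULL);
      pthread_join(p, NULL); pthread_join(c, NULL);
      return 0;
  }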

  21. Readers-Writers Problem • Courtois et al 1971 • Models access to a database • A reader is a thread that needs to look at the database but won’t change it. • A writer is a thread that modifies the database • Example: making an airline reservation • When you browse to look at flight schedules the web site is acting as a reader on your behalf • When you reserve a seat, the web site has to write into the database to make the reservation

  22. Readers-Writers
  Shared variables: Semaphore mutex, wrl;  integer rcount;
  Init: mutex = 1, wrl = 1, rcount = 0;
  Writer:
  do { P(wrl);
       . . . /* writing is performed */ . . .
       V(wrl);
  } while (TRUE);
  Reader:
  do { P(mutex); rcount++;
       if (rcount == 1) P(wrl);
       V(mutex);
       . . . /* reading is performed */ . . .
       P(mutex); rcount--;
       if (rcount == 0) V(wrl);
       V(mutex);
  } while (TRUE);
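
The same protocol in compilable form with POSIX semaphores (the "database" is just an integer and the thread counts are arbitrary):

  /* Sketch: first-readers-preference readers/writers with POSIX semaphores.
     mutex guards rcount; wrl excludes writers while any reader is active. */
  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  static sem_t mutex, wrl;
  static int rcount = 0, database = 0;

  static void *reader(void *arg) {
      long id = (long)arg;
      sem_wait(&mutex);
      if (++rcount == 1) sem_wait(&wrl);   /* first reader locks out writers */
      sem_post(&mutex);
      printf("reader %ld sees %d\n", id, database);   /* reading */
      sem_wait(&mutex);
      if (--rcount == 0) sem_post(&wrl);   /* last reader lets writers in    */
      sem_post(&mutex);
      return NULL;
  }

  static void *writer(void *arg) {
      (void)arg;
      sem_wait(&wrl);
      database++;                          /* writing */
      sem_post(&wrl);
      return NULL;
  }

  int main(void) {
      pthread_t r[3], w;
      sem_init(&mutex, 0, 1); sem_init(&wrl, 0, 1);
      pthread_create(&w, NULL, writer, NULL);
      for (long i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, (void *)i);
      pthread_join(w, NULL);
      for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
      return 0;
  }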

  23. Readers-Writers Notes • If there is a writer • First reader blocks on wrl • Other readers block on mutex • Once a reader is active, all readers get to go through • Which reader gets in first? • The last reader to exit signals a writer • If no writer, then readers can continue • If readers and writers waiting on wrl, and writer exits • Who gets to go in first? • Why doesn’t a writer need to use mutex?

  24. Does this work as we hoped? • If readers are active, no writer can enter • The writers wait doing a P(wrl) • While writer is active, nobody can enter • Any other reader or writer will wait • But back-and-forth switching is buggy: • Any number of readers can enter in a row • Readers can “starve” writers • With semaphores, building a solution that has the desired back-and-forth behavior is really, really tricky! • We recommend that you try, but not too hard…

  25. Common programming errors
  Process i: P(S) CS P(S)
  A typo. Process i will get stuck (forever) the second time it does the P() operation. Moreover, every other process will freeze up too when trying to enter the critical section!
  Process j: V(S) CS V(S)
  A typo. Process j won’t respect mutual exclusion even if the other processes follow the rules correctly. Worse still, once we’ve done two “extra” V() operations this way, other processes might get into the CS inappropriately!
  Process k: P(S) CS
  The V(S) is missing. Whoever next calls P() will freeze up. The bug might be confusing because that other process could be perfectly correct code, yet that’s the one you’ll see hung when you use the debugger to look at its state!

  26. More common mistakes • Conditional code that can break the normal top-to-bottom flow of code in the critical section • Often a result of someone trying to maintain a program, e.g. to fix a bug or add functionality in code written by someone else
  P(S)
  if (something or other) return;
  CS
  V(S)

  27. What’s wrong?
  Shared: Semaphores mutex, empty, full;
  Init: mutex = 1;  /* for mutual exclusion */
        empty = N;  /* number of empty bufs */
        full  = 0;  /* number of full bufs */
  Producer:
  do { . . . // produce an item in nextp
       P(mutex); P(empty);
       . . . // add nextp to buffer
       V(mutex); V(full);
  } while (true);
  Consumer:
  do { P(full); P(mutex);
       . . . // remove item to nextc
       V(mutex); V(empty);
       . . . // consume item in nextc
  } while (true);
  Oops! Even if you do the correct operations, the order in which you do semaphore operations can have an incredible impact on correctness. What if the buffer is full? The producer holds mutex while it blocks in P(empty), so the consumer can never get past P(mutex) to free a slot: deadlock.

  28. Language Support for Concurrency

  29. Monitors • Hoare 1974 • Abstract Data Type for handling/defining shared resources • Comprises: • Shared Private Data • The resource • Cannot be accessed from outside • Procedures that operate on the data • Gateway to the resource • Can only act on data local to the monitor • Synchronization primitives • Among threads that access the procedures

  30. Synchronization Using Monitors • Defines condition variables: • condition x; • Provides a mechanism to wait for events (e.g. a resource becoming available, no more active writers) • 3 atomic operations on condition variables • x.wait(): release the monitor lock, sleep until woken up (condition variables have waiting queues too) • x.notify(): wake one process waiting on the condition (if there is one) • No history associated with the signal • x.broadcast(): wake all processes waiting on the condition • Useful for a resource manager • Condition variables are not Boolean • “If (x) then { … }” does not make sense

  31. Types of Monitors What happens on notify(): • Hoare: signaler immediately gives lock to waiter (theory) • Condition definitely holds when waiter returns • Easy to reason about the program • Mesa: signaler keeps lock and processor (practice) • Condition might not hold when waiter returns • Fewer context switches, easy to support broadcast • Brinch Hansen: signaler must immediately exit monitor • So, notify should be last statement of monitor procedure

  32. Monitor Semantics • Monitors guarantee mutual exclusion • Only one thread can execute a monitor procedure at any time • “in the monitor” • If a second thread invokes a monitor procedure at that time • It will block and wait for entry to the monitor (hence the need for a wait queue) • If a thread within a monitor blocks, another can enter • Effect on parallelism?

  33. Structure of a Monitor
  Monitor monitor_name {
      // shared variable declarations
      procedure P1(. . . .) { . . . . }
      procedure P2(. . . .) { . . . . }
      . .
      procedure PN(. . . .) { . . . . }
      initialization_code(. . . .) { . . . . }
  }
  For example:
  Monitor stack {
      int top;
      void push(any_t *) { . . . . }
      any_t * pop() { . . . . }
      initialization_code() { . . . . }
  }
  Only one instance of stack can be modified at a time

  34. Condition Variables & Semaphores • Condition variables != semaphores • Access to the monitor is controlled by a lock • Wait: blocks the thread and gives up the lock • To call wait, a thread has to be in the monitor, hence holding the lock • Semaphore P() blocks the thread only if the value is 0 • Signal: causes a waiting thread to wake up • If there is no waiting thread, the signal is lost • V() increments the value, so future threads need not wait on P() • Condition variables have no history • However, they can be used to implement each other

  35. Monitor Solutions to Classical Problems

  36. Producer Consumer using Monitors
  Monitor Producer_Consumer {
      char buf[N];
      int n = 0, tail = 0, head = 0;
      condition not_empty, not_full;
      void put(char ch) {
          if (n == N) wait(not_full);
          buf[head % N] = ch;
          head++; n++;
          signal(not_empty);
      }
      char get() {
          if (n == 0) wait(not_empty);
          char ch = buf[tail % N];
          tail++; n--;
          signal(not_full);
          return ch;
      }
  }
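
Under Mesa semantics (the practical case from slide 31), the waits should be re-checked in a loop. A hedged pthread sketch of put/get with that change; the buffer size and the tiny demo in main are arbitrary:

  /* Sketch: the same bounded buffer with a pthread mutex and condition
     variables. Under Mesa semantics the condition is re-checked in a while
     loop, because it may no longer hold when the waiter wakes up. */
  #include <pthread.h>
  #include <stdio.h>

  #define N 16
  static char buf[N];
  static int n = 0, head = 0, tail = 0;
  static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
  static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

  void put(char ch) {
      pthread_mutex_lock(&lock);            /* "enter the monitor" */
      while (n == N)                        /* while, not if       */
          pthread_cond_wait(&not_full, &lock);
      buf[head] = ch; head = (head + 1) % N; n++;
      pthread_cond_signal(&not_empty);
      pthread_mutex_unlock(&lock);          /* "leave the monitor" */
  }

  char get(void) {
      pthread_mutex_lock(&lock);
      while (n == 0)
          pthread_cond_wait(&not_empty, &lock);
      char ch = buf[tail]; tail = (tail + 1) % N; n--;
      pthread_cond_signal(&not_full);
      pthread_mutex_unlock(&lock);
      return ch;
  }

  int main(void) {                          /* tiny single-threaded demo */
      put('o'); put('k');
      char a = get(), b = get();
      printf("%c%c\n", a, b);
      return 0;
  }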

  37. Readers and Writers
  Monitor ReadersNWriters {
      int WaitingWriters, WaitingReaders, NReaders, NWriters;
      condition CanRead, CanWrite;
      void BeginWrite() {
          if (NWriters == 1 || NReaders > 0) {
              ++WaitingWriters;
              wait(CanWrite);
              --WaitingWriters;
          }
          NWriters = 1;
      }
      void EndWrite() {
          NWriters = 0;
          if (WaitingReaders) signal(CanRead);
          else signal(CanWrite);
      }
      void BeginRead() {
          if (NWriters == 1 || WaitingWriters > 0) {
              ++WaitingReaders;
              wait(CanRead);
              --WaitingReaders;
          }
          ++NReaders;
          signal(CanRead);
      }
      void EndRead() {
          if (--NReaders == 0) signal(CanWrite);
      }
  }

  38. Deadlocks Definition: deadlock exists among a set of processes if • Every process is waiting for an event • This event can be caused only by another process in the set • The event is the acquisition or release of another resource Early 20th-century Kansas law: “When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone”

  39. Four Conditions for Deadlock • Coffman et al. 1971 • Necessary conditions for deadlock to exist: • Mutual exclusion • At least one resource must be held in a non-sharable mode • Hold and wait • There exists a process holding a resource and waiting for another • No preemption • Resources cannot be preempted • Circular wait • There exists a set of processes {P1, P2, … PN} such that • P1 is waiting for P2, P2 for P3, …, and PN for P1 • All four conditions must hold for deadlock to occur
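
A deliberately broken sketch (illustrative only) that exhibits all four conditions: each thread holds one non-sharable lock, waits for the other, cannot be preempted, and the waits form a cycle:

  /* Sketch of a deadlock: thread 1 takes lock A then B, thread 2 takes B
     then A. If each grabs its first lock before the other's second, both
     block forever. */
  #include <pthread.h>
  #include <unistd.h>

  static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

  static void *t1(void *arg) {
      (void)arg;
      pthread_mutex_lock(&A);       /* hold A ...            */
      sleep(1);                     /* widen the race window */
      pthread_mutex_lock(&B);       /* ... and wait for B    */
      pthread_mutex_unlock(&B);
      pthread_mutex_unlock(&A);
      return NULL;
  }

  static void *t2(void *arg) {
      (void)arg;
      pthread_mutex_lock(&B);       /* hold B ...            */
      sleep(1);
      pthread_mutex_lock(&A);       /* ... and wait for A    */
      pthread_mutex_unlock(&A);
      pthread_mutex_unlock(&B);
      return NULL;
  }

  int main(void) {
      pthread_t x, y;
      pthread_create(&x, NULL, t1, NULL);
      pthread_create(&y, NULL, t2, NULL);
      pthread_join(x, NULL);        /* in the deadlocked case this never returns */
      pthread_join(y, NULL);
      return 0;
  }

Acquiring A and B in the same order in both threads removes the circular wait, which is the usual prevention trick for this pattern.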

  40. Dealing with Deadlocks • Proactive approaches: • Deadlock prevention • Negate one of the 4 necessary conditions • Prevent deadlock from occurring • Deadlock avoidance • Carefully allocate resources based on future knowledge • Deadlock is thereby avoided • Reactive approach: • Deadlock detection and recovery • Let deadlock happen, then detect and recover from it • Ignore the problem • Pretend deadlocks will never occur • Ostrich approach

  41. Safe State • A state is safe if it has a process sequence {P1, P2, …, Pn} such that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i • The state is safe because the OS can definitely avoid deadlock • by blocking any new requests until the safe order has executed • This avoids the circular-wait condition • A process waits until a safe state is guaranteed

  42. Banker’s Algorithm • Decides whether to grant a resource request • Data structures:
  n: integer, the number of processes
  m: integer, the number of resource types
  available[1..m]: available[i] is the number of available resources of type i
  max[1..n, 1..m]: the maximum demand of each Pi for each Rj
  allocation[1..n, 1..m]: the current allocation of resource Rj to Pi
  need[1..n, 1..m]: the maximum number of resource Rj that Pi may still request
  request[i]: the vector of the number of each resource Rj that process Pi wants

  43. Basic Algorithm • If request[i] > need[i] then error (asked for too much) • If request[i] > available then wait (can’t supply it now) • Otherwise, resources are available to satisfy the request • Assume that we satisfy the request; then we would have:
  available = available - request[i]
  allocation[i] = allocation[i] + request[i]
  need[i] = need[i] - request[i]
  • Now check whether this would leave us in a safe state: if yes, grant the request; if no, leave the state as is and cause the process to wait.
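
A compact sketch of the safety check at the heart of the algorithm; the sizes and matrices below are invented purely for illustration:

  /* Sketch of the safety test behind the Banker's Algorithm: repeatedly find
     a process whose remaining need fits in the available resources, pretend
     it runs to completion and releases its allocation, and see whether every
     process can finish. */
  #include <stdbool.h>
  #include <stdio.h>

  #define NPROC 3
  #define NRES  2

  bool is_safe(int avail[NRES], int alloc[NPROC][NRES], int need[NPROC][NRES]) {
      int work[NRES];
      bool finished[NPROC] = { false };
      for (int r = 0; r < NRES; r++) work[r] = avail[r];

      for (int done = 0; done < NPROC; ) {
          bool progress = false;
          for (int p = 0; p < NPROC; p++) {
              if (finished[p]) continue;
              bool fits = true;
              for (int r = 0; r < NRES; r++)
                  if (need[p][r] > work[r]) { fits = false; break; }
              if (fits) {                       /* pretend p finishes and releases */
                  for (int r = 0; r < NRES; r++) work[r] += alloc[p][r];
                  finished[p] = true;
                  done++; progress = true;
              }
          }
          if (!progress) return false;          /* nobody can finish: unsafe */
      }
      return true;
  }

  int main(void) {
      int avail[NRES]        = {3, 2};
      int alloc[NPROC][NRES] = {{0, 1}, {2, 0}, {3, 0}};
      int need[NPROC][NRES]  = {{7, 2}, {1, 2}, {2, 0}};
      printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
      return 0;
  }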

  44. Memory Management Issues • Protection: errors in one process should not affect others • Transparency: a process should run regardless of memory size or location • [Figure: the CPU (running, e.g., gcc) issues loads and stores to virtual addresses; the translation box (MMU) checks whether each address is legal, passes the physical address of legal references to physical memory, and raises a fault on illegal ones] • How to do this mapping?

  45. Segmentation • Processes have multiple base + limit registers • A process’s address space has multiple segments • Each segment has its own base + limit registers • Add protection bits to every segment • [Figure: gcc’s read-only text segment and read/write stack segment placed at different addresses (0x1000, 0x2000, 0x5000, 0x6000, 0x8000, …) in real memory] • Base & limit? How to do the mapping?

  46. Mapping Segments • Segment table • An entry for each segment • Each entry is a tuple <base, limit, protection> • Each memory reference indicates a segment and an offset • [Figure: the virtual address splits into a segment number and an offset; the segment number indexes the segment table (prot, base, len); if the offset is within len, the physical address is base + offset (e.g. segment 3 with base 0x1000 and length 512 maps offset 128 to 0x1080), otherwise a fault is raised]
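
A small sketch of the per-reference lookup, using the slide's example values (segment 3, base 0x1000, length 512, offset 128); the table layout is simplified:

  /* Sketch: translating a (segment, offset) pair through a segment table.
     Segment 3 has base 0x1000 and length 512, so offset 128 maps to
     physical address 0x1080; offsets past the limit fault. */
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  typedef struct { uint32_t base; uint32_t len; char prot; } seg_entry_t;

  static seg_entry_t seg_table[8] = {
      [3] = { .base = 0x1000, .len = 512, .prot = 'r' },
  };

  uint32_t translate(unsigned seg, uint32_t offset) {
      seg_entry_t *e = &seg_table[seg];
      if (offset >= e->len) {                /* past the limit: fault    */
          fprintf(stderr, "segmentation fault\n");
          exit(1);
      }
      return e->base + offset;               /* physical = base + offset */
  }

  int main(void) {
      printf("seg 3, offset 128 -> 0x%x\n", (unsigned)translate(3, 128));  /* 0x1080 */
      return 0;
  }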

  47. Fragmentation • “The inability to use free memory” • External fragmentation: • Variable-sized pieces → many small holes over time • Internal fragmentation: • Fixed-sized pieces → internal waste if the entire piece is not used • [Figure: allocated regions (gcc, Word, emacs, doom, stack) separated by small unusable holes illustrate external fragmentation; unused space inside an allocated piece illustrates internal fragmentation]

  48. Paging • Divide memory into fixed-size pieces • Called “frames” or “pages” • Typical page size: 4 KB to 8 KB • Pros: easy, no external fragmentation • [Figure: gcc and emacs each occupy whole pages; only the last, partially filled page of each suffers internal fragmentation]

  49. Mapping Pages • With a 2^m-byte virtual address space and a 2^n-byte page size • (m - n) bits denote the page number, n bits the offset within the page • Translation is done using a page table • [Figure: a virtual address such as ((1<<12)|128) splits into a virtual page number (VPN) and a 12-bit page offset (here 128); the VPN indexes the page table, whose entries hold protection bits and either a physical page number (PPN) or “invalid”; the physical address is the PPN concatenated with the offset]
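
A sketch of splitting a virtual address into VPN and offset for 4 KB pages and walking a toy page table; the VPN 1 → PPN 3 mapping is invented for illustration:

  /* Sketch: virtual-to-physical translation with 4 KB pages (12 offset bits).
     The address ((1<<12)|128) has VPN 1 and offset 128; the toy page table
     maps VPN 1 to PPN 3, giving (3<<12)|128 = 0x3080. */
  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_BITS 12                      /* 4 KB pages */

  static int page_table[8] = { [1] = 3 };   /* VPN -> PPN; other entries "invalid" */

  uint32_t translate(uint32_t vaddr) {
      uint32_t vpn    = vaddr >> PAGE_BITS;
      uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);
      uint32_t ppn    = page_table[vpn];    /* a real MMU also checks valid/prot bits */
      return (ppn << PAGE_BITS) | offset;
  }

  int main(void) {
      uint32_t va = (1u << 12) | 128;
      printf("va 0x%x -> pa 0x%x\n", (unsigned)va, (unsigned)translate(va));
      return 0;
  }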

  50. Paging + Segmentation • Paged segmentation • Handles very long segments • The segments are paged • Segmented paging • When the page table is very big • Segment the page table • Example: the IBM System/370 (24-bit address space), where a virtual address is split into a segment number (4 bits), a page number (8 bits), and a page offset (12 bits)
