CS 311- Fall 2010 Midterm Review
Agenda • Part 1 – quick review of resources, I/O, kernel, interrupts • Part 2 – processes, threads, synchronization, concurrency • Part 3 – specific synchronization considerations, concerns and techniques • Part 4 – classic synchronization problems, examples, and algorithms • Part 5 – Review of old exams
Abstract View of System • User Space • Application Programming Interface • O/S Space (layered diagram: applications in user space call through the API into O/S space)
Topics • Basic functions of an OS • Device mgmt • Process & resource mgmt • Memory mgmt • File mgmt • Functional organization • General implementation methodologies • Performance • Trusted software • UNIX & Windows NT organization
Design Constraints • Performance • Security • Correctness • Maintainability • Cost and "sell-ability" • Standards • Usability
Resource Management • Resources • Memory • CPU cycles • I/O • Includes networks, robot arms, motors • That is, any means of getting information (or signals) into or out of the computer
Resource Sharing • Why do we need to share? • Greater throughput • Lowers cost of resources • Allows more resources to be available
Executing User Programs • Batch Programming (olden days) • Scheduled tasks • Maximize throughput • Multiprogramming (modern OS) • Multiple user programs • Timesharing • Minimize response time
I/O Techniques • Programmed I/O • Processor repeatedly checks the I/O status register • Interrupt-Driven I/O • I/O interrupts the processor when the device is ready • Processor is interrupted and involved in transferring every word of data in the read/write • DMA • Processor delegates the work to the I/O device • I/O interrupts the processor only upon completion
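A minimal sketch of the programmed-I/O idea (user-space only; the "device" and its status register are simulated here, not real hardware): the processor spins re-checking a status flag until the device reports ready, wasting cycles it could have spent elsewhere.

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    // Simulated device "status register" and "data register".
    std::atomic<bool> device_ready{false};
    int device_data = 0;

    int main() {
        // A background thread stands in for the device completing an operation.
        std::thread device([] {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            device_data = 42;             // data becomes available
            device_ready.store(true);     // device sets its status bit
        });

        // Programmed I/O: the CPU burns cycles polling the status register.
        while (!device_ready.load()) {
            // busy-wait; with interrupt-driven I/O or DMA the CPU could do useful work here
        }
        std::cout << "read " << device_data << " by polling\n";

        device.join();
        return 0;
    }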
Memory • Hierarchy • registers ← L1 cache ← L2 cache ← main memory ← disk • data moves up the hierarchy step-by-step • access time grows at each lower level, slowing the processor when data must be fetched from further down • Memory Access • Locality of reference • Temporal Locality: recently used locations are likely to be used again • Spatial Locality: locations clustered near recently used ones
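A small sketch to make spatial locality concrete (the array size is arbitrary and exact timings depend on the machine): summing a 2-D array row-by-row touches consecutive addresses and typically runs noticeably faster than column-by-column, which strides across memory.

    #include <chrono>
    #include <iostream>
    #include <vector>

    int main() {
        const int N = 2048;
        std::vector<int> a(N * N, 1);
        long long sum = 0;

        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < N; ++i)          // row-major: consecutive addresses,
            for (int j = 0; j < N; ++j)      // good spatial locality
                sum += a[i * N + j];
        auto t1 = std::chrono::steady_clock::now();
        for (int j = 0; j < N; ++j)          // column-major: strides of N ints,
            for (int i = 0; i < N; ++i)      // poor spatial locality
                sum += a[i * N + j];
        auto t2 = std::chrono::steady_clock::now();

        std::cout << "row-major:    " << std::chrono::duration<double>(t1 - t0).count() << " s\n"
                  << "column-major: " << std::chrono::duration<double>(t2 - t1).count() << " s\n"
                  << "(sum = " << sum << ")\n";
        return 0;
    }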
The Kernel • Implements O/S functions • Privileged, non-interruptible • Sometimes reduced to a "micro-kernel" • Absolutely minimal set of functions required to be in privileged mode • Micro-kernel does NOT include : • Device drivers • File services • Process server • Virtual memory mgmt
Modes of Execution • Processor modes • Supervisor or Kernel mode • User mode • Supervisor or Kernel mode • Can execute all machine instructions • Can reference all memory locations • User mode • Can only execute a subset of instructions • Can only reference a subset of memory locations
Modes-3 • Mechanisms for getting into Kernel space • Call to a function that issues a "trap" or "supervisor call" instruction • "Send" a message to the Kernel • Effectively issues a "trap" • Interrupts • H/W sets the mode bit to 1 • Next instruction executed is the kernel's interrupt-handler code • No "call" or "send" required
Modes-4: system call example • User space: fork(My_fork_loc) { … trap(FORK, *My_fork_loc); } My_fork_loc: …; • Trap table: K_fork is entry # "FORK", holding *K_fork • Kernel space: K_fork(loc) { … start_process(loc); mode = 0; return; }
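A user-space sketch of the same idea (the enum values, table, and handlers are illustrative; a real trap instruction also performs the privilege switch in hardware): the system-call number indexes a table of kernel entry points, so the user stub never calls K_fork directly.

    #include <cstdio>

    // --- "kernel space" (illustrative) --------------------------------------
    enum { FORK = 0, EXIT = 1 };              // system-call numbers (assumed names)
    int mode = 0;                             // 0 = user, 1 = kernel

    void start_process(void (*loc)()) { loc(); }

    void K_fork(void (*loc)()) {              // kernel-side handler for FORK
        start_process(loc);
        mode = 0;                             // back to user mode, as on the slide
    }

    void K_exit(void (*)()) { std::puts("K_exit"); }

    // Trap table: system-call number -> kernel entry point.
    void (*trap_table[])(void (*)()) = { K_fork, K_exit };

    // The trap instruction modeled as an indexed, indirect call; real hardware
    // would also set the mode bit and jump through a protected entry point.
    void trap(int number, void (*arg)()) {
        mode = 1;
        trap_table[number](arg);
    }

    // --- "user space" ---------------------------------------------------------
    void My_fork_loc() { std::puts("child starts at My_fork_loc"); }

    void my_fork(void (*loc)()) { trap(FORK, loc); }   // user-level fork stub

    int main() {
        my_fork(My_fork_loc);
        return 0;
    }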
Interrupt Handler • Saves user state • IC • Registers • Stack • Mode (kernel/user) • Switches to device-handler • Restores user's state • Returns to user with interrupts enabled • Might NOT be atomic • Allows new interrupt before switching
Trap or Supervisor Call Instruction • Atomic operation (4 parts) • Memory protection • Switches to privileged mode • Sets the interrupt flag • Sets IC to common interrupt handler in O/S
Key concepts • CPU cycles are wasted during a wait • Devices are independent • Multitasking (or threading) is possible • Why not overlap I/O with CPU work? • Threads can decide when to wait • Non-threaded programs can also decide • System throughput is increased
Agenda • Part 1 – quick review of resources, I/O, kernel, interrupts • Part 2 – processes, threads, synchronization, concurrency • Part 3 – specific synchronization considerations, concerns and techniques • Part 4 – classic synchronization problems, examples, and algorithms • Part 5 – Review of old exams
Processes vs User Threads • Processes • Inter-process communication requires kernel interaction • Switching between processes is more expensive • Copy the PCB (identifier, state, priority, PC, memory pointers, registers, I/O status, open files, accounting information) • PCB is larger: more expensive to create, switch, and terminate • User Threads • share address space and resources (code, data, files) • Inter-thread communication does not require kernel interaction • Switching between threads is less expensive • No need to save/restore shared address space and resources • Copy the TCB (identifier, state, stack, registers, accounting info) • TCB is smaller; why? • Less expensive all-around: to create, switch, and terminate
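One way to see the shared-address-space point concretely (a POSIX-only sketch; the variable names are arbitrary): a thread's write to a global is visible to the rest of the program, while a forked child only modifies its own copy.

    #include <iostream>
    #include <sys/wait.h>
    #include <thread>
    #include <unistd.h>

    int counter = 0;   // one global in this program's address space

    int main() {
        // A thread shares the address space: its write is visible here.
        std::thread t([] { counter = 1; });
        t.join();
        std::cout << "after thread: counter = " << counter << "\n";   // 1

        // A forked child gets a *copy*: its write is not visible to the parent.
        if (fork() == 0) {
            counter = 99;          // modifies the child's copy only
            _exit(0);
        }
        wait(nullptr);
        std::cout << "after fork:   counter = " << counter << "\n";   // still 1
        return 0;
    }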
Context Switching - 3 • The actual Context Switch: • Save all user state info: • Registers, IC, stack pointer, security codes, etc • Load kernel registers • Access to control data structures • Locate the interrupt handler for this device • Transfer control to handler, then: • Restore user state values • Atomically: set IC to user location in user mode, interrupts allowed again
Questions to ponder • Why must certain operations be done atomically? • What restrictions are there during context switching? • What happens if the interrupt handler runs too long? • Why must interrupts be masked off during interrupt handling?
Concurrency • The appearance that multiple actions are occurring at the same time • On a uni-processor, something must make that happen • A collaboration between the OS and the hardware • On a multi-processor, the same problems exist (for each CPU) as on a uni-processor
The Problem • Given: "i" is global • i++; expands into: (1) LDA i (2) ADA i,1 (3) STA i • What if an interrupt occurs after instruction 1 or 2, before the store? • This is a "Critical Section" • Incorrect values of "i" can result • How do we prevent such errors?
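A minimal demonstration of that lost-update hazard (the thread count and iteration count are arbitrary): two threads increment the shared global with no mutual exclusion, so the final value usually comes out below the expected total.

    #include <iostream>
    #include <thread>

    int i = 0;                      // shared global, as on the slide
    const int N = 1'000'000;

    void worker() {
        for (int k = 0; k < N; ++k)
            ++i;                    // load / add / store: a critical section (unprotected data race)
    }

    int main() {
        std::thread a(worker), b(worker);
        a.join();
        b.join();
        // Expected 2*N if increments were atomic; a smaller value shows lost updates.
        std::cout << "i = " << i << " (expected " << 2 * N << ")\n";
        return 0;
    }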
Strategies • User-only mode software • Disabling interrupts • H/W & O/S support
Agenda • Part 1 – quick review of resources, I/O, kernel, interrupts • Part 2 – processes, threads, synchronization, concurrency • Part 3 – specific synchronization considerations, concerns and techniques • Part 4 – classic synchronization problems, examples, and algorithms • Part 5 – Review of old exams
Synchronization Concerns & Considerations • What is the critical section? • Who accesses it? Reads? Writes? • Can there be race conditions? • Is there an order for access? • Can data be overwritten? • Solutions must have: • Mutual exclusion in the critical section • Progress/No Deadlock • No starvation
System Approaches • Prevention • Avoidance • Detection & Recovery • Manual mgmt
Conditions for Deadlock • Mutual exclusion on R1 • Hold R1 & request R2 • Circularity • No preemption – once a resource is requested, the request can't be retracted (because the requester is now blocked) • All 4 must apply simultaneously • Necessary, but NOT sufficient
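A sketch of how hold-and-wait plus circular wait show up in code (the two mutexes stand in for R1 and R2; the fix shown is just one prevention strategy): each thread holds one resource while requesting the other, and acquiring both in a single fixed order removes the circularity.

    #include <mutex>
    #include <thread>

    std::mutex R1, R2;

    // Deadlock-prone pattern: each thread holds one resource and waits for the other.
    void thread1_bad() { std::lock_guard<std::mutex> a(R1); std::lock_guard<std::mutex> b(R2); }
    void thread2_bad() { std::lock_guard<std::mutex> a(R2); std::lock_guard<std::mutex> b(R1); } // opposite order

    // Prevention: acquire both resources together / in one global order (no circular wait).
    void thread1_ok() { std::scoped_lock both(R1, R2); /* use R1 and R2 */ }
    void thread2_ok() { std::scoped_lock both(R1, R2); /* use R1 and R2 */ }

    int main() {
        // Running the *_bad pair may hang, since all four deadlock conditions can hold;
        // the *_ok pair cannot deadlock because circular wait is impossible.
        std::thread a(thread1_ok), b(thread2_ok);
        a.join();
        b.join();
        return 0;
    }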
Semaphores • Uses semWait/semSignal to coordinate access to the critical section • An integer counter for each semaphore must be initialized and is used to coordinate • semWait – decrements the counter, then blocks if the counter is < 0 • semSignal – increments the counter and unblocks the next thread in the blocked queue • The one who locks is not necessarily the one who unlocks – potential pitfall • Can have more than one semaphore – more complex synchronization • Those blocked-waiting are always queued
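A sketch of those semWait/semSignal semantics built from a mutex and a condition variable (the class and member names are illustrative; C++20 also provides std::counting_semaphore): semWait decrements and blocks when the counter goes negative, and semSignal increments and wakes one blocked waiter.

    #include <condition_variable>
    #include <mutex>

    class Semaphore {
    public:
        explicit Semaphore(int initial) : count_(initial) {}

        void semWait() {
            std::unique_lock<std::mutex> lk(m_);
            --count_;
            if (count_ < 0) {                            // must block
                cv_.wait(lk, [this] { return wakeups_ > 0; });
                --wakeups_;                              // consume one semSignal
            }
        }

        void semSignal() {
            std::lock_guard<std::mutex> lk(m_);
            ++count_;
            if (count_ <= 0) {                           // someone is blocked-waiting
                ++wakeups_;
                cv_.notify_one();
            }
        }

    private:
        std::mutex m_;
        std::condition_variable cv_;
        int count_;
        int wakeups_ = 0;    // tokens that let exactly one waiter proceed per semSignal
    };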
Binary Semaphores • Binary Semaphores • Counter can only be one or zero • Access to the critical section is one at a time • Similar to a mutex lock
Counting Semaphores • Counter can be any integer at any time • More complex synchronization • Used for multiple concurrent threads • Examples • Prioritizing access to the critical section • Tracking the bound-buffer in a Producer/Consumer model • Multiple counting semaphores can be used to coordinate multiple Readers/Writers
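A sketch of the bounded-buffer use of counting semaphores mentioned above (buffer capacity and item counts are arbitrary; requires C++20 for std::counting_semaphore): one semaphore counts empty slots, another counts filled slots, and a mutex protects the buffer itself.

    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <semaphore>
    #include <thread>

    std::queue<int> buffer;                       // the bounded buffer
    std::mutex buf_mutex;
    std::counting_semaphore<8> empty_slots(8);    // capacity 8
    std::counting_semaphore<8> full_slots(0);

    void producer() {
        for (int item = 0; item < 32; ++item) {
            empty_slots.acquire();                // wait for a free slot
            { std::lock_guard<std::mutex> lk(buf_mutex); buffer.push(item); }
            full_slots.release();                 // signal: one more item available
        }
    }

    void consumer() {
        for (int n = 0; n < 32; ++n) {
            full_slots.acquire();                 // wait for an item
            int item;
            { std::lock_guard<std::mutex> lk(buf_mutex); item = buffer.front(); buffer.pop(); }
            empty_slots.release();                // signal: one more free slot
            std::cout << item << ' ';
        }
        std::cout << '\n';
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join();
        c.join();
        return 0;
    }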
Monitors • Used to encapsulate synchronization management • Private condition variables, semaphores, locks, etc • Public interfaces • Replace spaghetti semaphores with simple function calls provided by the monitor • Producer/Consumer Example • Create a C++ class • The class has two public functions: append and take • The condition variables and bound buffer are private data • In the producer and consumer code, you need only call append or take, the monitor does the rest
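Following the slide's outline, a sketch of such a C++ monitor class (append and take come from the slide; the capacity parameter and condition-variable details are assumptions): the lock, condition variables, and buffer are private, and producer/consumer code only calls the two public functions.

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    // Monitor for a bounded buffer: append() and take() are the public interface;
    // all synchronization machinery is encapsulated as private data.
    class BoundedBuffer {
    public:
        explicit BoundedBuffer(std::size_t capacity) : capacity_(capacity) {}

        void append(int item) {                       // called by the producer
            std::unique_lock<std::mutex> lk(m_);
            not_full_.wait(lk, [this] { return q_.size() < capacity_; });
            q_.push(item);
            not_empty_.notify_one();
        }

        int take() {                                  // called by the consumer
            std::unique_lock<std::mutex> lk(m_);
            not_empty_.wait(lk, [this] { return !q_.empty(); });
            int item = q_.front();
            q_.pop();
            not_full_.notify_one();
            return item;
        }

    private:
        std::mutex m_;
        std::condition_variable not_full_, not_empty_;
        std::queue<int> q_;
        std::size_t capacity_;
    };

The producer loop is then just buf.append(item) and the consumer loop just item = buf.take(); the monitor does the rest.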
Agenda • Part 1 – quick review of resources, I/O, kernel, interrupts • Part 2 – processes, threads, synchronization, concurrency • Part 3 – specific synchronization considerations, concerns and techniques • Part 4 – classic synchronization problems, examples, and algorithms • Part 5 – Review of old exams
The Banker's Algorithm • Classic avoidance algorithm • maxc[i, j] is the max claim for Rj by Pi • alloc[i, j] is the units of Rj held by Pi • c_j is the # of units of Rj in the whole system • Can always compute avail[j] = c_j − Σ_{0 ≤ i < n} alloc[i, j], and hence the units of Rj available • Basically: examine and enumerate all transitions
Banker's Algorithm - Steps 1 & 2 • 4 resource types; total units C = <8, 5, 9, 7> • Step 1: copy the current (safe) allocation table alloc into alloc' (column sums are <7, 3, 7, 5>) • Step 2: compute units of each R still available, avail[j] = C[j] − (column sum of alloc') • avail[0] = 8 − 7 = 1 • avail[1] = 5 − 3 = 2 • avail[2] = 9 − 7 = 2 • avail[3] = 7 − 5 = 2 • yielding avail = <1, 2, 2, 2>
Banker's Algorithm - Step 3 • avail = <1, 2, 2, 2> = # currently available for each Rj • Compute maxc − alloc' for each Pi, using the Maximum Claims table (look for any Pi that is satisfiable) • alloc' for P2 is <4, 0, 0, 3> (from the previous table) • maxc[2, 0] − alloc'[2, 0] = 5 − 4 = 1 ≤ avail[0] ≡ 1 • maxc[2, 1] − alloc'[2, 1] = 1 − 0 = 1 ≤ avail[1] ≡ 2 • etc. • If no Pi satisfies maxc − alloc' ≤ avail, then unsafe <stop> • If alloc' = 0 for all Pi <stop>
Banker's algorithm for P0 • maxc[0, 0] − alloc'[0, 0] = 3 − 2 = 1 ≤ avail[0] ≡ 1 • maxc[0, 1] − alloc'[0, 1] = 2 − 0 = 2 ≤ avail[1] ≡ 2 • maxc[0, 2] − alloc'[0, 2] = 1 − 1 = 0 ≤ avail[2] ≡ 2 • maxc[0, 3] − alloc'[0, 3] = 4 − 1 = 3 > avail[3] ≡ 2 • Therefore P0 cannot make a transition to a safe state from the current state • Likewise for P1
Banker's Algorithm - Step 4 • So P2 can claim, use, and release all its Ri, giving a new availability vector: • avail2[0] = avail[0] + alloc'[2, 0] = 1 + 4 = 5 • avail2[1] = avail[1] + alloc'[2, 1] = 2 + 0 = 2 • avail2[2] = avail[2] + alloc'[2, 2] = 2 + 0 = 2 • avail2[3] = avail[3] + alloc'[2, 3] = 2 + 3 = 5 • avail2 = <5, 2, 2, 5>, so at least one P can get its max claim satisfied
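Putting the steps together, a sketch of the safety check as code (a generic version of the procedure above; the small 3-process, 2-resource example in main is hypothetical, not the slides' table): repeatedly find some Pi whose remaining claim fits in avail, let it finish and release its allocation, and declare the state safe only if every process can finish that way.

    #include <iostream>
    #include <vector>

    using Vec = std::vector<int>;
    using Mat = std::vector<Vec>;

    // Banker's safety check: look for a process whose remaining claim
    // (maxc - alloc) fits in avail; pretend it runs to completion and
    // releases everything; safe iff every process can finish this way.
    bool isSafe(Vec avail, const Mat& maxc, const Mat& alloc) {
        const std::size_t n = maxc.size(), m = avail.size();
        std::vector<bool> finished(n, false);
        for (std::size_t done = 0; done < n; ) {
            bool progressed = false;
            for (std::size_t i = 0; i < n; ++i) {
                if (finished[i]) continue;
                bool fits = true;
                for (std::size_t j = 0; j < m; ++j)
                    if (maxc[i][j] - alloc[i][j] > avail[j]) { fits = false; break; }
                if (fits) {                              // Pi can claim, use, and release
                    for (std::size_t j = 0; j < m; ++j) avail[j] += alloc[i][j];
                    finished[i] = true; ++done; progressed = true;
                }
            }
            if (!progressed) return false;               // no Pi satisfiable -> unsafe
        }
        return true;
    }

    int main() {
        // Hypothetical 3-process, 2-resource state (not the slides' table).
        Vec avail = {1, 1};
        Mat maxc  = {{3, 2}, {2, 2}, {4, 1}};
        Mat alloc = {{2, 1}, {1, 1}, {2, 1}};
        std::cout << (isSafe(avail, maxc, alloc) ? "safe" : "unsafe") << '\n';
        return 0;
    }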
Agenda • Part 1 – quick review of resources, I/O, kernel, interrupts • Part 2 – processes, threads, synchronization, concurrency • Part 3 – specific synchronization considerations, concerns and techniques • Part 4 – classic synchronization problems, examples, and algorithms • Part 5 – Review of old exams