Review: Chapters 1 – 7
Chapter 1:
• OS is a layer between user and hardware that makes life easier for the user and uses the hardware efficiently
• Control program or resource allocator
• Computer organization: CPU(s), memory, and I/O devices connect to a common bus
• Devices request CPU attention through interrupts
• Storage hierarchy: speed, cost, volatility
• Caching: copy frequently used data to faster storage
• Multiprogramming: multiple jobs in memory for efficiency
• Timesharing: frequent switching between jobs gives interactive use and short response times; users get the impression that each has his/her own computer
• Dual-mode operation: user and kernel modes
• Protects the OS and users from each other
• Privileged instructions can be executed only in kernel mode
• Timer prevents processes from holding resources forever
Operating-System Operations
• The OS is interrupt driven: it sits idle until something happens
• Interrupts are generated by devices (hardware)
• Traps (or exceptions) are software-generated interrupts due to
  • software errors, e.g., divide by zero (see the sketch after this list)
  • requests for operating-system services (system calls)
• Dual-mode operation allows the OS to protect itself and other system components
  • User mode and kernel mode
  • Mode bit provided by hardware
  • Provides the ability to distinguish whether the system is running user code or kernel code
  • Some instructions are designated as privileged and are executable only in kernel mode
  • A system call changes the mode to kernel; return from the call resets it to user
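A minimal sketch of a trap, assuming Linux on x86 and gcc: an integer divide by zero faults into the kernel, which delivers SIGFPE back to the process. (Integer division by zero is undefined behavior in C, so this is for illustration only.)

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_fpe(int sig) {
    /* Runs after the hardware trap: the kernel converted the fault
     * into a SIGFPE signal delivered to this process. */
    (void)sig;
    const char msg[] = "trapped: divide by zero\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* async-signal-safe output */
    _exit(1);
}

int main(void) {
    signal(SIGFPE, on_fpe);
    volatile int zero = 0;          /* volatile keeps the compiler from folding the division */
    printf("%d\n", 10 / zero);      /* integer divide by zero -> trap -> SIGFPE */
    return 0;
}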
Chapter 2: OS Services and Structures
• OS provides two sets of services, for
  • user convenience and
  • efficient use of resources
• System calls: the programming interface to OS services
  • Typically used through APIs for portability and ease (sketch below)
• OS structures
  • monolithic
  • layered
  • microkernel
  • modular
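A small sketch of the same idea, assuming a POSIX system: printf() goes through the portable C library API, while write() is the thin wrapper around the underlying write system call. Both end up trapping into the kernel.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *msg = "hello via write()\n";

    printf("hello via printf()\n");          /* C library API: portable, buffered */
    write(STDOUT_FILENO, msg, strlen(msg));  /* thin wrapper around the write system call */
    return 0;
}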
Chapter 3: Processes
• A process is a program in execution
• The OS maintains process information in the PCB
• Process state diagram
• Creating and terminating processes (fork; see the sketch after this list)
• Process scheduling
  • Long-, short-, and medium-term schedulers
  • Scheduling queues
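A minimal sketch of process creation and termination with fork() and wait(), assuming a POSIX system.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a child process */

    if (pid < 0) {                   /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* child */
        printf("child: pid=%d\n", getpid());
        exit(0);                     /* terminate the child */
    } else {                         /* parent */
        wait(NULL);                  /* reap the terminated child */
        printf("parent: child %d finished\n", pid);
    }
    return 0;
}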
Scheduling: The Big Picture (cont'd)
[Diagram: jobs enter through the job (long-term) scheduler, the medium-term scheduler swaps processes to/from disk, and the CPU (short-term) scheduler picks the next process to run]
• In most small and interactive systems (UNIX, WinXP, …), only the CPU scheduler exists
CPU Switch From Process to Process (Context Switch)
• When a switch occurs, the kernel
  • saves the state of P0 in PCB0 (in memory)
  • loads the state of P1 from PCB1 into the registers
• State = values of the CPU registers, including the program counter and stack pointer (see the PCB sketch below)
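A hypothetical sketch of what a PCB might contain; the field names are illustrative and not taken from any particular kernel.

/* Illustrative process control block: the state the kernel saves and
 * restores on a context switch. Field names are hypothetical. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;             /* process identifier */
    proc_state_t  state;           /* current scheduling state */
    unsigned long program_counter; /* saved PC */
    unsigned long stack_pointer;   /* saved SP */
    unsigned long registers[16];   /* saved general-purpose registers */
    struct pcb   *next;            /* link in a scheduling queue */
} pcb_t;

/* On a switch from P0 to P1 the kernel conceptually:
 *   stores the CPU registers into P0's pcb_t,
 *   loads them back from P1's pcb_t, then resumes P1. */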
Interprocess Communication Models
• Message passing (pipe sketch below)
• Shared memory
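A minimal message-passing sketch, assuming a POSIX system: the parent and child communicate through a pipe.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];

    pipe(fd);                       /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {              /* child: sender */
        close(fd[0]);
        write(fd[1], "ping", 5);    /* send a small message (with its '\0') */
        return 0;
    }
    close(fd[1]);                   /* parent: receiver */
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    wait(NULL);
    return 0;
}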
Chapter 4: Threads
• A thread is a basic unit of CPU utilization; a process is composed of one or more threads
• Each thread has its own: program counter, stack, registers
• Threads share: code, data, OS resources (e.g., open files and signals); see the sketch below
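A minimal Pthreads sketch (compile with gcc -pthread): the two threads share the global counter, but each has its own stack variable. The update is left unsynchronized on purpose as a preview of Chapter 6.

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                  /* data segment: shared by all threads */

static void *worker(void *arg) {
    int local = *(int *)arg;             /* stack variable: private to this thread */
    shared_counter += local;             /* unsynchronized on purpose (race condition) */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;

    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);
    return 0;
}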
Single and Multithreaded Processes
[Diagram: code, data, and open files are shared among the threads of a process; each thread has its own registers and stack]
Chapter 5: CPU Scheduling
• Process execution is a cycle of CPU bursts and I/O bursts
• CPU burst lengths: many short bursts and a few long ones
• The scheduler selects one process from the ready queue
• The dispatcher performs the switching
• Scheduling criteria (usually conflicting)
  • CPU utilization, waiting time, response time, throughput, …
• Scheduling algorithms
  • FCFS, SJF, Priority, RR, Multilevel Queues, …
First-Come, First-Served (FCFS) Scheduling

  Process   Burst Time
  P1        24
  P2         3
  P3         3

• Suppose the processes arrive at time 0 in the order P1, P2, P3
• The Gantt chart for the schedule is: | P1 (0–24) | P2 (24–27) | P3 (27–30) |
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17 (computed in the sketch below)
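A small sketch that reproduces the waiting-time arithmetic above for FCFS, assuming all processes arrive at time 0.

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};        /* P1, P2, P3 burst times from the example */
    int n = 3, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];            /* the next process waits for everything before it */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);   /* 17.00 */
    return 0;
}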
CPU Scheduling (cont'd)
• Multiprocessor scheduling
  • Processor affinity vs. load balancing (affinity sketch below)
• Evaluation of algorithms
  • Modeling, simulation, implementation
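A Linux-specific sketch of processor affinity using sched_setaffinity(); other systems expose affinity through different APIs.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);                                 /* allow CPU 0 only */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");                      /* scheduler keeps this process on one core */
    return 0;
}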
Chapter 6: Synchronization
• Process synchronization: techniques to coordinate access to shared data
• Race condition
  • Multiple processes manipulate shared data and the result depends on the execution order
• Critical-section problem
  • Three requirements: mutual exclusion, progress, bounded waiting
  • Software solution: Peterson's algorithm (sketch after this list)
  • Hardware support: TestAndSet(), Swap()
  • Busy waiting (spinlocks)
• Semaphores
  • No busy waiting
  • wait() and signal() must be atomic, which moves the critical-section problem into the kernel
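A sketch of Peterson's algorithm for two processes. On modern hardware the compiler and CPU may reorder these memory accesses, so treat it as a teaching sketch rather than working synchronization.

#include <stdbool.h>

volatile bool flag[2] = {false, false};  /* flag[i]: process i wants to enter */
volatile int turn = 0;                   /* whose turn it is to yield */

void enter_region(int i) {
    int other = 1 - i;
    flag[i] = true;                      /* announce interest */
    turn = other;                        /* give the other process priority */
    while (flag[other] && turn == other)
        ;                                /* busy wait (spin) until it is safe to enter */
}

void leave_region(int i) {
    flag[i] = false;                     /* leaving the critical section */
}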
Synchronization (cont'd)
• Some classical synchronization problems
  • Producer-consumer (bounded buffer; see the sketch after this list)
  • Dining philosophers
  • Readers-writers
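A sketch of the bounded-buffer producer-consumer solution using POSIX semaphores and a mutex; the buffer size and item count are illustrative. Compile with gcc -pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                                  /* buffer capacity (illustrative) */

int buffer[N], in = 0, out = 0;
sem_t empty, full;                           /* counting semaphores */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty);                    /* wait for a free slot */
        pthread_mutex_lock(&lock);
        buffer[in] = item; in = (in + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&full);                     /* signal: one more item available */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full);                     /* wait for an item */
        pthread_mutex_lock(&lock);
        int item = buffer[out]; out = (out + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&empty);                    /* signal: one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N);                  /* N empty slots initially */
    sem_init(&full, 0, 0);                   /* no items yet */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}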
Chapter 7: Deadlock
• A set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set
• Four necessary (but not sufficient) conditions
  • Mutual exclusion: only one process at a time can use a resource
  • Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes
  • No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task
  • Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0 (see the sketch after this list)
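A sketch showing how hold-and-wait plus circular wait produce a deadlock: two threads acquire the same two locks in opposite order. Compile with gcc -pthread; the program never terminates.

#include <pthread.h>
#include <unistd.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *t1(void *arg) {
    pthread_mutex_lock(&A);      /* holds A ... */
    sleep(1);
    pthread_mutex_lock(&B);      /* ... waits for B (held by t2) */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *t2(void *arg) {
    pthread_mutex_lock(&B);      /* holds B ... */
    sleep(1);
    pthread_mutex_lock(&A);      /* ... waits for A (held by t1): circular wait */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);       /* never returns: the two threads are deadlocked */
    pthread_join(y, NULL);
    return 0;
}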
Deadlock Handling
• Prevention: ensure that at least one of the necessary conditions cannot hold
• Avoidance: decide for each request whether the issuing process should wait, so that the system never enters an unsafe state
  • Resource-allocation graph: single instance of each resource type
  • Banker's algorithm: multiple instances of a resource type (safety-check sketch below)
• Detection and recovery
  • Detection algorithm
  • Recovery: process termination or resource preemption
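A sketch of the safety check at the core of the Banker's algorithm: the state is safe if every process can finish in some order using Available plus the resources released by processes that finish earlier. The Available/Allocation/Need values below are made up for illustration.

#include <stdbool.h>
#include <stdio.h>

#define P 3   /* processes (illustrative) */
#define R 2   /* resource types (illustrative) */

bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finished[P] = {false};

    for (int j = 0; j < R; j++) work[j] = avail[j];

    for (int round = 0; round < P; round++) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                     /* pretend Pi runs to completion and releases */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finished[i] = true;
                progress = true;
            }
        }
        if (!progress) break;                  /* no remaining process can proceed */
    }
    for (int i = 0; i < P; i++)
        if (!finished[i]) return false;        /* some process can never finish: unsafe */
    return true;
}

int main(void) {
    int avail[R]    = {1, 1};
    int alloc[P][R] = {{1, 0}, {0, 1}, {1, 1}};
    int need[P][R]  = {{1, 1}, {1, 0}, {0, 0}};
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}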