Dive into the world of concurrency, processes, and threads in computer systems. Explore the collaboration between OS and hardware, multiprogramming, CPU resources, memory management, process life cycle, and thread models.
Concurrency
• The appearance that multiple actions are occurring at the same time
• On a uni-processor, something must make that happen
  • A collaboration between the OS and the hardware
• On a multi-processor, the same problems exist (for each CPU) as on a uni-processor
Multiprogramming
• Combines multiplexing types:
  • Space-multiplexing - physical memory
  • Time-multiplexing - physical processor
• [Diagram: Process0, Process1, …, Processn space-multiplexed in memory and time-multiplexed on the processor]
Multiprogramming-2
• Multiprogramming
  • N programs apparently running simultaneously
  • Space-multiplexed in executable memory
  • Time-multiplexed across the central processor
• Reason why desired
  • Greater throughput (work done per unit time)
  • More work occurring at the same time
• Resources required
  • CPU
  • Memory
The CPU
• Instruction cycles
  • Access memory and/or registers
  • Sequential flow via the "instruction register"
  • One instruction completion at a time
    • Pipelines only increase the number of completions per time unit; they are still sequential!
• Modes of execution
  • Privileged (system)
  • Non-privileged (user)
Memory
• Sequential addressing (0 - n)
• Partitioned
  • System
    • Inaccessible by user programs
  • User
    • Partitioned for multiple users
    • Accessible by system programs
Processes-1
• A Process is
  • A running program & its address space
  • A unit of resource management
• Independent of other processes
  • NO sharing of memory with other processes (see the sketch below)
  • May share files open at Fork time
• One program may start multiple processes, each in its own address space
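The "no sharing of memory" point can be seen with a small C sketch, assuming a POSIX system with fork(); the variable name counter is only for illustration:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int counter = 0;            /* lives in this process's address space */
    pid_t pid = fork();         /* child receives a private copy of memory */

    if (pid == 0) {             /* child */
        counter = 100;          /* changes only the child's copy */
        printf("child:  counter = %d\n", counter);
        return 0;
    }
    waitpid(pid, NULL, 0);      /* parent waits for the child to exit */
    printf("parent: counter = %d\n", counter);  /* still 0: memory is not shared */
    return 0;
}

The parent still prints 0 because fork() copies the address space; files open at fork time, by contrast, can be shared because the file descriptors are duplicated into the child.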
Processes-2
• [Diagram: the process abstraction, showing Process-1 … Process-n, the CPU with its instruction and data streams, memory, and the operating system]
• [Diagram: a process and its address space (code, data, stack), together with its resources, form an abstract machine environment]
Processes-3
• The Process life-cycle (see the sketch below)
  • Creation
    • User or scheduled system activation
  • Execution
    • Running: performing instructions (using the ALU)
    • Waiting: for resources or signals
    • Ready: all resources available except memory and ALU
  • Termination
    • Process is no longer available
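A hypothetical C sketch of how an OS might record these states in a per-process structure; the names are illustrative only and not taken from any particular kernel:

/* Illustrative only: process states matching the life-cycle above */
enum proc_state { PS_NEW, PS_READY, PS_RUNNING, PS_WAITING, PS_TERMINATED };

struct pcb {                    /* minimal process control block sketch */
    int             pid;        /* process identifier */
    enum proc_state state;      /* where the process is in its life-cycle */
    void           *addr_space; /* bookkeeping for its private memory (assumed field) */
};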
Processes-4
• Space multiplexing
  • Each process operates in its own "address space"
  • An address space is the sequence of memory locations (addresses) from 0 to n, as seen by the application
  • Process addresses must be "mapped" to real addresses in the real machine
  • More on this later
Processes-5
• Time multiplexing
  • Each process is given a small portion of time to perform instructions
  • The O/S controls the time per process and which process gets control next
    • Many algorithms for this
  • No rules (from the user's/programmer's view) on which process will run next or for how long
  • Some OSs dynamically adjust both time and sequence
Processes-7
• FORK (label) (see the UNIX sketch below)
  • Starts a process running from the labeled instruction; the new process gets a copy of the address space
• QUIT ()
  • Process terminates itself
• JOIN (count) (an atomic operation)
  • Merges >= 2 processes
  • Really more like "quit, unless I'm the only process left"
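These primitives are close in spirit to the UNIX calls fork(), exit(), and wait(); a minimal sketch under that assumption (UNIX fork() copies the whole program rather than starting at a label, so the mapping is only approximate):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();         /* FORK: create a second process with a copy of the address space */
    if (pid == 0) {
        printf("child doing its share of the work\n");
        exit(0);                /* QUIT: the child terminates itself */
    }
    wait(NULL);                 /* JOIN-like: the parent proceeds only after the child has quit */
    printf("parent continues alone\n");
    return 0;
}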
Threads-1
• A unit of execution within a process (like a lightweight process, an "lwp"); also called a "task"
• Shares the address space, data, and devices with the other threads within the process (see the sketch below)
• Private stack and status (IC, state, etc.)
• Multi-threading
  • > 1 thread per process
  • Limited by the system to some maximum number
    • Per system
    • Per process
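A minimal pthreads sketch of the shared address space, assuming a POSIX system (compile with -pthread); the names shared_count and worker are illustrative:

#include <pthread.h>
#include <stdio.h>

int shared_count = 0;                          /* one copy, visible to every thread in the process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* shared data, so the threads must synchronize */
        shared_count++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* both threads live in this one process */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_count = %d\n", shared_count);   /* 200000: the memory is shared */
    return 0;
}

Both threads see the same shared_count, which is exactly why the mutex is needed; two separate processes running the same loop would each increment their own private copy.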
Thread Models
• [Diagram: thread models with example systems: JRE, DOS, classic UNIX, WinXX/Solaris/Linux/OS/2]
Threads-2
• Several thread APIs
  • Solaris: kernel-level threads & pthreads
  • Windows: kernel-level threads & pthreads
  • OS/2: kernel-level threads
• POSIX (pthreads): a full set of functions
  • #include <pthread.h> // for C, C++
  • Allows porting without re-coding
• Java threads are implemented in the JVM, independent of OS support
  • Like the multiprogramming implementation in Win3.1
  • Uses underlying kernel support where available
Threads-3
• Windows (native):

    CreateThread( DWORD dwCreateFlags = 0,
                  UINT nStackSize = 0,
                  LPSECURITY_ATTRIBUTES lpSecurityAttrs = NULL );

• POSIX (Linux, Solaris, Windows):

    iret1 = pthread_create(&thread1, NULL,
                           print_message_function,
                           (void *) message1);
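The pthread fragment above can be filled out into a runnable sketch; thread1, print_message_function, and message1 come from the slide, the rest is assumed (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

void *print_message_function(void *ptr) {
    printf("%s\n", (char *) ptr);          /* print the message passed as the argument */
    return NULL;
}

int main(void) {
    pthread_t thread1;
    char *message1 = "Thread 1";
    int iret1;

    iret1 = pthread_create(&thread1, NULL, print_message_function, (void *) message1);
    if (iret1 != 0)
        fprintf(stderr, "pthread_create failed: %d\n", iret1);

    pthread_join(thread1, NULL);           /* wait for the thread to finish */
    return 0;
}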
Threads-4
• Advantages of kernel-supported threads:
  • May request resources with or without blocking on the request
  • A blocked thread does NOT block the other threads
  • Inexpensive context switch
  • Can utilize an MP (multiprocessor) architecture
• The thread library for user threads is in user space
  • The thread library schedules user threads onto LWPs
  • LWPs are:
    • implemented by kernel threads
    • scheduled by the kernel
Notes on Java
• The JVM
  • uses monitors for mutual exclusion
  • provides wait and notify for cooperation
Java & Threads-1
• Thread creation - 2 ways
• Extension from the Thread class:

    import java.lang.*;
    public class Counter extends Thread {
        public void run() {   // overrides Thread.run
            ....
        }
    }
Java & Threads-2
• Implementation of the Runnable interface, with an instance of the Thread class held as a variable of the Counter class:

    import java.lang.*;
    public class Counter implements Runnable {
        Thread T;
        public void run() {
            ....
        }
    }

• Can still extend the Counter class
Java & Threads-3
• Difference between the two methods
  • Implementing Runnable gives greater flexibility in the creation of the Counter class
  • The Thread class itself also implements the Runnable interface
Wait & Signal - semaphores
• Classical definitions
  • Wait - P(s) // make me wait for something

        DO WHILE (s <= 0)
        END            // busy-wait until s > 0
        s = s - 1      // when s becomes > 0, decrement it

  • Signal - V(s) // tell others: my critical job is done

        s = s + 1

• These MUST appear as ATOMIC operations to the application (see the POSIX sketch below)
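Operating systems provide these as atomic primitives so applications need not busy-wait; a minimal sketch with POSIX unnamed semaphores, where sem_wait roughly corresponds to P and sem_post to V (compile with -pthread; the two-thread setup is only for illustration):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                                /* counting semaphore */

void *worker(void *arg) {
    sem_wait(&s);                       /* P(s): block until s > 0, then decrement atomically */
    printf("in critical section\n");    /* at most one thread here at a time */
    sem_post(&s);                       /* V(s): increment s, possibly waking a waiter */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);                 /* initial value 1 gives mutual exclusion */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&s);
    return 0;
}

With an initial value of 1 the semaphore acts as a mutual-exclusion lock; a larger initial value would allow that many threads into the critical section at once.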