User-level Techniques on uniprocessors and SMPs: Focusing on Thread Scheduling. CSM211 Kurt Debattista
Literature • Any good operating system book – dinosaur book • Vahalia U. Unix Internals. Prentice Hall, 1996 • Moores J. CCSP - A portable CSP-based run-time system supporting C and occam. Volume 57, Concurrent Systems Engineering Series, pages 147-168, IOS Press, April 1999 • Systems software research group http://www.cs.um.edu.mt/~ssrg • Literature on thread scheduling on uniprocessors, SMPs, avoiding blocking system calls (Vella, Borg, Cordina, Debattista)
Overview • Processes • Threads • Kernel threads • User threads • User-level Memory Management • User-level Thread Scheduling • Multiprocessor hardware • SMP synchronisation • SMP thread schedulers
Processes • UNIX like operating systems provide for concurrency by means of time-slicing for long-lived, infrequently communicating processes • Concurrency is usually an illusion (uniprocessor) • Parallelism (SMPs, distributed systems) • Within a single application?
Processes – Vertical switch • Every time we enter the kernel (system call) we incur a vertical switch • Save current context • Change to kernel stack • Change back
Processes – Horizontal switch • Context switch from one process to another – horizontal switch • Enter kernel (vertical switch) • Dispatch next process • Change memory protection boundaries • Restore new process context
Processes - Creation • Creating a new process • Enter kernel (vertical switch) • Allocate memory for all process structures • Initialise new tables • Update file tables • Copy parent process context • All operations on processes are in the order of hundreds of microseconds, sometimes milliseconds
Multithreading • Programming style used to represent concurrency within a single application • A thread is an independent instance of execution of a program represented by a PC, a register set (context) and a stack
Multithreading (2) • In multithreaded applications threads co-exist in the same address space • A traditional process could be considered an application composed of a single thread • Example – web server • Share the same data between threads • Spawn threads • Communicate through shared memory
Multithreading (3) • Two types of threads • Kernel-level threads • User-level threads
Kernel threads • Kernel threads are threads that the kernel is aware of • The kernel is responsible for the creation of the threads and schedules them just like any other process
Kernel threads (2) • Advantages • Concurrency within a single application • No memory boundaries • IPC through shared memory, avoiding kernel access • Disadvantages • Thread creation (vertical switch) • Context switch (horizontal switch) • Kernel interaction
Kernel threads (3) • Thread management in the order of tens of microseconds (sometimes hundreds) • Creation • Context switch • Kernel based IPC • Fast shared memory IPC in the order of hundreds (even tens) of nanoseconds
User threads • Kernel is unaware of threads • All scheduling takes place at the user level • All scheduler data structures exist in the user-level address space
User-level thread scheduling • Library • Pre-emption and cooperative multithreading • Pre-emption (like UNIX) • Cooperative (think Windows 3.1) • Performance in the order of tens of nanoseconds
User-level thread library • Application is linked with a thread library that manages threads at run-time • Library launches the main() function and is responsible for all thread management
Fast multithreaded C library • Scheduler initialisation and shutdown • Structures for scheduler • Run queue(s) • Thread descriptor • Communication constructs • Provide functions for thread creation, execution, IPC, yield, destroy
Scheduler initialisation • The traditional main() function is part of the library; the application programmer uses cthread_main() as the main function • Initialise all scheduler data structures • cthread_main() is in itself a thread
Scheduler structures • Thread run queue • Fast FIFO queue • Priority based scheduling • Multiple queues • Priority queue • Thread descriptor • PC and set of registers (jmp_buf) • Stack (pointer to chunk of memory)
Communication structures • Structures for communications • Semaphores • Channels • Barrier • others
Thread API • Provide functions for threads • Initialisation - cthread_init() • Execution – cthread_run() • Yield – cthread_yield() • Barrier – cthread_join() • Termination - automatic
Context switch • Use functions setjmp() and longjmp() to save and restore context (jmp_buf) • setjmp() saves the current context • longjmp() restores the context
User-level Thread Scheduling • Thread scheduling is abstracted from the kernel • Thread management occurs at the user level • No expensive system calls • Mini-kernel on top of the OS kernel • Ideal for fine-grained multithreading
User-level Thread Schedulers • Successful user-level thread schedulers exist in the form of • CCSP • KRoC • MESH • Mach Cthreads • smash • Sun OS threads
Thread Management Issues • Interaction with operating system • Blocking kernel threads • Multiplexing kernel threads • Scheduler activations (Anderson) • System call wrappers • See Borg’s thesis for more info. (SSRG homepage) • Thread co-operation • Automatic yield insertion (Barnes) • Active context switching (Moores)
Blocking Kernel Threads • In single kernel-threaded schedulers, when the kernel thread blocks on a blocking system call the entire scheduler blocks • Multiplexing kernel threads • Reduces the problem (though the problem is still there) • Increases the amount of horizontal switching
Blocking Kernel Threads (2) • Partial solutions • Allocate kernel thread for particular functions (e.g. Keyboard I/O) • Horizontal switching • System Call Wrappers • Wrap all (or required) blocking system calls with wrappers that launch a separate kernel thread • Require wrapper for each call • Ideal when wrappers already exist (e.g. occam) • Incurs horizontal switching overhead • Blocking system call might NOT block
Scheduler Activations • Scheduler activations offer interaction between user-level space and kernel space • Scheduler activations are the executing contexts on which threads run (like kernel threads) • When an activation blocks, the kernel creates a new activation to continue execution and informs the user space that an activation has blocked (an upcall) • One of the most effective solutions, but must be implemented at the kernel level • Also useful for removing the extended spinning problem on multiprogrammed multiprocessor systems
User-level Memory Management • Memory management can also benefit from user-level techniques • Replace malloc()/free() with faster user-level versions • Remove system calls sbrk() • Remove page faults
User-level Memory Management (2) • Ideal for allocating/de-allocating memory for similar data structures • No fragmentation • Pre-allocate a large chunk using malloc() whenever required • User-level data structures handle allocation/de-allocation • Results • malloc()/free() 323 ns • ul_malloc()/ul_free() 16 ns
User-level Memory Management (3) • More complex allocation/de-allocation possible through building a complete user-level memory manager • Avoid page faults (physical memory allocation) • Enable direct SMP support • Results • malloc()/free() 323 ns • ul_malloc()/ul_free() ca. 100 ns
Multiprocessor Hardware (Flynn) • Single Instruction Single Data (SISD) • Uniprocessor machines • Single Instruction Multiple Data (SIMD) • Array computers • Multiple Instruction Single Data (MISD) • Pipelined vector processors (?) • Multiple Instruction Multiple Data (MIMD) • General purpose parallel computers
Memory Models • Uniform Memory Access (UMA) • Each CPU has equal access to memory and I/O devices • Non-Uniform Memory Access (NUMA) • Each CPU has local memory and is capable of accessing memory local to other CPUs • No Remote Memory Access (NORMA) • CPUs with local memory connected over a high-speed network
Symmetric Multiprocessors • The UMA memory model is probably the most common • The tightly-coupled, shared memory, symmetric multiprocessor (SMP) (Schimmel) • CPUs, I/O and memory are interconnected over a high speed bus • All units are located at a close physical distance from each other
Symmetric Multiprocessors (2) • Main memory consists of one single global memory module • Each CPU usually has access to local memory in terms of a local cache • Memory access is symmetric • Fair access is ensured • Cache
Caching • Data and instruction buffer • Diminish relatively slow speeds between memory and processor • Cache consistency on multiprocessors • Write through protocol • Snoopy caches • False sharing
Synchronisation • Synchronisation primitives serve to • provide access control to shared resources • event ordering • Valois describes the relationship of synchronisation methods: mutual exclusion (blocking, busy-waiting) and lock-free (non-blocking, wait-free)
Synchronisation Support • Synchronisation on multiprocessors relies on hardware support • Atomic read and write instructions • “Read-modify-write” instructions • swap • test and set • compare and swap • load linked / store conditional • double compare and swap • Herlihy’s hierarchy
Mutual Exclusion • Blocking or busy-waiting • Rule of thumb is to busy-wait if the expected wait is less than the time required to block and resume a process • In fine-grained multithreaded environments critical sections are small, so busy-waiting is usually preferred
Spin locks • Spin locks are most likely the simplest locking primitives • A spin lock is a variable that is in one of two states (usually 1 or 0) • Two operations act on a spin lock • Acquire (spin until the lock is obtained) • Release
Spin Lock Implementation
Acquire spin lock:
spin:  lock            ; lock bus for btsl
       btsl lock, 0    ; bit test and set
       jnc cont        ; continue if carry is 0 (lock acquired)
       jmp spin        ; else go to spin
cont:
Release spin lock:
       lock            ; lock bus for btrl
       btrl lock, 0    ; release lock
Test and Test and Set Lock • Segall and Rudolph present a spin lock that does not monopolise the bus
Acquire spin lock – Segall and Rudolph:
spin:  lock            ; lock bus for btsl
       btsl lock, 0    ; bit test and set
       jnc cont        ; continue if carry is 0 (lock acquired)
loop:  btl lock, 0     ; test only (no bus lock)
       jc loop         ; loop while carry is set (lock still held)
       jmp spin        ; lock looks free – retry test-and-set
cont: