Administrative Issues
• Exam date candidates
• CW 7
  * Feb 14th (Tue): 10-12
  * Feb 16th (Thu): 10-12
• CW 8
  * Feb 24th (Fri): 14-16
• CW 9
  * Feb 28th (Tue): 14-16
  * Feb 29th (Wed): 14-16 (cool date for an exam)
  * March 1st (Thu): 10-12
Multiprocessing Support
Tanenbaum (3rd ed.), Chapter 8
• Parallel computer architectures
• OS design issues
Parallel computer architectures
• Is parallel computing a good idea?
• Which parallel computing designs exist?
Parallel Processing Motivation
[Figure: TMR — the input is fed to processors P1, P2, P3 in parallel; a voter V produces the output]
• A single processor is a Single Point Of Failure (SPOF)
• Not tolerable in critical applications (e.g. a crash while writing your thesis)
• Redundancy helps, e.g. Triple Modular Redundancy (TMR) — see the voter sketch below
• Why not just double? Two copies can detect a mismatch, but cannot outvote the faulty one
• CPUs usually have other detection mechanisms for critical failures…
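As an aside (my sketch, not from the slides), the voter in TMR can be as simple as a bitwise majority function: each output bit follows at least two of the three replicas, so a single faulty replica is masked.

#include <stdint.h>
#include <stdio.h>

/* Bitwise majority vote over three replica outputs: each result bit is
 * taken from at least two of the three inputs, masking any single fault. */
static uint32_t tmr_vote(uint32_t p1, uint32_t p2, uint32_t p3)
{
    return (p1 & p2) | (p1 & p3) | (p2 & p3);
}

int main(void)
{
    /* Replica p3 has a flipped bit, but the voted output is still correct. */
    uint32_t p1 = 0xCAFEBABE, p2 = 0xCAFEBABE, p3 = 0xCAFEBABF;
    printf("voted output: 0x%08X\n", tmr_vote(p1, p2, p3));
    return 0;
}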
Parallel Processing Motivation
• Performance gain through higher clock rates?
• Power consumption? Implications? (see the note below)
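A rough reminder (not from the slides, but standard CMOS scaling): dynamic power grows roughly as

    P_dyn ≈ α · C · V² · f

where α is the switching activity, C the switched capacitance, V the supply voltage, and f the clock frequency. Raising f in practice also requires raising V, so power and heat grow much faster than performance — which is why clock-rate scaling stalled and parallelism took over.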
Parallel Processing Overview
• Flynn's Taxonomy
• Single instruction, single data (SISD) stream: no parallel operation
• Single instruction, multiple data (SIMD) stream: vector and array processors, MMX & SSE instructions (see the sketch below)
• Multiple instruction, single data (MISD) stream: never implemented
• Multiple instruction, multiple data (MIMD) stream: multicore, multiprocessor
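To make the SIMD category concrete, a small sketch in C using the SSE intrinsics mentioned above (assuming an x86 CPU; compile with -msse): four pairs of floats are added with a single instruction.

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float r[4];

    /* Load four floats into 128-bit registers, add them element-wise
     * with one SIMD instruction, and store the result back. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    __m128 vr = _mm_add_ps(va, vb);
    _mm_storeu_ps(r, vr);

    printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
    return 0;
}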
Parallel Processing Overview
• Past: multithreaded processors
  * No real parallelism, just fast thread switching
  * Choose two threads; if one blocks, the other one runs
• Present: Symmetric Multi-Processors (SMPs)
  * Uniform Memory Access (UMA)
  * Non-Uniform Memory Access (NUMA)
• Future: heterogeneous MP
  * General-purpose CPUs
  * General-purpose GPUs
  * Specific coprocessors
  * Specific accelerators
  * Reconfigurable HW
SMPs
[Figure: the three MIMD designs — shared-memory multiprocessor, message-passing multicomputer, distributed system]
SMPs: Bus-based UMA
[Figure: three bus-based variants — no caching; per-CPU caching; caching plus private memory]
• Scalability?
SMPs: Crossbar-switched UMA Scalability?
SMPs: Multistage Network UMA
• Memory addressing scheme
• Network of 2x2 switches
[Figure: a 2x2 switch — either input (A, B) can be routed to either output (X, Y)]
SMPs: Multistage Network UMA
• Example: CPU 011 sends READ (a) to memory module 110; CPU 001 sends READ (b) to module 001 (routing sketch below)
• Scalability? n CPUs x n memory modules need (n/2)·log2(n) switches
• Other problems?
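As an illustration (my sketch, not from the slides) of how such an omega-style multistage network routes a request: at stage i each switch looks at bit i of the destination module number (most significant bit first); 0 means take the upper output, 1 the lower output. A minimal C sketch, assuming 3 address bits as in the 8-CPU example:

#include <stdio.h>

#define STAGES 3   /* log2(8) stages for 8 CPUs / 8 memory modules */

static void print_bin3(unsigned v)
{
    for (int i = STAGES - 1; i >= 0; i--)
        putchar(((v >> i) & 1) ? '1' : '0');
}

/* Destination-tag routing: at stage i the switch inspects bit i of the
 * destination module number (MSB first); 0 = upper output, 1 = lower output. */
static void route(unsigned cpu, unsigned dest)
{
    printf("CPU ");        print_bin3(cpu);
    printf(" -> module "); print_bin3(dest);
    printf(" :");
    for (int stage = 0; stage < STAGES; stage++) {
        int bit = (dest >> (STAGES - 1 - stage)) & 1;
        printf("  stage %d -> %s", stage, bit ? "lower (1)" : "upper (0)");
    }
    printf("\n");
}

int main(void)
{
    route(3, 6);   /* 011 -> 110: message (a) from the slide */
    route(1, 1);   /* 001 -> 001: message (b) from the slide */
    return 0;
}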
SMPs: Non-Uniform Memory Access (NUMA) • Single address space • Load/Store interface • Access times differ!
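Because access times differ on NUMA hardware, memory placement matters. A minimal sketch (assuming Linux with libnuma installed; link with -lnuma) that pins an allocation to a specific node:

#include <stdio.h>
#include <numa.h>     /* libnuma */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    /* Allocate 1 MiB physically placed on node 0, so CPUs on node 0
     * get local (fast) access to it. */
    size_t size = 1 << 20;
    char *buf = numa_alloc_onnode(size, 0);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    buf[0] = 42;   /* touch the memory */
    printf("highest NUMA node: %d\n", numa_max_node());

    numa_free(buf, size);
    return 0;
}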
Parallel computer architectures
• Is parallel computing a good idea?
  * At least there is no better idea right now...
  * Clock rate improvement is too expensive (heat dissipation)
• Which parallel computing designs exist?
  * Flynn's taxonomy: SISD, SIMD, MISD, MIMD
  * MIMD: multithreaded, SMPs, heterogeneous multiprocessors
  * SMPs: UMA, NUMA
Multiprocessor OS Types • Private OS per node • Master/Slave • Shared OS
Private OS per node • Shared HW, CPUs & memory partitioned statically • Soft Resources? Processes, pages, disk…
Master/Slave
• Central coordinating instance: the master CPU runs the OS, the slaves run user processes
• Pros? Cons?
Shared OS • Now every user process can invoke the OS locally • No master CPU bottleneck, yet resource sharing • Other problems?
Multiprocessor OS issues
• Synchronization
  * Switching off interrupts?
  * TSL (test-and-set lock)? (see the spinlock sketch below)
  * Covered in the DS part
• Scheduling
  * What to run?
  * Where to run?
  * Which combinations?
  * Covered in the exercise
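For reference, a minimal sketch (my example, not from the slides) of a TSL-style spinlock in C, using GCC's __atomic builtins in place of a raw TSL instruction. Note that switching off interrupts would not help on a multiprocessor, because the other CPUs keep running.

#include <stdio.h>

/* A TSL-style spinlock: __atomic_test_and_set atomically sets the flag
 * and returns its previous value, just like a hardware TSL instruction. */
typedef struct { char locked; } spinlock_t;

static void spin_lock(spinlock_t *l)
{
    /* Busy-wait until the previous value was 0, i.e. we acquired the lock. */
    while (__atomic_test_and_set(&l->locked, __ATOMIC_ACQUIRE))
        ;   /* spin */
}

static void spin_unlock(spinlock_t *l)
{
    __atomic_clear(&l->locked, __ATOMIC_RELEASE);
}

int main(void)
{
    spinlock_t lock = { 0 };
    spin_lock(&lock);
    printf("in critical section\n");
    spin_unlock(&lock);
    return 0;
}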