WW Grid Parallel Processing: Architecture Overview Subject Code: 433-498 Grid Computing and Distributed Systems (GRIDS) Lab, The University of Melbourne, Melbourne, Australia. www.gridbus.org Rajkumar Buyya
Overview of the Talk • Why Parallel Processing? • Parallel Hardware • Parallel Operating Systems • Parallel Programming Paradigms • Grand Challenges
Computing Elements [diagram: a multi-processor computing system layered as Hardware, Operating System (threads interface over a microkernel), Programming Paradigms, and Applications; the processing elements at each level are the processor, the process, and the thread]
Two Eras of Computing [diagram: the Sequential Era and the Parallel Era, each progressing from Architectures to System Software/Compilers to Applications to Problem-Solving Environments (P.S.Es), and each maturing from R & D through commercialization to commodity, on a timeline from 1940 to 2030]
History of Parallel Processing • PP can be traced to a tablet dated around 100 BC. • The tablet has 3 calculating positions. • We can infer that multiple positions were used for reliability and/or speed.
Motivating Factors • Just as we learned to fly not by constructing a machine that flaps its wings like a bird, but by applying the aerodynamic principles demonstrated by nature... • We modeled PP on the parallelism found in biological species.
Motivating Factors • The aggregate speed with which neurons carry out complex calculations demonstrates the feasibility of PP: an individual neuron's response is slow (on the order of milliseconds), yet collectively neurons compute quickly.
Why Parallel Processing? • Computation requirements are ever increasing: visualization, distributed databases, simulations, scientific prediction (e.g. earthquakes), etc. • Sequential architectures are reaching physical limits (speed of light, thermodynamics).
Human Architecture! [graph: performance versus age (5 to 45); growth is vertical in the early years and horizontal thereafter]
Computational Power Improvement [graph: C.P.I. versus number of processors; a multiprocessor's power keeps improving as processors are added, while a uniprocessor's stays flat]
Why Parallel Processing? • The technology of PP is mature and can be exploited commercially; there is significant R & D work on the development of tools and environments. • Significant developments in networking technology are paving the way for heterogeneous computing.
Why Parallel Processing? • Hardware improvements such as pipelining and superscalar execution are non-scalable and require sophisticated compiler technology. • Vector processing works well only for certain kinds of problems.
A Parallel Program has & needs ... • Multiple "processes" active simultaneously solving a given problem, generally on multiple processors. • Communication and synchronization between its processes (this forms the core of parallel programming efforts), as the sketch below illustrates.
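As a minimal illustration of these two needs, here is a hedged C sketch (not from the talk; names such as partial_sum are ours, and a 64-bit long is assumed): two POSIX threads each solve half of a summation problem simultaneously and synchronize through a mutex when combining their results.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000

    static long total = 0;                 /* shared result */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each worker sums half of the range, then synchronizes with the
       other thread via the mutex to combine the partial results. */
    static void *partial_sum(void *arg) {
        long start = (long)arg, sum = 0;
        for (long i = start; i < start + N / 2; i++)
            sum += i;
        pthread_mutex_lock(&lock);         /* synchronization */
        total += sum;                      /* communication via shared data */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, partial_sum, (void *)(long)0);
        pthread_create(&t2, NULL, partial_sum, (void *)(long)(N / 2));
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("sum = %ld\n", total);      /* 0 + 1 + ... + N-1 */
        return 0;
    }

Compile with cc -lpthread; without the mutex, the two updates to total could race, which is exactly why synchronization sits at the core of parallel programming.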
Processing Elements • Simple classification by Flynn (based on the number of instruction and data streams): • SISD - conventional • SIMD - data parallel, vector computing • MISD - systolic arrays • MIMD - very general, multiple approaches • The current focus is on the MIMD model, using general-purpose processors (no shared memory).
SISD: A Conventional Computer [diagram: a single processor with one instruction stream, one data input, and one data output] • Speed is limited by the rate at which the computer can transfer information internally. Ex: PC, Macintosh, workstations.
The MISD Architecture [diagram: three processors A, B, and C, each with its own instruction stream, operating on a single data input stream to produce a single data output stream] • More of an intellectual exercise than a practical configuration. A few have been built, but none are commercially available.
SIMD Architecture [diagram: three processors A, B, and C sharing one instruction stream, each with its own data input and output streams, computing Ci <= Ai * Bi; see the sketch below] Ex: CRAY vector-processing machines, Thinking Machines CM*, Intel MMX (multimedia support).
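The element-wise product Ci <= Ai * Bi is the canonical data-parallel operation. In the hedged C sketch below (ours, not from the talk), every loop iteration applies the same instruction to different data; a vectorizing compiler (e.g. GCC at -O3) can map such a loop onto SIMD instructions like MMX/SSE so that several multiplications execute at once.

    #include <stdio.h>

    #define N 8

    int main(void) {
        float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
        float c[N];

        /* One instruction stream, many data streams: the same operation
           c[i] = a[i] * b[i] is applied to every element, so the compiler
           may issue one SIMD multiply covering several elements. */
        for (int i = 0; i < N; i++)
            c[i] = a[i] * b[i];

        for (int i = 0; i < N; i++)
            printf("%.0f ", c[i]);
        printf("\n");
        return 0;
    }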
MIMD Architecture [diagram: three processors A, B, and C, each with its own instruction stream and its own data input and output streams] • Unlike SISD and MISD machines, a MIMD computer works asynchronously. • Shared-memory (tightly coupled) MIMD • Distributed-memory (loosely coupled) MIMD
Shared-Memory MIMD Machine [diagram: processors A, B, and C connected through memory buses to a global memory system] • Communication: the source PE writes data to global memory (GM) and the destination PE retrieves it (see the sketch below). • Easy to build; conventional SISD OSes can easily be ported. • Limitations: reliability and expandability. A memory component or processor failure affects the whole system, and increasing the number of processors leads to memory contention. Ex: Silicon Graphics supercomputers....
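A hedged C sketch of this write-then-retrieve pattern (variable names such as global_mem are ours): one thread plays the source PE and deposits a value in "global memory" (here, an ordinary shared variable), another plays the destination PE and retrieves it, with a condition variable standing in for the coordination the hardware and OS would otherwise provide.

    #include <pthread.h>
    #include <stdio.h>

    static int global_mem;      /* stands in for a global-memory slot */
    static int ready = 0;       /* has the source written yet?        */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

    static void *source_pe(void *arg) {
        (void)arg;
        pthread_mutex_lock(&m);
        global_mem = 42;        /* source PE writes data to GM */
        ready = 1;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&m);
        return NULL;
    }

    static void *dest_pe(void *arg) {
        (void)arg;
        pthread_mutex_lock(&m);
        while (!ready)          /* wait until the data is in GM */
            pthread_cond_wait(&cv, &m);
        printf("destination PE read %d\n", global_mem);
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void) {
        pthread_t s, d;
        pthread_create(&d, NULL, dest_pe, NULL);
        pthread_create(&s, NULL, source_pe, NULL);
        pthread_join(s, NULL);
        pthread_join(d, NULL);
        return 0;
    }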
Distributed-Memory MIMD [diagram: processors A, B, and C, each with its own memory system on its own bus, connected to one another by IPC channels] • Communication: IPC over a high-speed network (see the MPI sketch below). • The network can be configured as a tree, mesh, cube, etc. • Unlike shared-memory MIMD, it is easily/readily expandable and highly reliable (the failure of one CPU does not affect the whole system).
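Message passing over such IPC channels is what libraries like MPI provide. A minimal sketch (ours, assuming an MPI installation; compile with mpicc and run with mpirun -np 2): rank 0 sends a value across the network to rank 1, and each rank touches only its own local memory.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;    /* lives in processor A's local memory */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* the only way to obtain the data is over the IPC channel */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }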
Laws of Caution... [graphs: speed versus cost C, following speed = cost^2; and speedup S versus number of processors P, following S = log2 P] • The speed of a computer is proportional to the square of its cost, i.e. cost = sqrt(speed). • The speedup achieved by a parallel computer increases as the logarithm of the number of processors: speedup = log2(no. of processors). A worked example follows.
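A quick worked example of both laws: to build a computer 100 times faster, the cost need only grow by a factor of sqrt(100) = 10; yet under the second, pessimistic law, a parallel machine with 1024 processors would be expected to deliver a speedup of only log2(1024) = 10.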
Caution.... • Very fast developments in PP and related areas have blurred concept boundaries, causing a lot of terminological confusion: concurrent computing/programming, parallel computing/processing, multiprocessing, distributed computing, etc.
It’s hard to imagine a field that changes as rapidly as computing.
Caution.... • Computer Science is an immature science (it lacks a standard taxonomy and terminology).
Caution.... • Even well-defined distinctions like shared memory and distributed memory are merging due to new advances in technology. • Good environments for development and debugging are yet to emerge.
Caution.... • There are no strict delimiters for contributors to the area of parallel processing: computer architecture, OS, HLLs, databases, computer networks all have a role to play. • This makes it a hot topic of research.
Types of Parallel Systems • Shared-Memory Parallel • Smallest extension to existing systems • Program conversion is incremental • Distributed-Memory Parallel • Completely new systems • Programs must be reconstructed • Clusters • A slow-communication form of distributed memory
Operating Systems for PP • MPP systems with thousands of processors require an OS radically different from current ones. • Every CPU needs an OS: • to manage its resources • to hide its details • Traditional OSes are heavy and complex, and are not suitable for MPP.
Operating System Models • A framework that unifies the features, services, and tasks performed. • Three approaches to building an OS: • Monolithic OS • Layered OS • Microkernel-based (client-server) OS, the approach suited to MPP systems • Simplicity, flexibility, and high performance are crucial for an OS.
Monolithic Operating System [diagram: application programs in user mode call system services running in kernel mode directly above the hardware] • Better application performance • Difficult to extend. Ex: MS-DOS
Layered OS [diagram: application programs in user mode; kernel mode layered as system services, memory & I/O device management, and process scheduling above the hardware] • Easier to enhance • Each layer of code accesses the lower-level interface • Lower application performance. Ex: UNIX
Traditional OS [diagram: application programs in user mode atop a single OS layer, built as one unit by the OS designer, running in kernel mode over the hardware]
New Trend in OS Design [diagram: servers and application programs run in user mode; only a microkernel remains in kernel mode over the hardware]
Microkernel/Client-Server OS (for MPP Systems) [diagram: in user mode, a client application with a thread library talks to file, network, and display servers via send/reply messages through the microkernel, which runs in kernel mode over the hardware] • A tiny OS kernel provides the basic primitives (processes, memory, IPC). • Traditional services become user-level subsystems. • Application performance is competitive with a monolithic OS. • OS = Microkernel + User Subsystems (the send/reply exchange is sketched below). Ex: Mach, PARAS, Chorus, etc.
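None of Mach's or PARAS's actual IPC primitives appear in the talk, so as a stand-in the hedged C sketch below uses POSIX message queues (the queue name /demo_server and the request text are ours) to show the send/reply shape of client-server IPC; for brevity the client and server sides are collapsed into one process.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    /* The client "sends" a request into the queue and the server side
       "receives" it, mirroring the microkernel's send/reply primitive.
       Link with -lrt on Linux. */
    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
        mqd_t q = mq_open("/demo_server", O_CREAT | O_RDWR, 0600, &attr);
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

        const char *request = "open file readme.txt";
        mq_send(q, request, strlen(request) + 1, 0);   /* client: send    */

        char buf[64];
        mq_receive(q, buf, sizeof buf, NULL);          /* server: receive */
        printf("file server got request: %s\n", buf);

        mq_close(q);
        mq_unlink("/demo_server");
        return 0;
    }

In a real microkernel OS, a user-level file server would sit in a loop receiving such messages and sending replies, while the kernel itself does nothing but ferry them.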
A Few Popular Microkernel Systems • MACH, CMU • PARAS, C-DAC • Chorus • QNX • (Windows)