CT30A7001 Concurrent and Parallel Computing Introduction to concurrent and parallel computing Lappeenranta University of Technology / JP
Definitions • Concurrent computing • Concurrent computing is the concurrent (simultaneous) execution of multiple interacting computational tasks. Concurrent computing is related to parallel computing, but focuses more on the interactions between tasks. • Parallel computing • Parallel computing is a form of computation in which many instructions are carried out simultaneously. • Distributed computing • In distributed computing a program is split up into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of parallel computing, but parallel computing is most commonly used to describe program parts running simultaneously on multiple processors in the same computer. • Grid computing • Grid computing is a form of distributed computing whereby a "super and virtual computer" is composed of a cluster of networked, loosely coupled computers, acting in concert to perform very large tasks.
Motivation • The ever-increasing need for computing power has traditionally been met by building new, more powerful processors => physical limits restrict this growth (clock rates grew from 40 MHz (1988) to over 2 GHz (2002); multiple instructions per cycle, multiple processors)
Performance – Cost ratio [Figure: performance vs. cost curves, one per decade from the 1960s to the 1990s]
Can’t you go any faster? • More power can be achieved by using multiple computers (parallel computing) => allows us to break the limits set for a single processor
Trends? • Faster processors will keep appearing! • Moore’s law (a statement, not a physical law) • So is it worth the time to develop parallel applications? • However • Memory access times lag behind (~10% improvement per year) • Data transfer speeds lag behind processing capacity • Software lags behind the hardware
Moore’s law [Figure: transistor counts over time, illustrating Moore’s law]
Parallel computing? • Non-”von Neumann” type of computing • A problem is solved at the same time in two or more locations (processors, …) • Example: library workers • The problem needs to have some inherent parallelism • The problem is solved on some parallel architecture • The problem is solved by some algorithm
Library workers example • One or more workers return books to the shelves • Aisles are divided according to the alphabet • A-E, F-J, K-O, P-T, U-Z • How do we divide the work among the workers?
1 worker [Figure: a single worker executes the whole algorithm over all of the work]
2 workers [Figures: two workers splitting the aisles A-E, F-J, K-O, P-T and U-Z between themselves in different ways]
Things to consider • The amount of work • When do we need several workers? • The number of workers • How many workers are enough? • Internal algorithm • How do we divide the work? (see the sketch below) • External algorithm (communication) • How is the work synchronized?
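As a concrete illustration of the work-division question, here is a minimal C sketch of static block partitioning: the aisles are split into contiguous ranges, one per worker. The aisle array and worker count are illustrative choices, not from the slides; with five workers the split comes out as the A-E, F-J, K-O, P-T, U-Z division shown above.

```c
#include <stdio.h>

#define NUM_AISLES 26   /* one aisle per letter A-Z, for simplicity */

static const char aisles[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";

int main(void) {
    int workers = 5;    /* yields the A-E, F-J, K-O, P-T, U-Z split from the slides */
    for (int w = 0; w < workers; w++) {
        /* Static block partitioning: worker w gets one contiguous range. */
        int lo = w * NUM_AISLES / workers;
        int hi = (w + 1) * NUM_AISLES / workers;
        printf("worker %d shelves aisles %c-%c\n", w, aisles[lo], aisles[hi - 1]);
    }
    return 0;
}
```

Static partitioning is the simplest internal algorithm; it works well when the work per range is roughly equal, and it is exactly where load imbalance appears when it is not.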
Utilization [Figure: how well the workers are utilized under different divisions of the work]
Inherent parallelism • Parallel computing can only be applied to problems that have some inherent parallelism • Parallelism is an inherent property of a problem and cannot be increased! • Some problems are naturally sequential • One can only affect how well the available parallelism is exploited
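The slides do not give a formula, but the standard way to quantify this limit is Amdahl's law: if a fraction p of the work is parallelizable and N processors are used, the achievable speedup is

S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}

so even a problem that is 90% parallel can never run more than ten times faster, no matter how many workers are added.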
Benefits of parallel computing • Many problems are naturally parallel although they are modeled sequentially • Interrupts, polling => loss of information • There are problems that cannot be solved sequentially • Performance may increase linearly (sometimes even superlinearly) • In the future, natural parallelism will be commonly exploited • There are no purely sequential computers (inherent parallelism)
Parallel computing goals • Different types of goals: • Reduce the execution time. Example: weather forecasts • Solve bigger problems (more accurate models). Example: mobile network simulations • Run multiple applications at the same time (multithreading) • A natural approach
Example: Weather forecast • Compute a weather forecast for a 3000 × 3000 mile area up to 11 miles high • Divide the model into parts of size 0.1 × 0.1 × 0.1 miles => ~10^11 parts • Compute the forecast for two days (48 hours) • Computing one part takes ~100 operations and each part is recomputed ~100 times within these 48 hours => ~10^15 operations in total • On a single 1000 MIPS workstation the computation takes ~280 hours!!! • With 1000 workstations of 100 MIPS each it takes only ~3 hours
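These figures are easy to verify; a back-of-the-envelope check in C, with all constants taken from the slide:

```c
#include <stdio.h>

int main(void) {
    double parts  = (3000.0 * 3000.0 * 11.0) / (0.1 * 0.1 * 0.1); /* ~10^11 cells */
    double ops    = parts * 100.0 * 100.0;  /* 100 ops/cell, ~100 recomputations => ~10^15 */
    double single = ops / 1e9;              /* one 1000 MIPS workstation, in seconds */
    double farm   = ops / (1000.0 * 1e8);   /* 1000 workstations at 100 MIPS each */
    printf("parts: %.1e, total operations: %.1e\n", parts, ops);
    printf("single workstation: ~%.0f hours\n", single / 3600.0);
    printf("1000 workstations:  ~%.1f hours\n", farm / 3600.0);
    return 0;
}
```

This prints roughly 275 hours for the single machine and 2.8 hours for the farm, matching the slide's ~280 and ~3 hours.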
Applicability? • Parallel computing can be used to solve any problem that contains some parallel properties • Parallelizing is a time-consuming task! • Performance • Portability • Scalability
Application areas • Technical applications • floating point operations, computing grids • Business applications • decision support systems, simulations, ... • Network applications • multimedia, video-on-demand, ...
Parallel architectures • Problems are computed in parallel on parallel architectures • There exist several different approaches: • Massively parallel processors (MPP) • Symmetric multiprocessors (SMP) • Vector processors • Cluster machines
Examples [Figures: the Cray-1 vector supercomputer; the Avalon cluster of 140 machines]
TOP500 [Figures: TOP500 supercomputer lists from 2003, 2006 and 2010, plus the entry at No. 54]
GRID [Figure: levels of parallelism, from the processor up to the grid] • Hyperthreading within a processor • Use of threads within a workstation • Message passing inside a cluster • GRID: geographic size increases, the number of possible users increases, and user interaction increases (requires knowledge)
Algorithms • The selection of the algorithm is important, as different algorithms differ in their suitability for parallelization • The fastest sequential algorithm may not be the best algorithm to parallelize! • The selection of the algorithm may also depend on the selected parallel architecture
Example: Sorting a sequence • Traditional sequential approach: O(n log n) time • Simple parallel merge-sort algorithm: O(log n · log log n) time • Pipelined sorting algorithm: O(log n) time
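The pipelined O(log n) algorithm is well beyond an introductory slide, but the basic idea of sorting in parallel can be sketched with a task-parallel merge sort. Here is a minimal OpenMP version in C; it is an illustration only, not the algorithm cited above, and its parallel depth is dominated by the final sequential merge:

```c
#include <omp.h>
#include <stdlib.h>
#include <string.h>

/* Comparator for the sequential base case. */
static int cmp_int(const void *x, const void *y) {
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

/* Merge the sorted halves a[0..mid) and a[mid..n) through the buffer tmp. */
static void merge(int *a, int *tmp, int mid, int n) {
    int i = 0, j = mid, k = 0;
    while (i < mid && j < n) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < n)   tmp[k++] = a[j++];
    memcpy(a, tmp, (size_t)n * sizeof(int));
}

static void merge_sort(int *a, int *tmp, int n) {
    if (n < 2048) {                    /* cutoff: small pieces are cheaper to sort serially */
        qsort(a, (size_t)n, sizeof(int), cmp_int);
        return;
    }
    int mid = n / 2;
    #pragma omp task                   /* sort the two halves concurrently */
    merge_sort(a, tmp, mid);
    #pragma omp task
    merge_sort(a + mid, tmp + mid, n - mid);
    #pragma omp taskwait               /* both halves must finish before merging */
    merge(a, tmp, mid, n);
}

int main(void) {
    enum { N = 1 << 20 };
    int *a = malloc(N * sizeof(int)), *tmp = malloc(N * sizeof(int));
    for (int i = 0; i < N; i++) a[i] = rand();
    #pragma omp parallel
    #pragma omp single                 /* one thread starts the task tree */
    merge_sort(a, tmp, N);
    free(a);
    free(tmp);
    return 0;
}
```

Compile with e.g. gcc -fopenmp. The cutoff keeps task-creation overhead below the cost of the work each task performs, a recurring theme in parallel algorithm design.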
Note! • Parallel computing does NOT help if the bottleneck of the system is in memory, in the disk system, or in the network! • Parallel computing may produce surprises • processes compete for resources => the winner is not known beforehand => non-deterministic behavior of the program => possible deadlock (mutual exclusion)
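The non-determinism comes from unsynchronized access to shared data. A minimal pthreads sketch in C of the classic lost-update race, and the mutual exclusion that fixes it:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* remove these two lines and the      */
        counter++;                     /* final count becomes unpredictable   */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```

Without the mutex the two increment sequences interleave and the result varies from run to run, which is exactly the "winner is not known beforehand" behavior noted above; careless locking is in turn how deadlocks arise.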
Aspects slowing the future • Not all problems can be solved in parallel • Example: digging a well vs. a dike • Parallel programming requires a new way of thinking (compare OO programming) • A sequential algorithm may not be suitable for parallel execution • Partitioning and load balancing are not easy tasks • Tools are still being developed
Tools for parallel computing • Threads • http://www.sun.com/software/Products/Developer-products/threads/ • PVM (Parallel Virtual Machine) • http://www.epm.ornl.gov/pvm/ • MPI (Message Passing Interface) • http://www.epm.ornl.gov/~walker/mpi/ • http://www.netlib.org/utk/papers/mpi-book/mpi-book.html • OpenMP
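To make the list concrete, here is a minimal MPI message-passing sketch in C (compiled with mpicc and launched with mpirun; only standard MPI calls are used, and the "work item" is just an illustrative integer):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    if (rank == 0) {
        /* Master sends each worker one piece of "work". */
        for (int dest = 1; dest < size; dest++) {
            int work = dest * 100;
            MPI_Send(&work, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        }
    } else {
        int work;
        MPI_Recv(&work, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank %d of %d received work item %d\n", rank, size, work);
    }
    MPI_Finalize();
    return 0;
}
```

The same master/worker pattern could be written with PVM or, on a single shared-memory machine, with threads or OpenMP; MPI is shown here because it remains the most widely used of the tools listed.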