CT30A7001 Concurrent and Parallel Computing


  1. CT30A7001 Concurrent and Parallel Computing Introduction to concurrent and parallel computing Lappeenranta University of Technology / JP

  2. Definitions
  • Concurrent computing: the concurrent (simultaneous) execution of multiple interacting computational tasks. Concurrent computing is related to parallel computing, but focuses more on the interactions between tasks.
  • Parallel computing: a form of computation in which many instructions are carried out simultaneously.
  • Distributed computing: a program is split into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of parallel computing, but "parallel computing" most commonly describes program parts running simultaneously on multiple processors in the same computer.
  • Grid computing: a form of distributed computing in which a "super and virtual computer" is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
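The distinction above can be made concrete with a small sketch (my own illustration, not from the slides): the same computation run sequentially and as a pool of concurrently executing tasks. Note that CPython threads interleave rather than run truly in parallel, so this shows concurrency, not parallel speedup.

```python
# Minimal sketch of sequential vs. concurrent execution of the same work.
from concurrent.futures import ThreadPoolExecutor

def f(x: int) -> int:
    return x * x

# Sequential: one instruction stream, one task at a time.
sequential = [f(x) for x in range(6)]

# Concurrent: the same tasks handed to a pool of threads that
# execute them in an interleaved (and on some platforms parallel) way.
with ThreadPoolExecutor(max_workers=3) as pool:
    concurrent_ = list(pool.map(f, range(6)))

print(sequential == concurrent_)  # True: same result, different execution
```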

  3. Motivation
  • The ever-increasing need for computing power has traditionally been met by producing new, more powerful processors => physical limits restrict this growth
  • Clock rates rose from 40 MHz (1988) to over 2 GHz (2002); multiple instructions/cycle, multiple processors

  4. Performance – Cost ratio [Figure: performance vs. cost curves for the 1960s, 1970s, 1980s and 1990s]

  5. Can't you go any faster?
  • More power can be achieved by using multiple computers (parallel computing) => allows us to break the limits set for a single processor

  6. Trends?
  • Faster processors will keep evolving!
  • Moore's law (a statement, not a law of nature)
  • Developing parallel applications takes time
  • However:
  • Memory access times lag behind (~10% growth)
  • Data transfer speed lags behind processing capacity
  • Software lags behind the hardware

  7. Moore's law

  8. Parallel computing?
  • Non-"von Neumann" type of computing
  • A problem is solved at the same time in two or more locations (processors, ...)
  • Example: library workers
  • The problem needs to have some inherent parallelism
  • The problem is solved on some parallel architecture
  • The problem is solved by some algorithm

  9. Library workers example
  • 1 or more workers return books to the shelves
  • Aisles are divided according to the alphabet: A-E, F-J, K-O, P-T, U-Z
  • How to divide the work among the workers?

  10. 1 Worker [Figure: a single worker applies the algorithm to all of the work]

  11. 2 Workers

  12. 2 Workers

  13. 2 Workers [Figure: the aisles A-E, F-J, K-O, P-T, U-Z divided between the two workers]

  14. Things to consider
  • The amount of work: when do we need several workers?
  • The number of workers: how many workers is enough?
  • Internal algorithm: how do we divide the work?
  • External algorithm (communication): how is the work synchronized?
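The internal-algorithm question (how do we divide the work?) can be sketched in code. The aisle ranges below come from the slides; the routing function and all names are my own hypothetical illustration.

```python
# Sketch of the library example: route each returned book to the worker
# whose alphabet range covers the book's first letter.

RANGES = ["A-E", "F-J", "K-O", "P-T", "U-Z"]  # aisle division from the slides

def assign(book_title: str, ranges: list = RANGES) -> int:
    """Return the index of the worker responsible for this book."""
    first = book_title[0].upper()
    for worker, rng in enumerate(ranges):
        lo, hi = rng.split("-")
        if lo <= first <= hi:
            return worker
    raise ValueError(f"no range covers {first!r}")

# Internal algorithm: each worker shelves only its own pile.
returned = ["Dickens", "Kafka", "Austen", "Zola", "Tolstoy"]
piles: dict[int, list] = {}
for title in returned:
    piles.setdefault(assign(title), []).append(title)

print(piles)  # {0: ['Dickens', 'Austen'], 2: ['Kafka'], 4: ['Zola'], 3: ['Tolstoy']}
```

Note how the piles are uneven: this is exactly the load-balancing question the slide raises, since a static split by alphabet gives no guarantee that each worker gets the same amount of work.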

  15. Utilization

  16. Inherent parallelism
  • Parallel computing can only be applied to problems that have some inherent parallelism
  • Parallelism is an inherent property that cannot be increased!
  • Some problems are naturally sequential
  • One can affect how well the parallelism is used

  17. Benefits of parallel computing
  • Many problems are naturally parallel although they are modeled sequentially
  • Interrupts, polling => loss of information
  • There are problems that cannot be solved sequentially
  • Performance may increase linearly (sometimes even more)
  • In the future, natural parallelism will be commonly utilized
  • There are no purely sequential computers (inherent parallelism)

  18. Parallel computing goals
  • Different types of goals:
  • Reduce the execution time. Example: weather forecasts
  • Solve bigger problems (more accurate models). Example: mobile network simulations
  • Run multiple applications at the same time (multithreading)
  • A natural approach

  19. Example: Weather forecast
  • Compute a weather forecast for a 3000 x 3000 mile area, up to 11 miles high
  • Divide the model into parts of size 0.1 x 0.1 x 0.1 miles => ~10^11 parts
  • Compute the forecast for two days (48 hours)
  • Computing one part takes ~100 operations, and the parts are computed ~100 times within these 48 hours => ~10^15 operations in total
  • On a single 1000 MIPS workstation the computation takes ~280 hours!
  • With 1000 workstations of 100 MIPS each, it takes only ~3 hours
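The slide's estimate can be re-derived with a few lines of arithmetic (assuming 1 MIPS = 10^6 operations per second):

```python
# Re-deriving the weather-forecast estimate from the slide's figures.
parts = (3000 * 3000 * 11) / (0.1 ** 3)   # ~1e11 model cells
ops = parts * 100 * 100                   # ~1e15 operations in total

one_ws = ops / (1000 * 10**6)             # seconds on one 1000-MIPS workstation
cluster = ops / (1000 * 100 * 10**6)      # seconds on 1000 x 100-MIPS workstations

print(round(one_ws / 3600))   # ~275 hours (the slide rounds to ~280)
print(round(cluster / 3600))  # ~3 hours
```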

  20. Applicability?
  • Parallel computing can be used to solve all those problems that contain some parallel properties
  • A time-consuming task!
  • Performance
  • Portability
  • Scalability

  21. Application areas
  • Technical applications: floating point operations, computing grids
  • Business applications: decision support systems, simulations, ...
  • Network applications: multimedia, video-on-demand, ...

  22. [Figure]

  23. Parallel architectures
  • Problems are computed in parallel on parallel architectures
  • There exist several different approaches:
  • Massively parallel processors (MPP)
  • Symmetric multiprocessors (SMP)
  • Vector processors
  • Cluster machines

  24. Examples [Figures: Cray-1; Avalon Cluster-140]

  25. TOP500 - 2003

  26. TOP500 - 2006

  27. TOP500 - 2010

  28. No 54

  29. [Figure]

  30. [Figure]

  31. [Figure]

  32. [Figure]

  33. GRID
  • Message passing inside a cluster
  • Geographic size increases
  • Number of possible users increases
  • User interaction increases (requires knowledge)
  • Use of threads within a workstation
  • Hyperthreading within a processor

  34. Algorithms
  • Selection of the algorithm is important, as different algorithms have different suitability
  • The fastest sequential algorithm may not be the best algorithm to parallelize!
  • The selection of the algorithm may depend on the selected parallel architecture

  35. Example: Sorting a sequence
  • Traditional sequential approach: O(n log n) time
  • Simple parallel merge sort: O(log n log log n) time
  • Pipelined sorting algorithm: O(log n) time
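As a sketch of why merge sort parallelizes naturally: the two recursive halves are independent and can be sorted concurrently. This is my own illustration, not the slides' algorithm; because of CPython's global interpreter lock it shows only the structure of the parallel version, not an actual speedup, and it does not achieve the O(log n log log n) bound mentioned above.

```python
# Merge sort with the two recursive halves handed to separate threads.
import threading

def merge(a: list, b: list) -> list:
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def psort(xs: list) -> list:
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    halves: list = [None, None]

    def work(k: int, part: list) -> None:
        halves[k] = psort(part)   # each half is an independent subproblem

    ts = [threading.Thread(target=work, args=(0, xs[:mid])),
          threading.Thread(target=work, args=(1, xs[mid:]))]
    for t in ts: t.start()
    for t in ts: t.join()         # synchronize before the merge step
    return merge(halves[0], halves[1])

print(psort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```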

  36. [Figure]

  37. Note!
  • Parallel computing does NOT help if the bottleneck of the system is in memory, in the disk system, or in the network!
  • Parallel computing may produce surprises
  • Tasks compete for resources => the winner is not known beforehand => non-deterministic behavior of the program => deadlock? (mutual exclusion)

  38. Aspects slowing the future
  • Not all problems can be solved in parallel. Example: digging a well vs. a dike
  • Parallel programming requires a new way of thinking (compare OO programming)
  • A sequential algorithm may not be suitable for parallel execution
  • Partitioning and load balancing are not easy tasks
  • Tools are still being developed

  39. Tools for parallel computing
  • Threads
  • http://www.sun.com/software/Products/Developer-products/threads/
  • PVM (Parallel Virtual Machine)
  • http://www.epm.ornl.gov/pvm/
  • MPI (Message Passing Interface)
  • http://www.epm.ornl.gov/~walker/mpi/
  • http://www.netlib.org/utk/papers/mpi-book/mpi-book.html
  • OpenMP
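None of the real tools is shown here; as a rough sketch of the message-passing style that PVM and MPI provide, the following uses plain Python threads and queues to "send" tasks to workers and "receive" results back (my own illustration, not any of the listed APIs).

```python
# Message-passing sketch: a master distributes tasks to workers
# over queues and collects the results, MPI/PVM-style.
import threading
import queue

tasks: queue.Queue = queue.Queue()
results: queue.Queue = queue.Queue()

def worker() -> None:
    while True:
        n = tasks.get()        # "receive" a message from the master
        if n is None:          # stop message
            break
        results.put(n * n)     # "send" the computed result back

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

for n in range(8):             # distribute 8 tasks
    tasks.put(n)
for _ in workers:              # one stop message per worker
    tasks.put(None)
for w in workers:
    w.join()

total = sum(results.get() for _ in range(8))
print(total)  # 0 + 1 + 4 + ... + 49 = 140
```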
