
  1. Chapter 1 Introduction and General Concepts

  2. References
  • Selim Akl, Parallel Computation: Models and Methods, Prentice Hall, 1997. Updated online version available through the author's website.
  • Selim Akl, "The Design of Efficient Parallel Algorithms," Chapter 2 in Handbook on Parallel and Distributed Processing, edited by J. Blazewicz, K. Ecker, B. Plateau, and D. Trystram, Springer Verlag, 2000.
  • Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar, Introduction to Parallel Computing, 2nd Edition, Addison Wesley, 2003.
  • Harry Jordan and Gita Alaghband, Fundamentals of Parallel Processing: Algorithms, Architectures, Languages, Prentice Hall, 2003.
  • Michael Quinn, Parallel Programming in C with MPI and OpenMP, McGraw Hill, 2004.
  • Michael Quinn, Parallel Computing: Theory and Practice, McGraw Hill, 1994.
  • Barry Wilkinson and Michael Allen, Parallel Programming, 2nd Edition, Prentice Hall, 2005.

  3. Outline
  • Need for Parallel & Distributed Computing
  • Flynn's Taxonomy of Parallel Computers
  • Two Main Types of MIMD Computers
  • Examples of Computational Models
  • Data Parallel & Functional/Control/Job Parallel
  • Granularity
  • Analysis of Parallel Algorithms
  • Elementary Steps: computational and routing steps
  • Running Time & Time Optimality
  • Parallel Speedup
  • Speedup
  • Cost and Work
  • Efficiency
  • Linear and Superlinear Speedup
  • Speedup and Slowdown Folklore Theorems
  • Amdahl's and Gustafson's Laws
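  The outline above previews several speedup-related measures that are defined formally in later chapters. As a quick reference, here is a minimal sketch of the standard textbook definitions; the symbols T(1), T(p), p, and f are notation assumed here rather than taken from these slides:
  • Speedup: S(p) = T(1) / T(p), where T(1) is the best sequential running time and T(p) is the parallel running time on p processors
  • Cost: C(p) = p · T(p); work is the total number of operations actually performed by all processors
  • Efficiency: E(p) = S(p) / p = T(1) / (p · T(p))
  • Amdahl's Law: S(p) ≤ 1 / (f + (1 − f)/p), where f is the inherently sequential fraction of the computation
  • Gustafson's Law (scaled speedup): S(p) = f + (1 − f) · p, where f is the sequential fraction of the parallel execution time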

  4. Reasons to Study Parallel & Distributed Computing
  • Sequential computers have severe limits on memory size.
  • Significant slowdowns occur when accessing data stored on external devices.
  • Sequential computation times for most large problems are unacceptable.
  • Sequential computers cannot meet the deadlines of many real-time problems.
  • Many problems are distributed in nature and are natural candidates for distributed computation.
