Introduction Companion slides for The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit Modified by Rajeev Alur for CIS 640 at Penn, Spring 2009
Moore’s Law
• Transistor count still rising
• Clock speed flattening sharply
Art of Multiprocessor Programming
Still on some of your desktops: The Uniprocessor
(diagram: a single CPU connected to memory)
In the Enterprise: The Shared-Memory Multiprocessor (SMP)
(diagram: several processors, each with its own cache, sharing memory over a bus)
Your New Desktop: The Multicore Processor (CMP)
(diagram: the Sun T2000 Niagara, with cores, caches, and bus all on the same chip, connected to shared memory)
Multicores Are Here
“Intel ups ante with 4-core chip. New microprocessor, due this year, will be faster, use less electricity...” [San Fran Chronicle]
“AMD will launch a dual-core version of its Opteron server processor at an event in New York on April 21.” [PC World]
“Sun’s Niagara…will have eight cores, each core capable of running 4 threads in parallel, for 32 concurrently running threads. ….” [The Inquirer]
Why do we care?
• Time no longer cures software bloat: the “free ride” is over
• When you double your program’s path length, you can’t just wait 6 months
• Your software must somehow exploit twice as much concurrency
Traditional Scaling Process
(chart: user code speeds up 1.8x, 3.6x, 7x as the traditional uniprocessor improves over time, courtesy of Moore’s law)
Multicore Scaling Process
(chart: the same 1.8x, 3.6x, 7x speedups must now come from running the user code on more cores)
Unfortunately, not so simple…
Real-World Scaling Process
(chart: actual multicore speedups are only 1.8x, 2x, 2.9x)
Parallelization and synchronization require great care…
Multicore Programming: Course Overview
• Fundamentals: models, algorithms, impossibility
• Real-world programming: architectures, techniques
• Topics not in textbook: memory models and system-level concurrency libraries; high-level programming abstractions
A Zoo of Terms • Concurrent • Parallel • Distributed • Multicore What do they all mean? How do they differ?
Concurrent Computing • Programs designed as a collection of interacting threads/processes • Logical/programming abstraction • May be implemented on single processor by interleaving or on multiple processors or on distributed computers • Coordination/synchronization mechanism in a model of concurrency may be realized in many ways in an implementation
Parallel Computing • Computations that execute simultaneously to solve a common problem (more efficiently) • Parallel algorithms: Which problems can have speed-up given multiple execution units? • Parallelism can be at many levels (e.g. bit-level, instruction-level, data path) • Grid computing: Branch of parallel computing where problems are solved on clusters of computers (interacting by message passing) • Multicore computing: Branch of parallel computing focusing on multiple execution units on same chip (interacting by shared memory)
Distributed Computing • Involves multiple agents/programs (possibly with different computational tasks) with multiple computational resources (computers, multiprocessors, network) • Many examples of contemporary software (e.g. web services) are distributed systems • Heterogeneous nature, and range of time scales (web access vs local access), make design/programming more challenging
Sequential Computation
(diagram: a single thread applying methods to objects in memory)
Concurrent Computation
(diagram: multiple threads applying methods to shared objects in memory)
Asynchrony
Sudden, unpredictable delays:
• Cache misses (short)
• Page faults (long)
• Scheduling quantum used up (really long)
Model Summary
• Multiple threads (sometimes called processes)
• Single shared memory
• Objects live in memory
• Unpredictable asynchronous delays
Road Map
• Textbook focuses on principles first, then practice
• Start with idealized models and simplistic problems
• Emphasize correctness over pragmatism: “Correctness may be theoretical, but incorrectness has practical impact”
• In this course, interleaving of chapters from the two parts
Concurrency Jargon
• Hardware: processors
• Software: threads, processes
Sometimes OK to confuse them, sometimes not.
Parallel Primality Testing
• Challenge: print the primes from 1 to 10^10
• Given: a ten-processor multiprocessor, one thread per processor
• Goal: get ten-fold speedup (or close)
Load Balancing
Split the work evenly: each thread tests a range of 10^9 numbers
P0: 1 … 10^9, P1: 10^9+1 … 2·10^9, …, P9: 9·10^9+1 … 10^10
Procedure for Thread i
void primePrint() {
  int i = ThreadID.get(); // IDs in {0..9}
  for (long j = i * 1_000_000_000L + 1; j <= (i + 1) * 1_000_000_000L; j++) {
    if (isPrime(j)) print(j);
  }
}
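The procedure above calls isPrime(j) without defining it; a minimal trial-division sketch (the class name PrimeCheck and this implementation are assumptions, not from the slides):

```java
// Hypothetical helper: trial division up to sqrt(n).
// Far too slow for testing all numbers up to 10^10, but
// enough to make the slides' primePrint loop runnable.
public class PrimeCheck {
    static boolean isPrime(long n) {
        if (n < 2) return false;
        for (long d = 2; d * d <= n; d++) {
            if (n % d == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPrime(7));              // true
        System.out.println(isPrime(1_000_000_007L)); // true
    }
}
```

Note the cost of isPrime grows with sqrt(j), which is exactly why higher ranges take longer per test.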
Issues
• Higher ranges have fewer primes
• Yet larger numbers are harder to test
• Thread workloads: uneven, hard to predict
Because thread workloads are uneven and hard to predict, the static split is rejected: we need dynamic load balancing.
Shared Counter
(diagram: each thread takes the next number from a shared counter: …, 17, 18, 19)
Procedure for Thread i
Counter counter = new Counter(1);
void primePrint() {
  long j = 0;
  while (j < 10_000_000_000L) { // 10^10
    j = counter.getAndIncrement();
    if (isPrime(j)) print(j);
  }
}
Here counter is a shared Counter object, seen by all threads.
Where Things Reside
(diagram: each processor’s cache holds that thread’s code and local variables; the single shared counter lives in shared memory, reached over the bus)
The loop stops once every value up to 10^10 has been taken.
Each call to getAndIncrement() increments the counter and returns a fresh value.
Counter Implementation
public class Counter {
  private long value;
  public long getAndIncrement() {
    return value++;
  }
}
This implementation is OK for a single thread, but not for concurrent threads.
What It Means
value++ is not a single step; it expands to three:
temp = value;
value = temp + 1;
return temp;
Not so good… (timeline, value over time: 1 → 2 → 3 → 2)
One thread reads 1 and stalls; another reads 1 and writes 2, then reads 2 and writes 3; finally the stalled thread writes 2, losing an increment.
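The lost-update interleaving above can be provoked (nondeterministically) with a small sketch; the class names here are hypothetical, not from the slides:

```java
// Two threads hammer an unsynchronized counter. Because value++
// is three separate steps, increments can be lost, so the final
// value is unpredictable and often less than the 200,000 expected.
public class RaceDemo {
    static class Counter {
        private long value = 1;
        long getAndIncrement() { return value++; } // not atomic!
        long get() { return value; }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable work = () -> {
            for (int k = 0; k < 100_000; k++) c.getAndIncrement();
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // No expected value here: the result varies from run to run.
        System.out.println(c.get());
    }
}
```

Run it a few times: the printed value changes between runs, which is exactly the asynchrony problem the slides describe.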
Is this problem inherent?
(diagram: one thread’s write falls between another thread’s read and write)
If we could only glue reads and writes together…
Challenge
public class Counter {
  private long value;
  public long getAndIncrement() {
    temp = value;
    value = temp + 1;
    return temp;
  }
}
Make these three steps atomic (indivisible).
Hardware Solution
Modern processors provide a read-modify-write instruction that performs the load, update, and store as one indivisible step.
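In Java, java.util.concurrent.atomic.AtomicLong exposes exactly such a hardware read-modify-write through its getAndIncrement method; a sketch of the counter built on it (the wrapper class name AtomicCounter is ours):

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicCounter {
    private final AtomicLong value = new AtomicLong(1);

    public long getAndIncrement() {
        // one hardware read-modify-write: no lock, no lost updates
        return value.getAndIncrement();
    }

    public long get() { return value.get(); }

    public static void main(String[] args) throws InterruptedException {
        AtomicCounter c = new AtomicCounter();
        Thread[] ts = new Thread[10];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 1000; k++) c.getAndIncrement();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // 10001: all 10,000 increments kept, starting from 1
    }
}
```

Unlike the broken Counter, ten threads of one thousand increments each always land on the same final value.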
An Aside: Java™
public class Counter {
  private long value;
  public long getAndIncrement() {
    long temp;
    synchronized (this) {
      temp = value;
      value = temp + 1;
    }
    return temp;
  }
}
The synchronized block provides mutual exclusion: only one thread at a time can execute it.
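A runnable sketch of the synchronized approach (the class name SyncCounter and the driver in main are ours, not from the slides); with mutual exclusion, no increments are lost:

```java
public class SyncCounter {
    private long value = 1;

    public synchronized long getAndIncrement() {
        // the object's lock makes these steps indivisible
        long temp = value;
        value = temp + 1;
        return temp;
    }

    public synchronized long get() { return value; }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread[] ts = new Thread[10];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 1000; k++) c.getAndIncrement();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // 10001, every time
    }
}
```

Declaring the method synchronized is equivalent to wrapping its body in synchronized (this) { … }, as on the slide.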
Why do we care?
• We want as much of the code as possible to execute concurrently (in parallel)
• A larger sequential part implies reduced performance
• Amdahl’s law: this relation is not linear…
Amdahl’s Law
Speedup of a computation given n CPUs instead of 1:
Speedup = 1 / ((1 − p) + p/n)
where p is the parallel fraction, (1 − p) the sequential fraction, and n the number of processors.
Art of Multiprocessor Programming