Chapter 7. Multicores, Multiprocessors, and Clusters

Presentation Transcript


  1. Chapter 7 Multicores, Multiprocessors, and Clusters

  2. Objectives
  The student shall be able to:
  • Define parallel processing, multicore, cluster, vector processing.
  • Define SISD, SIMD, MISD, MIMD.
  • Define multithreading, hardware multithreading, coarse-grained multithreading, fine-grained multithreading, simultaneous multithreading.
  • Draw network configurations: bus, ring, mesh, cube.
  • Describe how vector processing works: how instructions may look, and how they may be processed.
  • Define GPU and name three characteristics that differentiate a CPU and a GPU.

  3. What We’ve Already Covered
  • §4.10: Parallelism and Advanced Instruction-Level Parallelism (pipelines and multiple issue)
  • §5.8: Parallelism and Memory Hierarchies (associative memory, interleaved memory)
  • §6.9: Parallelism and I/O (Redundant Arrays of Inexpensive Disks, RAID)

  4. Parallel Processing
  (figure: examples of parallel processing, e.g., Microsoft Word running its Editor, Backup, SpellCheck, and GrammarCheck tasks concurrently, and a Matrix Multiply split across processors)

  5. Parallel Programming
  // Child: print the primes in [begin, end)
  void Factor::child(int begin, int end) {
    int val, i;
    for (val = begin; val < end; val++) {
      for (i = 2; i <= val / 2; i++)   // look for any divisor of val
        if (val % i == 0) break;
      if (i > val / 2)                 // no divisor found: val is prime
        cout << "Factor:" << val << endl;
    }
    exit(0);                           // child process terminates here
  }

  // Main (the slide omits main's opening and the declarations of
  // total, numChild, begin, range, i, and stat):
    cout << "Run Factor " << total << ":" << numChild << endl;
    Factor factor;
    // Spawn children, each searching its own range of values
    for (i = 0; i < numChild; i++) {
      if (fork() == 0)
        factor.child(begin, begin + range);  // never returns (calls exit)
      begin += range;                        // parent advances to the next range
    }
    // Wait for children to finish
    for (i = 0; i < numChild; i++)
      wait(&stat);
    cout << "All Children Done: " << numChild << endl;
  }

  6. Introduction (§7.1 Introduction)
  • Goal: connecting multiple computers to get higher performance
  • Multiprocessors: scalability, availability, power efficiency
  • Job-level (process-level) parallelism: high throughput for independent jobs
  • Parallel processing program: a single program run on multiple processors
  • Multicore microprocessors: chips with multiple processors (cores)

  7. Hardware and Software
  • Hardware: serial (e.g., Pentium 4) vs. parallel (e.g., quad-core Xeon e5345)
  • Software: sequential (e.g., a traditional program) vs. concurrent (e.g., an operating system)
  • Sequential or concurrent software can run on serial or parallel hardware
  • Challenge: making effective use of parallel hardware

  8. Amdahl’s Law
  • The sequential part can limit speedup
  • Example: with 100 processors, can we get 90× speedup?
  • Tnew = Tparallelizable/100 + Tsequential
  • Speedup = 1 / ((1 − Fparallelizable) + Fparallelizable/100) = 90
  • Solving: Fparallelizable = 0.999
  • The sequential part must be only 0.1% of the original execution time
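
  As a quick check of those numbers, a small C++ sketch (the function name and the sample fractions are illustrative, not from the slide):

    #include <iostream>

    // Amdahl's Law: F = parallelizable fraction, P = processor count
    double speedup(double F, int P) {
        return 1.0 / ((1.0 - F) + F / P);
    }

    int main() {
        std::cout << speedup(0.999, 100) << '\n';  // ~91x: meets the 90x target
        std::cout << speedup(0.990, 100) << '\n';  // ~50x: 1% sequential already halves it
    }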

  9. Shared Memory (§7.3 Shared Memory Multiprocessors)
  • SMP: shared memory multiprocessor
  • Hardware provides a single physical address space for all processors
  • Synchronize shared variables using locks
  • Memory access time: UMA (uniform) vs. NUMA (nonuniform)
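
  A minimal sketch of lock-based synchronization of a shared variable using standard C++ threads (the counter, thread count, and iteration count are illustrative, not from the slide):

    #include <iostream>
    #include <mutex>
    #include <thread>

    int shared_counter = 0;   // single physical address space: both threads see this
    std::mutex counter_lock;  // protects shared_counter

    void worker() {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> guard(counter_lock);  // acquire; release on scope exit
            ++shared_counter;
        }
    }

    int main() {
        std::thread t1(worker), t2(worker);
        t1.join(); t2.join();
        std::cout << shared_counter << '\n';  // always 200000 with the lock held
    }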

  10. Example: Sum Reduction
  half = 100;
  repeat
    synch();
    if (half % 2 != 0 && Pn == 0)
      sum[0] = sum[0] + sum[half-1];
      /* conditional sum needed when half is odd; processor 0 gets the missing element */
    half = half / 2;  /* dividing line on who sums */
    if (Pn < half)
      sum[Pn] = sum[Pn] + sum[Pn+half];
  until (half == 1);
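
  The pseudocode maps directly onto C++20 threads, with std::barrier playing the role of synch(); a runnable sketch, where the processor count and initial partial sums are illustrative choices:

    #include <barrier>
    #include <iostream>
    #include <thread>
    #include <vector>

    int main() {
        const int P = 10;                 // number of "processors" (threads)
        std::vector<long> sum(P);
        for (int i = 0; i < P; ++i) sum[i] = i + 1;  // each holds a partial sum

        std::barrier synch(P);            // plays the role of synch()

        auto reduce = [&](int Pn) {
            int half = P;
            do {
                synch.arrive_and_wait();  // wait until all partial sums are ready
                if (half % 2 != 0 && Pn == 0)
                    sum[0] += sum[half - 1];  // P0 picks up the odd element
                half /= 2;                // dividing line on who sums
                if (Pn < half)
                    sum[Pn] += sum[Pn + half];
            } while (half != 1);
        };

        std::vector<std::jthread> threads;
        for (int Pn = 0; Pn < P; ++Pn) threads.emplace_back(reduce, Pn);
        threads.clear();                  // joins every thread
        std::cout << "Total: " << sum[0] << '\n';  // 1 + 2 + ... + 10 = 55
    }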

  11. Cluster: Message Passing (§7.4 Clusters and Other Message-Passing Multiprocessors)
  • Each processor (or computer) has a private physical address space
  • Hardware sends/receives messages between processors
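
  The slide names no particular library, but MPI is the usual way to express this style; a minimal sketch, assuming an MPI installation, in which each process owns a private partial sum and rank 0 collects the others with explicit receives:

    #include <mpi.h>
    #include <iostream>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        long local = rank + 1;  // each node's private partial sum (illustrative)
        if (rank != 0) {
            MPI_Send(&local, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD);  // message to node 0
        } else {
            long total = local, incoming;
            for (int src = 1; src < size; ++src) {
                MPI_Recv(&incoming, 1, MPI_LONG, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                total += incoming;
            }
            std::cout << "Total: " << total << '\n';
        }
        MPI_Finalize();
    }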

  12. Loosely Coupled Clusters
  • Network of independent computers, each with private memory and its own OS
  • Connected using the I/O system, e.g., Ethernet/switch, Internet
  • Suitable for applications with independent tasks: web servers, databases, simulations, …
  • High availability, scalable, affordable
  • Problems:
  • Administration cost (prefer virtual machines)
  • Low interconnect bandwidth (cf. processor/memory bandwidth on an SMP)

  13. Grid Computing
  • Separate computers interconnected by long-haul networks, e.g., Internet connections
  • Work units farmed out; results sent back
  • Can make use of idle time on PCs, e.g., SETI@home, World Community Grid

  14. Multithreading
  • Hardware multithreading: each thread has its own register file and PC
  • Fine-grained (interleaved) multithreading: switches between threads on each instruction
  • Coarse-grained multithreading: switches threads when a stall is required (memory access, wait)
  • Simultaneous multithreading: uses dynamic scheduling to schedule multiple threads simultaneously

  15. Multithreading (§7.5 Hardware Multithreading)
  • Performing multiple threads of execution in parallel
  • Replicate registers, PC, etc.; fast switching between threads
  • Fine-grain multithreading: switch threads after each cycle; interleave instruction execution; if one thread stalls, others are executed
  • Coarse-grain multithreading: only switch on a long stall (e.g., L2-cache miss); simplifies hardware, but doesn’t hide short stalls (e.g., data hazards)

  16. Simultaneous Multithreading
  • In a multiple-issue, dynamically scheduled processor
  • Schedule instructions from multiple threads
  • Instructions from independent threads execute when function units are available
  • Within threads, dependencies are handled by scheduling and register renaming
  • Example: Intel Pentium 4 HT: two threads with duplicated registers, shared function units and caches

  17. Multithreading Example

  18. Future of Multithreading
  • Will it survive? In what form?
  • Power considerations are pushing toward simplified microarchitectures, and so toward simpler forms of multithreading
  • For tolerating cache-miss latency, a thread switch may be the most effective approach
  • Multiple simple cores might share resources more effectively

  19. Multithreading Lab
  In /home/student/Classes/Cs355/PrimeLab are 2 files: Factor.cpp and runFactor.
  Copy them to one of your directories (below called mydirectory):
  • cp Factor.cpp ~/mydirectory
  • cp runFactor ~/mydirectory
  • cd ~/mydirectory
  You want to observe the processor utilization (how busy the processor is).
  • Linux: Applications->System Tools->System Monitor->Resources
  Now compile Factor.cpp into the executable Factor and run it using the command file runFactor (in Linux):
  • g++ Factor.cpp -o Factor
  • ./runFactor
  The file time.dat will contain the start and end time of the program, so you can calculate the duration.
  Now change the number of threads in Factor.cpp (numChild), recompile, and run again.
  Create a spreadsheet in Microsoft Excel or OpenOffice, with one column containing the number of children and a second column containing the seconds to complete; label the second column Delay.
  • Linux: Applications->Office->LibreOffice Calc, to show the efficiency of multiple threads.
  Find the times for a range of thread counts: 1, 2, 3, 4, 5, 6, 10, 20 (whatever you have time for).
  Have your spreadsheet draw a graph with your data. Show me your data.

  20. Instruction and Data Streams (§7.6 SISD, MIMD, SIMD, SPMD, and Vector)
  • An alternate classification, by the number of instruction streams and data streams:
  • SISD: single instruction stream, single data stream (e.g., Pentium 4)
  • SIMD: single instruction stream, multiple data streams (e.g., SSE instructions of x86)
  • MISD: multiple instruction streams, single data stream (no commercial examples)
  • MIMD: multiple instruction streams, multiple data streams (e.g., Xeon e5345)
  • SPMD: Single Program Multiple Data
  • A parallel program on a MIMD computer
  • Conditional code for different processors

  21. SIMD
  • Operate elementwise on vectors of data, e.g., MMX and SSE instructions in x86: multiple data elements in 128-bit-wide registers
  • All processors execute the same instruction at the same time, each with a different data address, etc.
  • Simplifies synchronization; reduced instruction control hardware
  • Works best for highly data-parallel applications
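
  As a concrete illustration, one SSE intrinsic adds four packed floats in a single instruction; a small C++ sketch (the array contents are illustrative, and the code assumes a CPU with SSE):

    #include <xmmintrin.h>
    #include <iostream>

    int main() {
        alignas(16) float a[4] = {1, 2, 3, 4};
        alignas(16) float b[4] = {10, 20, 30, 40};
        alignas(16) float c[4];

        __m128 va = _mm_load_ps(a);      // 4 floats into a 128-bit register
        __m128 vb = _mm_load_ps(b);
        __m128 vc = _mm_add_ps(va, vb);  // elementwise add, one instruction
        _mm_store_ps(c, vc);

        for (float x : c) std::cout << x << ' ';  // 11 22 33 44
        std::cout << '\n';
    }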

  22. Vector Processors
  • Highly pipelined function units
  • Stream data from/to vector registers to the units: data is collected from memory into registers, and results are stored from registers to memory
  • Example: vector extension to MIPS
  • 32 × 64-element registers (64-bit elements)
  • Vector instructions: lv, sv (load/store vector); addv.d (add vectors of double); addvs.d (add scalar to each element of a vector of double)
  • Significantly reduces instruction-fetch bandwidth

  23. Example: DAXPY (Y = a × X + Y)
  • Conventional MIPS code:
          l.d    $f0,a($sp)      ;load scalar a
          addiu  r4,$s0,#512     ;upper bound of what to load
    loop: l.d    $f2,0($s0)      ;load x(i)
          mul.d  $f2,$f2,$f0     ;a × x(i)
          l.d    $f4,0($s1)      ;load y(i)
          add.d  $f4,$f4,$f2     ;a × x(i) + y(i)
          s.d    $f4,0($s1)      ;store into y(i)
          addiu  $s0,$s0,#8      ;increment index to x
          addiu  $s1,$s1,#8      ;increment index to y
          subu   $t0,r4,$s0      ;compute bound
          bne    $t0,$zero,loop  ;check if done
  • Vector MIPS code:
          l.d     $f0,a($sp)     ;load scalar a
          lv      $v1,0($s0)     ;load vector x
          mulvs.d $v2,$v1,$f0    ;vector-scalar multiply
          lv      $v3,0($s1)     ;load vector y
          addv.d  $v4,$v2,$v3    ;add y to product
          sv      $v4,0($s1)     ;store the result
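
  Both sequences implement the same loop over 64 doubles (64 × 8 bytes = 512, matching the scalar code's upper bound); in C++:

    // DAXPY: y(i) = a × x(i) + y(i) for 64 elements
    void daxpy(double a, const double x[64], double y[64]) {
        for (int i = 0; i < 64; ++i)
            y[i] = a * x[i] + y[i];
    }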

  24. Vector vs. Scalar
  • Vector architectures and compilers simplify data-parallel programming
  • Speed up processing, since there are no per-element loop-control instructions
  • No data hazards within a vector instruction
  • Benefit from interleaved and burst memory
  • Avoid control hazards by avoiding loops

  25. Multimedia Improvements
  • Intel x86 architecture
  • MMX: MultiMedia Extensions
  • SSE: Streaming SIMD Extensions
  • A register can be subdivided into smaller units, or extended and subdivided
  (figure: one 32-bit ALU split into two 16-bit or four 8-bit lanes)

  26. Interconnection Networks (§7.8 Introduction to Multiprocessor Network Topologies)
  • Network topologies: arrangements of processors, switches, and links
  (figure: bus, ring, N-cube with N = 3, 2D mesh, fully connected)
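
  One way to see the N-cube's structure: label the nodes in binary; each node then links to the N nodes whose labels differ from its own in exactly one bit. A small sketch for the N = 3 cube on the slide:

    #include <iostream>

    int main() {
        const int n = 3;  // 3-cube: 8 nodes, 3 links per node
        for (int node = 0; node < (1 << n); ++node) {
            std::cout << "node " << node << " ->";
            for (int bit = 0; bit < n; ++bit)
                std::cout << ' ' << (node ^ (1 << bit));  // flip one address bit
            std::cout << '\n';
        }
    }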

  27. Multistage Networks

  28. Network Characteristics
  • Performance: latency (delay) per message; throughput in messages/second; congestion delays (depending on traffic)
  • Cost
  • Power
  • Routability in silicon

  29. History of GPUs (§7.7 Introduction to Graphics Processing Units)
  • Graphics Processing Units: processors oriented to 3D graphics tasks
  • Vertex/pixel processing, shading, texture mapping, rasterization
  • Architecture
  • GPU memory optimized for bandwidth, not latency; wider DRAM chips; smaller memories, no multilevel cache
  • Simultaneous execution of hundreds or thousands of threads
  • Parallel processing: SIMD + scalar
  • No double-precision floating point

  30. Graphics in the System

  31. GPU Architectures
  • Processing is highly data-parallel
  • GPUs are highly multithreaded: they use thread switching to hide memory latency, with less reliance on multi-level caches
  • Graphics memory is wide and high-bandwidth
  • Trend toward general-purpose GPUs: heterogeneous CPU/GPU systems, with the CPU for sequential code and the GPU for parallel code
  • Programming languages/APIs: DirectX, OpenGL; C for Graphics (Cg), High Level Shader Language (HLSL); Compute Unified Device Architecture (CUDA)

  32. Example: NVIDIA Tesla
  (figure: one streaming multiprocessor containing 8 × streaming processors)

  33. Example: NVIDIA Tesla
  • Streaming Processors (SP): single-precision FP and integer units
  • Each SP is fine-grained multithreaded
  • Warp: a group of 32 threads executed in parallel, SIMD (or SPMD) style: 8 SPs × 4 clock cycles
  • Hardware contexts for 24 warps: registers, PCs, …

  34. Classifying GPUs
  • Don’t fit nicely into the SIMD/MIMD model
  • Conditional execution in a thread allows an illusion of MIMD, but with performance degradation
  • Need to write general-purpose code with care

  35. Roofline Diagram
  Attainable GFLOPs/sec = Min(Peak Memory BW × Arithmetic Intensity, Peak FP Performance)
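
  In code form the roofline is just a minimum of the two roofs; a small sketch with illustrative peaks (16 GB/s memory bandwidth, 64 GFLOP/s peak FP, neither taken from the slide):

    #include <algorithm>
    #include <iostream>

    // Attainable GFLOP/s for a kernel with the given arithmetic intensity
    double attainable_gflops(double peak_bw_gbs, double flops_per_byte,
                             double peak_gflops) {
        return std::min(peak_bw_gbs * flops_per_byte, peak_gflops);
    }

    int main() {
        for (double ai : {0.25, 1.0, 4.0, 16.0})  // sweep arithmetic intensity
            std::cout << "AI = " << ai << " -> "
                      << attainable_gflops(16.0, ai, 64.0) << " GFLOP/s\n";
    }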

  36. Optimizing Performance
  • The choice of optimization depends on the arithmetic intensity of the code
  • Arithmetic intensity is not always fixed: it may scale with problem size
  • Caching reduces memory accesses, which increases arithmetic intensity

  37. Comparing Systems
  • Example: Opteron X2 vs. Opteron X4
  • 2-core vs. 4-core, 2× FP performance/core, 2.2 GHz vs. 2.3 GHz, same memory system
  • To get higher performance on the X4 than the X2, you need high arithmetic intensity, or the working set must fit in the X4’s 2 MB L3 cache

  38. Optimizing Performance
  • Optimize FP performance: balance adds and multiplies; improve superscalar ILP and use of SIMD instructions
  • Optimize memory usage: software prefetch (avoid load stalls); memory affinity (avoid non-local data accesses)

  39. Four Example Systems (§7.11 Real Stuff: Benchmarking Four Multicores …)
  • 2 × quad-core Intel Xeon e5345 (Clovertown): chipset serves as the bus; fully buffered DRAM DIMMs
  • 2 × quad-core AMD Opteron X4 2356 (Barcelona)

  40. Four Example Systems (continued)
  • 2 × oct-core Sun UltraSPARC T2 5140 (Niagara 2): fine-grained multithreading
  • 2 × oct-core IBM Cell QS20: SPE = Synergistic Processing Element; the SPEs have a SIMD instruction set

  41. Pitfalls
  • Not developing the software to take account of a multiprocessor architecture
  • Example: using a single lock for a shared composite resource serializes accesses, even if they could be done in parallel
  • Use finer-granularity locking, as in the sketch below
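
  A minimal sketch of finer-granularity locking: one mutex per bank of a shared structure instead of a single global lock (the bank count and per-bank counters are illustrative):

    #include <array>
    #include <mutex>

    constexpr int kBanks = 16;
    std::array<long, kBanks> table{};          // shared composite resource
    std::array<std::mutex, kBanks> bank_lock;  // one lock per bank

    void add(int key, long delta) {
        int bank = key % kBanks;  // accesses to different banks proceed in parallel
        std::lock_guard<std::mutex> g(bank_lock[bank]);
        table[bank] += delta;     // only this bank is serialized
    }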

  42. Concluding Remarks (§7.13 Concluding Remarks)
  • Goal: higher performance by using multiple processors
  • Difficulties: developing parallel software; devising appropriate architectures
  • Many reasons for optimism: a changing software and application environment; chip-level multiprocessors with lower-latency, higher-bandwidth interconnect
  • An ongoing challenge for computer architects!
