CS 61C: Great Ideas in Computer Architecture (Machine Structures): Map Reduce. Instructors: Krste Asanovic, Randy H. Katz http://inst.eecs.Berkeley.edu/~cs61c/fa12 Fall 2012 -- Lecture #3
New-School Machine Structures (It's a bit more complicated!) Today's Lecture • Parallel Requests: assigned to a computer, e.g., search "Katz" • Parallel Threads: assigned to a core, e.g., lookup, ads • Parallel Instructions: >1 instruction @ one time, e.g., 5 pipelined instructions • Parallel Data: >1 data item @ one time, e.g., add of 4 pairs of words • Hardware descriptions: all gates @ one time • Programming Languages [Figure: software/hardware stack from warehouse-scale computer and smartphone down through computer, core, instruction and functional units, caches, memory, and logic gates; the theme is harnessing parallelism to achieve high performance] Fall 2012 -- Lecture #3
Power Usage Effectiveness • Overall WSC Energy Efficiency: amount of computational work performed divided by the total energy used in the process • Power Usage Effectiveness (PUE): Total building power / IT equipment power • A commonly used power efficiency measure for WSC • Considers the relative overhead of datacenter infrastructure, such as cooling and power distribution • But does NOT consider the absolute efficiency of servers, networking gear • 1.0 = perfection (i.e., no building infrastructure overhead) Fall 2012 -- Lecture #3
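A quick worked example of the PUE ratio (the numbers here are hypothetical, not from the lecture):

    # Hypothetical facility: 10 MW drawn from the grid, 7 MW of it reaching the IT equipment.
    total_building_power_mw = 10.0
    it_equipment_power_mw = 7.0

    pue = total_building_power_mw / it_equipment_power_mw
    print(f"PUE = {pue:.2f}")  # ~1.43, i.e., roughly 43% extra power goes to cooling, power distribution, etc.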
Agenda • MapReduce Examples • Administrivia + 61C in the News + The secret to getting good grades at Berkeley • MapReduce Execution • Costs in Warehouse Scale Computer Fall 2012 -- Lecture #3
Request-Level Parallelism (RLP) • Hundreds or thousands of requests per second • Not your laptop or cell-phone, but popular Internet services like Google search • Such requests are largely independent • Mostly involve read-only databases • Little read-write (aka “producer-consumer”) sharing • Rarely involve read–write data sharing or synchronization across requests • Computation easily partitioned within a request and across different requests Fall 2012 -- Lecture #3
Google Query-Serving Architecture Fall 2012 -- Lecture #3
MapReduce Processing Example: Count Word Occurrences • Pseudo-code: for each word in the input, generate <key=word, value=1> • Reduce sums all counts emitted for a particular word across all mappers

map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1"); // produce count of words

reduce(String output_key, Iterator intermediate_values):
  // output_key: a word
  // intermediate_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v); // get integer from value
  Emit(AsString(result));

Fall 2012 -- Lecture #3
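The same idea as a small runnable Python sketch (illustration only; this is not the MapReduce framework API, and the helper names below are made up):

    from collections import defaultdict

    def map_fn(input_key, input_value):
        # input_key: document name; input_value: document contents.
        for word in input_value.split():
            yield word, 1                      # emit <word, 1> for every occurrence

    def reduce_fn(output_key, intermediate_values):
        # output_key: a word; intermediate_values: the counts emitted for it.
        return output_key, sum(intermediate_values)

    # Stand-in for the framework: run map, group by key, then reduce.
    intermediate = defaultdict(list)
    for k, v in map_fn("doc1", "that that is is that"):
        intermediate[k].append(v)
    print(sorted(reduce_fn(k, vs) for k, vs in intermediate.items()))  # [('is', 2), ('that', 3)]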
Another Example: Word Index (How Often Does a Word Appear?)
Distribute the input "that that is is that that is not is not is that it it is" across four map tasks:
Map 1: "that that is" → that 1, that 1, is 1
Map 2: "is that that" → is 1, that 1, that 1
Map 3: "is not is not" → is 1, not 1, is 1, not 1
Map 4: "is that it it is" → is 1, that 1, it 1, it 1, is 1
Local Sort (each map task sorts its output by key):
Map 1: is 1, that 1, that 1 • Map 2: is 1, that 1, that 1 • Map 3: is 1, is 1, not 1, not 1 • Map 4: is 1, is 1, it 1, it 1, that 1
Shuffle (keys "is" and "it" go to Reduce 1; "not" and "that" go to Reduce 2):
Reduce 1 receives is 1,1,1,1,1,1 and it 1,1 → is 6; it 2
Reduce 2 receives not 1,1 and that 1,1,1,1,1 → not 2; that 5
Collect: is 6; it 2; not 2; that 5
Fall 2012 -- Lecture #3
Another Example: Word Index (How Often Does a Word Appear?) Same input and distribution as before, but each map task now combines its own output locally before the shuffle:
Map 1: "that that is" → combined: is 1, that 2
Map 2: "is that that" → combined: is 1, that 2
Map 3: "is not is not" → combined: is 2, not 2
Map 4: "is that it it is" → combined: is 2, it 2, that 1
Shuffle (far fewer pairs cross the network): Reduce 1 receives is 1,1,2,2 and it 2; Reduce 2 receives that 2,2,1 and not 2
Reduce: Reduce 1: is 6; it 2 • Reduce 2: not 2; that 5
Collect: is 6; it 2; not 2; that 5
Fall 2012 -- Lecture #3
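A small Python sketch of what the combiner buys (illustrative only; the function and variable names are hypothetical): each map task pre-sums its own counts, so it sends one pair per distinct word instead of one per occurrence.

    from collections import Counter

    def map_with_combine(split_text):
        # Local combine: sum the counts for each word within this one map task.
        return dict(Counter(split_text.split()))

    splits = ["that that is", "is that that", "is not is not", "is that it it is"]
    map_outputs = [map_with_combine(s) for s in splits]
    print(map_outputs[0])   # {'that': 2, 'is': 1}: one pair per distinct word

    # The reduce side still just sums whatever it receives per key.
    final = Counter()
    for partial in map_outputs:
        final.update(partial)
    print(dict(final))      # {'that': 5, 'is': 6, 'not': 2, 'it': 2}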
Types • map (k1, v1) → list(k2, v2) • reduce (k2, list(v2)) → list(v2) • Input keys and values from different domain than output keys and values • Intermediate keys and values from same domain as output keys and values Fall 2012 -- Lecture #3
Execution Setup • Map invocations distributed by partitioning input data into M splits • Typically 16 MB to 64 MB per piece • Input processed in parallel on different servers • Reduce invocations distributed by partitioning intermediate key space into R pieces • E.g., hash(key) mod R • User picks M >> # servers, R > # servers • Big M helps with load balancing and recovery from failure • One output file per reduce invocation (R files total), so R should not be too large Fall 2012 -- Lecture #3
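The hash-based partitioning mentioned above can be pictured as a one-line function. A sketch (assuming a deterministic hash; Python's built-in hash() is randomized per process, so a stable digest is used here instead):

    import hashlib

    def partition(key: str, R: int) -> int:
        # Deterministically map an intermediate key to one of R reduce tasks.
        digest = hashlib.md5(key.encode()).hexdigest()
        return int(digest, 16) % R

    for word in ["is", "it", "not", "that"]:
        print(word, "-> reduce task", partition(word, R=2))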
MapReduce Processing Shuffle phase Fall 2012 -- Lecture #3
MapReduce Processing 1. MapReduce first splits the input files into M "splits", then starts many copies of the program on the servers Shuffle phase Fall 2012 -- Lecture #3
MapReduce Processing 2. One copy (the master) is special. The rest are workers. The master picks idle workers and assigns each one of the M map tasks or one of the R reduce tasks. Shuffle phase Fall 2012 -- Lecture #3
MapReduce Processing 3. A map worker reads its input split, parses key/value pairs out of the input data, and passes each pair to the user-defined map function. (The intermediate key/value pairs produced by the map function are buffered in memory.) Shuffle phase Fall 2012 -- Lecture #3
MapReduce Processing 4. Periodically, the buffered pairs are written to local disk, partitioned into R regions by the partitioning function. Shuffle phase Fall 2012 -- Lecture #3
MapReduce Processing 5. When a reduce worker has read all intermediate data for its partition, it bucket sorts the data using the intermediate keys so that all occurrences of the same key are grouped together. (The sorting is needed because typically many different keys map to the same reduce task.) Shuffle phase Fall 2012 -- Lecture #3
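The effect of that sort can be seen in a tiny Python sketch (illustrative, not the actual worker code): once the pairs are ordered by key, a single linear pass groups every occurrence of each key.

    from itertools import groupby

    # Intermediate pairs that arrived at one reduce worker, in arbitrary order.
    pairs = [("that", 1), ("is", 1), ("that", 1), ("not", 1), ("is", 1)]

    pairs.sort(key=lambda kv: kv[0])              # bring identical keys next to each other
    for key, group in groupby(pairs, key=lambda kv: kv[0]):
        print(key, [v for _, v in group])         # is [1, 1] / not [1] / that [1, 1]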
MapReduce Processing 6. The reduce worker iterates over the sorted intermediate data and, for each unique intermediate key, passes the key and the corresponding set of values to the user's reduce function. The output of the reduce function is appended to a final output file for this reduce partition. Shuffle phase Fall 2012 -- Lecture #3
MapReduce Processing 7. When all map tasks and reduce tasks have been completed, the master wakes up the user program. The MapReduce call in the user program returns to the user code. The output of MapReduce is left in R output files (1 per reduce task, with file names specified by the user); it is often passed as input to another MapReduce job, so the files are typically not concatenated Shuffle phase Fall 2012 -- Lecture #3
Master Data Structures • For each map task and reduce task • State: idle, in-progress, or completed • Identity of the worker server (if not idle) • For each completed map task • Stores locations and sizes of the R intermediate files it produced • Updates these locations and sizes as corresponding map tasks complete • Locations and sizes are pushed incrementally to workers that have in-progress reduce tasks Fall 2012 -- Lecture #3
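A sketch that makes those fields concrete (hypothetical Python, not the actual master implementation):

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List, Optional

    class State(Enum):
        IDLE = "idle"
        IN_PROGRESS = "in-progress"
        COMPLETED = "completed"

    @dataclass
    class Task:
        state: State = State.IDLE
        worker: Optional[str] = None     # identity of the worker server, if not idle

    @dataclass
    class CompletedMapInfo:
        # Locations and sizes of the R intermediate files this map task produced;
        # pushed incrementally to workers running in-progress reduce tasks.
        locations: List[str] = field(default_factory=list)
        sizes_bytes: List[int] = field(default_factory=list)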
Agenda • MapReduce Examples • Administrivia + 61C in the News + The secret to getting good grades at Berkeley • MapReduce Execution • Costs in Warehouse Scale Computer Fall 2012 -- Lecture #3
61C in the News http://www.nytimes.com/2012/08/28/technology/active-in-cloud-amazon-reshapes-computing.html Fall 2012 -- Lecture #3
61C in the News http://www.nytimes.com/2012/08/28/technology/ibm-mainframe-evolves-to-serve-the-digital-world.html Fall 2012 -- Lecture #3
Do I Need to Know Java? • Java used in Labs 2, 3; Project #1 (MapReduce) • Prerequisites: • Official course catalog: "61A, along with either 61B or 61BL, or programming experience equivalent to that gained in 9C, 9F, or 9G" • Course web page: "The only prerequisite is that you have taken Computer Science 61B, or at least have solid experience with a C-based programming language" • 61A plus Python alone is not sufficient Fall 2012 -- Lecture #3
The Secret to Getting Good Grades • It’s easy! • Do assigned reading the night before the lecture, to get more value from lecture • (Two copies of the correct textbook now on reserve at Engineering Library) Fall 2012 -- Lecture #3
Agenda • MapReduce Examples • Administrivia + 61C in the News + The secret to getting good grades at Berkeley • MapReduce Execution • Costs in Warehouse Scale Computer Fall 2012 -- Lecture #3
MapReduce Processing Time Line • Master assigns map + reduce tasks to “worker” servers • As soon as a map task finishes, worker server can be assigned a new map or reduce task • Data shuffle begins as soon as a given Map finishes • Reduce task begins as soon as all data shuffles finish • To tolerate faults, reassign task if a worker server “dies” Fall 2012 -- Lecture #3
Show MapReduce Job Running • ~41 minutes total • ~29 minutes for Map tasks & Shuffle tasks • ~12 minutes for Reduce tasks • 1707 worker servers used • Map (Green) tasks read 0.8 TB, write 0.5 TB • Shuffle (Red) tasks read 0.5 TB, write 0.5 TB • Reduce (Blue) tasks read 0.5 TB, write 0.5 TB Fall 2012 -- Lecture #3
MapReduce Failure Handling • On worker failure: • Detect failure via periodic heartbeats • Re-execute completed and in-progress map tasks • Re-execute in-progress reduce tasks • Task completion committed through master • Master failure: • Could be handled, but isn't yet (master failure is unlikely) • Robust: once lost 1600 of 1800 machines, but finished fine Fall 2012 -- Lecture #3
MapReduce Redundant Execution • Slow workers significantly lengthen completion time • Other jobs consuming resources on the machine • Bad disks with soft errors transfer data very slowly • Weird things: processor caches disabled (!!) • Solution: near the end of a phase, spawn backup copies of the remaining tasks • Whichever copy finishes first "wins" • Effect: dramatically shortens job completion time • About 3% more resources; large jobs roughly 30% faster Fall 2012 -- Lecture #3
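A minimal sketch of the backup-task decision (hypothetical Python, just to make the policy concrete; the real master logic is considerably richer):

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        state: str   # "idle", "in-progress", or "completed"

    def backup_candidates(tasks, done_fraction, threshold=0.95):
        # Near the end of a phase, duplicate the tasks that are still running;
        # whichever copy of a task finishes first "wins" and the other is discarded.
        if done_fraction < threshold:
            return []
        return [t for t in tasks if t.state == "in-progress"]

    tasks = [Task("map-17", "in-progress"), Task("map-18", "completed")]
    print(backup_candidates(tasks, done_fraction=0.97))   # only map-17 gets a backup copy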
Impact of Restart and Failure on Execution: 10B-record sort using 1,800 servers. Killing 200 workers: 5% longer. No backup tasks: 44% longer. Fall 2012 -- Lecture #3
MapReduce Locality Optimization during Scheduling • Master scheduling policy: • Asks GFS (Google File System) for locations of replicas of input file blocks • Map tasks typically work on 64 MB splits (== GFS block size) • Map tasks scheduled so a GFS replica of the input block is on the same machine or the same rack • Effect: thousands of machines read input at local disk speed • Without this, rack switches limit the read rate Fall 2012 -- Lecture #3
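A sketch of the locality preference (illustrative Python with made-up data; a real scheduler also weighs load and fairness):

    def pick_worker(replica_hosts, idle_workers, rack_of):
        # replica_hosts: machines holding a GFS replica of this 64 MB input block.
        # Prefer a worker on a machine that has the data, then one in the same rack, then any worker.
        for w in idle_workers:
            if w in replica_hosts:
                return w                                   # reads the block from local disk
        replica_racks = {rack_of[h] for h in replica_hosts}
        for w in idle_workers:
            if rack_of[w] in replica_racks:
                return w                                   # stays behind a single rack switch
        return idle_workers[0]                             # remote read as a last resort

    rack_of = {"srv1": "rackA", "srv2": "rackA", "srv3": "rackB"}
    print(pick_worker(replica_hosts={"srv2"}, idle_workers=["srv3", "srv1"], rack_of=rack_of))  # srv1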
Question: Which statements are NOT TRUE about MapReduce? MapReduce divides computers into 1 master and N-1 workers; the master assigns MR tasks ☐ Towards the end, the master assigns uncompleted tasks again; 1st to finish wins ☐ Reducers can start reducing as soon as they start to receive Map data ☐ Reduce worker sorts by intermediate keys to group all occurrences of same key ☐
Agenda • MapReduce Examples • Administrivia + 61C in the News + The secret to getting good grades at Berkeley • MapReduce Execution • Costs in Warehouse Scale Computer Fall 2012 -- Lecture #3
Design Goals of a WSC • Unique to warehouse scale • Ample parallelism: • Batch apps: large number of independent data sets with independent processing. Also known as Data-Level Parallelism • Scale and its opportunities/problems • Relatively few WSCs are built, which makes the design cost high and difficult to amortize • But price breaks are possible from purchases of very large numbers of commodity servers • Must also prepare for high component failure rates • Operational costs count: • Cost of equipment purchases << cost of ownership Fall 2012 -- Lecture #3
WSC Case Study: Server Provisioning [Figure: network hierarchy from the Internet through an L3 switch, L2 switches, and top-of-rack (TOR) switches down to the server racks] Fall 2012 -- Lecture #3
Cost of WSC • US accounting practice separates the purchase price from the operational costs • Capital Expenditure (CAPEX) is the cost to buy equipment (e.g., buying servers) • Operational Expenditure (OPEX) is the cost to run the equipment (e.g., paying for the electricity used) Fall 2012 -- Lecture #3
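A toy example of how the two categories compare (all numbers hypothetical; OPEX here covers only electricity, while real OPEX also includes staffing, cooling, networking, and more):

    # Hypothetical server: $2,000 purchase price amortized over 3 years,
    # drawing 200 W continuously at $0.10 per kWh.
    capex_per_month = 2000 / 36                          # ~$55.6 of purchase price per month
    kwh_per_month = 0.200 * 24 * 30                      # 144 kWh drawn in a 30-day month
    opex_electricity_per_month = kwh_per_month * 0.10    # ~$14.4 of electricity per month

    print(f"CAPEX/month ~ ${capex_per_month:.2f}, electricity OPEX/month ~ ${opex_electricity_per_month:.2f}")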