Reining in the Outliers in MapReduce Jobs using Mantri
Ganesh Ananthanarayanan†, Srikanth Kandula*, Albert Greenberg*, Ion Stoica†, Yi Lu*, Bikas Saha*, Ed Harris*
† UC Berkeley * Microsoft
MapReduce Jobs • Basis of analytics in modern Internet services • E.g., Dryad, Hadoop • A job consists of phases; each phase consists of tasks • The job graph mixes pipelined flows with strict blocks (barriers)
Phase Example (Dryad job graph): EXTRACT (Map.1, Map.2) → AGGREGATE_PARTITION (Reduce.1, Reduce.2) → FULL_AGGREGATE → PROCESS (Join) → COMBINE, reading from and writing to a distributed file system; some edges are pipelined, others are blocked until their input is done
Log Analysis from Production • Logs from a production cluster with thousands of machines, sampled over six months • 10,000+ jobs, 80PB of data, 4PB of network transfers • Task-level details • Production and experimental jobs
Outliers hurt! • Tasks that run longer than the rest of their phase • The median phase has 10% outliers, running >10x longer • Outliers slow down jobs by 35% at the median • Operational inefficiency • Unpredictability in completion times affects SLAs • Hurts development productivity • Wastes compute cycles
Why do outliers occur? • While reading input: input unavailable, network congestion • While executing: local contention, workload imbalance • Mantri: a system that mitigates outliers based on root-cause analysis
Mantri’s Outlier Mitigation • Avoid Recomputation • Network-aware Task Placement • Duplicate Outliers • Cognizant of Workload Imbalance
Recomputes: Illustration • (a) Barrier phases: a recompute task inflates actual completion time over the ideal • (b) Cascading recomputes: lost inputs trigger recomputes in earlier phases, compounding the inflation
What causes recomputes? [1] • Faulty machines • Bad disks, non-persistent hardware quirks • The set of faulty machines (4%) varies with time rather than staying constant
What causes recomputes? [2] • Transient machine load • Recomputes correlate with machine load • Requests for data access dropped
Replicate costly outputs • MR: recompute probability of a machine • For a chain Task1 → Task2 → Task3, with Task2 and Task3 on machines with recompute probabilities MR2 and MR3: TRecomp = MR3·(1 − MR2)·T3 + MR3·MR2·(T3 + T2), i.e., recompute only Task3, or both Task3 and Task2 • Replicating the output costs TRep • REPLICATE when TRep < TRecomp
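The slide's cost comparison can be sketched as follows; the function and parameter names (`expected_recompute_cost`, `mr2`, `t_rep`, etc.) are illustrative, not Mantri's actual code:

```python
def expected_recompute_cost(mr2, mr3, t2, t3):
    """Expected time lost if Task3's output must be recomputed.

    With probability mr3 the machine holding Task3's output causes a
    recompute; if the machine holding Task2's output is also bad
    (probability mr2), Task2 must be redone as well.
    """
    return mr3 * (1 - mr2) * t3 + mr3 * mr2 * (t3 + t2)

def should_replicate(t_rep, mr2, mr3, t2, t3):
    """Replicate the output only when replication (cost t_rep) is cheaper
    than the expected cost of recomputing it."""
    return t_rep < expected_recompute_cost(mr2, mr3, t2, t3)
```

In this model, replication wins exactly when the output is expensive to regenerate or sits on failure-prone machines, which matches the slide's "replicate costly outputs" rule.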
Transient Failure Causes • Recomputes manifest in clutches • A machine is prone to cause recomputes until the problem is fixed • e.g., load abates, a critical process restarts • Clue: at least r recomputes within a time window t on a machine
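A minimal sketch of the clutch heuristic, assuming a per-machine sliding window of recompute timestamps (the names and closure structure are mine, not Mantri's):

```python
from collections import deque

def make_clutch_detector(r, t):
    """Flag a machine as recompute-prone once it has caused at least r
    recomputes within a window of t seconds (r and t are tunable knobs)."""
    events = deque()  # timestamps of this machine's recent recomputes
    def record(timestamp):
        events.append(timestamp)
        # Drop recomputes that fell out of the t-second window.
        while events and events[0] <= timestamp - t:
            events.popleft()
        return len(events) >= r
    return record
```

A flagged machine could then be avoided for task placement and replica storage until the window of recomputes drains.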
Speculative Recomputes • Anticipatorily recompute tasks whose outputs are unread • A failed read of a task's output triggers speculative recomputes, regenerating the data before the remaining unread output is needed
Mantri’s Outlier Mitigation • Avoid Recomputation • Preferential Replication + Speculative Recomp. • Network-aware Task Placement • Duplicate Outliers • Cognizant of Workload Imbalance
Reduce Tasks • Tasks access output of tasks from previous phases • The reduce phase carries 74% of total traffic • Map output flows from local disks across the network to reduce tasks; a reduce task stuck behind a congested link becomes an outlier
Variable Congestion • Reduce tasks pull map output from other racks over unevenly loaded links • Smart placement smooths hotspots
Traffic-based Allotment Goal: Minimize phase completion time For every rack: • d : data • u : available uplink bandwidth • v : available downlink bandwidth • Solve for task allocation fractions, ai
Local Control is a good approx. Goal: Minimize phase completion time • Let rack i have ai fraction of tasks • Time uploading, Tu = di (1 − ai) / ui • Time downloading, Td = (D − di) ai / vi • Timei = max {Tu, Td} For every rack: • d : data, D : data over all racks • u : available uplink bandwidth • v : available downlink bandwidth • Link utilizations average out in the long term and are steady in the short term
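The per-rack timing model above can be sketched in Python; balancing Tu against Td within each rack and then normalizing the fractions gives a simple heuristic allotment, not Mantri's exact solver:

```python
def phase_time(a, d, D, u, v):
    """Completion-time contribution of a rack holding fraction `a` of tasks:
    it uploads the (1 - a) share of its own data d at uplink bandwidth u,
    and downloads an `a` share of the other racks' data (D - d) at
    downlink bandwidth v."""
    t_up = d * (1 - a) / u
    t_down = (D - d) * a / v
    return max(t_up, t_down)

def balanced_fractions(d, u, v):
    """Per rack, pick the a_i that equates upload and download time:
    solving d*v*(1 - a) = (D - d)*u*a gives a = d*v / (d*v + (D - d)*u).
    Normalizing so the fractions sum to 1 yields a heuristic allotment."""
    D = sum(d)
    raw = [di * vi / (di * vi + (D - di) * ui)
           for di, ui, vi in zip(d, u, v)]
    total = sum(raw)
    return [x / total for x in raw]
```

Racks with lots of local data or fat downlinks attract more reduce tasks, which is the qualitative behavior the slide's allotment aims for.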
Mantri’s Outlier Mitigation • Avoid Recomputation • Preferential Replication + Speculative Recomp. • Network-aware Task Placement • Traffic on link proportional to bandwidth • Duplicate Outliers • Cognizant of Workload Imbalance
Contentions cause outliers • Tasks contend for local resources • Processor, memory etc. • Duplicate tasks elsewhere in the cluster • Current schemes duplicate towards end of the phase (e.g., LATE [OSDI 2008]) • Duplicate outlier or schedule pending task?
Resource-Aware Restart • A running task has trem time remaining; a potential restart would take tnew • With c copies running, restart saves time and resources when P(c·tnew < (c + 1)·trem) is high • Continuously observe and kill wasteful copies
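One way to evaluate the slide's restart test against an empirical distribution of fresh-task runtimes (the sampling approach and the `threshold` parameter are assumptions, not from the slide):

```python
def should_restart(c, t_rem, t_new_samples, threshold=0.75):
    """Apply the slide's inequality: with c copies of the task running and
    t_rem time remaining, a fresh copy taking t_new is worthwhile when
    P(c * t_new < (c + 1) * t_rem) exceeds a confidence threshold.
    `t_new_samples` is an empirical distribution of fresh-task runtimes,
    e.g. durations of completed tasks in the same phase."""
    hits = sum(1 for t_new in t_new_samples
               if c * t_new < (c + 1) * t_rem)
    return hits / len(t_new_samples) > threshold
```

Estimating the probability from tasks already finished in the phase is one plausible way to make the test operational without knowing the runtime distribution in closed form.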
Mantri’s Outlier Mitigation • Avoid Recomputation • Preferential Replication + Speculative Recomp. • Network-aware Task Placement • Traffic on link proportional to bandwidth • Duplicate Outliers • Resource-Aware Restart • Cognizant of Workload Imbalance
Workload Imbalance • A quarter of the outlier tasks simply have more data to process • Unequal key partitions for reduce tasks • Ignoring these is better than duplicating them • Schedule tasks in descending order of data to process • Time ∝ (Data to Process) • [Graham ’69] At worst 33% longer than optimal
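Descending-order scheduling onto the least-loaded slot is the classic longest-processing-time-first heuristic behind Graham's bound; a sketch with illustrative names:

```python
import heapq

def lpt_schedule(task_sizes, n_slots):
    """Longest-Processing-Time-first: take tasks in descending order of
    data to process and place each on the currently least-loaded slot.
    Graham (1969) showed the resulting makespan is at most 4/3 of
    (i.e., 33% worse than) optimal."""
    heap = [(0.0, slot) for slot in range(n_slots)]   # (current load, slot)
    heapq.heapify(heap)
    assignment = {}
    for task, size in sorted(enumerate(task_sizes), key=lambda kv: -kv[1]):
        load, slot = heapq.heappop(heap)              # least-loaded slot
        assignment[task] = slot
        heapq.heappush(heap, (load + size, slot))
    makespan = max(load for load, _ in heap)
    return assignment, makespan
```

Starting the biggest tasks first means no huge task is left to dominate the tail of the phase, which is exactly the outlier pattern this slide targets.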
Mantri’s Outlier Mitigation • Avoid Recomputation • Preferential Replication + Speculative Recomp. • Network-aware Task Placement • Traffic on link proportional to bandwidth • Duplicate Outliers • Resource-Aware Restart • Cognizant of Workload Imbalance • Schedule in descending order of size • Predict to act early • Be resource-aware • Act based on the cause (a mix of proactive and reactive techniques)
Results • Deployed in production Bing clusters • Trace-driven simulations • Mimic workflow, failures, data skew • Compare with existing and idealized schemes
Jobs in the Wild • Act early: duplicates issued when a task is 42% done (vs. 77% for Dryad) • Light: issues fewer copies (0.47x as many as Dryad) • Accurate: 2.8x higher success rate of copies • Jobs faster by 32% at the median, consuming fewer resources
Recomputation Avoidance • Eliminates most recomputes with minimal extra resources • Replication and speculation work well in tandem
Network-Aware Placement • Using its bandwidth approximations, Mantri well-approximates the ideal placement
Summary • From measurements in a production cluster: • Outliers are a significant problem • They arise from an interplay between storage, network, and map-reduce computation • Mantri: a cause- and resource-aware mitigation system • Deployment shows encouraging results • “Reining in the Outliers in MapReduce Clusters using Mantri”, USENIX OSDI 2010