On Object Maintenance in Peer-to-Peer Systems (IPTPS 2006)
Kiran Tati and Geoffrey M. Voelker, UC San Diego
Object Maintenance
• Storage is an essential service provided by p2p systems
• Challenge is maintaining object availability and reliability in the face of node churn
  • Node unavailability and failures impact object availability
• Various object maintenance strategies have been developed
  • Farsite, DHash, PAST, OceanStore, TotalRecall, etc.
• Minimize maintenance bandwidth overhead using
  • Placement strategies (successors, leaf sets, random indirect)
  • Redundancy mechanisms (replication, erasure coding; see the sketch below)
  • Repair strategies (react immediately or lazily to failures)
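The redundancy mechanisms above trade storage for availability in different ways. As a rough illustration (my sketch, not from the paper), the snippet below compares object availability under whole-object replication and under m-of-n erasure coding, assuming each node is independently online with probability p; the function names and parameter values are hypothetical.

```python
from math import comb

def replication_availability(p, k):
    """Object is available if at least one of its k full replicas is on an online node."""
    return 1 - (1 - p) ** k

def erasure_availability(p, m, n):
    """Object is available if at least m of its n erasure-coded fragments are online."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(m, n + 1))

# Illustrative numbers: nodes online 70% of the time.
print(replication_availability(0.7, 4))    # 4 full copies          -> 4x storage
print(erasure_availability(0.7, 8, 16))    # any 8 of 16 fragments  -> 2x storage
```

In this toy setting, coding reaches comparable availability with roughly half the storage of replication.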
Goals
• Revisit how churn impacts maintenance overheads
  • Provide insight into issues and approaches
  • Highlight differences in system environments
• Separate the impact of temporary vs. permanent churn
  • Implications for object maintenance overhead
  • What should strategies focus on optimizing?
• Try to provide insight independent of any particular strategy
Churn Review
• Temporary churn
  • Node leaves but eventually comes back
  • Object data is not lost
• Permanent churn
  • Node departs and never returns
  • Data on these nodes is lost forever
• Churn varies depending on the system environment
  • Wide-area platforms: PlanetLab
  • Corporate: FarSite
  • File sharing: Overnet
Churn Spectrum
[Figure: environments placed on two axes, temporary churn (low to high) and permanent churn (low to high); file sharing systems sit at the high end of permanent churn, while PlanetLab and corporate environments sit at the low end.]
Temporary Churn
• Let's look at temporary churn effects alone
  • No permanent churn (no data loss)
  • No significant change in node availabilities
  • Idealistic, but provides useful intuition
• Object maintenance strategies place redundant data on a fixed set of nodes
  • Object availability depends on node availability
  • As nodes come and go, object availability can change
• Let's track the available nodes from a group of nodes that are online at some point in time
Node Churn
• Initially the number of available nodes in a group decreases as nodes leave the system
• After reaching a minimum value, it increases again as nodes rejoin the system
• This process continues until the object is removed
• If we have enough redundancy to survive the minimum point, we mask the temporary churn (see the sketch below)
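A toy simulation of the curve described above (my sketch, not the paper's model): every node in a group that is online at time 0 goes offline once, at a random time, and rejoins after a random downtime, so the number of online group members dips to a minimum and then recovers. All parameter values are illustrative.

```python
import random

def group_online_counts(n_nodes=100, horizon=1000, mean_downtime=200.0, seed=1):
    """Number of nodes from an initially-online group that are online at each step.

    Temporary churn only: each node leaves once, at a uniformly random time,
    and rejoins after an exponentially distributed downtime (no data is lost).
    """
    random.seed(seed)
    sessions = []
    for _ in range(n_nodes):
        leave = random.uniform(0, horizon)
        rejoin = leave + random.expovariate(1.0 / mean_downtime)
        sessions.append((leave, rejoin))
    return [sum(1 for leave, rejoin in sessions if t < leave or t >= rejoin)
            for t in range(horizon)]

counts = group_online_counts()
print("minimum simultaneously online:", min(counts))
# Redundancy sized to survive this minimum point masks the temporary churn.
```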
Implications
• Can "mask" temporary churn
  • Required redundancy is inversely proportional to the minimum point in the graph
  • Want to create the object with this degree of redundancy
    • Proactive: estimate it from node availability history (see the sketch below)
    • Reactive: extend redundancy "on-demand"
• Once sufficient redundancy is reached, maintenance overhead due to temporary churn is very low for this object
  • React only to shifts in node availabilities, etc.
  • If it is not reached, overhead is extremely high
• In environments with low permanent churn (PlanetLab)
  • Redundancy to mask temporary churn dominates object maintenance overhead
  • Tuning such redundancy has the biggest impact on overhead
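One way to read the proactive option above (a sketch under my own assumptions, not the paper's algorithm): if availability history suggests that some minimum fraction of a group stays online at the worst point, size the initial number of fragments so that the m fragments needed to reconstruct the object survive that dip. The headroom factor is arbitrary.

```python
import math

def fragments_to_mask(min_online_fraction, m, headroom=1.2):
    """Proactive redundancy estimate: number of erasure-coded fragments to create
    so that at the worst observed point (min_online_fraction of nodes online),
    roughly m fragments are still expected to be reachable."""
    return math.ceil(headroom * m / min_online_fraction)

# Example: history shows at least 40% of a group stays online at the minimum point,
# and any 8 fragments suffice to reconstruct the object.
print(fragments_to_mask(0.4, 8))   # -> 24 fragments (3x the minimum redundancy)
```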
Permanent Churn
• Permanent churn induces repairs
  • Repairs replenish redundancy lost due to permanent failures
  • In practice, eventually every node fails
• Repair frequency depends on
  • The amount of permanent churn
  • The redundancy used to create/repair an object
Repairs
• At each repair, a strategy can choose
  • A large redundancy factor
    • Fewer repairs
    • Extra storage cost
  • A small redundancy factor
    • Extra bandwidth for each repair
    • Reduced storage cost
• What should an object maintenance strategy choose?
Analysis
• Consider repairing an object of size f
  • Nodes fail permanently, with "half death time" d
  • A repair is triggered when availability is threatened
    • Threshold x is the redundancy needed to mask temporary churn
    • Below this threshold, the object is not available
  • Restore redundancy onto N new nodes using chunks of size c
• Repair cost is (f + Nc)(x + N) / (2Nd)
• Minimize over N, the number of additional nodes on which to place data
  • N ~ sqrt(x) (see the derivation below)
• The optimum depends on the amount of redundancy needed to mask temporary churn, not on the degree of permanent churn
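A sketch of the minimization behind N ~ sqrt(x), reading the cost above as the bandwidth rate (f + Nc)(x + N) / (2Nd): the cost of one repair, f + Nc, divided by the time 2Nd / (x + N) for N of the x + N redundant nodes to fail permanently.

```latex
\[
\frac{d}{dN}\,\frac{(f + Nc)(x + N)}{2Nd}
  = \frac{1}{2d}\,\frac{d}{dN}\!\left(\frac{fx}{N} + f + cx + cN\right)
  = \frac{1}{2d}\left(c - \frac{fx}{N^{2}}\right) = 0
  \quad\Longrightarrow\quad
  N^{*} = \sqrt{\frac{fx}{c}}
\]
```

The optimal N grows with the temporary-churn threshold x (and with the object and chunk sizes), but d drops out, matching the last bullet above.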
Incremental Repairs
Implications
• Repairs dominate object maintenance overhead in environments with high permanent churn
  • Tuning the repair strategy has the biggest impact on overhead
  • PlanetLab: low permanent churn, so repairs are not the critical problem there
  • Also depends on object lifetime: the longer lived the object, the more repairs
• Interestingly, choosing the amount of redundancy to repair depends on temporary churn (not permanent)
  • Repairs are triggered when object availability is threatened
  • Better to make small repairs frequently
    • Also reduces storage
Capacity and Maintenance
• Storage capacity also impacts object maintenance overhead
• When a strategy creates/repairs an object, it stores redundant data on randomly chosen online nodes
  • A highly available node gets picked more often
  • Favoring highly available nodes reduces the redundancy required to mask temporary churn
  • Also less redundancy to cope with permanent churn
• As storage utilization increases
  • More redundancy is required to deal with churn (see the sketch below)
  • Higher per-object maintenance overheads as the system fills
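A small numeric illustration of the last point (my sketch, with made-up parameters): as the system fills and new fragments must spill onto less-available nodes, the number of fragments needed to keep an object reachable with high probability grows.

```python
from math import comb

def min_fragments(p, m, target=0.999, n_max=300):
    """Smallest n such that, with each fragment on a node online with probability p,
    at least m fragments are online with probability >= target."""
    for n in range(m, n_max):
        online = sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(m, n + 1))
        if online >= target:
            return n
    return None

# Any 8 fragments reconstruct the object; average node availability drops as the system fills.
for p in (0.9, 0.7, 0.5):
    print(f"node availability {p}: {min_fragments(p, 8)} fragments needed")
```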
Implications
Conclusions
• System environment matters
  • Different degrees of both temporary and permanent churn
• Temporary churn
  • Can be masked with sufficient redundancy
  • Creating objects is the dominant overhead when permanent churn is low
• Permanent churn
  • As permanent churn increases, repairs dominate the overhead
  • The repair amount depends on the degree of temporary churn
• Storage
  • More redundancy, and more maintenance overhead, as the system fills up