Using Distributed Data Structures for Cluster-Based Servers

This presentation explores the use of distributed data structures to build cluster-based servers. It discusses the motivation behind the approach, compiler-aided analysis for composing data structures and finding dangerous conditions, fault injection as a validation technique, and research issues in balancing usability, performance, and robustness. Java is used for prototyping, and preliminary work on a sorted list with replication and load balancing is presented. It concludes with future directions for adaptability, recovery, and composition of multiple structures.


Presentation Transcript


  1. Using Distributed Data Structures for Constructing Cluster-Based Servers Richard Martin, Kiran Nagaraja and Thu Nguyen Rutgers University Department of Computer Science EASY Workshop July 2001

  2. Motivation • Programming large scale internet services is difficult • Brittle, prone to failure • too many components and resulting glue • sub-systems not designed to anticipate failure • Large average-to-peak load difference • Difficult to shed load gracefully

  3. Approach • Build services from a set of Data Structures designed specifically for clusters • Menu of hashes, lists, and trees • Compiler-aided analysis for composition of data structures • Focus on run-time behavior • Fault-injection as validation technique • Observe reaction under controlled fault conditions

  4. Why Data Structures • "It is better to have 100 functions operate on one data structure than have 10 functions operate on 10 data structures" • - Alan Perlis • Allow service programmer to build services around a few familiar data-structures • Allow system programmers to deal with hard issues such as replication, fault-tolerance and consistency • "Collections for a Cluster"
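
To make the "Collections for a Cluster" idea concrete, here is a minimal sketch of how a service programmer might see such a structure; the interface name and methods are assumptions for illustration, not the paper's API. The service side sees a familiar, collection-like interface, while distribution, replication, and consistency are handled behind it by the system-programmer side.

```java
import java.util.Iterator;

// Hypothetical service-side view of a cluster data structure: it looks like an
// ordinary sorted collection; replication and consistency live behind it.
public interface ClusterSortedList<K extends Comparable<K>, V> {
    void put(K key, V value);          // may be replicated across nodes internally
    V get(K key);                      // routed to the node that owns the key's range
    Iterator<V> range(K from, K to);   // iterate items in sorted key order
    void remove(K key);
}
```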

  5. Why Compiler Analysis • Compiler better at observing whole system than programmer • Encode logic to find dangerous conditions • Runtime dangers and static violations • Analogy: • Compiler encodes much performance logic • Compiler encodes fault logic as well • Reports problem areas back to the programmer • Aid in composition of structures • E.g., deciding a good recovery point in the program

  6. Why Fault Injection • Higher confidence in end-to-end system • Classic testing: • Correct input -> correct output • Incorrect input -> report error in input • Design for faults, use injection to test design • Correct input + intermediate error -> recovery or report error
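
As an entirely hypothetical illustration of the third case, the sketch below assumes a small test harness (TestCluster with start and killNode, neither from the paper) around the ClusterSortedList interface sketched earlier: correct input after an injected fault should yield either the right answer or a reported error, never a silent wrong result.

```java
public class FaultInjectionCheck {
    public static void main(String[] args) {
        // start(4) and killNode(2) are assumed harness calls, not a real API
        ClusterSortedList<Integer, String> list = TestCluster.start(4);
        list.put(42, "answer");

        TestCluster.killNode(2);          // injected intermediate fault: crash one node

        try {
            String v = list.get(42);      // same correct input as before the fault
            System.out.println("answer".equals(v) ? "recovered" : "WRONG RESULT");
        } catch (RuntimeException e) {    // e.g., a hypothetical unchecked availability exception
            System.out.println("fault reported: " + e.getMessage());
        }
    }
}
```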

  7. Data Structures: Research Issues • Can a data structure approach be "easy to use"? • Difficulty of maintaining a uniprocessor abstraction is a classic problem • trade-offs between performance, robustness, uniformity • E.g., hold a remote reference, then the remote node dies • What abstractions balance performance, robustness and usability? • How to compose multiple data structures efficiently? • E.g., should each structure individually implement a membership protocol?

  8. Data Structures: Prototyping Approach • Use Java environment • Language and run-time system handle tedious programming tasks • Java introduces new challenges • how to control resources when the system hides these details? • how to access resources in a safe manner through uniform interfaces?

  9. Preliminary Work • Sorted list • Accessible by key & value • Iterate over items in sorted order • "Foundation": multiple B-trees per machine • Meta-data splitter array maintains range info for all nodes • fully replicated • TRM used to keep consistent

  10. Sorted list: Basic • [Diagram: a global value-range splitter maps key ranges to nodes (Node 1, Node 3); within each node, a local value-range splitter maps sub-ranges to its local B-trees]
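
A sketch of how the two-level splitter in this diagram could be represented; the class below is an assumption for illustration, not the paper's code. A fully replicated global splitter maps a key to the node owning its range, and each node's local splitter maps the key to one of that node's B-trees.

```java
import java.util.TreeMap;

// One splitter level: lower bound of each key range -> id of the node
// (or local B-tree) responsible for that range.
public class Splitter<K extends Comparable<K>> {
    private final TreeMap<K, Integer> rangeStarts = new TreeMap<>();

    public void assignRange(K lowerBound, int ownerId) {
        rangeStarts.put(lowerBound, ownerId);
    }

    public int ownerOf(K key) {
        // the owner is the entry with the greatest lower bound <= key
        // (assumes some assigned range covers the key)
        return rangeStarts.floorEntry(key).getValue();
    }
}
```

Routing a lookup would then be two calls: the global splitter's ownerOf(key) to pick the node, followed by that node's local splitter to pick the B-tree.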

  11. Sorted list: Replication • [Diagram: same global/local splitter layout; each key range has an authority node (e.g., Node 1) and a co-authority replica on another node (e.g., Node 2), both holding the range in their local B-trees]
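
A sketch, reusing the Splitter above and still purely illustrative, of the authority/co-authority arrangement in this diagram: a write is applied at the authority node for the key's range and mirrored at its co-authority, so either node can serve the range if the other fails.

```java
import java.util.List;

public class ReplicatedRangeWriter<K extends Comparable<K>, V> {
    // Hypothetical per-node storage endpoint for the sketch.
    public interface StorageNode<A, B> { void applyLocally(A key, B value); }

    private final Splitter<K> globalSplitter;
    private final List<StorageNode<K, V>> nodes;  // authority for each range
    private final int[] coAuthority;              // coAuthority[n] replicates node n's ranges

    public ReplicatedRangeWriter(Splitter<K> splitter,
                                 List<StorageNode<K, V>> nodes,
                                 int[] coAuthority) {
        this.globalSplitter = splitter;
        this.nodes = nodes;
        this.coAuthority = coAuthority;
    }

    public void put(K key, V value) {
        int authority = globalSplitter.ownerOf(key);
        nodes.get(authority).applyLocally(key, value);              // primary copy
        nodes.get(coAuthority[authority]).applyLocally(key, value); // replica copy
    }
}
```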

  12. Sorted list: Load Balancing • [Diagram: same global/local splitter layout; key ranges shift between Node 1 and Node 2, with the global splitter updated, to even out load]
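
The load-balancing step in this diagram might look like the sketch below, again an assumption built on the Splitter above rather than the paper's code: the items in a hot node's range are copied to a lighter node, then the global splitter is repointed so new requests go to the new owner.

```java
public class RangeMover<K extends Comparable<K>> {
    private final Splitter<K> globalSplitter;

    public RangeMover(Splitter<K> globalSplitter) {
        this.globalSplitter = globalSplitter;
    }

    public void moveRange(K lowerBound, int fromNode, int toNode) {
        migrateItems(lowerBound, fromNode, toNode);      // 1. copy the range's items
        globalSplitter.assignRange(lowerBound, toNode);  // 2. update routing metadata
    }

    private void migrateItems(K lowerBound, int fromNode, int toNode) {
        // placeholder: stream the affected B-tree entries from one node to the other
    }
}
```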

  13. Compiler Analysis • Types of information • uncaught exceptions • object escapes (like memory leaks) • RMI/JNI calls • thread creation/orphans • How to avoid runtime performance penalty? • combination of static analysis & dynamic profiling
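
As a small illustration (hypothetical service code, not the paper's) of the kind of condition such an analysis could flag: an RMI call whose RemoteException escapes the handler uncaught, which a whole-program analysis could report back to the programmer along with a candidate recovery point.

```java
import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;

public class LookupHandler {
    // Hypothetical remote interface for the example.
    public interface RemoteStore extends Remote {
        String get(String key) throws RemoteException;
    }

    public String handle(String key) throws Exception {
        RemoteStore store = (RemoteStore) Naming.lookup("rmi://node1/store");
        // The analysis would flag that a RemoteException here (e.g., node1 has
        // failed) propagates out of handle() with no recovery code on the path.
        return store.get(key);
    }
}
```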

  14. Fault Injection • Validate system using fault injection • add faults to system, observe response • What faults to model? • Where do components become "too detailed"? • Where to emulate faults? • E.g., lose a packet in the: • Java runtime? Kernel? Wires?
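
One way to emulate such a fault at the Java-runtime level is sketched below, purely as an assumption about how an injector might be built: a DatagramSocket wrapper that silently drops a fraction of outgoing packets, so the data structure's reaction to message loss can be observed under controlled conditions.

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketException;
import java.util.Random;

// Drops each outgoing packet with the given probability to emulate loss "in the wires".
public class LossyDatagramSocket extends DatagramSocket {
    private final double dropProbability;
    private final Random random = new Random();

    public LossyDatagramSocket(double dropProbability) throws SocketException {
        this.dropProbability = dropProbability;
    }

    @Override
    public void send(DatagramPacket p) throws IOException {
        if (random.nextDouble() < dropProbability) {
            return;            // injected fault: pretend the packet was lost
        }
        super.send(p);
    }
}
```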

  15. Future Directions • Adaptability • allow group to expand/contract • Recovery • What happens when a node recovers? • How long to wait before handing off data? • Composition • How to build multiple structures in a single app?
