Khazana: Flexible Wide Area Consistency Management
Sai Susarla, John Carter
School of Computing, University of Utah
Wide Area Consistency Management
Wide area services are not alike:
• Different consistency and availability requirements
• Ideal: reuse mechanisms from existing systems
• Problem: each system builds a monolithic solution suited only to its own particular needs
Khazana: configurable caching infrastructure
• Support a mix of caching and availability mechanisms:
  • When is access allowed to a replica (optimistic/pessimistic)?
  • Where are updates issued (single/multiple masters)?
  • How are updates propagated (push- or pull-based)?
  • What causes updates (timer, staleness, eager, lazy)?
• Let the application tune how its data is managed (see the sketch below)
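The four questions above define Khazana's per-file configuration space. As a rough sketch only (these type and field names are illustrative assumptions, not Khazana's published API), the knobs could be captured as per-file attributes:

```c
/* Illustrative sketch: names are assumptions, not Khazana's actual API.
 * They simply enumerate the four policy dimensions listed above. */
typedef enum { KH_OPTIMISTIC, KH_PESSIMISTIC } kh_access_mode;         /* when is access to a replica allowed? */
typedef enum { KH_SINGLE_MASTER, KH_MULTI_MASTER } kh_master_mode;     /* where are updates issued? */
typedef enum { KH_PUSH, KH_PULL } kh_transfer_mode;                    /* how are updates propagated? */
typedef enum { KH_TIMER, KH_STALENESS, KH_EAGER, KH_LAZY } kh_trigger; /* what causes updates? */

typedef struct {
    kh_access_mode   access;
    kh_master_mode   master;
    kh_transfer_mode transfer;
    kh_trigger       trigger;
} kh_policy;
```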
Exploit Commonality
Example services grouped by replication policy:
• Single-master, pull updates, strict pessimistic: system files, static web content
• Single-master, push updates: scoreboard, Active Directory schema, config files, multimedia streaming
• Multi-master, pull updates, optimistic last-writer-wins: Bayou, Coda, calendar, sensor logging, mobile file access
• Multi-master, push updates, append-only: Active Directory data, chat, whiteboard
Current approach:
• Implement one combination effectively to suit one app
• Restricts reusability
Khazana approach:
• Implement common mechanisms in an app-independent manner
• Provide hooks to let applications customize their behavior
Khazana Approach
• Wide-area peer-to-peer data store
• File-like data abstraction, page-grained consistency
• Performs caching, concurrency control
• Asynchronous update notifications
• Application controls per-file behavior (see the sketch below):
  • Consistency policy (last-writer-wins, append-only, or strict)
  • Control over per-replica quality
  • Optimistic vs. pessimistic access
  • Push vs. pull updates, single vs. multiple masters
  • Per-file dynamic cache hierarchy
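A minimal sketch of how an application might tune one file's behavior, reusing the kh_policy type sketched earlier. kh_open() and kh_close() appear in the chat example later in the deck; kh_set_policy(), the "rw" mode string, and the kid argument type are assumptions, not documented Khazana calls.

```c
/* Hypothetical per-file tuning; kh_set_policy() is an assumed hook. */
void configure_shared_file(const char *kid)
{
    kh_policy p = {
        .access   = KH_OPTIMISTIC,     /* allow access to any replica */
        .master   = KH_MULTI_MASTER,   /* updates may be issued at any replica */
        .transfer = KH_PULL,           /* replicas pull changes on demand */
        .trigger  = KH_STALENESS       /* refresh a replica when it gets too stale */
    };
    int handle = kh_open(kid, "rw");   /* mode string is illustrative */
    kh_set_policy(handle, &p);         /* policy applies to this file only */
    kh_close(handle);
}
```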
Replicated Objects in Khazana
[Figure: replica trees spanning Khazana servers S1–S5. Object O1 (multi-master, pull updates, optimistic consistency, e.g. a shared file) and object O2 (single-master, push updates, strict consistency, e.g. a shared password db) each form a parent-child hierarchy of replicas rooted at a primary replica P; clients access a replica.]
[Figure sequence: when S2 crashes or a link goes down, the O2 copy below the broken link becomes inaccessible; once the link comes back up and S2 recovers, the replica hierarchy is restored.]
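The availability contrast the figure sequence illustrates can be written down in code. This is only a sketch: it reuses kh_write() from the chat example below and assumes, which the slides do not state, that a write returns a negative value when strict consistency cannot be satisfied.

```c
#include <stdio.h>

/* Sketch of writes to O1 and O2 while the link to O2's primary is down.
 * The negative-return error convention for kh_write() is an assumption. */
void write_during_partition(int o1, int o2, char *data)
{
    /* O1: multi-master, pull, optimistic -- the local replica accepts the
     * write and reconciles with other masters later. */
    kh_write(o1, data);

    /* O2: single-master, push, strict -- the write must reach primary P;
     * while the link is down the copy below the break is inaccessible. */
    if (kh_write(o2, data) < 0)
        printf("O2 write failed: primary unreachable\n");
}
```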
A Khazana-based Chat Room
Sample chat client code:

```c
/* Called asynchronously when new transcript data is pushed to this replica */
callback(handle, newdata) {
    display(newdata);
}

main() {
    handle = kh_open(kid, "a+");      /* open the shared transcript for append */
    kh_snoop(handle, callback);       /* register for update notifications */
    while (!done) {
        read(&newdata);               /* read the local user's input */
        display(newdata);
        kh_write(handle, newdata);    /* append it to the shared transcript */
    }
    kh_close(handle);
}
```

Chat transcript: multi-master, push updates, optimistic append-only consistency.
[Figure: update propagation path among the transcript's replicas, steps 1–4, not reproduced.]
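Design note: because the transcript uses push updates, kh_snoop() delivers other clients' appends asynchronously through the callback rather than requiring each client to poll the file, and the optimistic append-only policy lets concurrent appends from multiple masters be accepted into the shared transcript.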
Status
• Prototype WAN implementation working
• Supports last-writer-wins policy
• Supports choice of:
  • single- vs. multi-master updates
  • push- vs. pull-based transfer
Target wide-area services:
• DataStations: distributed file store that supports multiple file sharing patterns (implemented)
• Chat room: concurrent appends to a transcript file
• Scoreboard: broadcasts game state to registered listeners (see the sketch below)
• Directory: hash table implemented in a shared file
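The Scoreboard service can reuse the same calls as the chat client. A rough sketch of a listener follows, assuming (per the classification slide) a single-master, push-update scoreboard object; score_kid, done, display(), and wait_for_events() are placeholders, not Khazana API.

```c
/* Sketch of a Scoreboard listener built on the same calls as the chat client. */
void on_score_update(int handle, char *newstate)
{
    display(newstate);                     /* show the pushed game state */
}

void scoreboard_listener(void)
{
    int handle = kh_open(score_kid, "r");  /* read-only replica of the scoreboard */
    kh_snoop(handle, on_score_update);     /* register as a listener for pushed updates */
    while (!done)
        wait_for_events();                 /* state changes arrive via the callback */
    kh_close(handle);
}
```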