Scalable Resource Information Service for Computational Grids
Nian-Feng Tzeng
Center for Advanced Computer Studies
University of Louisiana at Lafayette
December 7, 2007
Computational Grids
[Figure: a computational grid]
Grid Resource Information Service
[Figure: a resource information service within a computational grid]
Grid Resource Information Service
[Figure: a core GIIS at the center of the grid]
Grid Resource Information Service
Under the GRAM reporter of the Globus Toolkit (GT):
– at least one system-wide GIIS
– each resource provider (e.g., a cluster head node) runs a GRIS to provide queuing information and other resource data
What is P2P?
• A distributed system architecture:
  • No centralized control
  • Nodes are symmetric in function
• Typically many nodes, but unreliable and heterogeneous
[Figure: client machines connected to one another across the Internet]
Example P2P problem: lookup
• Publish/lookup is at the heart of all P2P systems
[Figure: a publisher stores (Key=“title”, Value=file data…) at node N4 among nodes N1–N6; a client elsewhere on the Internet issues Lookup(“title”) and must somehow find the node holding the key]
Another approach: distributed hash tables (DHTs)
• Nodes are the hash buckets
• Key identifies data uniquely
• DHT balances keys and data across nodes
• DHT replicates, caches, routes lookups, etc.
[Figure: distributed applications call Insert(key, data) and Lookup(key) on the DHT layer, which spans many nodes]
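Below is a minimal, single-process sketch of the interface this slide describes: applications call only Insert(key, data) and Lookup(key), and the DHT layer maps each key to one of the nodes (the hash buckets). The node names and the simple modulo placement are illustrative, not the actual system.

```python
import hashlib

# Toy DHT exposing only the two calls on the slide. The node names and
# the modulo placement below are illustrative assumptions.
class ToyDHT:
    def __init__(self, node_names):
        self.buckets = {name: {} for name in node_names}  # node = hash bucket
        self.names = sorted(node_names)

    def _owner(self, key):
        # The DHT, not the application, decides which node holds the key.
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return self.names[h % len(self.names)]

    def insert(self, key, data):
        self.buckets[self._owner(key)][key] = data

    def lookup(self, key):
        return self.buckets[self._owner(key)].get(key)

dht = ToyDHT(["n1", "n2", "n3"])
dht.insert("title", "file data...")
assert dht.lookup("title") == "file data..."
```

A real DHT replaces the modulo step with consistent hashing so that nodes can join and leave without remapping most keys, which is exactly what Chord, next, provides.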
Chord lookups
• Map keys to nodes in a load-balanced way
  • Hash a node’s IP address into a long digit string (node ID)
  • Hash a key into a string of the same length (key ID)
  • Assign each hashed key to its “closest” node (i.e., its successor)
  • Refer to hashed node IDs and key IDs simply as IDs and keys, respectively
• Forward a key lookup to a closer node
• Insert: lookup + store
• Join: insert the node into the ring
[Figure: circular ID space with nodes N32, N60, N90, N105 and keys K5, K20, K80, each key assigned to its successor node]
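A hedged sketch of the mapping just described: node addresses and keys are hashed into the same circular ID space, and each key is assigned to its successor, the first node clockwise at or after the key ID. The tiny 2^6 ring and the IP addresses are illustrative.

```python
import hashlib
from bisect import bisect_left

M_BITS = 6            # tiny ID space for illustration; real Chord uses 160 bits
RING = 2 ** M_BITS

def chord_id(s):
    # Node addresses and keys hash into the same circular ID space.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % RING

def successor(node_ids, key_id):
    # "Closest" node for a key: first node clockwise at or after key_id.
    ids = sorted(node_ids)
    i = bisect_left(ids, key_id)
    return ids[i % len(ids)]          # i == len(ids) wraps around the ring

node_ids = [chord_id(ip) for ip in ["10.0.0.1", "10.0.0.2", "10.0.0.3"]]
key_id = chord_id("some-file-title")
print(f"key {key_id} is stored on node {successor(node_ids, key_id)}")
```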
Chord’s routing table: fingers
[Figure: node N80’s fingers point 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, and 1/128 of the way around the ring]
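A short sketch of how one node’s finger table is built: finger[i] points at the successor of n + 2^i, so successive fingers look 1/128, 1/64, …, 1/2 of the way around a 2^7-point ring, matching the spacing shown for N80. The node IDs are taken from the next slide’s figure and are illustrative.

```python
M_BITS = 7                                     # 2^7 = 128-point ring
RING = 2 ** M_BITS
NODES = sorted([5, 10, 20, 32, 60, 80, 99, 110])

def successor(point):
    # First node clockwise at or after `point` on the ring.
    for nid in NODES:
        if nid >= point:
            return nid
    return NODES[0]                            # wrap around

def finger_table(n):
    # finger[i] targets n + 2^i, i.e. 1/128, 1/64, ..., 1/2 of the ring away.
    return [(i, (n + 2**i) % RING, successor((n + 2**i) % RING))
            for i in range(M_BITS)]

for i, start, node in finger_table(80):
    print(f"finger[{i}]: starts at ID {start}, points to N{node}")
```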
Lookups take O(log(N)) hops
• Lookup: route to the closest predecessor
[Figure: Lookup(K19) issued at N80 hops across a ring of nodes N5, N10, N20, N32, N60, N99, N110 and reaches K19’s successor, N20]
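A hedged sketch of the routing rule on this slide: at each hop, forward the query to the finger that most closely precedes the key, roughly halving the remaining distance so a lookup takes O(log N) hops. The node IDs follow the figure; this is a toy simulation, not Chord’s actual RPC protocol.

```python
M_BITS = 7
RING = 2 ** M_BITS
NODES = sorted([5, 10, 20, 32, 60, 80, 99, 110])   # IDs from the figure

def in_interval(x, a, b, inclusive_right=False):
    # True if x lies in (a, b) (or (a, b]) going clockwise on the ring.
    if inclusive_right and x == b:
        return True
    return a < x < b if a < b else (x > a or x < b)

def successor(point):
    for nid in NODES:
        if nid >= point:
            return nid
    return NODES[0]

def ring_successor(n):
    return NODES[(NODES.index(n) + 1) % len(NODES)]

def closest_preceding_finger(n, key_id):
    # Scan fingers farthest-first for one that precedes the key.
    for i in reversed(range(M_BITS)):
        f = successor((n + 2**i) % RING)
        if in_interval(f, n, key_id):
            return f
    return n

def lookup(n, key_id):
    hops = 0
    # Route toward the key's closest predecessor; its successor owns the key.
    while not in_interval(key_id, n, ring_successor(n), inclusive_right=True):
        n = closest_preceding_finger(n, key_id)
        hops += 1
    return ring_successor(n), hops

print(lookup(80, 19))   # K19 from the figure resolves to N20 in 2 hops
```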
Steps for node 3 to find the successor of key = 1:
a. key = 1 belongs to 3.finger[3].interval
b. Node 3 checks its 3rd finger entry; the successor there is node 0
c. Node 0 checks its 1st finger entry and gets its successor = 1 (the key is within the smallest possible interval, so the 1st entry’s successor is the answer)
Three keys (1, 2, and 6) exist, held at different nodes.
Figure: finger tables at existing nodes 0, 1, and 3, specifying the subsequent intervals and their corresponding successors.
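The same routing logic reproduces this worked example directly: with m = 3 (IDs 0–7) and nodes {0, 1, 3}, a lookup for key 1 started at node 3 is forwarded to node 0, whose first finger entry yields the answer, node 1. A minimal sketch:

```python
M_BITS = 3                       # IDs 0..7, as in the figure
RING = 2 ** M_BITS
NODES = [0, 1, 3]                # existing nodes; keys 1, 2, 6 live among them

def successor(point):
    point %= RING
    for nid in NODES:
        if nid >= point:
            return nid
    return NODES[0]              # wrap around

def in_interval(x, a, b, incl=False):
    if incl and x == b:
        return True
    return a < x < b if a < b else (x > a or x < b)

def find_successor(n, key, path):
    path.append(n)
    succ = successor(n + 1)                  # n's immediate successor
    if in_interval(key, n, succ, incl=True): # key in (n, successor]
        return succ
    for i in reversed(range(M_BITS)):        # check farthest finger first
        f = successor(n + 2**i)
        if in_interval(f, n, key):
            return find_successor(f, key, path)
    return succ

path = []
owner = find_successor(3, 1, path)
print(path, "->", owner)         # [3, 0] -> 1, matching steps a-c above
```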
Prefix Hash Trees (PHTs)
• Easy deployment using OpenDHT
• 3 APIs – put, delete, get
• Application – Place Lab
• Range queries
• Multiple attributes – combined using linearization
• Hash on prefixes
• Beacon IDs hashed with SHA-1
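A brief sketch of two ideas on this slide, under stated assumptions: multiple binary attributes are combined into one key by bit interleaving (one common linearization; the slide does not say which is used), and a PHT trie node labeled with prefix P is placed at the DHT key SHA-1(P). The attribute values are made up.

```python
import hashlib

def interleave(x, y):
    # Combine two equal-length binary attributes into one key (z-ordering).
    return "".join(a + b for a, b in zip(x, y))

def prefix_dht_id(prefix):
    # A PHT trie node labeled `prefix` lives at DHT key SHA-1(prefix).
    return hashlib.sha1(prefix.encode()).hexdigest()

key = interleave("0110", "1010")   # e.g. two quantized beacon coordinates
print(key)                         # '01101100'
print(prefix_dht_id(key[:3]))      # DHT ID of the trie node labeled '011'
```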
PHT search takes O(log(log(N))) hops
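One way to realize this bound, sketched under the assumption that the DHT stores a marker for every trie node: binary-search the prefix length, issuing one DHT get per probe, so a D-bit key needs O(log D) gets (doubly logarithmic when D grows like log N). The data layout below is illustrative.

```python
import hashlib

def sha1(p):
    return hashlib.sha1(p.encode()).hexdigest()

# Toy trie stored in a dict standing in for the DHT:
# internal prefixes "", "0", "01"; one leaf at "011".
dht = {sha1(p): ("internal", None) for p in ["", "0", "01"]}
dht[sha1("011")] = ("leaf", [("01101100", "well-2701")])

def pht_lookup(key):
    lo, hi = 0, len(key)
    while lo <= hi:
        mid = (lo + hi) // 2
        entry = dht.get(sha1(key[:mid]))   # one DHT get per probe
        if entry is None:
            hi = mid - 1                   # no such prefix: leaf is shallower
        elif entry[0] == "leaf":
            return entry[1]                # found the leaf bucket
        else:
            lo = mid + 1                   # internal node: leaf is deeper
    return None

print(pht_lookup("01101100"))   # finds the leaf at prefix '011' in 3 probes
```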
Our Work: Fast and Scalable Resource Discovery (by Denvil Smith)
[Figure: system flow for well data, whose attributes are well ID, name, bore, pressure, and temp. Query path: a user query (e.g., well ID = 2701) goes through attribute/query translation, is hashed, and issues gets on the DHT (Chord); the PHT is searched over the DHT, following successors until the leaf node is found, and the user is notified. Publishing path: a resource record (Well ID = 2701, Name = geiser, Bore = 89.99, Pressure = 28.8, Temp = 512) is hashed; successors are followed until the leaf node is reached; node capacity is checked, the selection is validated, leaf nodes are created, and the PHT and DHT are updated.]
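A loose sketch of the publishing path in this diagram, with a hypothetical helper (quantize) and made-up attribute ranges: a well record’s searchable attributes are translated into fixed-width binary strings, linearized into one key, and stored at the Chord successor of the SHA-1 hash of the PHT leaf prefix. Only the field names and values come from the slide; everything else is an assumption.

```python
import hashlib

record = {"well_id": 2701, "name": "geiser", "bore": 89.99,
          "pressure": 28.8, "temp": 512}

def quantize(value, lo, hi, bits=8):
    # Map a numeric attribute onto a fixed-width binary string
    # (assumed translation step; ranges below are made up).
    frac = min(max((value - lo) / (hi - lo), 0.0), 1.0)
    return format(int(frac * (2**bits - 1)), f"0{bits}b")

# Linearize two searchable attributes into one PHT key by bit interleaving.
b = quantize(record["bore"], 0, 200)
p = quantize(record["pressure"], 0, 100)
key = "".join(x + y for x, y in zip(b, p))

# The leaf prefix would be found by the PHT search in the previous sketch;
# the record is then stored at the Chord successor of SHA-1(leaf_prefix).
leaf_prefix = key[:6]                       # assumed result of that search
dht_id = hashlib.sha1(leaf_prefix.encode()).hexdigest()
print(f"publish well {record['well_id']} under DHT ID {dht_id[:12]}...")
```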
Questions? Please Ask!