The SAND Framework

Presentation Transcript


  1. The SAND Framework Manolis Sifalakis mjs@comp.lancs.ac.uk

  2. What is SAND • Directory service for discovering active resources • Active service • Scalable • Customisable to the deployment environment • Distributed • Dynamic operation

  3. Define Active Resource • NodeOS platform • EEs • Specific network services (a single component or a component composite) • Loading mechanisms • Exported APIs or other run-time interfaces • System resources (memory, CPU, storage capacity) • S/w support facilities (e.g. code mobility support) • H/w support facilities (e.g. FPGAs, NP EEs) • Access policies
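
To make the categories above concrete, here is a minimal Python sketch of what a single active-resource record might look like; the field names and values are illustrative only and are not taken from the SAND specification:

```python
# Hypothetical sketch of an active-resource record; field names and values
# are illustrative only, not part of the SAND schema.
active_resource = {
    "node_os": "LARA++",                              # NodeOS platform
    "execution_environments": ["CANEs", "Java", ".NET"],
    "services": [                                     # specific network services
        {"name": "Firewall", "type": "component"},
        {"name": "SecurityService", "type": "composite"},
    ],
    "loading_mechanisms": ["out-of-band code loading"],
    "exported_apis": ["packet-filter API"],           # or other run-time interfaces
    "system_resources": {"mem_mb": 512, "cpu_mhz": 800, "storage_gb": 4},
    "sw_support": ["code mobility"],                  # software support facilities
    "hw_support": ["FPGA"],                           # hardware support facilities
    "access_policies": ["local-domain-only"],
}
```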

  4. Discover Active Resources • Why? • What can the network do for me (my flow)? Enable or improve the end-to-end (e2e) service provisioning • What can I do for the network? Virtualise my network role/identity – integrate new functions, support the self-association process, leverage the formation of autonomic topologies • Where? • Along a data path • Alongside a data path • In a neighbourhood

  5. Expressiveness of the Client Interface • Variable service location/path – Fixed services/functions • Fixed service location/path – Variable services/functions • Fixed service location/path – Fixed services/functions, but I need to discover them • Variable service location/path – Variable services/functions (not very common)

  6. The Client Interface

  7. The Client Interface - Examples • On the data path: • FindResources ((L1 , L2 , L3), FilterSpec (EE = CANEs)) • On all existing data paths between two ends: • FindResources ((L1 ,*, L3), FilterSpec (EE = Java)) • Alongside the data path: • FindResources ((L1 , L2 , L3), FilterSpec (EE=CANEs & DistHop=5)) • In the neighbourhood: • FindResources (L3 , FilterSpec (Service = Ipv6_HA)) • FindResources (L3 , FilterSpec (Service = *))
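
The calls above are slide pseudocode; the following Python sketch shows one way such a client interface might be wrapped. The SandClient class and its find_resources method are hypothetical stand-ins; only the call shapes (a location specification plus a FilterSpec) follow the slide.

```python
# Hypothetical Python wrapper around the SAND client interface shown above.
# Class and method names are illustrative, not part of the SAND framework.

class SandClient:
    """Illustrative stand-in for a SAND client-interface stub."""

    def find_resources(self, location_spec, filter_spec):
        # A real client would resolve the location specification via the
        # overlay layer and evaluate the FilterSpec in the directory layer.
        print(f"FindResources({location_spec}, FilterSpec({filter_spec}))")
        return []

client = SandClient()

# On the data path between L1, L2, L3, nodes offering the CANEs EE:
client.find_resources(("L1", "L2", "L3"), {"EE": "CANEs"})

# On all existing data paths between two ends:
client.find_resources(("L1", "*", "L3"), {"EE": "Java"})

# Alongside the data path, within 5 hops:
client.find_resources(("L1", "L2", "L3"), {"EE": "CANEs", "DistHop": 5})

# In the neighbourhood of L3:
client.find_resources(("L3",), {"Service": "Ipv6_HA"})
```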

  8. Breaking down the problem • (architecture diagram) The SAND architecture splits a resource-discovery request <P, FilterSpec> into a location specification (WHERE), i.e. the path <P> handled by the Overlay Layer, and a resource specification (WHAT), i.e. the <FilterSpec> handled by the Directory Layer

  9. Interpreting Service Locations • Overlay layer based on DHTs: a generic, scalable lookup facility • Serves the process of addressing/locating SAND entities • Network transport independence: “late bind” locations to node IDs (allowing location/ID independence) • Structured or not? Unrestricted: customisable to accommodate any DHT mechanism; a reference unstructured system is under development at the moment • SAND S-Keys instead of hash keys: a universal, customisable abstraction for a SAND overlay ID, used to adjust routing locality and optimise the address resolution process

  10. How S-Keys work • S-function: controls the transport-ID to overlay-ID relationship: S(x) = a·N(x) ⊕ (1 − a)·H(x), 0 ≤ a ≤ 1, where x is the network transport address, N(x) a normalizing function, H(x) a randomizing function, a an amortizing coefficient, and ⊕ a combination operator • Advantages • Universal identifier space for all DHT systems • The task of improving locality is delegated to the SAND system (removing the dependency on ad hoc p2p mechanisms) • The SAND framework always sees the same interface for addressing and identifying SAND nodes
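
A minimal sketch of the S-function in Python; the normalizing function N, the randomizing function H and the combination operator are left as parameters here, since the framework lets them be chosen per deployment:

```python
# Sketch of S(x) = a*N(x) (+) (1 - a)*H(x), 0 <= a <= 1.
# N, H and the combination operator are supplied per deployment; this
# function only fixes how they are weighted by the amortizing coefficient a.

def s_key(x, a, N, H, combine=lambda u, v: u + v):
    """Map a transport address x to an overlay ID.

    a = 1 keeps only the normalized (locality-preserving) term,
    a = 0 keeps only the randomized term.
    """
    assert 0.0 <= a <= 1.0
    return combine(a * N(x), (1.0 - a) * H(x))
```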

  11. S-Keys Example • IPv4 network transport; ⊕: arithmetic addition; H(x): returns a random number derived from an IP address; N(x): masks the IP address to its /28 netmask • For x drawn from a contiguous block of IP addresses, the resulting S-keys stay close together in the overlay ID space (illustrated in the sketch below)
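
A Python sketch of this concrete instantiation, under the assumptions that ⊕ is plain arithmetic addition and that an SHA-1-based hash is an acceptable stand-in for the randomizing function (the slide does not specify one); the addresses used are illustrative:

```python
import hashlib
import ipaddress

def s_key(x, a, N, H):
    """S(x) = a*N(x) + (1 - a)*H(x), with the (+) operator as arithmetic addition."""
    return a * N(x) + (1.0 - a) * H(x)

def N(x: str) -> int:
    """Normalizing function: mask the IPv4 address to its /28 network."""
    return int(ipaddress.ip_network(f"{x}/28", strict=False).network_address)

def H(x: str) -> int:
    """Randomizing function: a pseudo-random number derived from the address."""
    return int.from_bytes(hashlib.sha1(x.encode()).digest()[:4], "big")

# Addresses in the same contiguous /28 block share N(x), so with a close to 1
# their S-keys cluster together, preserving routing locality in the overlay.
for addr in ("192.0.2.1", "192.0.2.2", "192.0.2.14"):
    print(addr, int(s_key(addr, a=0.9, N=N, H=H)))
```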

  12. Listing Active Resources • Directory Layer based on ASN.1 OIDs • Data Model (LDAPv3) • Object-oriented representation (everything is an object) • Extensible schema specification • Customisable hash-tree based data organisation (DIT) • Information Access Interface (LDAPv3) • Optimised for read access • Flexible and expressive (satisfies the client interface requirements) • Generic, standardised • Information Aggregation & Indexing (hash functions) • Indexing: improves search performance • Aggregation: reduces storage requirements
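
Because the information access interface is LDAPv3, a FilterSpec can be rendered as an ordinary LDAP search filter. The sketch below uses the python-ldap package purely for illustration; the server URI, base DN and attribute names (sandEE, sandService) are invented and do not reflect the actual SAND schema:

```python
# Illustrative LDAPv3 read access against a SAND directory layer.
# The URI, base DN and attribute names are hypothetical placeholders.
import ldap  # python-ldap

conn = ldap.initialize("ldap://sand-node.example.org")
conn.simple_bind_s()  # anonymous bind; the directory is optimised for reads

# FilterSpec (EE = CANEs) rendered as an LDAP filter string; a distance
# constraint such as DistHop would be handled by the overlay layer instead.
results = conn.search_s(
    "ou=resources,o=sand",
    ldap.SCOPE_SUBTREE,
    "(sandEE=CANEs)",
    ["sandEE", "sandService"],
)
for dn, attrs in results:
    print(dn, attrs)
```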

  13. Structure of Resource Data • Organisation: Resource DIT in a SAND node • Representation: Objects in the DIT

  14. Indexing and Aggregation • Aggregation: • Collect index data • Reduce index datasets • Implicit reduction • 1,2 → Firewall Service • 1,2,3 → Security Service • Explicit reduction • plugins
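
A small Python sketch of the implicit-reduction idea, mirroring the slide's example (1,2 → Firewall Service; 1,2,3 → Security Service); the component names and the rule table are hypothetical:

```python
# Illustrative index aggregation: collect index entries, then reduce them.
# The component names and the rule table are invented for this sketch.

collected = {1: "packet filter", 2: "NAT", 3: "intrusion detection"}

IMPLICIT_RULES = {
    frozenset({1, 2}): "Firewall Service",
    frozenset({1, 2, 3}): "Security Service",
}

def implicit_reduce(index_entries):
    """Collapse a set of component indices into the coarsest known aggregate."""
    keys = frozenset(index_entries)
    for group in sorted(IMPLICIT_RULES, key=len, reverse=True):
        if group <= keys:
            return IMPLICIT_RULES[group]
    return None  # no aggregate applies; keep the individual entries

print(implicit_reduce(collected))              # -> Security Service
print(implicit_reduce({1: "pf", 2: "nat"}))    # -> Firewall Service
```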

  15. SAND Network-wide • The bottom layer: a common overlay routing system and a shared S-function (S-keys) • The top layer: share a distributed LDAP tree (the Virtual DIT); exchange via the DIT subschema • Index exchange mechanism: CIP or LDAP

  16. Does SAND Scale? • Organisation of SAND nodes into Areas, Hyper-Areas, and Domains • Area members share: • a common Virtual DIT, overlay system, network transport and S-function • a common Area-ID encoded in the Area S-Key • Border Nodes (BNs): SAND nodes participating in more than one Area; an area-wide index topology is rooted at the BN • SAND Domains define administrative boundaries for a set of Areas: A = {n_i | |i| ≥ 1, n_i ∈ S} and A^d = {A_i^(d−1) | |i| ≥ 1, |d−1| ≥ 1}, where S is the set of all SAND nodes

  17. Areas, HyperAreas, Domains

  18. SAND Efficiency-Customisability • Effective resource representation/organisation • Extensible data description model (LDAP Objects) • Flexible information organisation (DIT) • Per-Area tuning of routing locality (S-Function) • Per-Area selection of overlay/underlay technology • Independence of network transport identifiers (S-Keys) • Information Aggregation & Indexing scheme for efficient search • CIP for the creation of multiple index topologies • Extensible index objects using the schema language • 2 flexible Reduce operations (eReduce/iReduce)

  19. SAND Operation – Init/Join • Init/Boot • Generate S-Keys from the ID info for the node/areas • Prepare the DHT/DIT/index structures • Start beaconing for member areas • Listen for join requests • Join • Intercept a beacon • Do I want to join? => request the JoinInfoObject (LDAP) • Am I allowed to join? => have my JoinInfoObject checked (LDAP) • Acquire the S-Key and Virtual DIT for the Area • Register active resources • Start as a replicator … then take part in load-balancing the area information • If a BN, establish the rooted index topology and advertise the Area S-Key
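
A condensed Python sketch of the join sequence just described; all class, field and method names are invented, and the LDAP exchanges are reduced to plain callables so the control flow stands out:

```python
# Hypothetical sketch of the join sequence; names are illustrative only and
# the LDAP exchanges are abstracted into callables.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class JoinInfo:
    """Area join information delivered as a JoinInfoObject (via LDAP)."""
    area_id: str
    virtual_dit: dict
    s_function: Callable[[str], float]  # area-wide S-function shared by members

@dataclass
class SandNode:
    transport_address: str
    is_border_node: bool = False
    s_key: float = 0.0
    virtual_dit: dict = field(default_factory=dict)

    def join(self, info: JoinInfo, allowed: Callable[["SandNode"], bool]) -> bool:
        """Join an Area after intercepting its beacon and fetching the JoinInfo."""
        if not allowed(self):                      # am I allowed to join?
            return False
        self.s_key = info.s_function(self.transport_address)
        self.virtual_dit = info.virtual_dit        # acquire the Area's Virtual DIT
        # ...register active resources, start as a replicator, then take part
        # in load-balancing the area information (omitted in this sketch)
        if self.is_border_node:
            pass  # establish the rooted index topology, advertise the Area S-Key
        return True
```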

  20. SAND Operation – Answering Queries • Request ID ≡ location-ID :: resource-ID, where the location-ID ≡ an S-key (resolved at the DHT layer) and the resource-ID ≡ a resource OID (resolved at the Directory layer) • The resource-ID has local scope within the location-ID namespace: it is “late-bound” to an OID after the location-ID has been resolved (scalability, flexibility) • Given sufficient index information, a succession of S-Key/OID lookup steps recurs within/across Areas until the request is answered • At every step the server side can respond with a referral to a new location where the search can continue, or act as a proxy for the next search on behalf of the requesting client
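
A compact sketch of this resolution loop in Python; the dht_lookup and directory_lookup callables, the referral answer format and the step budget are assumptions made for illustration:

```python
# Hypothetical two-step resolution: resolve WHERE (S-key) via the overlay/DHT
# layer, then WHAT (OID) via the directory layer, following referrals.

def resolve(request_id: str, dht_lookup, directory_lookup, max_steps: int = 8):
    """Resolve 'location-ID::resource-ID', following referrals until answered.

    dht_lookup:       location-ID (S-key) -> responsible SAND node  (overlay layer)
    directory_lookup: (node, resource OID) -> answer dict           (directory layer)
    """
    location_id, resource_id = request_id.split("::", 1)
    for _ in range(max_steps):
        node = dht_lookup(location_id)                 # resolve the location first
        answer = directory_lookup(node, resource_id)   # then late-bind the OID
        referral = answer.get("referral")
        if referral:                                   # server says: continue elsewhere
            location_id = referral
            continue
        return answer                                  # answered, or proxied result
    raise LookupError("request not answered within the step budget")
```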

  21. SAND - Under the hood

  22. Ongoing Work & Future Work • Ongoing implementation of the prototype • Active service: .NET EE (Managed C++) • Test platform: LARA++ programmable node • Simulation underway to test performance • Reference unstructured overlay system • Pastry • Potential for testing/deployment in autonomic infrastructures for further validation

  23. Thanks for your patience …
