Managing Dynamic Metadata and Context

Presentation Transcript


  1. Managing Dynamic Metadata and Context
  Mehmet S. Aktas
  Advisor: Prof. Geoffrey C. Fox

  2. Context as Service Metadata
  • Context can be:
    • interaction-independent: slowly varying, quasi-static service metadata
    • interaction-dependent: metadata generated dynamically as a result of the interaction of services
  • Context may be associated with a single service, with a session (service activity), or with both.
  • Dynamic Grid/Web Service collections:
    • are assembled to support a specific task
    • can be workflow sessions or audio/video collaborative sessions
    • generate metadata and have a limited lifetime
    • we refer to these loosely assembled collections as "gaggles"

  3. Motivating Cases
  • Multimedia collaboration domain
    • The Global Multimedia Collaboration System (Global-MMCS) provides an A/V conferencing system.
    • Collaborative A/V sessions produce varying types of metadata, such as real-time metadata describing audio/video streams.
    • Characteristics: widely distributed services; metadata of events (archival data); mostly read-only.
  • Workflow-style applications in GIS/Sensor Grids
    • Pattern Informatics (PI) is an earthquake forecasting system.
    • Sensor grid data services generate events when an event of a certain magnitude (such as a fault displacement) occurs.
    • This fires off various services for filtering and analyzing raw data and for generating images and maps.
    • Characteristics: any number of widely distributed services can be involved; conversation metadata; transient; multiple writers.

  4. What are examples of dynamically generated metadata in a real-life scenario?
  [Figure: PI workflow diagram showing a WMS GUI, a WFS, HPSearch, the PI Code, two Data Filter services, and the Context Information Service, connected by interactions numbered 1 to 10; intermediate files such as http://..../..../..txt and http://..../..../tmp.xml are exchanged along the way.]
  Examples of dynamically generated metadata in this scenario: session-associated dynamic metadata, user profiles, activity-associated dynamic metadata, and service-associated dynamically generated metadata.
  Workflow steps (as numbered in the figure):
  • 3,4: WMS starts a session and invokes HPSearch to run the workflow script for the PI Code with a session id.
  • 5,6,7: HPSearch runs the workflow script and generates an output file in GML format (and PDF format) as the result.
  • 8: HPSearch writes the URI of the output file into the Context.
  • 9: WMS polls the information from the Context Service.
  • 10: WMS retrieves the output file generated by the workflow script and generates a map.
  SOAP header carrying the Context:
    <?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://www.w3...">
      <soap:Header encodingStyle="WSCTX URL" mustUnderstand="true">
        <context xmlns="ctxt schema" timeout="100">
          <context-id>http..</context-id>
          <context-service>http..</context-service>
          <context-manager>http..</context-manager>
          <activity-list mustUnderstand="true" mustPropagate="true">
            <p-service>http://../WMS</p-service>
            <p-service>http://../HPSearch</p-service>
          </activity-list>
        </context>
      </soap:Header>
      ...
  Service-associated metadata (output URI written by HPSearch):
    <context xsd:type="ContextType" timeout="100">
      <context-id>http://../abcdef:012345</context-id>
      <context-service>http://.../HPSearch</context-service>
      <content>http://danube.ucs.indiana.edu:8080\x.xml</content>
    </context>
  Service-associated metadata (additional data generated during workflow execution):
    <context xsd:type="ContextType" timeout="100">
      <context-service>http://.../HPSearch</context-service>
      <content>HPSearch associated additional data generated during execution of workflow.</content>
    </context>
  Activity shared state:
    <context xsd:type="ContextType" timeout="100">
      <context-service>http://.../HPSearch</context-service>
      <parent-context>http://../abcdef:012345</parent-context>
      <content>shared data for HPSearch activity</content>
      <activity-list mustUnderstand="true" mustPropagate="true">
        <service>http://.../DataFilter1</service>
        <service>http://.../PICode</service>
        <service>http://.../DataFilter2</service>
      </activity-list>
    </context>
  Session metadata:
    <context xsd:type="ContextType" timeout="100">
      <context-service>http://.../WMS</context-service>
      <activity-list mustUnderstand="true" mustPropagate="true">
        <service>http://.../WMS</service>
        <service>http://.../HPSearch</service>
      </activity-list>
    </context>
  User profile:
    <context xsd:type="ContextType" timeout="100">
      <context-service>http://.../HPSearch</context-service>
      <parent-context>http://../abcdef:012345</parent-context>
      <content>profile information related to WMS</content>
    </context>

  5. Practical Problem
  • We need a Grid Information Service for managing all information associated with services in gaggles, in order to support:
    • correlating the activities of widely distributed services
      • workflow-style applications
      • management of events in multimedia collaboration
    • providing information to enable
      • real-time replay/playback
      • session failure recovery
    • uniform query capabilities, e.g. "Give me the list of services satisfying QoS requirements C:{a,b,c..} and participating in sessions S:{x,y,z..}" (see the query sketch below)
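  As a purely illustrative reading of the query above, the uniform query capability might be exposed as a single call spanning both metadata spaces; the interface and method names below are assumptions for this sketch, not the system's actual API.

    import java.util.List;

    // Purely illustrative: one uniform query over both the interaction-independent
    // (UDDI-style) metadata space and the interaction-dependent (session) space.
    public interface HybridQuery {
        // "Give me the list of service endpoints satisfying QoS requirements C
        //  and participating in sessions S."
        List<String> findServices(List<String> qosRequirements, List<String> sessionIds);
    }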

  6. Motivations
  • Managing small-scale, highly dynamic metadata, as found in dynamic Grid/Web Service collections
  • Performance limitations of point-to-point service communication approaches for managing stateful service information
  • Lack of support for uniform hybrid query capabilities over both static and dynamic context information
  • Lack of support for adaptation to instantaneous changes in client demands
  • Lack of support for distributed session management capabilities, especially in the collaboration domain

  7. Research Issues I
  • Performance
    • Efficient mediator metadata strategies for service communication: high performance and persistency
    • Efficient access request distribution
      • How to choose a replica server to best serve a client request?
      • How to provide adaptation to instantaneous changes in client demands?
  • Fault-tolerance
    • High availability of information
    • Efficient replica-content creation strategies

  8. Research Issues II
  • Consistency
    • Providing consistency across the copies of a replicated data item
  • Flexibility
    • Accommodating a broad range of application domains, such as read-dominated and read/write-dominated workloads
  • Interoperability
    • Being compatible with a wide range of applications
    • Providing data models and programming interfaces
      • to perform hybrid queries over all service metadata
      • to enable real-time replay/playback and session recovery capabilities

  9. Proposed System: Hybrid WS-Context Service
  • Fault-tolerant and high-performance Grid Information Service
    • Caching module
    • Publish/Subscribe for fault tolerance, distribution, and consistency enforcement
    • Database backend and Extended UDDI Registry
    • WS-I compatible uniform programming interface
  • Specification with abstract data models and a programming interface that combines WS-Context and UDDI in one hybrid service to manage service metadata (a data-model sketch follows below)
    • Hybrid functions operate on both metadata spaces
    • Extended WS-Context functions operate on session metadata
    • Extended UDDI functions operate on interaction-independent metadata
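  A minimal sketch of what the abstract context data model might look like, assuming field names that mirror the ContextType XML examples on slide 4; this class is illustrative, not the specification's actual schema.

    import java.util.List;

    // Illustrative context record; field names mirror the ContextType XML elements.
    public class Context {
        public String contextId;          // <context-id>
        public String contextService;     // <context-service> that owns this context
        public String parentContext;      // <parent-context> linking to a session/activity
        public String content;            // <content> payload (URI or small value)
        public long timeoutSeconds;       // timeout attribute: lifetime of this context
        public List<String> activityList; // <activity-list> of participating services
    }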

  10. Distributed Hybrid Grid Information Services
  [Figure: architecture diagram. Clients reach a Hybrid Grid Information Service (GIS) over HTTP(S) via WSDL interfaces; each Hybrid GIS combines WS-Context and Extended UDDI components over a JDBC backend and acts as publisher and subscriber on a topic-based publish-subscribe messaging system connecting Replica Server-1 through Replica Server-N.]

  11. Detailed Architecture of the System
  [Figure: detailed architecture. A client connects over HTTP(S) to the service's WSDL interface; internal modules include Querying and Publishing handlers, the Expeditor and Sequencer, Access and Storage modules, Publisher and Subscriber components, and WS-Context / Extended UDDI backends accessed over JDBC.]

  12. Key Design Features
  • External Metadata Service
    • Extended UDDI Service for handling interaction-independent metadata
  • Cache
    • Integrated cache for all service metadata
  • Access
    • Redirecting client requests to an appropriate replica server
  • Storage
    • Replicating data on an appropriate replica server
  • Consistency enforcement
    • Ensuring that all replicas of a data item remain identical

  13. Extended UDDI XML Metadata Service
  • An extended UDDI XML Metadata Service
    • Alternative to OGC Web Registry Services
  • It supports different types of metadata (see the publish sketch below)
    • GIS Metadata Catalog (functional metadata)
    • User-defined metadata ((name, value) pairs)
  • It provides unique capabilities
    • Up-to-date service registry information (leasing)
    • Dynamic aggregation of geospatial services
  • It enables advanced query capabilities
    • Geo-spatial queries
    • Metadata-oriented queries
    • Domain-independent queries
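  As an illustration of publishing a service with user-defined (name, value) metadata and a lease, here is a hedged sketch; ExtendedUDDIClient, its publish method, and the metadata keys are hypothetical names for the capabilities the slide describes, not the registry's real API.

    // Hypothetical sketch only: publish a geospatial service with user-defined
    // (name, value) metadata and a lease (registry entry lifetime).
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class PublishWithMetadataExample {
        public static void main(String[] args) throws Exception {
            ExtendedUDDIClient registry = new ExtendedUDDIClient("http://registry.example.org/uddi");

            Map<String, String> metadata = new LinkedHashMap<>();
            metadata.put("serviceType", "WMS");            // functional metadata
            metadata.put("boundingBox", "-90,-180,90,180"); // geo-spatial metadata

            // The lease keeps registry information up to date: the entry expires
            // unless it is renewed before the lease runs out.
            registry.publish("http://gis.example.org/wms?wsdl", metadata, /*leaseSeconds=*/3600);
        }
    }

    // Minimal stub so the sketch is self-contained; a real client would issue
    // UDDI save_service style calls over SOAP.
    class ExtendedUDDIClient {
        ExtendedUDDIClient(String endpoint) { }
        void publish(String wsdlUrl, Map<String, String> metadata, long leaseSeconds) { }
    }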

  14. TupleSpaces Paradigm and JavaSpaces
  • TupleSpaces [Gelernter-99]
    • a data-centric, asynchronous communication paradigm
    • communication units are tuples (data structures)
  • JavaSpaces [Sun Microsystems]
    • a Java-based, object-oriented implementation (see the sketch below)
    • spaces are transactionally secure
      • mutually exclusive access to objects
    • spaces are persistent
      • temporal and spatial uncoupling
    • spaces are associative
      • content-based search
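  A minimal JavaSpaces sketch of associative, content-based lookup with a tuple-like entry; it assumes a running space obtained elsewhere and the Jini libraries on the classpath, and the MetadataEntry type is an illustrative assumption.

    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // Tuples are plain objects with public fields; matching is associative:
    // non-null fields of a template must equal the corresponding entry fields.
    public class MetadataEntry implements Entry {
        public String key;    // e.g. a context key
        public String value;  // e.g. a small metadata value
        public MetadataEntry() { }                       // required no-arg constructor
        public MetadataEntry(String k, String v) { key = k; value = v; }
    }

    class SpaceExample {
        static void demo(JavaSpace space) throws Exception {
            // Write a tuple into the space with an (effectively) unlimited lease.
            space.write(new MetadataEntry("session-x/uri", "http://example.org/out.gml"),
                        null, Lease.FOREVER);

            // Content-based lookup: template with key set, value left null (wildcard).
            MetadataEntry template = new MetadataEntry("session-x/uri", null);
            MetadataEntry found = (MetadataEntry) space.read(template, null, 5000);
            System.out.println(found == null ? "no match" : found.value);
        }
    }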

  15. Publish/Subscribe Paradigm and NaradaBrokering
  • Publish-subscribe communication paradigm
    • Message-based asynchronous communication
    • Participants are decoupled both in space and in time
  • Open source NaradaBrokering software
    • topic-based publish/subscribe messaging system (a topic publish/subscribe sketch follows below)
    • runs on a network of cooperating broker nodes
    • provides support for a variety of qualities of service, such as low latency, reliable message delivery, support for multiple transfer protocols, security, and so forth
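  For illustration only, a minimal topic-based publish/subscribe sketch written against the standard JMS API (which NaradaBrokering is reported to support alongside its native API); the JNDI names, topic name, and message payload are assumptions.

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class TopicPubSubExample {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();   // broker-specific configuration assumed
            TopicConnectionFactory factory =
                (TopicConnectionFactory) jndi.lookup("TopicConnectionFactory");
            Topic topic = (Topic) jndi.lookup("context/updates");   // assumed topic name

            TopicConnection connection = factory.createTopicConnection();
            TopicSession session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);

            // Subscriber: decoupled from the publisher in both space and time.
            TopicSubscriber subscriber = session.createSubscriber(topic);
            subscriber.setMessageListener(msg -> {
                try { System.out.println(((TextMessage) msg).getText()); }
                catch (JMSException e) { e.printStackTrace(); }
            });
            connection.start();

            // Publisher: send a small context update on the topic.
            TopicPublisher publisher = session.createPublisher(topic);
            publisher.publish(session.createTextMessage("context-id=abc; content=http://.../out.gml"));
        }
    }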

  16. Caching Strategy
  • Integrated caching capability for both UDDI-type and WS-Context-type metadata
    • a lightweight implementation of JavaSpaces
    • data sharing, associative lookup, and persistency
    • supports both WS-Context-type and common UDDI-type standard operations
  • The system stores all keys and modest-size values in memory, while large values are stored in the database (a sketch of this policy follows below).
    • We assume that today's servers are capable of holding such small metadata in cache.
    • All accesses to modest-size metadata happen in memory.
  • WS-Context-type metadata is backed up to a MySQL database, while UDDI-type metadata is written to the Extended UDDI registry periodically for persistency.
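  A rough sketch, under assumed names, of the policy just described: keys and modest-size values stay in memory, large values go straight to the database, and the in-memory state is backed up periodically. The DatabaseBackend interface and the 10 KB cut-off are assumptions for this sketch.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    interface DatabaseBackend {                 // assumed persistence interface
        void store(String key, String value);
        String load(String key);
    }

    public class MetadataCache {
        private static final int LARGE_VALUE_BYTES = 10 * 1024;   // assumed cut-off
        private final Map<String, String> memory = new ConcurrentHashMap<>();
        private final DatabaseBackend db;

        public MetadataCache(DatabaseBackend db, long backupFrequencySeconds) {
            this.db = db;
            // Periodic backup of in-memory metadata for persistency (the experiments
            // on the following slides use a 10-second backup frequency).
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(
                () -> memory.forEach((k, v) -> { if (!"@db".equals(v)) db.store(k, v); }),
                backupFrequencySeconds, backupFrequencySeconds, TimeUnit.SECONDS);
        }

        public void put(String key, String value) {
            if (value.length() > LARGE_VALUE_BYTES) {
                db.store(key, value);           // large values go straight to the database
                memory.put(key, "@db");         // keep only the key (plus a marker) in memory
            } else {
                memory.put(key, value);         // modest-size values stay in memory
            }
        }

        public String get(String key) {
            String v = memory.get(key);
            return "@db".equals(v) ? db.load(key) : v;
        }
    }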

  17. Performance Model and Measurements
  Test setup: P4, 3.4 GHz, 1 GB memory, Java SDK 1.4.2; client and services on the same machine.
  Simulation parameters: metadata size 1.7 KB; registry size 500 services; inquiry type UDDI-query; 200 observations.
  Results:
  • Hybrid WS-Context inquiry: average 12.29 ± 0.02 ms, standard deviation 0.48 ms
  • Extended UDDI inquiry: average 17.68 ± 0.06 ms, standard deviation 0.84 ms

  18. Hybrid WS-Context Caching Approach: Persistency Investigation
  Simulation parameters: metadata size 1.7 KB; 200 observations.
  • The figure shows the average execution time for varying backup frequencies.
  • Performance remains stable up to a backup frequency of every 10 seconds.

  19. Hybrid WS-Context Caching Approach: Performance Investigation
  Simulation parameters: backup frequency every 10 seconds; metadata size 1.7 KB; registry size 5000 metadata entries; 200 observations.
  • 49% performance gain in inquiry functions and 53% performance gain in publication functions compared to the database-only solution.
  • System processing overhead is less than 1 millisecond.

  20. Hybrid WS-Context Caching Approach: Message Rate Scalability Investigation
  Simulation parameters: backup frequency every 10 seconds; metadata size 1.7 KB; registry size 100 metadata entries.
  • The figure shows the system's behavior under increasing message rates.
  • The system scales up to 940 inquiry messages/second and 480 publication messages/second.

  21. Hybrid WS-Context Caching Approach: Message Size Scalability Investigation
  Simulation parameters: backup frequency every 10 seconds; registry size 5000 metadata entries; 200 observations.
  • The first figure shows the system's behavior under increasing message sizes. The system performs well for small contexts: performance remains the same for context payloads between 100 bytes and 10 KB.
  • The second figure shows the behavior for message sizes between 10 KB and 100 KB. The system spends an additional ~7 ms to store large values in the database.

  22. Access: Request Distribution
  • Pub-sub system based message distribution
    • Broadcast-based request dissemination based on a hashing scheme (see the sketch below)
    • Keys are hashed to values (topics) that run from 1 to 1000
    • Each replica holder subscribes to the topics (hash values) of the keys it holds
    • Each access request is broadcast on the topic corresponding to the key
    • Replica holders unicast a response with a copy of the requested context
  • Advantages
    • does not flood the network with access request messages
    • does not require keeping track of the location of every single data item
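  A small sketch of the key-to-topic hashing described above; only the 1..1000 topic range comes from the slide, while the specific hash function is an assumption.

    public final class TopicHash {
        private static final int TOPIC_COUNT = 1000;

        // Map a context key to a pub-sub topic in the range 1..1000.
        public static int topicFor(String contextKey) {
            int h = contextKey.hashCode();
            return Math.floorMod(h, TOPIC_COUNT) + 1;   // floorMod avoids negative buckets
        }

        public static void main(String[] args) {
            // A replica holder subscribes to topicFor(k) for every key k it stores;
            // a requester broadcasts its access request on topicFor(k), and the
            // holders unicast back a copy of the requested context.
            System.out.println(TopicHash.topicFor("session-x/abcdef:012345"));
        }
    }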

  23. Access Distribution Experiment: Test Methodology
  Simulation parameters: backup frequency every 10 seconds; message size 2.7 KB.
  Total time = T1 + T2 + T3 (segments labeled in the figure).
  • The test consists of a NaradaBrokering server and two Hybrid WS-Context instances performing access request distribution.
  • We measure the average end-to-end metadata access time.
  • We run the system for 25000 observations.
  • Gridfarm and TeraGrid machines were used for testing.

  24. Distribution Experiment Results
  • One figure shows the time required for the various activities of access request distribution: the average overhead of distribution using the pub-sub system remains the same regardless of the network distances between the nodes.
  • The other figure shows average results for every 1000 observations, out of 25000 continuous observations: the average transfer time shows that continuous access distribution does not degrade performance.

  25. Optimizing Performance: Dynamic Migration/Replication
  • Dynamic migration/replication
    • A methodology for creating temporary copies of a context in the proximity of its requestors
  • Autonomous decisions
    • the replication decision belongs to the server
  • Algorithm based on [Rabinovich et al, 1999] (a decision-loop sketch follows below)
    • The system keeps a popularity record (number of access requests) for each copy and flushes it at regular time intervals
    • The system checks local data every so often for dynamic migration or replication
      • Unpopular server-initiated copies are deleted
      • Popular copies are migrated toward where they are requested
      • Very popular copies are replicated toward where they are requested
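  A rough sketch of the periodic decision loop just described, in the spirit of Rabinovich et al.; the thresholds reuse values from the experiment on the next slide, but the helper types, the choice of the busiest requester as the target, and the way the two thresholds split the delete/migrate/replicate cases are assumptions made for this sketch.

    import java.util.List;

    // Illustrative periodic decision over locally held copies; Copy, deleteCopy,
    // migrateTo, and replicateTo are assumed helpers, not the system's real API.
    public class ReplicationManager {
        // Thresholds taken from the dynamic-replication experiment parameters.
        static final double DELETION_THRESHOLD = 0.03;     // requests/second
        static final double REPLICATION_THRESHOLD = 0.18;  // requests/second
        static final double DECISION_INTERVAL = 100.0;     // seconds

        static class Copy {
            String key;
            boolean serverInitiated;     // temporary copy created by the algorithm
            long requestsSinceLastCheck; // popularity counter, flushed each interval
            String busiestRequester;     // node issuing most of the requests
        }

        void decide(List<Copy> localCopies) {
            for (Copy c : localCopies) {
                double rate = c.requestsSinceLastCheck / DECISION_INTERVAL;
                if (c.serverInitiated && rate < DELETION_THRESHOLD) {
                    deleteCopy(c);                        // unpopular temporary copy
                } else if (rate >= REPLICATION_THRESHOLD) {
                    replicateTo(c, c.busiestRequester);   // very popular: replicate toward demand
                } else if (rate >= DELETION_THRESHOLD) {
                    migrateTo(c, c.busiestRequester);     // popular: move toward demand
                }
                c.requestsSinceLastCheck = 0;             // flush the popularity record
            }
        }

        void deleteCopy(Copy c) { /* assumed helper */ }
        void migrateTo(Copy c, String node) { /* assumed helper */ }
        void replicateTo(Copy c, String node) { /* assumed helper */ }
    }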

  26. Dynamic Replication Performance: Test Methodology
  Simulation parameters: message size / message rate 2.7 KB / 10 msg/sec; replication decision frequency every 100 seconds; deletion threshold 0.03 requests/second; replication threshold 0.18 requests/second; registry size 1000 metadata entries, located in Indianapolis.
  Total time = T1 + T2 + T3 (segments labeled in the figure).
  • The test consists of a NaradaBrokering server and two Hybrid WS-Context instances performing access request distribution.
  • We measure the mean end-to-end metadata access time.
  • We run the system for approximately 45 minutes on Gridfarm and complexity machines.

  27. Dynamic Replication Performance: Results
  • The figure shows average results for every 100 seconds.
  • The decrease in average latency shows that the algorithm succeeds in moving replica copies toward where they are requested.

  28. Storage: Replica Content Placement
  • Pub-sub system for replica content placement
    • Each node keeps a Replica Server Map
    • A newly joining node sends a multicast probe message when it joins the network
    • Each network node responds with a unicast message to make itself discoverable
  • Selection of replica server(s) for content placement (see the sketch below)
    • Select nodes based on a proximity weighting factor
  • Sending the storage request to the selected replica servers
    • 1st step: the initiator unicasts a storage request to each selected replica server
    • 2nd step: the recipient server stores the context and becomes a subscriber to the topic of that context
    • 3rd step: an acknowledgement is sent (unicast) to the initiator
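  A sketch of proximity-weighted replica server selection as described above; the ReplicaServer record, the use of a latency-like weight, and picking the k most proximate nodes are illustrative assumptions rather than the system's actual policy.

    import java.util.Comparator;
    import java.util.List;
    import java.util.stream.Collectors;

    public class ReplicaPlacement {
        static class ReplicaServer {
            String endpoint;
            double proximityWeight;   // e.g. a smoothed round-trip latency to this node
            ReplicaServer(String e, double w) { endpoint = e; proximityWeight = w; }
        }

        // Choose the k most proximate servers from the Replica Server Map to receive
        // a copy of the context. The initiator then unicasts a storage request to each
        // selected server, which stores the context, subscribes to its topic, and
        // acknowledges back to the initiator.
        static List<ReplicaServer> selectTargets(List<ReplicaServer> replicaServerMap, int k) {
            return replicaServerMap.stream()
                    .sorted(Comparator.comparingDouble(s -> s.proximityWeight))
                    .limit(k)
                    .collect(Collectors.toList());
        }
    }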

  29. Fault-Tolerance Experiment: Testing Setup
  Simulation parameters: backup frequency every 10 seconds; message size 2.7 KB.
  • The test system consists of NaradaBrokering server(s) and four Hybrid WS-Context instances separated by significant network distances.
  • We measure the average end-to-end replica content creation time.
  • We run the system continuously for 25000 observations.
  • Gridfarm and TeraGrid machines were used for testing.

  30. Fault-Tolerance Experiment Results
  • One figure shows average results for every 1000 observations; the system was continuously tested for 25000 observations. The results indicate that continuous operation does not degrade performance.
  • The other figure shows the results gathered from the fault-tolerance experiments: the overhead of replica creation increases on the order of milliseconds as the fault-tolerance level increases.

  31. Consistency Enforcement
  • Pub-sub system for enforcing consistency
  • Primary-copy approach (a sketch follows below)
    • Updates to the same data item are carried out at a single server
    • NTP-based synchronized timestamps are used to impose an order on write operations on the same data item
  • Update distribution
    • 1st step: an update request is forwarded (unicast) to the primary-copy holder by the initiator
    • 2nd step: the primary-copy holder performs the update request and returns an acknowledgement
  • Update propagation
    • The primary copy pushes (broadcasts) updates of a context on the topic (hash value) corresponding to the key of the context, if the primary copy detects that a stale copy exists in the system.
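  A minimal sketch, under assumed type and method names, of the primary-copy ordering rule: a write is applied only if its NTP-synchronized timestamp is newer than the stored one, and the update is then broadcast on the context's topic. The sketch broadcasts unconditionally, whereas the system propagates only when it detects a stale copy.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class PrimaryCopy {
        static class Versioned {
            final String value;
            final long ntpTimestampMillis;   // assigned from NTP-synchronized clocks
            Versioned(String v, long t) { value = v; ntpTimestampMillis = t; }
        }

        interface Broker { void publish(int topic, String message); }  // assumed pub-sub facade

        private final Map<String, Versioned> store = new ConcurrentHashMap<>();
        private final Broker broker;

        public PrimaryCopy(Broker broker) { this.broker = broker; }

        // Called when an update request is unicast to this primary-copy holder.
        public synchronized boolean update(String key, String value, long ntpTimestampMillis) {
            Versioned current = store.get(key);
            if (current != null && current.ntpTimestampMillis >= ntpTimestampMillis) {
                return false;  // older or concurrent write: reject to preserve the ordering
            }
            store.put(key, new Versioned(value, ntpTimestampMillis));
            // Propagate to replicas on the topic obtained by hashing the key
            // (see the access-distribution scheme on slide 22).
            broker.publish(Math.floorMod(key.hashCode(), 1000) + 1, key + "=" + value);
            return true;       // acknowledgement to the initiator
        }
    }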

  32. Consistency Enforcement Experiment: Test Methodology
  Simulation parameters: backup frequency every 10 seconds; message size 2.7 KB.
  Total time = T1 + T2 + T3 (segments labeled in the figure).
  • The test system consists of a NaradaBrokering server and two Hybrid WS-Context instances performing access request distribution.
  • We measure the average time required for enforcing consistency.
  • We run the system for 25000 observations.
  • Gridfarm and TeraGrid machines were used for testing.

  33. Consistency Enforcement Test Results
  • One figure shows the results gathered from the consistency experiments: the overhead of consistency enforcement is on the order of milliseconds, and the cost remains the same regardless of how the network nodes are distributed.
  • The other figure shows average results for every 1000 observations, out of 25000 continuous observations. The average transfer time shows that continuous operation does not degrade performance.

  34. Comparison of Experiment Results
  • The figure shows the results gathered from the distribution, fault-tolerance, and consistency experiments.
  • The results indicate that the overhead of integrating JavaSpaces with the pub-sub system for distribution, fault-tolerance, and consistency enforcement is on the order of milliseconds.

  35. Contribution
  • We have shown that communication among services can be achieved with efficient mediator metadata strategies.
    • Efficient mediator services allow us to perform collective operations, such as queries on subsets of all available metadata in a service conversation.
  • We have shown that an efficient decentralized metadata system can be built by integrating JavaSpaces with the publish/subscribe paradigm.
    • Fault-tolerance, distribution, and consistency can be achieved with only a few milliseconds of system processing overhead.
  • We have shown that adaptation to instantaneous changes in client demands can be achieved in decentralized metadata management.
  • We have introduced data models and programming interfaces that provide a uniform search interface over both interaction-independent and conversation-based service metadata.
