
Inter-Operating Grids through Delegated MatchMaking

Explore how inter-operating grids can efficiently share computing power and handle demand surges. Learn about the Delegated MatchMaking (DMM) mechanism, its benefits, and experimental results on achieving workload balance.



  1. 3rd Grid Initiative Summer School, EU-NCIT/UPB, Bucharest, Romania, 12 July 2007. Inter-Operating Grids through Delegated MatchMaking. Alexandru Iosup, Dick Epema, Hashim Mohamed, Mathieu Jan, Ozan Sonmez (Parallel and Distributed Systems Group, TU Delft); Todd Tannenbaum, Matt Farrellee, Miron Livny (CS Department, U. Wisconsin-Madison)

  2. Agenda
  • Context: our grid infrastructure
  • Inter-operating grids
    • Topology
    • Mechanisms
  • Conclusion

  3. Grid Infrastructure (GEANT link, 10 Gb/s, Jun '07)
  • DAS-2: homogeneous; 5 sites, 5 clusters; 200 nodes, dual 1 GHz Pentium III; Myrinet and SURFnet; PBS, then SGE
  • DAS-3: heterogeneous; 4 sites, 5 clusters; 260 nodes, 2x single/dual-core AMD Opteron; Myrinet/Gigabit Ethernet and SURFnet; SGE, then ?
  • Grid'5000: heterogeneous; 9 sites, 15 clusters; >2000 nodes; RENATER; OAR

  4. Inter-Operating Grids
  • More computing power
  • Handling demand surges
  • A good research environment

  5. Inter-Connecting Grids: Topology (1/4)
  • Grids do not share load
  • The user selects where to submit jobs
  • Q: Can the user select resources efficiently? What is the administrative overhead of maintaining multiple accounts?

  6. Inter-Connecting Grids: Topology (2/4)
  • Inter-connect any grid sites
  • Towards a full-mesh topology
  • Q: Too many redundant paths? Routing loops?

  7. Inter-Connecting Grids: Topology (3/4)
  • Create a grid root, which routes between grids
  • Fully hierarchical topology
  • Q: Who manages the root? Etc.

  8. Inter-Connecting Grids: Topology (4/4)
  • Inter-connect the existing grid roots
  • Not a fully hierarchical topology (sketched below)
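
To make this last topology option concrete, here is a minimal Python sketch. All names (GridSite, connect_roots, peer_roots) are illustrative, not from the deck or from Koala:

    class GridSite:
        """A node in a grid hierarchy: a cluster, a site, or a grid root."""
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent            # None for a grid root
            self.children = []
            self.peer_roots = []            # root-to-root links (slide 8)
            if parent is not None:
                parent.children.append(self)

    def connect_roots(root_a, root_b):
        """Inter-connect two existing grid roots: no global root is
        created, so nobody has to manage one (the question on slide 7)."""
        root_a.peer_roots.append(root_b)
        root_b.peer_roots.append(root_a)

    # Example: two independent hierarchies whose roots are linked.
    das3 = GridSite("DAS-3 root")
    g5k = GridSite("Grid'5000 root")
    for i in range(5):
        GridSite("das3-cluster-%d" % i, parent=das3)
    connect_roots(das3, g5k)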

  9. Inter-Connecting Grids: Mechanisms (1/3)
  • Independent Koala schedulers, no load sharing
  • Two choices:
    • Let the user select the grid
    • Assist the user's grid selection (observational scheduling; see the sketch below)
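
A minimal sketch of the second choice, observational scheduling: the scheduler does not move jobs, it only ranks grids by observed load so the user can pick one. The observe_load hook is an assumption, standing in for whatever monitoring the system actually uses:

    def suggest_grid(grids, observe_load):
        """Rank candidate grids by observed utilization (a value in
        [0, 1]) and suggest the least-loaded one; the final submission
        decision stays with the user."""
        return min(grids, key=observe_load)

    # Example with canned observations instead of a real monitor.
    load = {"DAS-2": 0.9, "DAS-3": 0.3, "Grid'5000": 0.6}
    print(suggest_grid(load, load.get))     # DAS-3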

  10. Inter-Connecting Grids: Mechanisms (2/3)
  • Policy: use a remote grid only if the local grid is saturated (sketched below)
  • Operation choices:
    • Koala operates on top of the local resource managers (OAR)
    • Koala deploys its own environment, bypassing OAR
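
A minimal sketch of this policy, assuming a per-grid utilization probe; the 0.8 saturation threshold is an illustrative value, not one given in the deck:

    def pick_grid(local_grid, remote_grids, utilization, saturated=0.8):
        """Use a remote grid only if the local grid is saturated
        (slide 10); otherwise always stay local. `utilization` maps a
        grid to its current load in [0, 1]."""
        if utilization(local_grid) < saturated:
            return local_grid
        # Local grid saturated: fall back to the least-loaded remote grid.
        return min(remote_grids, key=utilization)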

  11. Inter-Connecting Grids: Mechanisms (3/3)
  • Policy: use a remote grid only if the local grid is saturated
  • Dynamic environment build-up
  • Routing choices: to the parent, to the children, to the siblings (see the sketch below)
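
The routing choices can be sketched on top of the GridSite structure from the topology sketch above; which targets to try, and in which order, is a policy knob, and the order below is an assumption:

    def delegation_targets(site):
        """Candidate nodes to forward a request to when `site` cannot
        serve it locally: its children, then its siblings, then its
        parent (slide 11). Reuses GridSite from the earlier sketch."""
        targets = list(site.children)
        if site.parent is not None:
            targets += [s for s in site.parent.children if s is not site]
            targets.append(site.parent)
        return targets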

  12. Inter-Connecting Grids: Mechanisms (3/3): The Delegated MatchMaking Mechanism
  • Delegate resources, not jobs
  • Dynamic environment build-up
  • Peer-to-peer exchange (negotiation?)
  • Example: JM-1 obtains RM-1 from SM-1 (sketched below)
  • With M. Livny, T. Tannenbaum, M. Farrellee (U. Wisc.-Madison)
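
The key inversion in DMM is that the resource travels to the job's home environment instead of the job travelling to the resource. A minimal sketch of the slide's example, JM-1 obtaining RM-1 from SM-1; the class and method names are illustrative, not Koala's actual interfaces:

    from collections import deque

    class SiteManager:
        """Owns idle resources and can delegate one to a requester."""
        def __init__(self, idle_resources):
            self.idle = deque(idle_resources)
        def delegate_resource(self):
            return self.idle.popleft() if self.idle else None

    class JobManager:
        """Queues jobs and matches them against its local resource pool."""
        def __init__(self, jobs):
            self.jobs = deque(jobs)
            self.pool = deque()
        def delegated_matchmaking(self, site_manager):
            """One DMM step: obtain a resource from a remote site
            manager, add it to the local pool, and match a local job to
            it. The job itself never migrates to the remote site."""
            resource = site_manager.delegate_resource()
            if resource is None:
                return None                 # nothing to delegate
            self.pool.append(resource)
            return (self.jobs.popleft(), self.pool.popleft())

    # Slide 12's example: JM-1 obtains RM-1 from SM-1 and runs J-1 on it.
    jm1, sm1 = JobManager(["J-1"]), SiteManager(["RM-1"])
    print(jm1.delegated_matchmaking(sm1))   # ('J-1', 'RM-1')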

  13. Inter-Connecting Grids: Mechanisms (3/3): Why should DMM work?
  • Overall workload imbalance: normalized daily load, up to 5:1
  • Temporary workload imbalance: hourly load, up to 1000:1 (both ratios illustrated below)
  [Figure: normalized load over time; panels: overall imbalance, temporary imbalance]
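
These two ratios are what make delegation attractive: averaged per day the grids look similar, but in any given hour one site can be nearly idle while another is overloaded, so there is almost always spare capacity somewhere. A small sketch of the computation, on synthetic numbers (not the deck's traces):

    def imbalance_ratio(loads):
        """Ratio between the most- and least-loaded site; `loads` holds
        one positive load sample per site."""
        return max(loads) / min(loads)

    # Synthetic illustration: daily averages are close together,
    # hourly snapshots are not.
    daily = [50, 40, 25, 10]            # per-site load, daily average
    hourly = [200, 30, 2, 0.2]          # per-site load, one hour
    print(imbalance_ratio(daily))       # 5.0    -> the ~5:1 of the slide
    print(imbalance_ratio(hourly))      # 1000.0 -> the ~1000:1 of the slide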

  14. Inter-Connecting Grids: Mechanisms (3/3): Experimental Results: Performance
  • DMM:
    • High goodput
    • Low wait time
    • Finishes all jobs
  • Even better under load imbalance between grids

  15. Inter-Connecting Grids: Mechanisms (3/3): Experimental Results: Overhead
  • DMM:
    • Overhead ~16%
    • 93% more control messages
    • Constant number of delegations per job up to 80% load
  • A DMM threshold is used to control overhead (one reading sketched below)
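
The deck does not define the DMM threshold; one plausible reading, sketched here purely as an assumption, is a cap on how many times a single request may be re-delegated, which bounds the extra control messages:

    def may_delegate(hops_so_far, max_hops=3):
        """Stop forwarding a request after a fixed number of delegation
        hops, trading load-balancing quality for fewer control messages.
        The cap of 3 is illustrative; the deck gives no value, only that
        a threshold controls overhead."""
        return hops_so_far < max_hops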

  16. Conclusion: Inter-operating DAS-2, DAS-3, and Grid'5000
  • First steps accomplished:
    • Koala (inter-)operates DAS-2 and DAS-3
    • GRAM interface to OAR
    • Also: analysis of Grid'5000 usage patterns
  • On-going work:
    • Koala operating over OAR
    • A "DAS-2" image on Grid'5000: Globus, KOALA, OAR
    • DMM implementation and experimental validation
  • Future work:
    • Experiments
    • DMM research, SLAs
    • Virtualization, more application types

  17. Information
  • Publications: see the PDS publication database at www.pds.ewi.tudelft.nl
  • Web site: KOALA, www.st.ewi.tudelft.nl/koala
