This lecture provides an overview of the concepts and components of a Grid, including information models, the GLUE 2.0 schema, the information system, and the new BDII. It also touches on cross-organizational grids, volunteer computing, campus grids, intra-organizational grids, clusters, cloud computing, data centers, and virtualization.
Information Dump: White Areas Lecture, Laurence Field, 30th January 2009
Overview • What is a Grid? • Information Models • Glue 2.0 • The Information System • The New BDII • GStat 2.0
What is a Grid? • Cross-organizational Grids • Volunteer Computing • Campus Grids • Intra-organizational Grids • Clusters • Cloud Computing • Data Centers • Virtualization
What is the problem? [Diagram: two separate administrative domains, Organization A and Organization B] • Organizations A and B are administrative domains • Independent policies, systems and authentication mechanisms • Users have local access to their local system using local methods • Users from A wish to collaborate with users from B • Pool the resources • Split tasks by specialty • Share common frameworks
The Solution: a Virtual Organization [Diagram: Organizations A and B supporting a shared Virtual Organization] • The users from A and B create a Virtual Organization (VO) • Users keep a unique identity but also carry the identity of the VO • Organizations A and B support the Virtual Organization • Place "grid" interfaces at the organizational boundary • These map the generic "grid" functions/information/credentials • to the local security functions/information/credentials • Multi-institutional e-Science infrastructures
The Information System [Diagram: users and service operations interact with Organizations A and B through the Information System]
Information Model • Abstract description of data • Description of values which are identified by attributes • Description of attribute groupings • Description of relationships between groupings • Data → Information → Knowledge • Information model turns data into information • Existence, Description, State • Describes the components in a grid infrastructure • and hence the grid itself • The Data Model is the implementation • LDAP, XML, Relational etc.
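As a hedged illustration of the split between information model and data model, the same abstract entity can be rendered into more than one data model. The sketch below uses Glue-1.x-style attribute names; the DN and the values are invented for the example.

    # A minimal sketch: one abstract entity rendered as two data models.
    entity = {
        "dn": "GlueCEUniqueID=ce.example.org:2119/jobmanager-pbs-short,mds-vo-name=local,o=grid",
        "GlueCEUniqueID": "ce.example.org:2119/jobmanager-pbs-short",
        "GlueCEStateWaitingJobs": 12,
    }

    # LDAP data model: render the entity as LDIF text
    ldif = "dn: %s\n" % entity["dn"] + "".join(
        "%s: %s\n" % (k, v) for k, v in entity.items() if k != "dn")
    print(ldif)

    # Relational data model: the same information as a table row
    row = (entity["GlueCEUniqueID"], entity["GlueCEStateWaitingJobs"])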
The Original MDS 2.x Schema http://www.globus.org/toolkit/docs/2.4/mds/Schema.html
European DataGrid Project • Found that the MDS schema was not sufficient for its needs • Each functional area defined its own sub-schema • Workload management, data management, fabric management, • data storage and network monitoring • Introduced the Computing Element (CE) entity, which described • the GRAM endpoint • the batch system • the state behind the endpoint • and a simple description of the resource (homogeneous cluster) • Introduced the Storage Element (SE) entity, which described • the storage endpoint • plus the Storage Element Protocol entity
Nordugrid • The Nordugrid project started in May 2001 • Aimed to build a Nordic testbed • for wide-area computing and data handling http://www.nordugrid.org/documents/arc_infosys.pdf
The World Wide Testbed • A 2002 DataTAG initiative to create a worldwide Grid testbed • Comprised • 8 European sites using the EDG 1.2 release • 9 U.S. sites using the VDT 1.1.3 release • The EDG release contained additional information providers • which were not available in the VDT release • This information was essential for the Resource Broker to function • The information providers were therefore installed on all the US sites • An example of interoperability using the parallel deployment model
Origins and Aims • GLUE: Grid Laboratory Uniform Environment • Started in April 2002 • Joint activity between EU-DataTAG, US-iVDGL and EDG • Focused on interoperability between US and EU HEP projects • Aimed to provide a common schema to facilitate interoperation • Initial versions • v1.0 (released Nov 2002) • v1.1 (released April 2003) • HEP-driven revisions • v1.2 (released Dec 2005) • v1.3 (released Oct 2006)
OSG and GLUE v1.2 • Both EGEE and OSG used GLUE v1.1 • OSG (MDS + GLUE + their own Grid3 schema) • EGEE (GLUE + their own extensions) • Relying on custom extensions breaks interoperability • Additional use cases needed to be added to GLUE • A proposal for GLUE v1.2 was discussed • An incremental approach was taken • Only make minimal changes • Only solve problems found in deployment • Ensure backwards compatibility • For non-backwards-compatible changes • Introduced the idea of defining Glue 2.0 at a future date
GLUE v1.3 • Last-minute changes for LHC start-up • Could not wait for Glue 2.0 • Main focus was SRM 2.x • Meeting in October 2006 to discuss proposed changes • 44 suggested changes: 30 accepted, 8 rejected and 5 duplicates • Version 1.3 deployed at the beginning of 2007 • Ongoing migration with respect to usage • No requirement for v1.4 • Suggests that things are not too bad • No blocking issues that urgently require a schema change • Proved useful in interoperation activities • OSG, NDGF, gin-info, Unicore, NAREGI etc. • Interpretation of the schema has been tightened • The understanding of the schema has improved • Many additional documents describe usage
Moving into the Open Grid Forum • Conceptual and structural changes were left for Glue 2.0 • Discussion on GLUE 2.0 at the Oct 2006 meeting in London • Decision made to define Glue 2.0 within OGF • Improves the acceptance of GLUE by other communities • The OGF process should not create too much overhead • GLUE-WG started in Jan 2007 at OGF19 • Building on the 4 years of existing work • Positive outcomes • GLUE widely accepted within OGF • Seen as an important contribution • GridForge helped coordinate the activity • Broad viewpoints limited assumptions • Increased participation from other projects • And hence acceptance by those projects
Glue 2.0 Introduction • Glue Schema Working Group created in the Open Grid Forum • Need demonstrated through the GIN activities • Build upon existing experience • Consolidate over 4 years of production feedback • Focus on use cases actually seen, not just envisaged • Cross-Grid use cases • Define an abstract information model • And a number of renderings: LDAP, XML, Relational, CIM etc. • Start with abstract core concepts • Evolve into specific service types • Ensure participation from existing production infrastructures
Glue 2.0 Key Concepts [Diagram, built up over four slides] • An Admin Domain provides Resources, which a User Domain utilizes • A Manager manages the Resources; Shares are defined on the Resources • The User Domain negotiates Shares with the Admin Domain • Users contact a Service through its End Points; an Access Policy controls End Point access • A Mapping Policy maps users to a Share • The Service runs Activities on the users' behalf
Glue 2.0 Computing Schema [Diagram: the computing realization of the key concepts. A Computing Service exposes Computing End Points and Computing Shares; a Computing Manager manages Execution Environments, which can use Application Environments; Computing Shares are defined on Execution Environments; users are mapped to Shares; Computing Activities run via the End Points and are mapped to Shares]
Glue 2.0 Storage Schema [Diagram: the storage realization of the key concepts. A Storage Service exposes Storage End Points and Storage Shares and offers Storage Access Protocols; a Storage Manager manages Storage Resources, which have Storage Capacity; Storage Shares are defined on Storage Resources and have Share Capacity; users are mapped to Shares]
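Tying these slides together, here is a minimal sketch of the core entities and relations as Python dataclasses. Class and field names follow the slides; the attributes are illustrative, not taken from the specification.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Resource:
        name: str                  # provided by the Admin Domain

    @dataclass
    class Share:
        resources: List[Resource]  # a Share is defined on Resources

    @dataclass
    class Endpoint:
        url: str                   # users contact the Endpoint
        access_policy: List[str]   # the Endpoint has an Access Policy

    @dataclass
    class Activity:
        owner: str                 # a unit of work run for a user

    @dataclass
    class Service:
        # A Service groups End Points and Shares and runs Activities;
        # a Manager (not shown) manages the underlying Resources.
        endpoints: List[Endpoint]
        shares: List[Share]
        activities: List[Activity]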
Glue 2.0 Timeline • Oct 2006, Decision taken to move into OGF • Jan 2007 (OGF 19), First working group meeting • June 2008 (OGF 23), Spec. entered public comment • Aug 2008, Public comment period ended • Nov 2008, Started addressing comments • Jan 2009, Final spec. ready? • Mar 2009, Glue 2.0 an official OGF specification? • 1st April 2009, Start work on Glue 2.1
Proposed Roll-Out Plan • Create a hybrid schema file with both v1.3 and v2.0 • Deploy across the infrastructure • Should have negligible side effects • Est. 3-6 months after the specification is fixed • Update the information providers • Publish Glue 2.0 information in addition to Glue 1.3 • Deploy across the infrastructure • Est. 4-12 months after the specification is fixed • Update software and tooling as necessary • Est. 6-36 months after the specification is fixed • Remove the Glue 1.3 providers when no longer required • Est. 36-?? months after the specification is fixed
Some Statistics • 45 phone conferences • 1.5 hours each ~ 3 days of talking • 5 people participating ~ 2 months FTE invested in total • Split between projects (EGEE, WLCG, TeraGrid, Nordugrid, DEISA) • This does not include the time invested by the editor (OMII-Europe) • 40 versions of the document • 347 days between the first conference and the initial specification • 46 pages, 12787 words • Document updated nearly every week • 254 attributes • 28 objects • Four different renderings • LDIF, XML, Relational and CIM
Globus MDS v2 • Monitoring and Discovery Service (MDS) • http://www.globus.org/toolkit/docs/2.4/mds/ • Information Providers (IP) • Scripts that gather the information and return LDIF • Grid Resource Information Service (GRIS) • Daemon that runs the IPs and answers LDAP queries • Registers with a GIIS • Grid Information Index Service (GIIS) • Answers LDAP queries by querying the registered GRISes or GIISes • Both the GRIS and the GIIS have a 30s cache • To reduce load and improve performance
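Both services speak plain LDAP, so a GRIS can be queried directly. A minimal python-ldap sketch is below; the hostname is invented, while port 2135 and the mds-vo-name=local,o=grid base were the usual MDS defaults.

    import ldap  # python-ldap

    # A GRIS conventionally listened on port 2135
    conn = ldap.initialize("ldap://ce.example.org:2135")
    conn.simple_bind_s()  # anonymous bind

    # Fetch every entry below the base; filter and base are illustrative
    for dn, attrs in conn.search_s("mds-vo-name=local,o=grid",
                                   ldap.SCOPE_SUBTREE,
                                   "(objectClass=*)"):
        print(dn)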
Original MDS Deployment [Diagram: queries go to a top-level GIIS, which aggregates site GIISes; each site GIIS aggregates GRISes, each fed by its information providers]
The BDII • Berkeley Database Information Index • Standard OpenLDAP server • Updated by a Perl process • Using LDAP URLs (ldapsearch) (GIIS mode) • From a script (information provider) (GRIS mode) • Why? • Because MDS didn't work in a distributed environment • Originally did not scale past 4 sites • 1 broken worker node could bring down the whole system! • MDS was the problem, not LDAP • BDII first used as the top-level GIIS • Now used at the site and resource level
Information System Architecture [Diagram: queries go to a top-level BDII, which aggregates site BDIIs; each site BDII aggregates resource BDIIs and GRISes, which run the information providers]
BDII Internals [Diagram: three LDAP database instances on ports 2171-2173 sit behind a port forwarder on port 2170; FCR data and provider output are written to a cache, the write DB is updated and modified, and the DBs are then swapped] • Multiple DB instances are used to increase performance • Read-only, write-only, and one spare so that in-flight queries can finish • This functionality is enabled by the port forwarder • The list of sources to query comes from a local file • Which can be updated from a web page • A local LDIF file can also be used to modify the DB after population • Which can likewise be updated from a web page
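A minimal sketch of the rotation idea, with the ports and roles as on the slide; the real logic lives in the BDII update process, so this is only illustrative.

    # Three LDAP instances on ports 2171-2173 rotate between roles;
    # the port forwarder on 2170 always points at the current "read" DB.
    ports = [2171, 2172, 2173]
    roles = {"read": 0, "write": 1, "spare": 2}  # indices into ports

    def swap():
        # After an update cycle the freshly written DB becomes the read DB,
        # the old read DB becomes the spare so in-flight queries can finish,
        # and the old spare becomes the next write target.
        roles["read"], roles["write"], roles["spare"] = (
            roles["write"], roles["spare"], roles["read"])

    def forward_target():
        # The forwarder on port 2170 is re-pointed here after each swap
        return ports[roles["read"]]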
Load Balanced BDII [Diagram: a DNS round-robin alias distributes queries across several BDII instances, each listening on port 2170]
Freedom of Choice (FCR) • Developed to meet a requirement from the VOs • Modifies the information to their liking • White-lists and black-lists services • Only the VO manager can white-list and black-list services • Generates an LDIF modify file • Web based • The BDII can be configured to use this file • It will modify the database after population • For use only with top-level BDIIs • Linked with the Site Functional Tests portal • Can automatically remove a site if it fails the functional tests • It's the VO's choice
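As a hedged illustration, a black-listed CE could be dropped with a standard LDIF delete record of the kind such a file contains; the snippet below builds one in Python, with the DN invented for the example.

    import textwrap

    # Illustrative FCR-style LDIF change record (not an actual record from
    # the service): applied after population, it removes a black-listed CE.
    fcr_ldif = textwrap.dedent("""\
        dn: GlueCEUniqueID=ce.example.org:2119/jobmanager-pbs-short,mds-vo-name=local,o=grid
        changetype: delete
    """)
    print(fcr_ldif)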
Generic Information Provider (GIP) [Diagram: the GIP combines providers, plugins, static LDIF and a cache, driven by a config file] • Provides information about a grid service • Outputs LDIF conforming to the Glue Schema on stdout • Information can be provided by • dynamic providers from the provider directory • static files from the ldif directory • dynamic plugins from the plugin directory • A cache is used to improve efficiency and reduce load
Generic Information Provider: Update Flow [Flow: read the config file → read the static LDIF → fork the providers and plugins, each writing its output to the cache (hung processes are timed out) → wait up to the response time → read the provider and plugin output back from the cache, using cached values if still fresh → apply the plugin output as LDAP_MODIFY changes → print the result to stdout]
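A minimal Python sketch of the provider-forking part of that flow; the directory path, timeout and freshness window are assumptions, and the real GIP is a more involved script.

    import glob, subprocess, time

    CACHE = {}      # provider path -> (timestamp, ldif output)
    TIMEOUT = 10    # seconds before a hung provider is killed
    FRESH = 300     # cached output younger than this is reused

    def run_provider(path):
        """Run one provider, falling back to cached LDIF on failure."""
        try:
            out = subprocess.run([path], capture_output=True, text=True,
                                 timeout=TIMEOUT, check=True).stdout
            CACHE[path] = (time.time(), out)
            return out
        except (subprocess.SubprocessError, OSError):
            ts, out = CACHE.get(path, (0, ""))
            return out if time.time() - ts < FRESH else ""

    # Static LDIF plus the output of every provider goes to stdout
    for provider in sorted(glob.glob("/opt/glite/etc/gip/provider/*")):
        print(run_provider(provider), end="")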
User Tools • lcg-infosites and lcg-info • Can be used to query the information system • For more information see the User Guide • https://edms.cern.ch/file/722398//gLite-3-UserGuide.pdf • lcg-ManageVOTag • Used by the VOs to publish software environment tags • Publishes to /opt/edg/var/info/<VO>/<VO>.list • Ensure the VO can write here! • Used by the plugin glite-info-dynamic-software-wrapper
Observations • Problems observed in the information system are • not always due to the information system • It is just where the problem becomes visible • Many problems are at the information-provider level • Due to either poor configuration • or poor fabric management affecting the information providers • Scalability and stability • Top-level BDIIs can become oversubscribed • BDIIs take too much time and too many resources (CPU/RAM) to update • Production problems are difficult to trace • Requires more instrumentation in the code • BDIIs don't work over low-bandwidth connections
Investigations • Stress Testing • ldapbench
The New BDII v5 • Use only one LDAP database • Reduces complexity and relies on the stability of OpenLDAP • Only do differential updates • Reduces the write interaction and the update time • Merge the GIP and the BDII • Do LDAP_ADD and LDAP_MODIFY in only one place • Remove all internal caches • The database is the cache! • Improved logging • Using the standard Python logger • More statistics, available remotely • Do more with less (KISS)!
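A minimal sketch of the differential-update idea, with entries simplified to a DN-to-attributes mapping; the real BDII compares freshly generated LDIF against the current database content.

    def diff(old, new):
        """Compute the LDAP operations that turn `old` into `new`."""
        adds = {dn: e for dn, e in new.items() if dn not in old}
        deletes = [dn for dn in old if dn not in new]
        modifies = {dn: e for dn, e in new.items()
                    if dn in old and old[dn] != e}
        return adds, modifies, deletes

    old = {"o=grid": {"objectClass": ["organization"]},
           "mds-vo-name=siteA,o=grid": {"GlueSiteName": ["siteA"]}}
    new = {"o=grid": {"objectClass": ["organization"]},
           "mds-vo-name=siteA,o=grid": {"GlueSiteName": ["SiteA"]}}

    # Only the changed entry yields an LDAP_MODIFY; nothing is re-added.
    print(diff(old, new))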
New Architecture [Diagram: providers and plugins generate new LDIF (LDAP_ADD records); the new LDIF is merged, compared against the current content, and applied to the single BDII LDAP database on port 2170 as LDAP_ADD and LDAP_MODIFY updates; queries are answered directly from that database]
Future Work • Reducing the network load • Investigate the use of syncrepl • Update static information less frequently • Reducing the query load • Query caching on the WN • lcg-utils and Service Discovery API • Failover queries • Local cache • Site level • Top levels
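A minimal sketch of the failover idea from a client's point of view; the URLs are invented and python-ldap is assumed.

    import ldap  # python-ldap

    # Ordered preference: site-level BDII first, then top-level BDIIs
    SOURCES = ["ldap://site-bdii.example.org:2170",
               "ldap://top-bdii-1.example.org:2170",
               "ldap://top-bdii-2.example.org:2170"]

    def query(base, filt):
        """Try each information source in turn until one answers."""
        for url in SOURCES:
            try:
                conn = ldap.initialize(url)
                conn.set_option(ldap.OPT_NETWORK_TIMEOUT, 5)
                return conn.search_s(base, ldap.SCOPE_SUBTREE, filt)
            except ldap.LDAPError:
                continue  # fall through to the next source
        raise RuntimeError("all information sources failed")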
Information Validation • It is important that the information is correct • Misconfigured sites have in the past • stopped services from running grid-wide! • caused black holes for job submission • Information must agree with the Glue Schema • http://forge.gridforum.org/sf/projects/glue-wg • And be accurate • Grid Status (gstat) does basic sanity checks for each site • http://goc.grid.sinica.edu.tw/gstat/ • The Grid Wiki gives solutions to common problems • http://goc.grid.sinica.edu.tw/gocwiki/FrontPage
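A hedged sketch of the kind of per-entry sanity check this implies; the attribute names follow Glue 1.x conventions, but the specific checks are invented for illustration.

    # Illustrative checks on one published GlueCE entry
    # (attrs: a dict of attribute name -> list of string values)
    def sanity_check(attrs):
        problems = []
        # A required attribute must be present
        if "GlueCEUniqueID" not in attrs:
            problems.append("missing GlueCEUniqueID")
        # 444444 is the conventional "unknown" placeholder in Glue 1.x
        if attrs.get("GlueCEStateWaitingJobs", ["0"])[0] == "444444":
            problems.append("waiting jobs unknown (444444)")
        return problems

    print(sanity_check({"GlueCEStateWaitingJobs": ["444444"]}))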
GStat 2.0 Core Concepts • Monitor and test the information system • Primary goals for GStat: • Detect faults in the information system • Validate the information content • Display useful information with different views • Build a sustainable architecture • Enabling decentralized operations • In a federated environment • Redesign GStat in a modular way • Reusable components • Multi-location (site/ROC) • Multi-application (certification/operations)