OSG AuthZ components
Gabriele Carcassi
Presented by Dane Skow
Status
• PRIMA/GUMS included in OSG Release 0.2 as an optional function
• All OSG VOs recommended to enable VOMS proxy generation (not required)
• Use and interpretation of VOMS proxies not widely understood, though heavily used in places (e.g. ATLAS production)
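For reference, a VOMS proxy is generated with the VOMS client rather than plain grid-proxy-init; the VO name and FQAN below are illustrative:
  voms-proxy-init -voms atlas
  voms-proxy-init -voms atlas:/atlas/usatlas/Role=production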
Full privilege scenario
[Diagram: the user runs voms-proxy-init against the VO's VOMS server at the submission site; at the execution site the Gatekeeper calls out to PRIMA, which queries the site GUMS server (fed by the VOs); grid3-user…txt is generated via gums-host]
✓ centralized mapping ✓ account pool ✓/✗ dynamic mappings (broken by accounting) ✓ role/group based mappings
Compatibility scenario
[Diagram: the user runs plain grid-proxy-init at the submission site; at the execution site the Gatekeeper uses a grid-mapfile; gums-host pulls both maps (grid-mapfile and grid3-user…txt) from the site GUMS server, which is fed by the VOs' VOMSAdmin servers]
✓ centralized mapping ✓ account pool ✗ dynamic mappings ✗ role/group based mappings
“Ye olde Grid3” setup
[Diagram: the user runs grid-proxy-init at the submission site; at the execution site edg-mkgridmap builds the grid-mapfile (and grid3-user…txt) directly from the VOs' VOMSAdmin servers; the Gatekeeper uses the grid-mapfile]
✗ centralized mapping ✗ account pool ✗ dynamic mappings ✗ role/group based mappings
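For reference, the grid-mapfile used in the last two scenarios is a flat list of "certificate DN → local account" entries; the DN and account below are made up:
  "/DC=org/DC=doegrids/OU=People/CN=Jane Doe 123456" usatlas1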
PRIMA module
• A C library that implements the gatekeeper authorization callout
  • Gets the credentials
  • Validates the certificate and attributes
  • Formats a SAML message and sends it to GUMS using the OGSA-AuthZ protocol
  • Parses the response
  • Returns the uid to the gatekeeper
• Distributed as part of the VDT
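The callout is hooked into the gatekeeper through the standard Globus authorization configuration file (/etc/grid-security/gsi-authz.conf); the library path and symbol name below are illustrative placeholders, the actual values ship with the VDT install:
  globus_mapping /opt/prima/lib/libprima_authz_module prima_authz_callout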
Details
• PRIMA currently sends only the first VOMS FQAN, not the whole list encoded in the certificate
• GUMS makes its decision based on that single FQAN only
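For reference, a VOMS FQAN is typically printed as /VO/group…/Role=role/Capability=capability (NULL when unset), and voms-proxy-info -fqan lists the FQANs contained in a proxy. A made-up example:
  /atlas/usatlas/Role=production/Capability=NULL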
Attribute verification
• PRIMA can verify the VOMS attributes, but typically we do not do that
  • In OSG we lack a mechanism to easily distribute the certificates of the VO servers
• GUMS verifies presence in the VO
  • It periodically downloads the full list of users from the VO server (it has to do that for map generation anyway)
  • This prevents forging membership in a fake VO
  • We foresee disabling this check if attribute verification is done at the gatekeeper end and no maps are needed
• Should attribute verification be delegated to the server?
PRIMA Complaints
• Mainly about the log
  • Error information is unclear (the actual GUMS errors are not passed through the protocol)
  • No single-line entry with all the information on success (there is one, but it lacks, for example, the FQAN)
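Purely as an illustration of what such a one-liner could contain (this is not actual PRIMA output), something like:
  2005-10-12 14:23:01 PRIMA: "/DC=org/DC=doegrids/OU=People/CN=Jane Doe 123456" FQAN=/atlas/usatlas/Role=production mapped to usatlas1 by https://gums.example.org:8443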
What is GUMS?
• GUMS manages the mapping from grid credentials to site credentials
• Centralized: one GUMS per site, one configuration file for all gatekeepers/services
• PDP: enforcement is done at the gatekeeper/service (through grid-mapfiles or callouts)
• Customizable: designed to be integrated with other site systems with little effort
Centralized management
• Designed by and for a site with a number of heterogeneous gatekeepers
  • For example, BNL GUMS serves more than 10 gatekeepers (4 from STAR, 1 PHENIX, 6 ATLAS) plus other ATLAS services (dCache, DIAL, …)
  • Some of these are OSG, some are test machines, some need special test maps, …
  • One place of configuration allows control and consistency
• For a small site with one gatekeeper and 20 nodes that is fine with a single account per VO, we currently recommend mapfiles and edg-mkgridmap (see the example below)
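edg-mkgridmap is driven by a small configuration file that lists VO membership sources and the local account (or pool prefix) to map them to; the entry below is illustrative and the endpoint is not a real server:
  group vomss://voms.example.org:8443/voms/atlas .atlas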
GUMS overview
[Diagram: GUMS runs in a Tomcat server behind the gLite trustmanager; it exposes an AuthZ web service (SAML + obligations over SOAP/HTTPS, used by PRIMA), an Admin web service on Axis (SOAP/HTTPS, used by the command line), and a JSP web UI (HTTPS, used from a web browser); behind them sit the business logic, a persistence layer (Hibernate, LDAP) backed by the GUMS DB, and an XML configuration file on the filesystem; membership data comes from the VOs' VOMS-Admin or LDAP servers. WS = Web Service, UI = User Interface]
GUMS Policy example
<gums>
  <persistanceFactories>
    <persistenceFactory name='mysql'
        className='gov.bnl.gums.hibernate.HIbernatePersistanceFactory' />
  </persistanceFactories>
  <groupMappings>
    <groupMapping name='usatlasPool'>
      <userGroup className='gov.bnl.gums.LDAPGroup'
          server='grid-vo.nikhef.nl'
          query='ou=usatlas,o=atlas,dc=eu-datagrid,dc=org'
          persistanceFactory='mysql' name='usatlas' />
      <compositeAccountMapping>
        <accountMapping className='gov.bnl.gums.ManualAccountMapper'
            persistanceFactory='mysql' name='bnlMapping' />
        <accountMapping className='gov.bnl.gums.AccountPoolMapper'
            persistanceFactory='mysql' name='bnlPool' />
        <accountMapping className='gov.bnl.gums.GroupAccountMapper'
            groupName='usatlas1' />
      </compositeAccountMapping>
    </groupMapping>
    <groupMapping name='star'>
      <userGroup className='gov.bnl.gums.VOMSGroup'
          url='https://vo.racf.bnl.gov:8443/edg-voms-admin/star/services/VOMSAdmin'
          persistanceFactory='mysql' name='star'
          sslCertfile='/etc/grid-security/hostcert.pem'
          sslKey='/etc/grid-security/hostkey.pem' />
      <compositeAccountMapping>
        <accountMapping className='gov.bnl.gums.ManualAccountMapper'
            persistanceFactory='mysql' name='bnlMapping' />
        <accountMapping className='gov.bnl.gums.NISAccountMapper'
            jndiNisUrl='nis://nis2.somewhere.com/rhic.bnl.gov' />
      </compositeAccountMapping>
    </groupMapping>
    …
  </groupMappings>
  <hostGroups>
    <hostGroup className="gov.bnl.gums.CertificateHostGroup"
        cn='star*.somewhere.gov' groups='star' />
    <hostGroup className="gov.bnl.gums.CertificateHostGroup"
        cn='gums.somewhere.gov' groups='star,phenix,usatlasPool' />
    …
  </hostGroups>
</gums>
GUMS Authorization
• A GUMS admin can perform any operation, through either the web service or the web UI
• A host can only perform read operations (map generation and mapping requests), and only for itself
• The configuration can be changed through the filesystem only (it is automatically reloaded when it changes)
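Those read operations are what the gums-host client tools exercise; the sub-command names below are illustrative and may differ between GUMS versions:
  gums-host mapUser "/DC=org/DC=doegrids/OU=People/CN=Jane Doe 123456"
  gums-host generateGridMapfile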
GUMS performance
• The BNL production server handles ~30 req/sec…
  • Not that good
  • It is not the bottleneck right now, as the production gatekeeper can only sustain ~5 req/sec
• Performance tests show that
  • the overall delay (client-server-client) is ~220 ms
  • the GUMS logic is responsible for up to 20 ms
  • the rest is plain Axis SOAP + SSL
  • it's not gLite trustmanager's fault either…
GUMS performance
• The JClarens group confirmed this while comparing SOAP with XML-RPC
  • XML-RPC without SSL: 373 req/sec – with SSL: 274
  • SOAP without SSL: 218 req/sec – with SSL: 23
  • 10 times slower!
  • Is it SOAP? Is it the Axis implementation?
• At least, GUMS can run on a cluster
  • All state resides in the database, transactions are used, no session transfer needed, no cluster cache needed
  • Almost all… the configuration file is on the filesystem and needs to be updated on all machines (at the same time)
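Putting the numbers together: at ~220 ms per round trip a single serial client would see only ~4-5 req/sec, so the ~30 req/sec measured at BNL implies roughly 6-7 requests being serviced concurrently; and in the JClarens comparison adding SSL costs SOAP nearly a factor of 10 (218 → 23 req/sec) while XML-RPC barely slows down (373 → 274 req/sec).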
GUMS Complaints
• The configuration file is difficult
  • It usually takes people a few tries to get it right
  • We should simplify it
  • We should probably have ways to "share" parts of it (contact a central location to get standard OSG group definitions?)
Storage AuthZ (not in prod)
[Diagram: at the execution site both the Gatekeeper (GRAM, gridFTP, via PRIMA) and SRM/dCache (via gPLAZMA and the Storage Authorization Service) speak SAML + obligations over SOAP/HTTPS to the site GUMS server; the Storage Authorization Service adds dCache-specific AuthZ parameters and is driven by an XACML policy]
Storage AuthZ
• gPlazma is the dCache authorization infrastructure; it can be configured to contact the Storage Authorization Service
  • Distributed as part of dCache, beta quality
• The Storage AuthZ Service speaks the same SAML that GUMS does and is configured with an XACML policy
  • It contacts GUMS to retrieve the mapping
  • It adds other AuthZ parameters (e.g. gid, user home path, …)
  • Prototype level
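As a rough sketch only (the key names and service path are assumptions, not taken from the dCache or GUMS documentation), enabling this on the dCache side amounts to switching on the SAML mapping in the gPlazma policy and pointing it at the site GUMS service, e.g.:
  saml-vo-mapping="ON"
  mappingServiceUrl="https://gums.example.org:8443/gums/services/GUMSAuthorizationServicePort"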
Other issues: maps
• GUMS can generate grid-mapfiles and also the inverse accounting map used by OSG accounting
• We want to move away from them: creating a map means evaluating the whole policy, which breaks dynamic account mapping (i.e. for an account pool we would have to assign accounts to everybody up front)
• Assumption: we believe that static inverse maps (uid -> DN) are not desirable
  • For example, what accounting really needs is a history of which uid was assigned to which DN, and that changes over time; it is better handled by a real-time log
Conclusions
• GUMS and PRIMA are deployed in production on a number of OSG sites
• The Privilege project depends on the following formats:
  • VOMS proxy format (PRIMA)
  • AuthZ request: SAML + obligations (everything)
• A requirements activity on Policy, Publication and Trust is just beginning (Stu Fuess, chair)