The Earth System Grid ----- Security to enable Access

Presentation Transcript


  1. The Earth System Grid ----- Security to enable Access • Frank Siebenlist • Argonne National Laboratory / University of Chicago • franks@mcs.anl.gov • NSF Cybersecurity Summit 2007; Arlington, VA - Feb 22-23, 2007

  2. Making Climate Simulation Data Available Globally [map: ESG Computational/Data Sites and Collaborators, including PMEL]

  3. The ESG Team • NCAR: David Brown, Luca Cinquini, Peter Fox, Jose’ Garcia, Rob Markel, Don Middleton (PI), Gary Strand • ORNL: Dave Bernholdt, Mei-Li Chen, Line Pouchard • NOAA/PMEL: Steve Hankin, Roland Schweitzer • USC/ISI: Ann Chervenak, Carl Kesselman, Rob Schuler • ANL: Ian T. Foster (PI), Frank Siebenlist, Dan Fraser, Veronika Nefedova • LBNL: Arie Shoshani, Alex Sim, Alex Romosan • LANL: Phil Jones • LLNL/PCMDI: Dean Williams (PI), Bob Drach

  4. ESG Architecture

  5. ESG Portal

  6. An Operational DataGrid for Climate Research

  7. An Operational DataGrid for IPCC

  8. Authentication, Authorization, Accounting/Metrics

  9. Virtual Data Services

  10. Moving Many Files: DML

  11. A Few Metrics • ESG General Climate Portal • 4,000 registrations • 160 TB of data available, 876 datasets and 840,000 files • 30 TB downloaded in 92K files + virtual data services • ESG IPCC Portal (Intergovernmental Panel on Climate Change) • 1,000 registered users • 35 TB of data available in 67K files • 125 TB downloaded in 548K files

  12. Towards Global Earth System Modeling • CCM3 at T170 resolution (about 70 km) • 1/10-degree POP Ocean Model • MOZART Chemistry Model

  13. PMEL ESG

  14. Inside TeraGrid: SAN + MSS, RAID + HPSS

  15. The Earth System Grid Center for Enabling Technologies, funded for 2006-2010 • Petascale distributed climate data • Global Grid of data producers (IPCC) • Model experiment environment • Analysis services (online & archive) • ESG-enabled analysis and visualization tools

  16. …ESG Security… …in the process of architecting the next phase… …reporting on design choices/challenges…

  17. “Client => Portal => Resource” Access [diagram: browser Client => Portal => Resource]

  18. “Client => Portal => Resource” Access as Portal-ID [diagram: browser Client => Portal => Resource; labels: Portal AuthN & AuthZ, Client AuthN, Client AuthZ] • As Portal-ID: • Resource only sees/knows AuthN’ed Portal-ID • Resource does not “know” the Client-ID • Resource enforces only the Portal-ID access policy • Fine-grained client AuthZ determined/enforced at the Portal (Client-ID only for audit)

  19. “Client => Portal => Resource” Access as Portal-ID on behalf of Client-ID [diagram: browser Client => Portal => Resource; labels: Portal AuthN, AuthZ & Client AuthZ, Client-ID, Client AuthN, Client AuthZ] • As Portal-ID on behalf of Client-ID: • Resource sees AuthN’ed Portal-ID • Resource sees UnAuthN’ed Client-ID • Resource trusts the Portal-ID to forward the Client’s request • No “cryptographic proof” of delegation • Client’s AuthZ determined/enforced at the Resource • (Client’s AuthZ also determined/enforced at the Portal)

  20. “Client => Portal => Resource” Access as Portal impersonating Client-ID [diagram: browser Client => Portal => Resource, plus Client Creds Svc; labels: Client Creds, Client AuthN & AuthZ, Client AuthN, Client AuthZ] • As Client-ID through Impersonation: • Portal maintains the client’s (proxy-)credentials • Resource only sees the Client-ID • Client’s AuthZ determined/enforced at the Resource • (Portal-ID only for audit)
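To make slide 20 concrete, here is a minimal sketch of the impersonation pattern: the portal holds a short-lived proxy credential per client (e.g. obtained from a credential service such as MyProxy) and presents it, rather than its own identity, when calling the resource. This is an illustration only; the endpoints, the on-disk proxy store, and the HTTPS transport are assumptions, not ESG's actual implementation.

```python
# Hypothetical sketch of "as Client-ID through Impersonation" (slide 20).
# PROXY_STORE layout, file names, and URLs are illustrative assumptions.
from pathlib import Path
import requests

PROXY_STORE = Path("/var/lib/portal/client-proxies")   # assumed portal-side proxy cache

def fetch_as_client(client_name: str, data_url: str, out_path: Path) -> None:
    proxy = PROXY_STORE / f"{client_name}.pem"          # combined cert+key proxy file
    # The resource authenticates this request as the *client*; the portal's identity
    # only appears in audit logs, matching the slide.
    with requests.get(data_url, cert=str(proxy), stream=True, timeout=300) as resp:
        resp.raise_for_status()
        with out_path.open("wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)
```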

  21. “Portal => Resource” Access Methods • As Portal-ID • Resource only sees/knows AuthN’ed Portal-ID • Resource enforces only Portal-ID access policy • All fine-grained client AuthZ determined/enforced at Portal • As Portal-ID on behalf of Client-ID • Resource sees AuthN’ed Portal-ID • Resource trusts Portal-ID to forward Client’s request • Client’s AuthZ determined/enforced at Resource • As Client-ID through Impersonation • Portal maintains client’s (proxy-)credentials • Resource only sees Client-ID • Client’s AuthZ determined/enforced at Resource • As Portal-ID through fine-grained Delegation • Resource sees AuthN’ed Portal-ID • Client-ID’s AuthZ assertion empowers Portal-ID • Portal’s rights at Resource limited by Client’s • (increasing COMPLEXITY)
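The second method in the list above ("as Portal-ID on behalf of Client-ID") can be sketched in a few lines. This is a hypothetical illustration, not ESG code: the TRUSTED_PORTALS set and CLIENT_ACL table are invented names showing how the resource authenticates only the portal while making its fine-grained decision on the forwarded, unauthenticated Client-ID.

```python
# Hypothetical sketch of the "as Portal-ID on behalf of Client-ID" pattern (slide 19/21).
TRUSTED_PORTALS = {"/DC=org/DC=ESG/CN=esg-portal"}      # portal DNs the resource trusts

CLIENT_ACL = {  # fine-grained policy, enforced at the resource on the forwarded Client-ID
    ("/DC=org/DC=ESG/CN=alice", "read", "ipcc/ar4/tas.nc"): "Permit",
}

def resource_decision(authn_id: str, on_behalf_of: str, op: str, rsrc: str) -> str:
    """Resource-side check: the portal is authenticated, the client ID is only asserted."""
    if authn_id not in TRUSTED_PORTALS:
        return "Deny"                      # only trusted portals may forward requests
    # No cryptographic proof of delegation: we trust the portal's claim about the client.
    return CLIENT_ACL.get((on_behalf_of, op, rsrc), "Deny")

if __name__ == "__main__":
    print(resource_decision("/DC=org/DC=ESG/CN=esg-portal",
                            "/DC=org/DC=ESG/CN=alice", "read", "ipcc/ar4/tas.nc"))  # Permit
```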

  22. Light and Fat-Client Access [diagram: browser Client => Portal (Portal AuthN & AuthZ, Client AuthN, Client AuthZ) => Resource; “Fat” Client => Resource (Client AuthN & AuthZ)] • Light client: reuse the Portal’s AuthZ through push/pull • “Fat” client: obtain the data’s URI after browsing, then access it directly • GridFTP, OpenDAP, SRM, ws-transfer, ???
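As a hedged illustration of the fat-client flow (not an ESG interface): the client first lets the portal authorize the browse/selection step and hand back a URI, then contacts the resource directly with its own credential so the resource performs Client AuthN & AuthZ itself. The portal endpoint, API path, JSON shape, and certificate paths below are assumptions; real transfers would use GridFTP/OpenDAP/SRM clients rather than plain HTTPS.

```python
# Hypothetical "fat client" flow for slide 22; names and endpoints are illustrative only.
import requests

PORTAL = "https://esg-portal.example.org"                    # assumed portal endpoint
CLIENT_CERT = ("/home/alice/.globus/usercert.pem", "/home/alice/.globus/userkey.pem")

# 1. Light-client step: the portal authenticates/authorizes the user and returns a URI.
uri = requests.get(f"{PORTAL}/api/datasets/ipcc-ar4-tas/uri",
                   cert=CLIENT_CERT, timeout=30).json()["uri"]

# 2. Fat-client step: go straight to the resource (assumed here to be an HTTPS endpoint).
resp = requests.head(uri, cert=CLIENT_CERT, timeout=30)
print(uri, resp.status_code)                                  # 200 if the resource authorizes us
```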

  23. Access Policy Taxonomy (1) • “Physical” User: AuthN-ID, DN, Username • Operation/Action • Permission: Permit | Deny | NotApplicable • “Physical” Resource: FileName, URL, FQN • Policy statement: PUser | Op | Perm | PRsrc • Identity-based, ACL-like, the simplest policy statement
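As an illustration (not ESG code), the level-1 taxonomy is just a flat, identity-based ACL whose entries are PUser | Op | Perm | PRsrc 4-tuples; the DNs and URLs below are made up.

```python
# Illustrative only: a level-1 "PUser | Op | Perm | PRsrc" policy as a flat ACL.
ACL = [
    ("/DC=org/DC=ESG/CN=alice", "read", "Permit", "gridftp://dp.example.org/cam3/tas.nc"),
    ("/DC=org/DC=ESG/CN=bob",   "read", "Deny",   "gridftp://dp.example.org/cam3/tas.nc"),
]

def decide(puser: str, op: str, prsrc: str) -> str:
    for user, action, perm, rsrc in ACL:
        if (user, action, rsrc) == (puser, op, prsrc):
            return perm
    return "NotApplicable"   # no matching entry
```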

  24. Access Policy Taxonomy (2) • “Physical” User: AuthN-ID, DN, Username • User Group: Attribute, “Role” • Resource Group: Classification • “Physical” Resource: FileName, URL, FQN • Mappings: PUser | UGroup, UGroup | Op | Perm | RGroup, RGroup | PRsrc • Grouping abstractions: policy (mostly) defined on groups

  25. Access Policy Taxonomy (3) • “Physical” User: AuthN-ID, DN, Username • “Logical” User: Username, Access-ID • “Logical” Resource: LFile, URN • “Physical” Resource: PFile, URL, FQN • Mappings: PUser | LUser, LUser | UGroup, UGroup | Op | Perm | RGroup, RGroup | LRsrc, LRsrc | PRsrc • “Logical” abstractions support multiple authN mechanisms and resource-location transparency

  26. Access Policy Taxonomy (4) • Mappings: PUser | LUser, LUser | UGroup, LUser/UGroup | Role, PUser/LUser/UGroup/Role | Op | Perm | RGroup/LRsrc/PRsrc, RGroup | LRsrc, LRsrc | PRsrc • Policy on physical, logical, roles and groups • …plus hierarchical groups/roles, etc., etc.…

  27. Access Policy Taxonomy (5) • Meta-Data Catalog integrated with access policy: PRsrc | Meta-Data, LRsrc | Meta-Data, RGroup | Meta-Data • Policy mappings: PUser | LUser, LUser | UGroup, UGroup | Op | Perm | RGroup, RGroup | LRsrc, LRsrc | PRsrc • Meta-Data Catalog integration allows for “secure browsing”

  28. Access Determination (1) • Inputs: authenticated User-ID, requested Operation, “Physical” Resource to access • Output: ??Permission?? • Can the Subject invoke the Operation on the Resource? Can the AuthN-ID invoke the Operation on the Physical-Resource? • Evaluated over the mapping chain: PUser | LUser, LUser | UGroup, UGroup | Op | Perm | RGroup, RGroup | LRsrc, LRsrc | PRsrc
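To show how such an access determination walks the full taxonomy, here is a hedged sketch: the mapping tables, DNs, and URIs are hypothetical placeholders (not ESG's data model), but the control flow follows the chain named on the slide, from the AuthN-ID down to the physical resource.

```python
# Hypothetical sketch of Access Determination over the full mapping chain:
# PUser -> LUser -> UGroup, (UGroup, Op) -> Perm on RGroup, RGroup -> LRsrc -> PRsrc.
PUSER_TO_LUSER = {"/DC=org/DC=ESG/CN=alice": "alice"}
LUSER_TO_UGROUPS = {"alice": {"ipcc-users"}}
RULES = [  # (UGroup, Op, Perm, RGroup)
    ("ipcc-users", "read", "Permit", "ipcc-ar4-data"),
]
RGROUP_TO_LRSRC = {"ipcc-ar4-data": {"esg:ipcc/ar4/tas"}}
LRSRC_TO_PRSRC = {"esg:ipcc/ar4/tas": {"gridftp://dp.example.org/ipcc/ar4/tas.nc"}}

def access_determination(authn_id: str, op: str, prsrc: str) -> str:
    """Can the AuthN-ID invoke the Operation on the Physical-Resource?"""
    luser = PUSER_TO_LUSER.get(authn_id)
    groups = LUSER_TO_UGROUPS.get(luser, set())
    for ugroup, action, perm, rgroup in RULES:
        if ugroup in groups and action == op:
            # Expand the resource group down to physical resources.
            for lrsrc in RGROUP_TO_LRSRC.get(rgroup, set()):
                if prsrc in LRSRC_TO_PRSRC.get(lrsrc, set()):
                    return perm
    return "NotApplicable"

if __name__ == "__main__":
    print(access_determination("/DC=org/DC=ESG/CN=alice", "read",
                               "gridftp://dp.example.org/ipcc/ar4/tas.nc"))  # Permit
```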

  29. Policy Assertions from Everywhere

  30. Access Determination (2) • Policy “components” are distributed • MyProxy AuthN Svc: Username => DN mapping (PUser | LUser) • Mappings: LUser | UGroup, LUser/UGroup | Role, PUser/LUser/UGroup/Role | Op | Perm | RGroup/LRsrc/PRsrc, RGroup | LRsrc, LRsrc | PRsrc • Provided by: VOMSRS/VOMS, SAZ/PRIMA/GUMS, Meta-data catalog, Data-Service (after staging…)

  31. Policy Assertions from Everywhere [diagram: PERMIS, XACML, SAML, SAZ, PRIMA, Shib, LDAP, Handle, VOMS, CAS, Gridmap, XACML, ???]

  32. Policy Evaluation Complexity • Single Domain & Centralized Policy Database/Service • Meta-Data, Groups/Roles membership maintained with Rules • Only pull/push of AuthZ-assertions • … • Challenge is to find the right “balance” • (driven by use cases…not by fad/fashion ;-) ) • … • Split Policy & Distribute Everything • Separate DBs for meta-data, rules & attribute mappings • Deploy MyProxy, LDAP, VOMS, Shib, CAS, PRIMA, XACML, GUMS, PERMIS, ??? • (increasing COMPLEXITY)

  33. AuthZ & Attr Svcs Topology • Policy Enforcement Use Cases determine the “optimal” AuthZ & Attr Svc Topology • Client pull-push versus Server pull • Network hurdles/firewalls • Crossing of admin domains • Separate Attributes from Rules (VOMS/Shib) or Separate Policies from Enforcement Point (CAS) • Separation of duty - delegation of admin • Replication of the Policy-DB or Call-Out • Network overhead versus sync-mgmt overhead • !!! Choose the “Most Simple” Deployment Option !!! (ideally, services and middleware should allow all options…)

  34. Data Integrity Protection • Data “Corruption” • Many, many copies of the original data files and model code • Many “opportunities” for undetected changes • Independent from the normal integrity protection for storage and data moving • Accidental, script-kiddies or worse… • Integrity Protection • Identify and guard the “original” • Most files are immutable…maybe make them all immutable… • Use file signatures/digests (SHA-1/256, ???) • Tripwire-like • Digest part of meta-data, communicate expected digest with URL/URI, independent digest-services, embed digest in URI, use digest-value as the “natural” name for the file…file-name = digest-value • Learn from file-sharing P2P applications! • Integrate integrity checks in file-moving apps • http, DataMoverLight, GridFTP, OpenDAP, RLS, etc. • Define procedures for data corruption detection
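The digest ideas on slide 34 can be sketched in a few lines of standard-library Python. This is an illustration of the technique only (the function names, paths, and the digest-as-filename convention are assumptions), not an ESG tool: compute a SHA-256 digest, verify a downloaded copy against the digest published in metadata or a URI, and optionally use the digest value as the file's "natural", content-addressed name, P2P-style.

```python
# Sketch of digest-based integrity protection (assumed names and paths).
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Stream the file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify(path: Path, expected_digest: str) -> bool:
    """Check a downloaded file against the digest carried in the catalog/URI."""
    return sha256_of(path) == expected_digest.lower()

def store_by_digest(path: Path, archive_dir: Path) -> Path:
    """Content-addressed storage: file-name = digest-value."""
    digest = sha256_of(path)
    archive_dir.mkdir(parents=True, exist_ok=True)
    target = archive_dir / digest
    shutil.copy2(path, target)
    return target
```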

  35. Conclusion • ESG is a very cool and challenging application! • Security goal is to enable, not limit, access… • Many challenges are not unique to ESG • Leverage existing solutions • Collaborate on those that don’t yet exist • Interoperability requirements with TG/OSG/??? • Limits technology/mechanism choices (creds, protocols, assertion-formats, interfaces, infrastructure-services, ontology, SSO, audit, etc.) • Requires (closer) collaboration • “Fighting” complexity is a major challenge • Cost associated with splitting up policies • Need better understanding & best practices • Data Integrity Protection • Feature gap in tools and data management
