
Open Science Grid


Presentation Transcript


  1. Open Science Grid. Ruth Pordes, Fermilab. http://www.opensciencegrid.org. OSG at CANS.

  2. What is OSG? A shared, common, distributed infrastructure supporting access to contributed processing, disk and tape resources, over production and research networks, and open for use by science collaborations.

  3. OSG Snapshot. [Chart: snapshot of jobs running on OSG.]
  • 96 resources across the production and integration infrastructures, using production and research networks.
  • Sustained through OSG submissions: 3,000-4,000 simultaneous jobs, ~10K jobs/day, ~50K CPU-hours/day; peak test loads of 15K jobs a day.
  • 20 Virtual Organizations, plus 6 for operations; about 25% non-physics.
  • ~20,000 CPUs (sites range from 30 to 4,000), ~6 PB of tape, ~4 PB of shared disk.
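To put the snapshot figures in perspective, the short sketch below does the back-of-envelope arithmetic they imply: average CPU-hours per job and the average number of CPUs kept busy. It is plain arithmetic on the numbers quoted on the slide, not additional OSG data.

```python
# Back-of-envelope arithmetic on the OSG snapshot figures quoted above.
# Plain arithmetic only; the inputs are the numbers from the slide.

JOBS_PER_DAY = 10_000        # "~10K jobs/day"
CPU_HOURS_PER_DAY = 50_000   # "~50K CPU-hours/day"
TOTAL_CPUS = 20_000          # "~20,000 CPUs"

avg_cpu_hours_per_job = CPU_HOURS_PER_DAY / JOBS_PER_DAY
avg_busy_cpus = CPU_HOURS_PER_DAY / 24
share_of_total = avg_busy_cpus / TOTAL_CPUS

print(f"average job length : {avg_cpu_hours_per_job:.1f} CPU-hours")
print(f"average busy CPUs  : {avg_busy_cpus:.0f}")
print(f"share of the ~20K CPUs used via OSG submissions: {share_of_total:.0%}")
```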

  4. OSG - a Community Consortium. [Timeline 1999-2009: PPDG (DOE), GriPhyN (NSF), iVDGL (NSF), together known as Trillium; Grid3 (DOE+NSF); then OSG.]
  • DOE laboratories and university facilities (funded by DOE, NSF and others) contributing computing farms and storage resources, infrastructure and user services, and user and research communities.
  • Grid technology groups: Condor, Globus, Storage Resource Management, NSF Middleware Initiative.
  • Global research collaborations: High Energy Physics (including the Large Hadron Collider), Gravitational Wave Physics (LIGO), Nuclear and Astro Physics, Bioinformatics, Nanotechnology, CS research...
  • Partnerships with peers and with development and research groups: Enabling Grids for E-sciencE (EGEE), TeraGrid, regional and campus grids (NYSGrid, NWICG, TIGRE, GLOW...).
  • Education: I2U2/QuarkNet sharing cosmic ray data, grid schools...

  5. OSG sits in the middle of a Grid-of-Grids environment spanning local to global infrastructures: inter-operating and co-operating campus, regional, community, national and international grids, with Virtual Organizations doing research and education.

  6. Overlaid by virtual computational environments serving single researchers up to large groups, from local to worldwide scope.

  7. OSG Core Activities
  • Integration: software, systems and end-to-end environments; production, integration and test infrastructures.
  • Operations: common support mechanisms, security protections, troubleshooting.
  • Inter-operation: across administrative and technical boundaries.
  OSG Principles and Characteristics
  • Guaranteed and opportunistic access to shared resources.
  • Heterogeneous environment.
  • Interfacing and federation across campus, regional and national/international grids while preserving local autonomy.
  • New services and technologies developed external to OSG.
  Each activity includes technical work with collaborators in the US and elsewhere.

  8. OSG Middleware (layered stack)
  • User science codes and interfaces.
  • VO middleware and applications: Biology (portals, databases, etc.), HEP (data and workflow management, etc.), Astrophysics (data replication, etc.).
  • OSG Release Cache: OSG-specific configurations, utilities, etc.
  • Virtual Data Toolkit (VDT): core technologies plus software needed by stakeholders; many components shared with EGEE.
  • Core grid technology distributions: Condor, Globus, MyProxy; shared with TeraGrid and others.
  • Existing operating systems, batch systems and utilities.

  9. What is the VDT?
  • A collection of software:
    • Grid software: Condor, Globus and lots more.
    • Virtual Data System: origin of the name "VDT".
    • Utilities: monitoring, authorization, configuration.
  • Built for more than 10 flavors/versions of Linux, with automated build and test: integration and regression testing.
  • An easy installation: push a button and everything just works; quick update processes.
  • Responsive to user needs: a process to add new components based on community needs.
  • A support infrastructure: front-line software support, and triaging between users and software providers for deeper issues.
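As an illustration of what a VDT deployment puts on a submit or worker node, the sketch below checks whether a few representative VDT-distributed commands (from Condor, the Globus Toolkit and MyProxy) are visible on the PATH. The specific command list is an illustrative assumption, not an official VDT manifest.

```python
# Minimal sketch: verify that a few commands typically distributed with the
# VDT (Condor, Globus clients, MyProxy) are available on this machine.
# The exact set of commands checked here is an illustrative assumption.
import shutil

EXPECTED_COMMANDS = [
    "condor_submit",     # Condor batch/grid job submission
    "globus-job-run",    # Globus Toolkit remote job execution client
    "grid-proxy-init",   # create a short-lived X.509 proxy credential
    "myproxy-init",      # store a credential in a MyProxy server
]

def check_commands(commands):
    """Return (found, missing) lists for the given command names."""
    found = [c for c in commands if shutil.which(c)]
    missing = [c for c in commands if shutil.which(c) is None]
    return found, missing

if __name__ == "__main__":
    found, missing = check_commands(EXPECTED_COMMANDS)
    print("found:  ", ", ".join(found) or "none")
    print("missing:", ", ".join(missing) or "none")
```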

  10. Middleware to Support Security
  • Identification and authorization based on X.509 extended attribute certificates, in common with Enabling Grids for E-sciencE (EGEE).
  • Addresses the needs of roles within groups of researchers for access control and access policies.
  • Operational auditing across core OSG assets.
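To make the role-based idea concrete, here is a minimal sketch of how a site might map the VO and role attributes of the kind carried in X.509 extended attribute certificates onto local access rights. The VO names, roles and policy table are invented for illustration; they are not OSG's actual configuration.

```python
# Minimal sketch of role-based authorization driven by VO membership and role
# attributes of the kind carried in X.509 extended attribute certificates.
# The VO names, roles and policy table below are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class GridIdentity:
    subject_dn: str   # certificate subject distinguished name
    vo: str           # Virtual Organization asserted in the attribute certificate
    role: str         # role within the VO, e.g. "production" or "analysis"

# Hypothetical site policy: (vo, role) -> local account and privileges.
SITE_POLICY = {
    ("cms", "production"): {"account": "cmsprod", "storage_quota_gb": 10000},
    ("cms", "analysis"):   {"account": "cmsuser", "storage_quota_gb": 500},
    ("ligo", "analysis"):  {"account": "ligo",    "storage_quota_gb": 1000},
}

def authorize(identity: GridIdentity):
    """Map an authenticated grid identity to local access rights, or deny."""
    policy = SITE_POLICY.get((identity.vo, identity.role))
    if policy is None:
        raise PermissionError(f"no policy for VO={identity.vo} role={identity.role}")
    return policy

if __name__ == "__main__":
    user = GridIdentity("/DC=org/DC=osg/CN=Example User", "cms", "analysis")
    print(authorize(user))  # -> {'account': 'cmsuser', 'storage_quota_gb': 500}
```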

  11. OSG Active in Control and Understanding of Risk
  • Security process modelled on NIST management, operational and technical controls.
  • Security incidents: when, not if.
  • Organizations control their own activities: sites, communities, grids.
  • Coordination between operations centers of participating infrastructures.
  • End-to-end troubleshooting involves people, software and services from multiple infrastructures and organizations.

  12. High Energy Physicists Analyze Today's Data Worldwide. [Diagram: worldwide production and high-impact data paths, including the University of Science and Technology of China; 1 PB/month corresponds to an average of about 3 Gb/s.]

  13. Physics needs in 2008 (e.g. the CMS computing model, Tier-0 / Tier-1 / Tier-2 hierarchy):
  • 20-30 petabytes of tertiary automated tape storage at 12 centers worldwide, for physics and other scientific collaborations.
  • High availability (365x24x7) and high data access rates (1 GByte/sec) locally and remotely.
  • Evolving and scaling smoothly to meet evolving requirements.

  14. OSG Data Transfer, Storage and Access: GBytes/sec, 365 days a year, for CMS and ATLAS. Data rates (currently around 600 MB/sec across ~7 Tier-1s, CERN and the Tier-2s) need to reach roughly three times that within a year. Beijing is a Tier-2 in this set.
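To make the units on these last few slides concrete, the sketch below does the plain unit conversions they rely on: PB/month to sustained Gb/s (slide 12) and the factor-of-3 growth of a 600 MB/sec aggregate (this slide). It is arithmetic only, not additional OSG-published data.

```python
# Back-of-envelope unit conversions for the data rates quoted on these slides.
# Plain arithmetic; the "x3 in one year" target is taken from the slide.

SECONDS_PER_MONTH = 30 * 24 * 3600   # ~2.6 million seconds
BITS_PER_PETABYTE = 1e15 * 8         # decimal (SI) petabytes

def pb_per_month_to_gbps(pb_per_month: float) -> float:
    """Convert a volume in PB/month to an average rate in Gb/s."""
    return pb_per_month * BITS_PER_PETABYTE / SECONDS_PER_MONTH / 1e9

def mb_per_s_to_gbps(mb_per_s: float) -> float:
    """Convert MB/s (bytes) to Gb/s (bits)."""
    return mb_per_s * 8 / 1000

if __name__ == "__main__":
    # Slide 12: 1 PB/month averages out to about 3 Gb/s.
    print(f"1 PB/month ~ {pb_per_month_to_gbps(1):.1f} Gb/s sustained")
    # Slide 14: ~600 MB/sec aggregate today, needing roughly x3 within a year.
    today = mb_per_s_to_gbps(600)
    print(f"600 MB/sec ~ {today:.1f} Gb/s; x3 target ~ {3 * today:.1f} Gb/s")
```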

  15. Aggressive program of end-to-end network performance
  • Complex end-to-end routes.
  • Monitoring, configuration, diagnosis.
  • Automated redundancy and recovery.

  16. Submitting locally, executing remotely: 15,000 jobs/day across 27 sites from a handful of submission points, plus test jobs at 55K/day.
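One common way to realize "submit locally, execute remotely" with the core technologies named earlier (Condor and Globus) is a Condor-G grid-universe job: the job is handed to the local Condor scheduler, which forwards it to a remote Globus gatekeeper. The sketch below generates a minimal submit description and hands it to condor_submit; the gatekeeper host, executable and file names are placeholders, and the attributes a given site expects may differ.

```python
# Minimal sketch of "submit locally, execute remotely" using a Condor-G style
# grid-universe submit description. The gatekeeper host, executable and file
# names are placeholders; a real site publishes its own gatekeeper contact.
import subprocess
from pathlib import Path

SUBMIT_DESCRIPTION = """\
universe                = grid
grid_resource           = gt2 gatekeeper.example.edu/jobmanager-condor
executable              = analyze.sh
arguments               = run001
output                  = run001.out
error                   = run001.err
log                     = run001.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue
"""

def submit(description: str, filename: str = "run001.sub") -> None:
    """Write the submit file and hand it to the local Condor scheduler."""
    Path(filename).write_text(description)
    # Requires a working Condor installation and a valid grid proxy.
    subprocess.run(["condor_submit", filename], check=True)

if __name__ == "__main__":
    submit(SUBMIT_DESCRIPTION)
```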

  17. Applications cross infrastructures, e.g. OSG and TeraGrid.

  18. The OSG Model of Federation. [Diagram: a VO or user that acts across grids talks to OSG's interface to Service-X; an adaptor translates between OSG's Service-X and the corresponding service on another grid, e.g. NAREGI. Services federated this way include security, data, jobs, operations, information, accounting...]
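The federation model is essentially an adaptor pattern: OSG-side tools program against one interface, and a thin adaptor translates those calls into whatever the partner grid provides. The sketch below shows the shape of such an adaptor for a job-submission service; the class and method names, including the NAREGI-side client, are invented for illustration.

```python
# Sketch of the federation/adaptor idea from slide 18: OSG code talks to one
# interface, and an adaptor translates calls for another grid (e.g. NAREGI).
# All class and method names here are illustrative assumptions.
from abc import ABC, abstractmethod

class OSGJobService(ABC):
    """The 'Service-X' interface that OSG-side tools program against."""
    @abstractmethod
    def submit(self, executable: str, arguments: list[str]) -> str:
        """Submit a job, returning a job identifier."""

class NativeOSGJobService(OSGJobService):
    def submit(self, executable, arguments):
        # In reality this would hand the job to OSG middleware (e.g. Condor-G).
        return f"osg-job-{hash((executable, tuple(arguments))) & 0xffff}"

class HypotheticalNaregiClient:
    """Stand-in for another grid's native job API (invented for illustration)."""
    def enqueue(self, command_line: str) -> str:
        return f"naregi-task-{abs(hash(command_line)) & 0xffff}"

class NaregiAdaptor(OSGJobService):
    """Adaptor: presents the OSG interface, delegates to the other grid."""
    def __init__(self, client: HypotheticalNaregiClient):
        self.client = client
    def submit(self, executable, arguments):
        return self.client.enqueue(" ".join([executable, *arguments]))

if __name__ == "__main__":
    for service in (NativeOSGJobService(), NaregiAdaptor(HypotheticalNaregiClient())):
        print(service.submit("analyze.sh", ["run001"]))
```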

  19. Local grid with an adaptor to the national grid, e.g. FermiGrid at Fermilab. [Diagram: before FermiGrid, each resource (Astrophysics, Theory, Particle Physics) had its own head node and workers serving only its own users; with FermiGrid, a common gateway and central campus-wide services sit in front of the existing resources so that guest users can be served as well.]
  • Central campus-wide grid services.
  • Enable efficiencies and sharing across internal farms and storage.
  • Maintain autonomy of individual resources.
  Next step: Campus Infrastructure Days, a new activity of OSG, Internet2 and TeraGrid.
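As a toy illustration of the campus-gateway idea (shared access for guest users while each farm keeps its autonomy), the sketch below routes guest jobs to whichever internal cluster currently has free slots, while owner jobs always land on their home cluster. The cluster names, capacities and scheduling rule are all invented for illustration and are not FermiGrid's actual policy.

```python
# Toy sketch of a campus-wide gateway in front of several existing farms, as in
# the FermiGrid picture on slide 19: owners keep priority on their own cluster,
# guest (grid) jobs fill free slots elsewhere. Names, capacities and the
# scheduling rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    total_slots: int
    used_slots: int = 0

    @property
    def free_slots(self) -> int:
        return self.total_slots - self.used_slots

@dataclass
class CampusGateway:
    clusters: list = field(default_factory=list)

    def route(self, owner: str | None) -> str:
        """Owner jobs run on their home cluster; guest jobs fill free slots."""
        if owner is not None:
            target = next(c for c in self.clusters if c.name == owner)
        else:
            target = max(self.clusters, key=lambda c: c.free_slots)
            if target.free_slots == 0:
                raise RuntimeError("no free slots for guest jobs right now")
        target.used_slots += 1
        return target.name

if __name__ == "__main__":
    gateway = CampusGateway([
        Cluster("astrophysics", 100, used_slots=90),
        Cluster("theory", 50, used_slots=10),
        Cluster("particle-physics", 400, used_slots=300),
    ])
    print(gateway.route(owner="theory"))   # local user stays on their own farm
    print(gateway.route(owner=None))       # guest job lands on the emptiest farm
```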

  20. Interoperation increasing in scope: information and monitoring, storage interfaces.

  21. Summary of OSG today
  • Providing core services, software and a distributed facility for an increasing set of research communities.
  • Helping Virtual Organizations access resources on many different infrastructures.
  • Reaching out to others to collaborate and contribute our experience and efforts.
