Software Sharing at Fermilab: Experiences from Run II, KITS and Fermitools Ruth Pordes, Lauri Loebel Carpenter, Elizabeth Schermerhorn Fermilab Computing Division
Software at Fermilab is Shared as a Result of: • Orchestration • Planning and design from the get-go • Externally mandated sharing • Capturing a popular Tune • Independent use of available software and then sharing of support and experience • Discovery and Advantage of Common Use • Variation on a Theme • Acquisition or Development for a single User Group or Experiment and then adoption by more than one • Availability and Dissemination; Migration of Users to new Groups or Experiments (Talking in the Cafeteria; PR on the Web)
1) CDF/D0/CD Joint Project - Support Databases: Oracle for Run II • Management - of both experiments and the Computing Division - encouraged use of the same DBMS system: • Offered resources for central support • Increasing reliance on databases for data taking and processing • Benefits if Offline and Online use common systems • Each Experiment needs: Run Information, File Catalog, Calibration Data, Hardware Location and State, Luminosity Information • Each needs: Development and production repository systems; Synchronization between Online and Offline
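The per-experiment database needs listed above can be sketched as a miniature schema. This is a hypothetical illustration using SQLite; the real Run II systems used Oracle, and all table and column names here are invented for the example.

```python
import sqlite3

# Hypothetical miniature of the per-experiment database needs listed above;
# the real Run II systems were Oracle-based and these names are illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE run_info     (run_number INTEGER PRIMARY KEY, start_time TEXT, end_time TEXT);
CREATE TABLE file_catalog (file_name TEXT PRIMARY KEY, run_number INTEGER, size_bytes INTEGER);
CREATE TABLE calibration  (channel_id INTEGER, valid_from_run INTEGER, gain REAL);
CREATE TABLE hardware     (component TEXT PRIMARY KEY, location TEXT, state TEXT);
CREATE TABLE luminosity   (run_number INTEGER, integrated_lumi REAL);
""")
cur.execute("INSERT INTO run_info VALUES (100001, '2001-03-01T08:00', '2001-03-01T16:00')")
cur.execute("INSERT INTO file_catalog VALUES ('raw_100001_001.dat', 100001, 2000000000)")

# A typical offline question: which files belong to a given run?
files = cur.execute(
    "SELECT file_name FROM file_catalog WHERE run_number = ?", (100001,)
).fetchall()
print(files)
```

The point of the common project is that every experiment needs essentially these same tables, so the design, backup, and monitoring work can be done once centrally.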
Each experiment has several Oracle instances for management and support: • Online Databases: Production System (Production Repository, Integration Repository) and Development System (Development Repository) • Offline Databases: Production System (Production Repository, Integration Repository) and Development System (Development Repository), with Synchronization between Online and Offline • Shared: Backup Repository (RMAN); Monitoring and Statistics Repository (OEM)
Sharing and its Benefits • Design Tools and Methodologies • Backups • Database Monitoring • Packaging and testing of software • Contact with Company • Standards, Conventions, Reviews • Possibility for a common off-site database strategy • New experiments - e.g. Minos - can take advantage of the central pool of knowledge and support • Central management of licenses and maintenance feasible
Implementation Environment • D0 Online - NT for Level 3, Python • D0 Offline - C++, Python • CDF Online - Java • CDF Offline - C++ Schedule and Priority differences • Affect time available to develop consensus • Affect resources available for general solutions Strategic Directions • D0: 3-tier architecture - avoids the need to distribute and support Oracle clients on all OSes; CORBA as the middle tier; Database Server and code generation • CDF: 2-tier architecture - avoids bottlenecks and uses Oracle for multiple-user access control; 3rd tier of flat files; automated header and C++ code generation But - sharing is not 100% achievable - experiment-specific infrastructure and policy
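The 2-tier versus 3-tier distinction above can be sketched in a few lines. This is a hedged illustration only: D0's real middle tier was a CORBA database server, not Python, and the table and class names here are invented. SQLite stands in for the Oracle instance.

```python
import sqlite3

def open_db():
    # Stand-in for an Oracle instance holding calibration data.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE calibration (channel_id INTEGER, gain REAL)")
    conn.execute("INSERT INTO calibration VALUES (7, 1.25)")
    return conn

# 2-tier (CDF-style): every client links the DB client library and
# queries the database directly; the DB handles multi-user access control.
def two_tier_client(conn, channel_id):
    row = conn.execute(
        "SELECT gain FROM calibration WHERE channel_id = ?", (channel_id,)
    ).fetchone()
    return row[0]

# 3-tier (D0-style): clients call a middle-tier server instead, so only
# the server machine needs the DB client software installed - nothing
# database-specific has to be distributed to every OS the clients run on.
class CalibrationServer:
    def __init__(self, conn):
        self.conn = conn

    def get_gain(self, channel_id):
        row = self.conn.execute(
            "SELECT gain FROM calibration WHERE channel_id = ?", (channel_id,)
        ).fetchone()
        return row[0]

conn = open_db()
server = CalibrationServer(conn)
print(two_tier_client(conn, 7), server.get_gain(7))
```

Both paths return the same answer; the architectural difference is only in where the database client software must live.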
2) Common Tools Shared through Common Product Packaging, Management and Distribution • UPx family of Products • Concepts mature and accepted • Still in use for Run II; modified so it no longer requires root access to install or use • Records distributions from a central FTP site • In use throughout the Laboratory - DAQ, farms, analysis servers, desktops, information and database systems, offsite collaborators • Full documentation • Central resources for maintenance and support • Consistency, standardization, automation possible
Unix (User) Product Support - UPx Services • Central Distribution Service • Configuration Management for Qualifiers and Versions • Framework to capture the Complete Build and Execution Environment • Monitoring and Census of installed and available software (UPC - Census) Strategy • Include base support as part of "canned" OS distributions - e.g. Fermi RedHat Linux [Diagram: a KITS Distribution Node with a central UPS Database serves Local Machines via UPD; each Local Machine runs UPD against its own UPS Database]
Benefits: • Rules and Conventions allow automation - e.g. build scripts • Many people know the tools and rules - easier support coverage • Documentation and methodologies are "general" Must be aware that: • Resources to maintain infrastructure - an ongoing commitment • Bookkeeping of the "benefit" of central support vs the "cost" of distributed support is difficult • 20 x 10% = 2 FTEs? • People wary of "dictatorship" • Infrastructures can "calcify" and "constrict"
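The "20 x 10% = 2 FTEs?" bookkeeping question above, made explicit: if twenty people each spend a tenth of their time maintaining software locally, that distributed cost equals two full-time equivalents of central support. The numbers are the slide's own rhetorical example, not measured data.

```python
# If 20 people each spend 10% of their time on locally-maintained
# software, the distributed cost equals 2 FTEs of central support.
admins = 20
fraction_per_admin = 0.10
distributed_ftes = admins * fraction_per_admin
print(distributed_ftes)  # → 2.0
```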
Management Support and Information - existing • Recommendations • Mail Archive for FAQ and Support • Community Support Mail Lists
Management Support and Information - future - better integrated information systems • Remedy HelpDesk Tracking and FAQ Repository • Links to Information Systems of Computer Hardware, Software, System Admins
Scale: • Desktops: ~500 • Servers: ~200 • Experiments: ~20 • Offsite Distributions: ~50 • Number of Products in KITS: ~400 • Number of Distribution Repositories: ~5 • Number of UPS databases per Experiment: ~3 • Infrastructure Support Staff: ~2 FTEs
Future Needs Thought • How RPM, AutoRPM and UPx integrate and play together on Linux - clear that we want them to • How to support the "Fermi-Environment" and "Other Experiments" on the same Desktop - e.g. D0 and CMS • Increasing Offsite Component • NT • Guaranteed Support Services and Response when Load is Shared • Knowledge base must be accurate and complete • Nimbleness in Sharing
3) Publishing Fermilab Products to the Community • Fermilab operates under URA/DOE - "ownership" constraints prohibit us from putting software on an open FTP server for the world. • Fermitools allows us to publish software under Terms and Conditions already agreed to by the lawyers • Allows publication of software for the wider community and the ability to offer support and help • Provides mechanisms and pressure points to promote transition to a more open-source environment • Slowly add products as they become available - most recently a Trace tool for Linux, and Oracle query and mail list tools - from whatever group or individuals at Fermilab • Encourages sharing of software From Fermilab as well as To Fermilab
Fermitools cont. • Product infrastructure encompasses Fermitools Products - available through anonymous FTP • Hope this will facilitate moving more Fermilab s/w to public domain and open source • Encouragement to include demos and presentation materials
Summary • At Fermilab, we operate several different scenarios for sharing software - not just a single model of use • Experience shows that while achieving consensus and sharing is much work, the benefits are sometimes acknowledged • Individual tastes and biases can still cause divergence and support loads • The future requires better integration of existing methods with Linux and NT • URLs of projects referred to: • http://RunIIcomputing.fnal.gov • http://www.fnal.gov/cd • http://www.fnal.gov/fermitools