
Test and Integration Current State

An overview of integration progress, goals, and testbed evolution in the international CrossGrid project, covering middleware updates, software deployment, job scenarios, and application prototypes.


Presentation Transcript


  1. Test and Integration Current State
     Santiago González de la Hoz (on behalf of the Integration Team)
     Santiago.Gonzalez@ific.uv.es
     Instituto de Física Corpuscular, Consejo Superior de Investigaciones Científicas (CSIC) / Universitat de València, Valencia, SPAIN
     http://gridportal.fzk.de/websites/crossgrid/iteam/

  2. Overview
     • Goals
       • Implement the second and third tools/services prototypes and the first application prototypes running on the international CrossGrid testbed
     • For this purpose:
       • A close collaboration among WP1, WP2, WP3 and WP4 has been achieved (Jesus is leading the whole CrossGrid integration effort)
     • Since the integration meeting in Poznan (July 2003):
       • The CrossGrid Integration Team was formed
       • WP2 and WP3 middleware (prototype 1) and selected applications (prototype 0) were deployed on the CrossGrid testbed
       • The Integration Team (WP4, WP3, WP2 and WP1) is working to support the WP1 application prototype 1
       • The production testbed is based on LCG-1 middleware
       • The development testbed is based on EDG 2.x middleware
     • Now in Nicosia (January 2004): integration status
       • Middleware
       • Integration among tasks
       • Demo
       • Work for the next months

  3. Testbed evolution (see Jorge's talk)
     • Production testbed:
       • From EDG 1.4.11 to LCG-1
       • LCG-1 is:
         • VDT 1.1.8-9 (Globus 2.2.4)
         • Information System (MDS)
         • Selected software from EDG 2.0
           • Workload Management System (RB)
           • EDG Data Management (RLS, LRC, …)
         • GLUE Schema 1.1 + LCG extensions
       • The RB obtains the list of existing gatekeepers from the MDS information tree (a hand-run version of this query is sketched below)
       • It uses the lcgpbs job manager; the lcgpbs job manager must be removed and a plain PBS job manager configured to be compatible with MPI
     • Development testbed:
       • EDG 2.x release using LCFGng and RH 7.3
       • The functionality of the Globus MDS has been replaced by R-GMA
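
For reference, the gatekeeper lookup the RB performs can be reproduced by hand against the MDS LDAP tree. A minimal Python sketch, assuming a GIIS reachable at the hypothetical host giis.example.org and the GLUE 1.1 attribute GlueCEUniqueID:

    import subprocess

    # Hypothetical top-level GIIS; any MDS index node of the testbed would do.
    GIIS = "ldap://giis.example.org:2135"
    BASE = "mds-vo-name=local,o=grid"

    # Ask the MDS (plain LDAP) for GlueCEUniqueID entries, which identify
    # the gatekeepers the Resource Broker can match jobs against.
    result = subprocess.run(
        ["ldapsearch", "-x", "-LLL", "-H", GIIS, "-b", BASE,
         "(GlueCEUniqueID=*)", "GlueCEUniqueID"],
        capture_output=True, text=True)

    for line in result.stdout.splitlines():
        if line.startswith("GlueCEUniqueID:"):
            print(line.split(":", 1)[1].strip())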

  4. Integration Work (Integration Team)

  5. Integration work: the Integration Document
     • It is based on contributions received from Integration Team members roughly each month.
     • The document covers the following issues:
       • A few lines describing the released software (including the version)
       • Pre-form points (based on the developers' guide):
         • Code in CVS and how it is organized
         • Compliance with QA rules
         • Packaging granularity
         • Level of autobuild experience
         • Explicit dependency list
       • Test request (based on the test and validation procedure proposed by Task 4.4)
       • Documentation: installation/management manual, user manual, development manual
       • Unit tests
       • Compatibility with other software
       • Security issues
       • Usability
       • Previously reported issues
       • Plans for the demo
       • Status
       • Use of other CrossGrid components
     • Being updated in Cyprus (Appendix of Deliverable D4.6)

  6. Integration between Application and Services & Tools Prototypes
     • Interactive Job Scenario (see S. Beco's talk)
       • Run an interactive job for the simplified X# HEP application
       • X# middleware based on EDG-2.0.18plus
       • HEP (Task 1.3) executable: computes a series of histograms for a Gaussian distribution
       • The user interacts with the RAS (Task 3.1) via the X# MD (a VNC mechanism is put in place so that users can submit jobs and see the histogram)
       • For the Cyprus demo there is no web service that lets the user submit the interactive job; the user launches the interactive submission service from the command line, as sketched below
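
A minimal sketch of such a command-line submission, assuming the EDG JDL JobType "Interactive" and a hypothetical executable name (the real Task 1.3 binary and VO name differ):

    import subprocess, tempfile, textwrap

    # Hypothetical executable; interactive jobs stream stdout back to the user.
    jdl = textwrap.dedent("""\
        Executable   = "hep-histogram";
        JobType      = "Interactive";
        InputSandbox = {"hep-histogram"};
    """)

    with tempfile.NamedTemporaryFile("w", suffix=".jdl", delete=False) as f:
        f.write(jdl)
        jdl_path = f.name

    # Submit through the EDG UI; "cg" stands in for the CrossGrid VO name.
    subprocess.run(["edg-job-submit", "--vo", "cg", jdl_path])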

  7. Integration between Application and Services & Tools Prototypes
     • Application: WP1.1
       • Interactivity using the MD
       • Working on integrating the application on the testbed
       • Submission of jobs outside the Amsterdam cluster:
         • The solver executable needs environment variables to run at other testbed sites (see the JDL sketch below)
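
A conventional way to ship such variables with the job is the standard EDG JDL Environment attribute; a small sketch with made-up variable names follows:

    # Hypothetical variables the WP1.1 solver might need; real names are site-specific.
    env_vars = [
        "SOLVER_HOME=/opt/cg/wp11",
        "LD_LIBRARY_PATH=/opt/cg/wp11/lib",
    ]

    # Emit the JDL fragment that carries these settings with the job,
    # so the executable finds them on any testbed site, not just Amsterdam.
    print("Environment = {%s};" % ", ".join('"%s"' % v for v in env_vars))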

  8. Integration between Application and Services & Tools Prototypes
     • Application: WP1.2
       • Running without problems with the MD and the Portal (Task 3.1)
       • Integration done with 3.1 (MD) and 3.2 (Scheduling Agent)
       • The demo presents:
         • Job submission
         • Visualization of results
       • Integration with their portal, based on Jetspeed, with new components (workflow, collaborative tools)
         • Problems with job submission (authorization)
         • Data Management may lack some required functionality (to be confirmed)
       • Integration with MARMOT (WP2) is OK
       • Working with the new MD version
       • Demo: MARMOT + OCM-G (G-PM) + Portal

  9. Integration between Application and Services & Tools Prototypes
     • Application: WP1.3
       • Running with MPICH-P4; submission time was long
         • Fixed using the Replica Manager (see the sketch below)
       • Submit MPI jobs from the UI to the new EDG 2.x Resource Broker (to show MPICH-G2; in contact with Task 3.2)
       • Testing Santa-G possibilities
       • Integrated with the MD
       • Demo:
         • Using the Replica Manager
         • Inclusion of graphics in the Portal
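
A sketch of that pattern, assuming the edg-replica-manager (edg-rm) verbs copyAndRegisterFile and replicateFile with -l/-d flags; the file and SE names are hypothetical:

    import subprocess

    # Hypothetical data file and destination SE; the demo's real names differ.
    local = "file:///home/user/bkg-sample.root"
    lfn   = "lfn:bkg-sample.root"
    se    = "se001.grid.ucy.ac.cy"

    # Register the input once, then replicate it to an SE near the executing CE,
    # so jobs read a close replica instead of dragging data in the input sandbox.
    subprocess.run(["edg-rm", "--vo=cg", "copyAndRegisterFile", local, "-l", lfn])
    subprocess.run(["edg-rm", "--vo=cg", "replicateFile", lfn, "-d", se])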

  10. Integration between Application and Services & Tools Prototypes
      • Application: WP1.4
        • Atmospheric and wave models:
          • Integrated with the MD; working to be integrated with the Portal
          • Running on four processors in a local cluster
        • Air pollution:
          • Integrated with the MD (old configuration)
          • Trying to integrate the data coming from the atmospheric prediction
          • The Portal part was integrated for Poznan
        • They have installed their application on Valencia's testbed and it runs OK
        • Demo:
          • They plan to use the MD + PPC

  11. Integration between Application and Services & Tools Prototypes
      • WP2
        • MARMOT:
          • Being integrated with the MD
          • Integrated with WP1.2 (flooding application)
        • GridBenchmarking:
          • Progress with the GUI in the MD
          • They use applications for performance predictions
          • Benchmarking using different locations
          • Same kernel as PPC (Vertlq)
          • In contact with Task 3.2 people about MPI jobs (interested in MPICH-G2)
          • Tutorial ready
          • Being integrated with JIMS
          • Interested in Santa-G

  12. Integration between Application and Services & Tools Prototypes
      • WP2
        • G-PM (OCM-G):
          • Show monitoring of the flooding application with the G-PM tool
        • PPC (Performance Prediction):
          • The GUI on the MD is working fine
          • Problem: they need monitoring information at the WN
            • Working on this (the NWS solution was discarded; waiting for JIMS)
          • Tutorial ready
        • Demo:
          • They will show the tool

  13. Integration between Application and Services & Tools Prototypes
      • WP3
        • Task 3.1 a) RAS and MD:
          • RAS machine and services installed in Poznan, Lisbon and Nicosia (problems with JSS)
          • MD installed on Pawel's laptop (to assure functionality)
          • The medical, air pollution and flooding applications are used as pilots to present the functionality
          • The MD presents MPI jobs in a distributed environment
          • The interactivity issue is presented by Tasks 3.1, 3.2 and 1.3 (HEP)
        • Task 3.1 b) Portal:
          • Two portlets added to the portal (job-get-output, job-get-info)
          • The medical, HEP, flooding and air pollution applications are ready with the portal to be shown in the demo
          • Show the generic portlets with all edg-commands

  14. Integration between Application and Services & Tools Prototypes
      • WP3
        • Task 3.2:
          • New RB (EDG 2.x, the same as used in LCG)
          • Ready to work with MPICH-P4 (see the JDL sketch below)
          • Integrated with the RAS and MD
          • The functionality is essentially the same as presented in Poznan; the effort was dedicated to making it available under LCG-1 and EDG 2.x
          • Finishing MPICH-G2 support
          • Training application running: HEP
          • Problems related to the connectivity of WNs
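
A minimal MPICH-P4 submission sketch, assuming the standard EDG JDL attributes JobType = "MPICH" and NodeNumber; the executable and VO name are placeholders:

    import subprocess, tempfile, textwrap

    # Illustrative four-node MPICH job; the executable name is made up.
    jdl = textwrap.dedent("""\
        Executable    = "flood-solver";
        JobType       = "MPICH";
        NodeNumber    = 4;
        InputSandbox  = {"flood-solver"};
        StdOutput     = "std.out";
        StdError      = "std.err";
        OutputSandbox = {"std.out", "std.err"};
    """)

    with tempfile.NamedTemporaryFile("w", suffix=".jdl", delete=False) as f:
        f.write(jdl)

    # The RB matches NodeNumber against free CPUs published by each site.
    subprocess.run(["edg-job-submit", "--vo", "cg", f.name])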

  15. Integration between Application and Services & Tools Prototypes
      • WP3
        • Task 3.3.4 (Postprocessing, formerly a Task 3.3 subtask):
          • RPM ready in CVS (to be confirmed); it is being autobuilt
          • Installed on two clusters in Warsaw (but it must be installed on all sites driven by the scheduler)
          • The input data are taken from the ganglia and iptables modules (a minimal ganglia reader is sketched below):
            • It still has to be integrated with the infrastructure monitoring tools developed in CrossGrid (Santa-G and Jiro). Work to be done during the integration meeting in Cyprus.
        • Task 3.3
          • OCM-G:
            • Investigating which applications submitted from the MD are suitable for monitoring (an MD plug-in to support monitoring)
            • The demo shows the integration of quite a number of X# elements (MD, applications, and G-PM)
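
For context, a ganglia gmond daemon publishes its cluster state as XML on TCP port 8649 by default, so a postprocessing consumer can read it directly. A minimal sketch, with a hypothetical host name:

    import socket
    import xml.etree.ElementTree as ET

    # Hypothetical gmond host; 8649 is ganglia's default XML port.
    HOST, PORT = "gmond.example.org", 8649

    # gmond dumps the full XML document and closes the connection.
    chunks = []
    with socket.create_connection((HOST, PORT)) as s:
        while True:
            data = s.recv(8192)
            if not data:
                break
            chunks.append(data)

    # Walk the CLUSTER/HOST/METRIC tree and print one sample metric per host.
    root = ET.fromstring(b"".join(chunks))
    for host in root.iter("HOST"):
        for metric in host.iter("METRIC"):
            if metric.get("NAME") == "load_one":
                print(host.get("NAME"), metric.get("VAL"))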

  16. Integration between Application and Services & Tools Prototypes
      • WP3
        • Santa-G:
          • RPMs available
          • Working on a demo with the HEP application and the snort tool
          • Santa-G works fine with EDG 2.x R-GMA
        • JIMS:
          • It is working on a local cluster at Cyfronet
          • RPMs finished but….
          • Using a commercial web-service tool
          • The input to performance predictions has been discussed
        • Task 3.4:
          • Functionality exploited with LCG-1 and EDG 2.x
          • They are using four storage systems (Cyfronet, Lisbon, Slovakia, and Karlsruhe)
          • Working on integration with the MD
          • Demo: estimation of the real access cost to these storage systems (see the timing sketch below)
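
One crude way to estimate such access costs is to time a fixed-size GridFTP transfer from each storage element. A sketch using globus-url-copy, with hypothetical URLs (a valid grid proxy is assumed):

    import subprocess, time

    # Hypothetical probe files on two of the Task 3.4 storage elements;
    # timing a fixed-size transfer gives a rough per-site access-cost figure.
    sources = [
        "gsiftp://se.cyf-kr.edu.pl/data/probe.bin",
        "gsiftp://se.lip.pt/data/probe.bin",
    ]

    for url in sources:
        t0 = time.time()
        subprocess.run(["globus-url-copy", url, "file:///tmp/probe.bin"],
                       check=True)
        print("%-45s %.2f s" % (url, time.time() - t0))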

  17. Agenda
      • 9:00 Integration Status
        • Integration setup at Cyprus (Wei, 5')
        • General description of integration work (Santi, 10')
        • Testbed support for integration (Jorge, 10')
        • Repository and tools (Marcus, 10')
      • Demos
        • 9:30 Flooding (Jan and Bettina, 60'): MARMOT + OCM-G + G-PM + discussion
      • 10:30 Coffee Break
      • Demos
        • 11:00 Medical applications (Alfredo, 30')
        • 11:30 HEP (Celso, 15')
        • 11:45 Meteo part (Carlos, 15')
        • 12:30 Interactivity (Stefano, 30')
      • 13:00-14:00 Lunch
      • Demos
        • 14:00 GridBenchmarking + JIMS (George, 25')
        • 14:25 Data Access Optimization (Lukasz, 15')
        • 14:40 Santa-G (Stuart, 15')
        • 14:55 Postprocessing (Adam, 15')
        • 15:10 MD + Portal (20')
        • 15:30 Help Desk (Farida, 15')
        • 15:45 Discussion
          • IST Demo
          • Review
