
GRID-CSIC: The GRID Initiative for e-Science




  1. GRID-CSIC: The GRID Initiative for e-Science. Dr. José F. SALT CAIROLS Jose.Salt@ific.uv.es Instituto de Física Corpuscular (Centro Mixto CSIC-UV), VALENCIA

  2. OVERVIEW 1.- Introduction 2.- The GRID-CSIC Initiative 3.- Infrastructure Indicators and objectives 4.- Interoperability & Sustainability 5.- Access to the infrastructure 6.- Applications deployed at IFIC (GRID-CSIC) 7.- Conclusions & perspectives

  3. 1.- INTRODUCTION CERN: where the WEB was born. Frames from the film ‘2001: A Space Odyssey’ (Director: Stanley Kubrick, 1968). e-Science GRID-CSIC

  4. Origin: • CSIC has experience in GRID projects • LHC Computing GRID projects (ATLAS and CMS Tier-2, High Energy National Program) • European projects (DATAGRID, CROSSGRID, EGEE, I2G, DORII, …) • Other national or regional projects (IRISGRID, Spanish Network of e-Science, …) • Collaboration area with CNRS (France) • Opportunity • Strong internal support (CSIC VORI-VICyT-VRI) • Objective • Set up an advanced distributed computing infrastructure to support research projects requiring resources beyond the possibilities of a single user or research group • Support multidisciplinary projects and, in particular, those where several centers have to collaborate in the simulation, analysis, processing and distribution of, or access to, large data volumes • e-Science examples: • Particle physics experiments (ATLAS, CMS, ILC, CDF, etc.) • Theoretical physics: SUSY models, Lattice, etc. • Space missions (XMM, Planck, …) • Astronomical observations • Climate modelling • Computational chemistry • Biocomputing, etc.

  5. What is e-Science? E-Science (enhanced Science) refers to scientific activities that are carried out using resources distributed across the Internet. “E-Science is about global collaboration in key areas of science, and the next generation of infrastructure that will enable it” — John Taylor, Director General of Research Councils, Office of Science and Technology. * The use of distributed resources is both a necessity and an added value * More effective when associated with a global collaboration than at the individual level. E-Science is supported by e-Infrastructures: a new generation of research infrastructures based on information and communication technologies


  7. 2.- The GRID-CSIC Initiative • The project is based on the use of GRID technologies to share and access geographically distributed resources in a transparent way through the use of a middleware (MW). This allows interoperability with other European GRID infrastructures, such as those of the EGEE and Interactive European GRID (i2g) projects (the latter coordinated by CSIC). • The infrastructure will be shared within the IBERGRID initiative being developed with Portugal, and with the Institut des Grilles infrastructure (CNRS, France) through collaboration agreements. • The project aims to deploy a total power of around 8,000 cores and an on-line storage capacity of around 1,000 TB (1 PB) over the period 2008-2011. • The infrastructure is being deployed in 3 phases: • During the first year, the pilot phase included 3 centers with experience in GRID projects (IFCA, IFIC and IAA) • A second phase extends the project, in 2009, to Barcelona (ICMAB), Palma de Mallorca (IFISC) and Madrid (CTI-CFMAC)

  8. Current Status: • EQUIPMENT • Computing Element (CE): • - Sequential computing: 7 x 16 blades x 8 cores = 848 cores • - Parallel computing (MPI & Infiniband): 3 x 16 blades x 8 cores = 384 cores • Storage Element (SE): • - Disk server in LUSTRE: 182 TB • Installed software: • UI05: Scientific Linux CERN 4 (SLC4) • WN5: Scientific Linux CERN 5 (SLC5) • Hardware: DELL + HP, SUN • HUMAN RESOURCES: • Participation of the IFIC Computer Service (J. Sánchez, A. Fernández) • 2 contracted staff: • Victor Méndez: operation and GRID services • Carlos Escobar: portability/migration of applications • Contribution of the rest of the group

  9. Current Status: • Computing: • IBM x3850 M2 servers with 4th-generation X-Architecture technology, which can scale from 4 up to 16 processors (Intel Quad Core Xeon X7350) and up to 1 TB of RAM in the 16-processor configuration • 672 cores (42 nodes) • Storage: • 20 cabinets, 240 TB, DELL • Communications: • Infiniband + Gigabit Ethernet • Computing: • - 182 IBM blades (dual quad-core: 1456 cores) • - 70 + 14 with Infiniband • - Connections to the network: 3 x 10G • Storage: • - Rack with SATA disks (approx. 175 TB) • - 4 GPFS servers

  10. Project Structure • The project is structured in 3 main areas: • Infrastructure • Setup and operation of the computing resources and integration in the GRID framework • Development and Application Support • Support for the integration of the applications and specific MW adaptations • Coordination • Management, internal organization and dissemination • Initial teams:

  11. 3.- Infrastructure Indicators and objectives • 3 nodes in CPDs at IFIC, IFCA and IAA • In total >4000 cores, approx. 500 TB (investment ~2M Eur) • Interconnected at 2.5 Gbps • Users: • Local: IFCA/IFIC/IAA (approx. 80 users) • Institutional: CSIC (approx. 110 users) • National: National Grid Initiative (approx. 50 users) • European: EGEE and related projects & EGI (approx. 200 users) • Execution service: • Sequential batch • Parallel MPI • Interactive jobs • Technology: • Parallel computing (MPI) in the GRID framework • Interactive and parallel job broker • User-friendly interface to the GRID framework (MD/RAS) with support for workflows, interactivity and visualization • Large data transfer and storage • AAA (Authentication, Authorization and Accounting) services in the GRID
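The execution modes above (sequential batch, parallel MPI, interactive) map onto job descriptions written in the JDL used by EGEE/i2g-style gLite middleware. A minimal sketch of an MPI job description, assuming an MPI-enabled site; the executable and file names are hypothetical:

```
// Hypothetical JDL sketch for a parallel MPI job (gLite/i2g-style middleware)
Type        = "Job";
JobType     = "MPICH";        // request an MPI-capable execution environment
NodeNumber  = 16;             // number of cores requested
Executable  = "lattice_run";  // hypothetical user binary
Arguments   = "input.dat";
StdOutput   = "run.out";
StdError    = "run.err";
InputSandbox  = {"lattice_run", "input.dat"};
OutputSandbox = {"run.out", "run.err"};
```

A sequential batch job uses the same structure with JobType = "Normal" and no NodeNumber attribute.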

  12. Current Status of the new sites: 44 TB

  13. Objectives • To prepare, manage and maintain an infrastructure of distributed computing to give support to multidisciplinary and collaborative research projects requiring these resources and performing simulation, analysis, access, processing and distribution of large amounts of data • To consolidate the VOs presently active, incorporating new research groups interested in these technologies • To promote new areas, VOs and applications to support the needs of researchers and of the country, and to address social needs with the support and participation of the administration • To normalize the methodologies of creation, deployment and exploitation of applications in e-Science • To promote the creation of general-purpose software to be applied in several applications, such as data repositories, user interfaces, etc.

  14. Operations • Minimum service levels will be satisfied according to service agreements with the different VOs, keeping in mind that GRID-CSIC operates in attended mode 9 am-8 pm, Monday to Friday on working days, and in unattended mode, with requests through alerts, at any other time • As an example of such an agreement, corresponding to the LCG project, the maximum delay in responding to operational problems during working hours will be 2 hours, and the average availability measured on an annual basis will be above 95%. • The total number of processor cores after the first phase (2011) will be above 4000, and the raw space above 400 TB. Training • GRID-CSIC has a very good record of attracting students to courses like the postgraduate course GRID & e-Science (alternately at IFCA and IFIC). • At the IFIC node, we give tutorials for the ATLAS Tier-2 users. • At least the current involvement in postgraduate courses will be kept, meaning a minimum of 90 hours/year
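The 95% annual availability target can be turned into a concrete downtime budget. A small sketch of the arithmetic (the function names are ours):

```python
# Translate an annual availability target into an allowed downtime budget.
HOURS_PER_YEAR = 365 * 24  # 8760


def allowed_downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year permitted at a given availability (%)."""
    return HOURS_PER_YEAR * (100 - availability_pct) / 100


def availability_pct(downtime_hours: float) -> float:
    """Measured annual availability (%) given accumulated downtime."""
    return 100 * (1 - downtime_hours / HOURS_PER_YEAR)


print(allowed_downtime_hours(95))  # 438.0 hours/year
```

So staying above the 95% target means keeping total outages under roughly 438 hours per year, attended and unattended periods combined.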

  15. Outreach • GRID-CSIC has been directly present at the largest GRID events in the last months, like EGEE’09 (Barcelona), User Forum 2010 (Uppsala, Sweden), and several others in different countries in the framework of existing projects • This level has been maintained, and improved with France (through CNRS) and Portugal (through LIP) • At the national level, the NGI initiative has 2 large annual meetings, and in addition smaller focused workshops/meetings are organized • At CSIC level, dissemination will be more structured, by analyzing computing requests in projects and defining the suitability of GRID-CSIC to satisfy them • Finally, at the general dissemination level, personnel in GRID-CSIC are typically very active organizing presentations, talks, and even open-day sessions oriented to young students. Participation in GRID & e-Science Initiatives: • The main objective is to continue the involvement in European projects, like currently in EGEE, DORII or EUFORIA within the ICT area • Interplay with medical physics projects: PARTNER, ENVISION, etc.: good sharing of knowledge: software, GRID computing, software programming, etc. • As a transition from EGEE towards an EGI (European GRID Initiative): now that EGI has started, the involvement in EGI will be reinforced • CSIC has contributed to the proposal for a site for EGI.org in Spain • We are collaborating with the winning bid for EGI.org (in AMSTERDAM) through EGI.

  16. 4.- Interoperability and Sustainability • The GRID-CSIC infrastructure will keep interoperability with other existing GRID infrastructures and, in particular, with the EGI mainline, other sectorial projects such as DORII, EUFORIA, etc., and also the WLCG (Worldwide LHC Computing GRID), which integrates the ATLAS, CMS and LHCb Tier centers • CSIC also has a relevant role in the National GRID Initiative within the Spanish e-Science Network, with Dr. Isabel Campos as coordinator of the GRID infrastructure • GRID-CSIC will promote an active role for CSIC in the European GRID Infrastructure (EGI) • The current GRID infrastructure is structured through the voluntary contribution of resources from several groups: • Those resources are independent of other GRID infrastructures and dedicated to the Spanish Network for e-Science • Pursuing a reliable and stable infrastructure • SLAs (Service Level Agreements) will be signed • Access granted through the Application Evaluation Committee • The GRID-CSIC sites are included in the effort of 21 more centers which have declared interest • Some scientific disciplines need a more complex interoperability: • Complex workflows between GRID and High Performance Computing systems • Job submission • Resource allocation • Data retrieval

  17. 5.- Access to the Infrastructure • Based on service agreements with Virtual Organizations (VOs) • Users from research communities organize themselves into VOs • A service agreement is established with the VO • The GRID MW is able to handle: • Authentication • Authorization • Accounting • How are the service agreements set? • (Real) example • They are encouraged to apply • Advantages: • More flexible • Can handle peaks and special demands (interactivity) • Assessment and external committees: • Local Access Committee: 5 people representing IFIC (J. Sánchez, F. Botella, J. Nieves, S. González and J. Salt) • Global Access Committee: coordinators of the GRID-CSIC sites
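In gLite-based middleware such as that used by EGEE, the AAA steps above map onto a short command sequence: a VOMS proxy certificate authenticates the user and carries the VO attributes used for authorization and accounting. An illustrative sketch (the VO name and file names are hypothetical placeholders, and site setup varies):

```
# Authenticate and obtain VO attributes (authentication + authorization)
voms-proxy-init --voms grid-csic.example --valid 24:00

# Submit a job described in JDL; resource usage is logged for accounting
glite-wms-job-submit -a -o job.ids myjob.jdl

# Follow the job and retrieve its output
glite-wms-job-status -i job.ids
glite-wms-job-output -i job.ids
```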

  18. WEB application to apply for access to GRID-CSIC • SOFTWARE COMPATIBILITY • Linux is OK • Free software • Commercial software: what are the requirements to install and deploy a given application? • Mathematica

  19. Application: AGATA PSA on the GRID • Response of the segmented Ge detector (AGATA): Pulse Shape Analysis (PSA) to decompose the recorded waves (PSA algorithm), yielding the identified interaction (x, y, z, E, t) • Reconstruction of tracks: g-ray tracking algorithm, producing the reconstructed g-ray • Workflow (adapted to run on the GRID): data transfer at 10-12 MB/s among Tape INFN CNAF, Tape IFIC, Disk IFIC, the CPU cluster IFIC GRID-CSIC, the CPU cluster FZK and the User Interface at IFIC, organized as tasks 1-4 • GRID resources used: 50 cores (2 GB per core, SLC5), 2.5 TB disk storage (Lustre), additional storage of 0.6 TB on tape (Castor), a few EGEE clusters (CNAF, FZK, Manchester) • Tests and results: Task-1: 14 jobs, 0.6 TB data; Task-2: 40 jobs, 2.0 TB data; Task-3: 14 jobs, 0.6 TB data; Task-4: 49 jobs, 2.1 TB data

  20. 6.- Applications deployed at IFIC GRID CSIC

  21. Computing in Medical Physics • Framework: local / international collaboration PARTNER • Hadron-therapy treatment planning by means of Monte Carlo simulation (GEANT4) • Reconstruction of PET images • Implementation and optimization of reconstruction algorithms • Calculation of the system’s response for different PET scanners • Compensation of the physical phenomena of image degradation • Description and calculation of physical models • Development of correction algorithms • Applications: Geant4, GATE and FLUKA • Perspectives: sequential and parallel calculations (MPI) • Proposal: application / pilot • VO: own (or IFIC) • Involved people: G. Amorós, J. Ors, F. Roman (CERN-IFIC), M. Rafecas, J. Cabello, P. Selovi • Medical physics use case (J. Cabello): • - 400 sequential jobs submitted, with an execution time of 20 h each • - 5 GB files generated by the program in the GRID • - Summary of the results in 44 MB files
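Campaigns like the one above are easy to size up front. A sketch of the arithmetic, using the quoted figures (400 jobs of about 20 h each; the function names and the 100-slot figure are ours, for illustration):

```python
import math


def campaign_cpu_hours(n_jobs: int, hours_per_job: float) -> float:
    """Total CPU time consumed by a campaign of independent jobs."""
    return n_jobs * hours_per_job


def campaign_wall_hours(n_jobs: int, hours_per_job: float, n_slots: int) -> float:
    """Elapsed time if jobs run in waves of n_slots concurrent slots."""
    return math.ceil(n_jobs / n_slots) * hours_per_job


print(campaign_cpu_hours(400, 20))        # 8000 CPU hours
print(campaign_wall_hours(400, 20, 100))  # 80 hours elapsed on 100 slots
```

This is the core argument for the GRID: 8000 CPU hours is months of work on a single machine, but days when spread over the shared infrastructure.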

  22. International Linear Collider • Framework: international collaboration ILC • Applications: • Simulation: Mokka (Geant4) • Reconstruction: Marlin and others • Libraries: CLHEP, etc. (a priori, standard) • Analysis: ROOT, etc. (PROOF for GRID-CSIC?) • Perspectives: sequential computing • Perspectives: international VO (‘ATLAS-like’), with software packages managed in a centralized way • Proposal: application • VO: ILC • Involved people: M. Vos, J. Fuster, A. Faus, C. Lacasta

  23. High Energy Physics: Theory, Phenomenology and Astrophysics • Framework: local • Activities: • - Electroweak interactions: parameters and structure • - Flavour physics and CP violation • - Effective hadronic theory and weak transitions • - Perturbative and non-perturbative QCD • - Neutrino physics, astrophysics and nuclear matter • - Supersymmetry, Grand Unification and String Theory • Applications: • Numerical: software in C++ and libraries: CLHEP, etc. • Mathematica: analytical calculations to compute cross sections at 1 or 2 loops in Beyond the Standard Model theories • Perspectives: sequential computing (batch); execution time: approx. 2 days. Positive evaluation of the usage of GRID-CSIC! • Proposal: pilot • VO: IFIC • Involved people: M. Nebot, F. Botella

  24. Research and Development of Detectors for the sLHC • Framework: international collaboration RD50 • Development of Si microstrip detectors for ATLAS at the sLHC • Studies of the performance of the SCT detectors during a possible beam loss (very high charge density) • Simulations take an execution time of a few days • Perspectives: sequential computing (batch and interactive) • Proposal: pilot • VO: IFIC? (RD50?) • Involved people: M. Miñano, U. Soldevila, R. Marco, C. García, S. Martí, C. Lacasta, P. Bernabeu; collaboration with CNM (Barcelona) • Application: Synopsys TCAD: simulation of processes in semiconductors, modelling and operation of devices, and characterization for the development and production of these types of technologies

  25. Theoretical Physics: Lattice • Framework: UV group • Lattice • Software: MPI • Software support: within the group • Perspectives: calculations in MPI. The MPI infrastructure is currently in the test phase, and the Lattice group has provided a small program for us to run tests with. To try to give support to non-standard software (ScaLAPACK) • VO: IFIC • Involved people: V. Gimenez, N. Carrasco • Some activity details: • - # of jobs: 400 • - execution time: 12-14 h • - used CPU: 7000 h • - used disk capacity: 5.5 TB (in Lustre)
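A common pattern behind such MPI lattice codes is a domain decomposition: the lattice sites are split into near-equal contiguous chunks, one per MPI rank. A minimal Python sketch of the decomposition logic alone (no MPI library needed to illustrate it; the function name and lattice size are ours):

```python
def partition_sites(n_sites: int, n_ranks: int) -> list[range]:
    """Split n_sites lattice sites into n_ranks contiguous, near-equal
    chunks, as an MPI domain decomposition would."""
    base, extra = divmod(n_sites, n_ranks)
    chunks, start = [], 0
    for rank in range(n_ranks):
        size = base + (1 if rank < extra else 0)  # first `extra` ranks get one more site
        chunks.append(range(start, start + size))
        start += size
    return chunks


# Example: a 16^4 lattice distributed over 8 ranks.
chunks = partition_sites(16**4, 8)
print(len(chunks[0]))  # 8192 sites per rank
```

Each rank then sweeps only its own range of sites, exchanging boundary data with neighbouring ranks; that boundary exchange is where the Infiniband interconnect mentioned earlier matters.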

  26. 7.- CONCLUSIONS & PERSPECTIVES • 1.- New sites want to join the GRID-CSIC effort • - the 7th Spanish center is going to join GRID-CSIC: the IIIA (Barcelona) • 2.- To verify whether GRID-CSIC is profitable from the scientific point of view • - a minimum of 8 million hours/year • 3.- In this crisis period, it is a good achievement to maintain the infrastructure with manpower support to bring the applications to the GRID • 4.- Transfer of technology • After a rough start, the GRID-CSIC initiative is on track to achieve the objectives set for the first phase (2008-2011): • Appropriate use of resources for scientific applications • Important role in the National and European GRID Initiatives • 3 new sites have joined the initiative according to the scheduled planning • Strong interaction between users and specialists • GRID-CSIC provides different speeds of integration in the GRID • To hide the complexity of the GRID from the users

  27. Backup Slides

  28. Current Status of the sites: IFCA, ICMAB, CTI, IFISC, IFIC, IAA

  29. Planning

  30. The total number of processor cores after the first phase (2010) will be above 4000, and the raw space above 400 TB. By the end of 2010, with the 3 new nodes (at Madrid and Barcelona), such capacity will be increased by 50%, and an additional 50% will come after installation of two additional nodes.
