GRISU' Open Day su Scienza ed Ingegneria dei Materiali e Grid, 2009-04-03, Napoli
ENEA Grid and CRESCO in GRISU': a tool for materials science
• Outline:
• ENEA Grid / CRESCO infrastructures
• Computational capability
• Some examples of material science codes running on ENEA
Presented by: S. Raia, S. Migliori
Collaborators: M. Celino, M. Gusso, P. Morvillo, A. Marabotti, L. Cavallo, F. Ragone.
ENEA Grid infrastructures [1]
• ENEA
• - 12 research centres in Italy
• - A central computer and network service (INFO)
• - 6 computer centres: Casaccia, Frascati, Bologna, Trisaia, Portici, Brindisi
• - Multiplatform resources for serial & parallel computation and graphical post-processing
• - Other computer resources in ENEA: departments & individuals
Salvatore Raia | GRISU' Open Day su Scienza ed Ingegneria dei Materiali e Grid | 2009-04-03: Napoli
ENEA Grid infrastructures [2]
• Main features:
• - Access from any kind of connection
• - Data sharing across worldwide areas (AFS geographical file system)
• - Access to the data from any kind of digital client device
• - Running any kind of program
• - Access to national and international grids
ENEA manages the local networks (LANs) at each computational site, while the computational centres (WAN) are interconnected via the "Consortium GARR" network.
ENEA Grid computational resources [4]
ENEA Grid architecture & resources integration [3]
[Diagram: grid layer model (Application, Collective, Resource, Connectivity, Fabric) connecting users, software catalogs, computers, colleagues & 3D data archives]
* A choice of mature components for reliability and ease of support and maintenance:
- Distributed file system: AFS
- Job and resources manager: LSF Multicluster
- Unified GUI access: Java and Citrix technologies
- Quality monitoring system: Patrol
- Licence servers
* Integration with department and individual resources:
- Distributed file system: AFS
- Licence pool sharing
* Possible integration with other institutions
ENEA-GRID: www.afs.enea.it/project/eneagrid
CRESCO (Portici site) computational resources [5]
Section 1 (Large Memory): 42 SMP IBM x3850-M2 nodes with 4 Quad-Core Xeon Tigerton E7330 (32/64 GByte RAM, 2.4 GHz/1066 MHz/6 MB L2), for a total of 672 Intel Tigerton cores.
Section 2 (High Parallelism): 256 IBM HS21 blade nodes with 2 Quad-Core Xeon Clovertown E5345 (2.33 GHz/1333 MHz/8 MB L2), 16 GB RAM each, for a total of 2048 Intel Clovertown cores.
Section 3 (Special): 4 IBM QS21 blade nodes with 2 Cell BE processors at 3.2 GHz each; 6 IBM x3755 nodes (8 AMD 8222 cores) with a VIRTEX5 FPGA board; 4 IBM x3755 nodes (8 AMD 8222 cores) with NVIDIA Quadro FX 4500 X2 boards; 4 Windows nodes, 8 cores, 16 GByte RAM.
• 35 service nodes (front-end, installation, AFS, … servers)
Networks: InfiniBand 4X DDR interconnect (IB); dual 1 Gbit interconnection (2000 Mbit/s) plus a 1 Gbit management network; 4x10 Gbit/s link from the Portici LAN to GARR (WAN).
Storage: GPFS servers (4 IBM 3650 nodes) with a high-speed disk system (2 GByte/s, 160 TByte IBM/DDN 9550, Fibre Channel); backup system: 300 TByte IBM TS3500 tape library with 4 drives, backup servers (3 IBM 3650 nodes).
ENEA Grid / CRESCO capability for material science codes:
• Multicore platforms (CRESCO Linux, SP AIX): 8, 16, 32, 48-core SMP nodes
• GPFS / InfiniBand for highly parallel codes
• Hybrid parallelism (CRESCO SMP nodes with IB interconnections)
• Distributed parallel jobs across the WAN or heterogeneous platforms
• (parallel job arrays, via the AFS 'sysname' mechanism)
• NVIDIA / Cell processor / FPGA accelerators (…next)
Some codes running in the ENEA environment: CPMD, Gromacs, PC GAMESS, Gaussian, … others: Abinit, Amber, etc.
Gromacs
GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions, but it can also be used for research on non-biological systems, e.g. polymers.
• Dimeric protein + water + ions: ~77000 atoms; simulated timescale: 5 ns; run on 16 cores on CRESCO1; time required: 40 h.
• Peptide + water + ions: ~2500 atoms; simulated timescale: 40 ns; time required: 24 h (on an 8-core SMP Intel Xeon).
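The two timings above can be compared on a common footing by converting them into simulated nanoseconds per day of wall-clock time, the usual MD throughput metric; a minimal sketch using only the figures reported on this slide:

```python
def ns_per_day(simulated_ns, wall_hours):
    """Simulated nanoseconds per day of wall-clock time."""
    return simulated_ns / wall_hours * 24.0

# Dimeric protein (~77000 atoms): 5 ns in 40 h on 16 cores
print(ns_per_day(5, 40))   # 3.0 ns/day
# Peptide (~2500 atoms): 40 ns in 24 h on 8 cores
print(ns_per_day(40, 24))  # 40.0 ns/day
```

As expected, the much smaller peptide system sustains roughly an order of magnitude more simulated time per day.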
PC GAMESS / Firefly
PC GAMESS/Firefly: a computational chemistry code for ab initio and DFT calculations, developed for high-performance calculation on Intel-compatible processors (x86, AMD64, and EM64T). http://classic.chem.msu.su/gran/gamess/index.html
Used on CRESCO for studying the electronic properties and absorption spectra of molecules in the photovoltaic field.
On CRESCO: performance improvement of 370% (16 cores) compared to runs on a quad-core desktop PC.
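Reading the "370% improvement" above as a 3.7x speedup of the 16-core CRESCO run over the 4-core desktop (an interpretation, not stated explicitly on the slide), the implied parallel efficiency can be sketched as:

```python
def parallel_efficiency(speedup, ref_cores, par_cores):
    """Measured speedup divided by the ideal core-count ratio."""
    return speedup / (par_cores / ref_cores)

# Assumption: "370% improvement" means a 3.7x speedup, 4 -> 16 cores.
print(parallel_efficiency(3.7, 4, 16))  # 0.925, i.e. ~92% efficiency
```

An efficiency near 1.0 would mean the code scales almost ideally over this core range; differences in clock speed and memory between the desktop and CRESCO nodes also contribute, so this is only a rough indicator.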
Gaussian 03: an electronic structure program. Starting from the basic laws of quantum mechanics, Gaussian is used to predict the energies, molecular structures, and vibrational frequencies of molecular systems, along with numerous molecular properties derived from these basic computation types. It is used to study molecules and reactions under a wide range of conditions, including both stable species and compounds which are difficult or impossible to observe experimentally.
• On CRESCO it is used to study protein folding:
• - On shared memory
• - Embarrassingly parallel: an array of parallel jobs on 8-core SMP nodes.
Parallel_job[1] (8/16 MPI procs) ← Inp.[1]
Parallel_job[2] (8/16 MPI procs) ← Inp.[2]
…
Parallel_job[N] (8/16 MPI procs) ← Inp.[N]
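The job-array pattern above, one independent input per job with no communication between jobs, can be sketched generically; the input file names and the `submit` placeholder are hypothetical (the real setup would submit each input through the cluster's batch system, e.g. LSF):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical Gaussian input files, one per independent job.
INPUTS = [f"conf_{i}.com" for i in range(4)]

def submit(inp):
    # Placeholder for a batch submission; each input is independent
    # ("embarrassingly parallel"), so jobs need no communication.
    return f"submitted {inp}"

# Dispatch the whole array; order of results matches the input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(submit, INPUTS))

print(results[0])    # submitted conf_0.com
print(len(results))  # 4
```

Because the jobs share nothing, the aggregate throughput scales linearly with the number of nodes available, which is what makes this mode attractive for conformational scans.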
CPMD: Car-Parrinello Molecular Dynamics
CPMD is a plane wave/pseudopotential implementation of Density Functional Theory, particularly designed for ab initio molecular dynamics.
Extended tests in order to:
1 - Assess scaling performance up to 1024 cores
2 - Exploit dual-level parallelism
3 - Parallelize the outer loop: TASKGROUPS
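A standard lens for scaling tests like those above is Amdahl's law, which bounds the speedup by the code's serial fraction; this is a generic illustration, not a CPMD measurement:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup under Amdahl's law for a given parallel fraction."""
    f = parallel_fraction
    return 1.0 / ((1.0 - f) + f / cores)

# Even a 99.9%-parallel code falls well short of ideal at 1024 cores:
print(round(amdahl_speedup(0.999, 1024), 1))  # roughly 506x, not 1024x
```

This is why strategies like dual-level parallelism and TASKGROUPS matter at 1024 cores: they attack the residual serial and communication-bound portions that otherwise dominate the runtime.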
Conclusions
… Some example codes running in the ENEA environment: CPMD, Gromacs, PC GAMESS, Gaussian
References in ENEA …