
  1. Overview of Software Engineering Projects within the Scientific Computing Department ILIAN TODOROV Computational Chemistry Group

  2. DL_POLY Materials modelling at equilibrium. Classical Molecular Dynamics suite developed by Ilian Todorov & Bill Smith. Supported by CCP5 (EPSRC & NERC). Cost-free to academic researchers; available commercially. • Full-featured, comprehensive force field – SPME electrostatics, instant polarisation, vdW, metals, Tersoff, 3- & 4-body and bio-chemical interactions • Particle dynamics, constraint dynamics, rigid-body dynamics. 2012 statistics: 454 citations, 1,650 downloads, 2,000-user mail-list. K. Trachenko et al., J. Phys.: Condens. Matter 25 (2013) 125402 (7pp)
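
  A minimal, illustrative sketch of the velocity-Verlet step at the heart of any classical MD engine of this kind (generic Python, not DL_POLY source; compute_forces and the arrays are placeholders):

    import numpy as np

    def velocity_verlet_step(r, v, f, m, dt, compute_forces):
        """One velocity-Verlet step. r, v, f are (N, 3) arrays, m is the (N,) mass array;
        compute_forces(r) must return the (N, 3) force array at the new positions."""
        a = f / m[:, None]
        r_new = r + v * dt + 0.5 * a * dt**2                 # advance positions
        f_new = compute_forces(r_new)                        # forces at new positions
        v_new = v + 0.5 * (a + f_new / m[:, None]) * dt      # advance velocities
        return r_new, v_new, f_new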

  3. Features & Science • MPI-based parallelisation • OpenMP hybridisation on the way • CUDA port available • 3D FFT (DaFT) implemented within the domain decomposition • Parallel I/O (read & write) • ASCII and netCDF formats • DL_FIELD – protonated PDB to DL_POLY force-field format converter • Two-temperature thermostat • Defect detection and variable timestep. Science highlights: MOF modes of vibration; gaseous permeability and retention in MOFs for CO2 and N2; radiation damage in metals & ceramics; advanced implicit solvent ABNP2; GABA_A receptor ion-channel opening. 2012 downloads: EU-UK – 19.0%, China – 15.0%, USA – 14.3%, UK – 14.0%
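
  The domain decomposition mentioned above assigns each MPI rank a rectangular sub-box and hands every particle to the rank that owns its position. A small, generic sketch of that mapping (illustrative only, not DL_POLY source):

    import numpy as np

    def owning_rank(positions, box, grid):
        """Map particle positions (N, 3) in an orthorhombic box (3,) to ranks on a
        (Px, Py, Pz) grid of spatial domains."""
        grid = np.asarray(grid)
        cell = np.asarray(box) / grid                    # edge lengths of one domain
        idx = np.floor(positions / cell).astype(int)     # 3D domain index per particle
        idx = np.clip(idx, 0, grid - 1)                  # guard particles on the upper box edge
        return (idx[:, 0] * grid[1] + idx[:, 1]) * grid[2] + idx[:, 2]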

  4. HPC Parallel Scaling – strong scaling

  5. HPC Parallel Scaling – weak scaling

  6. DL_FIELD Force-field generator developed by Chin Yong. Example output: 4,382 atoms; 19,400 two-body, 7,993 three-body, 13,000 four-body and 730 vdW terms. Force-field schemes: (1) CHARMM – proteins, saccharides, some lipids, organic molecules; (2) AMBER – proteins, Glycam – sugars, glycans; (3) OPLSAA – proteins; (4) PCFF – small organic molecules, organic polymers; (5) DREIDING – general force field for organic and other covalent molecules (united atom, CHARMM19). 2012 statistics: 350 downloads, 400-user mail-list

  7. DL_MESO Mesoscale Modelling Suite developed by Michael Seaton & Bill Smith Supported by CCP5 (EPSRC) Cost-free to academic researchers. Available commercially. • Lattice Boltzmann Equation (LBE) & Dissipative Particle Dynamics (DPD) methods • Serial and highly parallelized (domain decomposed) codes in Fortran90 and C++ • Bridges gap between atomistic (e.g. classical MD) and continuum methods Statistics and information • 128 downloads in 2012 (28% EU-UK, 20% China, 15% USA, 11% UK) • Currently 475 users on mailing list

  8. Features & Science: Lattice Boltzmann Equation • Grid-based solution of the Boltzmann equation: emergent Navier-Stokes behaviour • Collision schemes for application of fluid viscosity: range from simplicity to greater numerical stability • Complex boundary conditions possible using simple procedures • Plug-and-play nature of additional physics: • Multiple phase/fluid algorithms based on applying interfacial tensions or equations of state • Mass/heat transfers – can be applied with reaction kinetics • Future features: immersed boundary method, CUDA port
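
  For context, the single-relaxation-time (BGK) update that such grid-based LBE schemes build on is the standard textbook form (quoted for orientation, not a DL_MESO-specific expression):

    f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\; t + \Delta t) = f_i(\mathbf{x}, t) - \frac{\Delta t}{\tau}\left[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \right],

  where the f_i are distribution functions along the lattice velocities c_i, the relaxation time tau sets the fluid viscosity, and the macroscopic fields are recovered as rho = sum_i f_i and rho u = sum_i f_i c_i; the Navier-Stokes behaviour mentioned above emerges from this update in the hydrodynamic limit.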

  9. Features & Science: Dissipative Particle Dynamics • Soft potentials for fast equilibration, including many-body DPD (gives vapour/liquid systems) • Bond interactions and Ewald electrostatics with smeared charges • Pairwise thermostats (DPD and alternatives): give correct hydrodynamic behaviour • Barostats for NPT ensembles • Boundary conditions: periodic, Lees-Edwards shear, adsorbing hard walls, frozen bead walls • Future plans: improved scalability for thermostats and electrostatics
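
  For reference, the standard (Groot-Warren) pairwise DPD forces behind the soft potentials and pairwise thermostats listed above are (general background, not the exact DL_MESO definitions):

    \mathbf{F}_{ij} = a_{ij}\left(1 - \frac{r_{ij}}{r_c}\right)\hat{\mathbf{r}}_{ij}
                      \;-\; \gamma\, w^D(r_{ij})\,(\hat{\mathbf{r}}_{ij}\cdot\mathbf{v}_{ij})\,\hat{\mathbf{r}}_{ij}
                      \;+\; \sigma\, w^R(r_{ij})\,\theta_{ij}\,\Delta t^{-1/2}\,\hat{\mathbf{r}}_{ij},

  i.e. conservative + dissipative + random terms, with w^D = (w^R)^2 and \sigma^2 = 2\gamma k_B T. This fluctuation-dissipation condition is what lets the pairwise thermostat sample the correct temperature while conserving momentum, which is why it reproduces correct hydrodynamic behaviour.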

  10. CASTEP DFT materials modelling. Plane-wave basis + pseudopotentials. Full-featured with comprehensive spectroscopy capability. Density-functional materials modelling; project leader – Keith Refson. Mainstay code of the UK Car-Parrinello consortium. Available in the PRC via a commercialization partnership with Accelrys Inc. 125 UK publications in 2012, incl. 15 in high-impact journals; 474 PRC publications in 2012, incl. 5 in high-impact journals. The Nott-300 MOF – S. Yang et al., Nature Chemistry 4, 887 (2012)

  11. HPC Parallel Scaling • MPI-based parallelization • Wavefunction array distributed by basis coefficients (G-vectors) and by k-points • The G-vector distribution requires a 3D FFT, implemented with all-to-all communication (MPI_Alltoall) • 3D FFT optimized using System V shared memory within SMP nodes • New – also distribute by bands (electron states) • Portable, efficient checkpoint I/O using MPI collectives
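
  The all-to-all step is essentially a distributed transpose: each rank exchanges blocks of its local slab so the next 1D FFT direction becomes contiguous in memory. A schematic of that exchange (generic mpi4py sketch, not CASTEP code; the 2D grid and sizes are made up for illustration):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    P = comm.Get_size()

    # Each rank owns a slab of an N x N grid, split along the rows.
    N = 8 * P
    slab = np.random.rand(N // P, N)

    # Reorder the slab's columns into P contiguous blocks, one destined for each rank.
    send = np.ascontiguousarray(slab.reshape(N // P, P, N // P).swapaxes(0, 1))
    recv = np.empty_like(send)

    # Collective block exchange: afterwards each rank holds all N rows of its own
    # N/P columns, ready for the 1D FFTs along the other direction.
    comm.Alltoall(send, recv)
    columns = recv.reshape(N, N // P)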

  12. ONETEP Linear Scaling Parallel DFT Ground state energy up to 2500 atoms in a DNA chain – Skylaris, Haynes, Mostofi & Payne, J. Chem. Phys. 122, 084119 (2005)

  13. ONETEP – TDDFT. Fullerene molecule – good accuracy, with time scaling linearly in the number of excited states needed. The non-linear contribution of the orthogonalisation between multiple excited states is visible

  14. ONETEP – Road Map (timeline, 2005–2015)

  15. ChemShell The QM/MM Modelling Approach project leader - Paul Sherwood • Couple QM (quantum mechanics) and MM (molecular mechanics) approaches • QM treatment of the active site • reacting centre • excited state processes (e.g. spectroscopy) • problem structures (e.g. complex transition metal centre) • Classical MM treatment of environment • enzyme structure • zeolite framework • explicit solvent molecules • bulky organometallic ligands
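
  Such a coupling is usually written in one of two generic forms (standard QM/MM textbook expressions, quoted for context rather than as the exact ChemShell energy functional):

    E^{\mathrm{add}} = E_{\mathrm{QM}}(\mathrm{inner}) + E_{\mathrm{MM}}(\mathrm{outer}) + E_{\mathrm{QM\text{-}MM}},
    \qquad
    E^{\mathrm{sub}} = E_{\mathrm{MM}}(\mathrm{whole}) + E_{\mathrm{QM}}(\mathrm{inner}) - E_{\mathrm{MM}}(\mathrm{inner}),

  where E_QM-MM collects the electrostatic, van der Waals and link-atom terms that couple the quantum active site to its classical environment.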

  16. ChemShell architecture (diagram): Tcl scripts drive integrated routines – DL-FIND (minima and TS search), conical intersection search, global optimisation, molecular dynamics, data management and QM/MM coupling – linking QM codes (GAUSSIAN, TURBOMOLE, GAMESS-UK, MOLPRO, MNDO04, Dalton) with MM codes (CHARMMxx academic, CHARMmxx Accelrys, GROMOS96, DL_POLY, generic force fields, GULP)

  17. ChemShell Modelling heterogeneous catalysis. BASF 1923 high-pressure catalyst: 300 bar, < 300 °C, zinc oxide / chromia. ICI 1965 low-pressure catalyst: 40–110 bar, 200–300 °C, copper oxide / zinc oxide / alumina. (Diagram: CO2, H2O, H2 and CH3OH over the Cu/CuO catalyst active site on a ZnO/Al2O3 support)

  18. ChemShell Zincite (0001) Surface Model (figure labels: bulk-like, island, vacant Zn interstitial surface site; oxygen – red, zinc – light grey) • Stoichiometry adjusted to remove the surface dipole • Reconstruction from MM relaxation

  19. ChemShell Surface Cluster Model – Regions: QM; boundary (pseudopotentials); active MM; inactive MM; terminating fitted charges

  20. ChemShell Cu2+ Ion in Zinc Interstitial Site. Polar surfaces are stabilised by vacant interstitial sites; copper anchors to the support via a Cu n+ ion in the interstitial site. Electronic structure of Cu2+: bulk – ground state d9s0, excited state (Cu+ d10s0, h+); surface – ground state (Cu+ d9s1, h+). Spin density – one hole localised on dx2–y2 and nearest-neighbour oxygen ions. Dahan et al., J. Phys.: Condens. Matter 10 (1998) 2007; S.T. Bromley et al., J. Phys. Chem. B 107 (2003) 7045

  21. ChemShell The interaction with the surface saturates, so the curve flattens as more copper atoms are added. S.A. French, A.A. Sokol, C.R.A. Catlow and P. Sherwood, J. Phys. Chem. C 2008, 112, 7420

  22. ChemShell QM/MM Studies of Enzymes: Xanthine Oxidase – enzyme and active site. S. Metz and W. Thiel, J. Am. Chem. Soc. 131 (2009) 14885

  23. ChemShell QM/MM Studies of Enzymes: Xanthine Oxidase – mechanism and energy profile. S. Metz and W. Thiel, J. Am. Chem. Soc. 131 (2009) 14885

  24. First Principles Theoretical prediction of valence and materials properties in rare-earth compounds. Automated large-scale first-principles calculations (> 6,000 runs). L. Petit et al., Band Theory Group, SC Department. (Figure: predicted valences across the rare-earth series, ranging from 3+/4+ and trivalent 3+ through 3+/2+ to divalent 2+.) Electronic structure determines the physical properties: semiconductor, semimetal, metal, heavy fermion

  25. Code_Saturne CFD on a Grand Scale – EDF lead, Charles Moulinec @ STFC. Technology • Co-located finite volume, arbitrary unstructured meshes, predictor-corrector method • 500,000 lines of code: 49% FORTRAN, 41% C, 10% Python • MPI + OpenMP. Physical modelling • Single-phase laminar and turbulent flows: k-ε, k-ω SST, v2f, RSM, LES • Radiative heat transfer (DOM, P-1) • Combustion of coal, heavy fuel oil, gas (EBU, pdf, LWP) • Electric arc and Joule effect • Lagrangian module for dispersed particle tracking • Compressible flow • ALE method for deformable meshes • Conjugate heat transfer (SYRTHES & 1D) • Specific engineering modules for nuclear waste surface storage and cooling towers • Derived version for atmospheric flows (Mercure_Saturne) • Derived version for Eulerian multiphase flows. Flexibility • Open source • Portability (UNIX, Linux and Mac OS) • GUI (Python TkTix, XML format) • Parallel on distributed-memory machines • Periodic boundaries (parallel, arbitrary interfaces) • Wide range of unstructured meshes with arbitrary interfaces • Code-coupling capabilities (Code_Saturne/Code_Saturne, Code_Saturne/Code_Aster, ...)
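
  A rough outline of the predictor-corrector idea for the incompressible equations (standard fractional-step form, quoted for orientation rather than as the exact Code_Saturne algorithm): first predict a velocity field from the momentum balance with the old pressure,

    \frac{\tilde{\mathbf{u}} - \mathbf{u}^n}{\Delta t} = -(\mathbf{u}^n \cdot \nabla)\mathbf{u}^n + \nu \nabla^2 \tilde{\mathbf{u}} - \frac{1}{\rho}\nabla p^n,

  then solve a pressure-correction (Poisson) equation and project the velocity back onto the divergence-free space,

    \nabla^2 \delta p = \frac{\rho}{\Delta t}\,\nabla \cdot \tilde{\mathbf{u}},
    \qquad
    \mathbf{u}^{n+1} = \tilde{\mathbf{u}} - \frac{\Delta t}{\rho}\nabla \delta p .

  The co-located finite-volume discretisation applies these balances cell by cell on the arbitrary unstructured mesh.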

  26. Code_Saturne General Infrastructure (diagram): GUI (define simulation options, XML) and run-script configuration; Preprocessor (reads meshes, builds the descending connectivity, writes verification output); Kernel (mesh modification, mesh and data setup, mesh partitioning, Navier-Stokes resolution, turbulence, specific physics, user-defined functions, MPI communication between cell domains, checkpoint and restart); post-processing output for visualization and verification

  27. Code_Saturne TEST CASE Large-Eddy Simulations in staggered-distributed tube bundles. Experiment of Simonin and Barcouda. 2-D section: 100,040 cells; 3rd direction: 128 layers -> 13M cells

  28. Code_Saturne Mesh Joining Split the computational domain into N parts and mesh each part independently; the joining may be non-conforming. Time to join 4 × 812M hexahedral-cell meshes (conforming). Time to join 15 × 108M tetrahedral-cell meshes (non-conforming): 23 s (HECToR Phase 2b), 3072 MPI tasks using 4 GiB RAM each

  29. Code_Saturne Mesh Multiplication From a coarse grid, split the cells/elements homogeneously; special treatment is required to preserve the surface description. Time to generate a 26B-cell mesh from a 51M-cell mesh for the tube-bundle case (hexahedral cells only)
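
  To see where such counts come from: uniform refinement of hexahedra splits each cell into 8 children per level, so three levels take roughly 51M cells to 51M × 8³ ≈ 26B. A tiny illustrative calculation (the factor of 8 per level is an assumption about purely uniform splitting, not a statement about the code's internals):

    def refined_cell_count(coarse_cells, levels, children_per_cell=8):
        """Cell count after 'levels' rounds of uniform refinement (hexahedron -> 8 children)."""
        return coarse_cells * children_per_cell ** levels

    print(refined_cell_count(51_000_000, 3))   # 26,112,000,000 -> about 26 billion cells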

  30. Code_Saturne Partitioning Results Test case: 3.2B-cell mesh. For 65,536 cores, ParMETIS needs > 1 GiB of memory, which is not possible on HECToR. SFC Morton is usually faster. Computing the halos requires more time when SFC Morton is used as the partitioner, probably because of the poorer edge-cut quality
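
  A space-filling-curve partitioner of this kind orders cells by their Morton (Z-order) key, obtained by interleaving the bits of integer cell-centre coordinates, then cuts the ordered list into equal chunks. A generic sketch of the idea (illustrative Python, not the Code_Saturne implementation):

    def morton3d(ix, iy, iz, bits=10):
        """Interleave the bits of integer grid coordinates into a Morton (Z-order) key."""
        key = 0
        for b in range(bits):
            key |= ((ix >> b) & 1) << (3 * b)
            key |= ((iy >> b) & 1) << (3 * b + 1)
            key |= ((iz >> b) & 1) << (3 * b + 2)
        return key

    def sfc_partition(cell_coords, n_parts, bits=10):
        """Assign each cell to a part by sorting on its Morton key and slicing evenly."""
        order = sorted(range(len(cell_coords)),
                       key=lambda c: morton3d(*cell_coords[c], bits=bits))
        part = [0] * len(cell_coords)
        chunk = -(-len(order) // n_parts)        # ceiling division
        for pos, c in enumerate(order):
            part[c] = pos // chunk
        return part

  Cells close along the curve tend to be spatially close, which keeps the parts compact, but the edge cut is generally worse than with a graph partitioner such as ParMETIS – consistent with the halo cost noted above.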

  31. Code_Saturne Solving the PDEs (plots: mesh joining and mesh multiplication timings on HECToR, Blue Joule and Jaguar). The 3.2B-cell case is used for comparison on HECToR and Jaguar, where the average CPU time per time-step decreases as a function of the number of cores for both partitioners. A speed-up of 1.46 (resp. 1.14) is still observed for SFC (resp. ParMETIS) on Jaguar when going from 32,768 to 65,536 cores

  32. Code_Saturne OpenMP within MPI 26B cell mesh

  33. Code_Saturne I/O Management Comparison of block-based serial I/O (Ser-IO) and MPI-IO; comparison of the Lustre (Cray) and GPFS (IBM BlueGene/Q) filesystems. Tube bundle, 812M cells. Ser-IO: roughly the same performance on Lustre and GPFS. MPI-IO: 8 to 10 times faster with GPFS. MPI-IO: about 35 minutes to write a 26B-cell mesh file (6 TB)
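
  With collective MPI-IO every rank writes its own contiguous chunk of one shared file at a computed offset, instead of funnelling data through a single writer. A minimal, generic mpi4py sketch (the file name and sizes are made up for illustration; this is not Code_Saturne's I/O layer):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank owns an equal slice of the global array.
    local = np.full(1_000_000, rank, dtype=np.float64)
    offset = rank * local.nbytes                      # byte offset of this rank's slice

    fh = MPI.File.Open(comm, "mesh_data.bin", MPI.MODE_CREATE | MPI.MODE_WRONLY)
    fh.Write_at_all(offset, local)                    # collective write into one shared file
    fh.Close()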

  34. CCPQ led by Martin Plummer

  35. Antimatter Theory • Explicitly correlated Rayleigh-Ritz and generalized Kohn variational methods, respectively, for 'bound' and collisional leptonic wavefunctions (e.g. e+ H2 interactions) • Example: rearrangement in He antihydrogen collisions • Grid integration of leptonic plus nuclear wavefunctions to form scattering matrix elements for rearrangement processes (Ps, Ps- formation) • Motivation: to provide data relevant to the ALPHA project • Landmark experiments forming, cooling and trapping atomic antihydrogen in its ground state for > 1000 s • Currently performing spectroscopic analyses of antihydrogen. ALPHA collaboration: Nature Physics 7 (2011) 558, Nature 483 (2012) 43 (equipment for these analyses is partly designed/provided by the Cockcroft Institute at Daresbury) • Liquid helium is used to cool the experimental environment: He, along with H2, is an important component of the background gas acting as an 'impurity' and destabilizing the antihydrogen • Secondary motivation: the Ps and Ps- products are of scientific interest in themselves
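
  For reference, the Rayleigh-Ritz method named in the first bullet rests on the variational bound (general quantum-mechanics result, quoted for context):

    E_0 \;\le\; \frac{\langle \Psi_{\mathrm{trial}} | \hat{H} | \Psi_{\mathrm{trial}} \rangle}{\langle \Psi_{\mathrm{trial}} | \Psi_{\mathrm{trial}} \rangle},

  so expanding the trial wavefunction in a basis of explicitly correlated functions and minimising over the linear coefficients turns the 'bound' leptonic problem into a generalised matrix eigenvalue problem H c = E S c.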

  36. Rearrangement in He antihydrogen collisions. Jonsell, Armour, Plummer, Liu and Todd, New J. Phys. 14 (2012) 035013 • Elastic collisions are not a good means of cooling antihydrogen because, in the energy range of interest (< 10^-3 au), rearrangement (and, at lower energies, nuclear annihilation) is significant

  37. The R-matrix method • The collisions and UK-RAMP multiphoton work use the R-matrix method • Configuration space is divided into 'inner' and 'outer' regions by a sphere • Inside: all-electron (lepton) calculation, CI, exchange, spherical tensor algebra, Hamiltonian formation and diagonalization (with non-vanishing orbitals on the boundary) • Outside: multipole potentials (from 'inside'), coupled differential equations, propagation to the asymptotic region, possible frame transformations • Inside: energy-independent; outside: energy-dependent
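
  The energy separation in the last bullet is what makes the method efficient: the inner-region eigenpairs (E_k, w_ik) are computed once, and for every collision energy E the R-matrix on the boundary sphere of radius a is assembled cheaply from the standard (Wigner-Eisenbud) form, quoted here for context:

    R_{ij}(E) = \frac{1}{2a} \sum_k \frac{w_{ik}\, w_{jk}}{E_k - E},

  where w_ik is the surface amplitude of inner-region eigenstate k in channel i; the outer-region code then propagates this matrix outwards and matches to asymptotic solutions to extract the scattering matrices.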

  38. Codes: PRMAT (and UKRmol) • Atomic inner-region codes RAD, ANG, HAM recently parallelized from serial or single-node OpenMP to multi-node runs on 100s of cores • Mixed-mode MPI/OpenMP and all-MPI with shared-memory segments • Fortran 2003 objects for shared-memory segments and control of parallelization, with passive RMA and MPI-IO asynchronous I/O (see dCSE reports and CSE Highlights 2012) • Important to make the spherical tensor algebra code understandable • New CCPQ milestones include development of a much more complicated 'double-continuum' code • DL/RAL also support UKRmol (UCL/OU), the molecular electron (positron) collisions packages: UKRmol-in/out • The PRMAT outer-region code PFARM now interfaces with UKRmol • Real-world applications include: astrophysics – stars, interstellar medium (shocks); atmospheres, plasmas (nuclear fusion, laser-produced plasmas, lightning); radiation damage to DNA (electron collisions with DNA bases)

  39. ANG and RAD (figures)

  40. PFARM • Outer-region code PFARM scales to 10,000s of cores; now used with both the atomic inner region and UKRmol • Full parallel diagonalization (ScaLAPACK), multiple MPI-task propagation and pipelining

  41. Optimized code – overall 29% performance improvement on 16,384 cores

  42. Example: electron-adenine collisions. The peaks suggest resonances which may cause break-up. The green curve models the molecule 'in situ' rather than as an isolated molecule. Dora, Bryjko, van Mourik and Tennyson, J Chem Phys 146 (2012) 024324. Results are also available for guanine

  43. FUTURE or WHAT FUTURE
