Gyrokinetic particle-in-cell simulations of plasma microturbulence on advanced computing platforms • Stephane Ethier, Princeton Plasma Physics Laboratory • SCIDAC 2005 Conference, San Francisco, CA • Work performed under the DOE SCIDAC Center for Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas
The Ultimate Burning Plasma • Fusion powers the Sun and the stars • Can we harness fusion power on Earth?
The Case for Fusion Energy • Worldwide demand for energy continues to increase • Due to population growth and economic development • Worldwide oil and gas production is near or past its peak • Need for alternative sources: coal, fission, fusion • Increasing evidence that the release of greenhouse gases is causing global climate change • This makes nuclear (fission or fusion) preferable to fossil (coal) • Fusion has clear advantages over fission • Inherently safe (no China syndrome) • No weapons proliferation considerations (security) • Greatly reduced waste disposal problems (no Yucca Mountain) • Can produce electricity and hydrogen • Abundant fuel, available to all nations • Deuterium and lithium supplies will last thousands of years
Fusion Reaction • The two ions need to overcome the Coulomb barrier to get close enough to fuse together
The Most Successful Magnetic Confinement Configuration is the Tokamak • [Figure: tokamak showing the plasma, the magnets, and the magnetic field]
We know we can make fusion energy; the challenge now is to make it practical! (J. B. Lister)
Fusion: DOE OFES #1 Item on List of Priorities • November 10, 2003: Energy Secretary Spencer Abraham announces the Department of Energy 20-Year Science Facility Plan, setting priorities for 28 new, major science research facilities • #1 on the list of priorities is ITER, an unprecedented international collaboration on the next major step for the development of fusion • #2 is UltraScale Scientific Computing Capability
Plasma Science Challenges • Macroscopic Stability • What limits the pressure in plasmas? • Wave-particle Interactions • How do particles and plasma waves interact? • Microturbulence & Transport • What causes plasma transport? • Plasma-material Interactions • How can high-temperature plasma and material surfaces co-exist?
Challenge to Theory & Simulations • [Figure: spatial scales (m), spanning roughly 10^-6 to 10^2: electron gyroradius, Debye length, ion gyroradius, tearing length, skin depth, electron-ion mfp, atomic mfp, system size; temporal scales (s), spanning roughly 10^-10 to 10^5: electron gyroperiod, inverse electron plasma frequency, ion gyroperiod, inverse ion plasma frequency, electron collision, ion collision, confinement, current diffusion, pulse length] • Huge range of spatial and temporal scales • Overlap in scales often means a strong (simplified) ordering is not possible
Importance of Turbulence inFusion Plasmas • Turbulence is believed to be the mechanism for cross-field transport in magnetically confined plasmas: • Size and cost of a fusion reactor determined by particle and energy confinement time and fusion self-heating. • Plasma turbulence is a complex nonlinear phenomenon: • Large time and spatial scale separations similar to fluid turbulence. • Self-consistent electromagnetic fields: many-body problem • Strong nonlinear wave-particle interactions: kinetic effects. • Importance of plasma spatial inhomogeneities, coupled with complex confining magnetic fields, as drivers for microinstabilities and the ensuing plasma turbulence.
The Fundamental Equations for Plasma Physics: Boltzmann + Maxwell (6D + time) • Complete but impractical: cannot be solved on all time and length scales • Can eliminate dimensions by integrating over velocity space (assuming a Maxwellian)
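For reference, a minimal sketch of the kinetic system the slide refers to: the distribution function $f_s(\mathbf{x},\mathbf{v},t)$ of each species evolves in six-dimensional phase space plus time, coupled self-consistently to Maxwell's equations for $\mathbf{E}$ and $\mathbf{B}$ (the collision operator $C[f_s]$ is left generic):

$$\frac{\partial f_s}{\partial t} + \mathbf{v}\cdot\nabla f_s + \frac{q_s}{m_s}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf{v}} f_s = C[f_s].$$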
Gyrokinetic Approximation for Low-Frequency Modes • Gyrokinetic ordering • Gyro-motion: guiding center drifts + charged ring • Parallel to B: mirror force, magnetically trapped particles • Perpendicular: E x B, polarization, gradient, and curvature drifts • Gyrophase-averaged 5D gyrokinetic equation • Suppress plasma oscillation and gyro-motion • Larger time step and grid size, smaller number of particles
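As a reminder (not spelled out on the slide), the standard gyrokinetic ordering behind these statements treats the following dimensionless ratios as comparably small, while allowing perpendicular wavelengths of the order of the ion gyroradius:

$$\frac{\omega}{\Omega_i} \sim \frac{k_\parallel}{k_\perp} \sim \frac{\rho_i}{L} \sim \frac{e\,\delta\phi}{T_e} \sim \epsilon \ll 1, \qquad k_\perp \rho_i \sim 1.$$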
The Gyrokinetic Toroidal Code (GTC) • Description: • Particle-in-cell (PIC) code • Developed by Zhihong Lin (now at UC Irvine) • Non-linear gyrokinetic simulation of microturbulence [Lee, 1983] • Particle-electric field interaction treated self-consistently • Uses magnetic field line following coordinates (ψ, θ, ζ) • Guiding center Hamiltonian [White and Chance, 1984] • Non-spectral Poisson solver [Lin and Lee, 1995] • Low numerical noise algorithm (δf method) • Full torus (global) simulation
The Particle-in-Cell Method • Particles sample the distribution function • Interactions go through the grid, on which the potential is calculated from the deposited charges • The PIC steps (a minimal sketch of the cycle follows below): • “SCATTER”, or deposit, charges on the grid (nearest neighbors) • Solve the Poisson equation • “GATHER” the forces on each particle from the potential • Move the particles (PUSH) • Repeat…
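To make the cycle concrete, here is a self-contained 1D electrostatic PIC sketch in C (normalized units, periodic domain). It only illustrates the scatter/solve/gather/push loop; it is not GTC, which is a 3D toroidal gyrokinetic δf code.

```c
/* Minimal 1D electrostatic PIC cycle in normalized units (a sketch of the
 * four steps on the slide, NOT the 3D toroidal gyrokinetic scheme of GTC). */
#include <stdio.h>
#include <math.h>

#define NG 64                 /* grid points                  */
#define NP 10000              /* particles (electrons, q=-1)  */
#define L  1.0                /* periodic domain length       */
#define DT 0.05               /* time step                    */

int main(void) {
    static double x[NP], v[NP], rho[NG], E[NG];
    const double dx = L / NG, pi2 = 6.283185307179586;

    /* Load: uniform positions with a small sinusoidal velocity perturbation. */
    for (int p = 0; p < NP; p++) {
        x[p] = (p + 0.5) * L / NP;
        v[p] = 0.01 * sin(pi2 * x[p] / L);
    }

    for (int step = 0; step < 200; step++) {
        /* 1. SCATTER: deposit charge on the grid with linear (CIC) weights. */
        for (int i = 0; i < NG; i++) rho[i] = 0.0;
        for (int p = 0; p < NP; p++) {
            double s = x[p] / dx;
            int    i = (int)s;
            double w = s - i;
            rho[i % NG]       += -(1.0 - w) / (NP * dx);   /* electron charge */
            rho[(i + 1) % NG] += -w         / (NP * dx);
        }

        /* 2. SOLVE: dE/dx = rho - <rho>; <rho> plays the neutralizing ions.  */
        double rho_avg = 0.0;
        for (int i = 0; i < NG; i++) rho_avg += rho[i] / NG;
        E[0] = 0.0;
        for (int i = 1; i < NG; i++) E[i] = E[i - 1] + dx * (rho[i - 1] - rho_avg);

        /* 3. GATHER + 4. PUSH: interpolate E to particles, then advance them. */
        for (int p = 0; p < NP; p++) {
            double s  = x[p] / dx;
            int    i  = (int)s;
            double w  = s - i;
            double Ep = (1.0 - w) * E[i % NG] + w * E[(i + 1) % NG];
            v[p] += -Ep * DT;                          /* q/m = -1            */
            x[p]  = fmod(x[p] + v[p] * DT + L, L);     /* periodic wrap       */
        }
    }
    printf("final E[0] = %g\n", E[0]);
    return 0;
}
```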
Charge Deposition Step (SCATTER operation) • [Figure: classic PIC charge deposition vs. GTC's gyrokinetic 4-point average method (W. W. Lee)]
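A rough sketch of the idea behind the 4-point gyro-averaged deposition, written here for a plain 2D Cartesian grid with bilinear weights (GTC actually works on its field-aligned toroidal mesh; all names and the bounds handling below are illustrative assumptions):

```c
/* Sketch of 4-point gyro-averaged charge deposition (after W. W. Lee): each
 * marker's charge is split over four points on its gyro-ring of radius
 * rho_gyro about the guiding center, and each ring point is then scattered
 * onto the grid with bilinear weights. */
void deposit_4point(int np, const double *xgc, const double *ygc,
                    const double *rho_gyro, const double *weight,
                    int nx, int ny, double dx, double dy, double *charge)
{
    static const double cs[4] = { 1.0, 0.0, -1.0,  0.0 };
    static const double sn[4] = { 0.0, 1.0,  0.0, -1.0 };

    for (int p = 0; p < np; p++) {
        for (int k = 0; k < 4; k++) {
            /* k-th point on the gyro-ring, at 90-degree intervals. */
            double xp = xgc[p] + rho_gyro[p] * cs[k];
            double yp = ygc[p] + rho_gyro[p] * sn[k];
            double sx = xp / dx, sy = yp / dy;
            int    ix = (int)sx, iy = (int)sy;
            double wx = sx - ix, wy = sy - iy;
            if (xp < 0.0 || yp < 0.0 || ix + 1 >= nx || iy + 1 >= ny) continue;
            double q = 0.25 * weight[p];        /* 1/4 of the marker weight */
            /* Bilinear scatter: the indirect, data-dependent addressing that
             * makes this the cache/vectorization hot spot discussed later. */
            charge[ ix      * ny + iy    ] += q * (1.0 - wx) * (1.0 - wy);
            charge[ ix      * ny + iy + 1] += q * (1.0 - wx) * wy;
            charge[(ix + 1) * ny + iy    ] += q * wx         * (1.0 - wy);
            charge[(ix + 1) * ny + iy + 1] += q * wx         * wy;
        }
    }
}
```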
Global Field-Aligned Mesh (ψ, α, ζ), with α = θ - ζ/q • Saves a factor of about 100 in CPU time
Domain Decomposition • Domain decomposition: • each MPI process holds a toroidal section • each particle is assigned to a processor according to its position • Initial memory allocation is done locally on each processor to maximize efficiency • Communication between domains is done with MPI calls (runs on most parallel computers)
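A minimal sketch of the 1D toroidal decomposition described above, assuming each rank owns an equal wedge in the toroidal angle ζ (the ownership rule and all names are illustrative, not GTC's actual routines):

```c
/* Sketch of the 1D toroidal domain decomposition: each MPI rank owns an equal
 * wedge in the toroidal angle zeta and the particles currently inside it.
 * (Illustrative only; the real code also exchanges ghost grid cells and ships
 * particles to neighbouring ranks after every push.) */
#include <mpi.h>
#include <math.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const double two_pi = 6.283185307179586;
    int rank, nranks;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double dzeta   = two_pi / nranks;      /* width of each toroidal wedge  */
    double zeta_lo = rank * dzeta;         /* this rank's wedge [lo, hi)    */
    double zeta_hi = zeta_lo + dzeta;

    /* Ownership rule used when loading particles and after each push:
     * a particle at toroidal angle zeta belongs to rank floor(zeta/dzeta). */
    double zeta  = 1.234;                                /* example particle */
    int    owner = (int)(fmod(zeta, two_pi) / dzeta);

    printf("rank %d owns zeta in [%.3f, %.3f); particle at %.3f -> rank %d\n",
           rank, zeta_lo, zeta_hi, zeta, owner);

    /* Particles leaving [zeta_lo, zeta_hi) would be sent to the neighbouring
     * rank (e.g. with MPI_Sendrecv), so memory access stays local.          */
    MPI_Finalize();
    return 0;
}
```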
2nd Level of Parallelism: Loop-Level with OpenMP • [Diagram: MPI_init spawns the MPI processes; within each process, OpenMP threads are started for the loop-level work and merged again, repeatedly, before MPI_finalize]
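The fork/join picture maps onto something like the following loop-level OpenMP sketch inside each MPI process (names are illustrative):

```c
/* Sketch of the loop-level OpenMP layer inside each MPI process: threads are
 * forked at the big particle loops (push, gather, ...) and joined afterwards,
 * matching the fork/merge picture on the slide. */
#include <omp.h>

void push_particles(int np, double *x, const double *v, double dt)
{
    #pragma omp parallel for schedule(static)
    for (int p = 0; p < np; p++)
        x[p] += v[p] * dt;          /* each thread advances its own chunk */
}
```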
New MPI-Based Particle Decomposition • [Figure: processors 0-3 sharing one domain] • Each domain in the 1D (and soon 2D) domain decomposition can have more than one processor associated with it • Each processor holds a fraction of the total number of particles in that domain • Scales well when using a large number of particles
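One consequence of sharing a domain among several ranks is that each rank deposits only its own particles, so the partial charge arrays have to be summed within the domain before the field solve. A minimal sketch (the per-domain sub-communicator setup is assumed and not shown):

```c
/* Sketch of the charge reduction implied by the particle decomposition: the
 * ranks sharing one toroidal domain each deposit only their own particles,
 * then sum their partial grids over a per-domain sub-communicator. */
#include <mpi.h>

void reduce_domain_charge(double *charge, int ngrid, MPI_Comm domain_comm)
{
    /* After this call every rank in the domain holds the full charge density
     * and can proceed with the Poisson solve. */
    MPI_Allreduce(MPI_IN_PLACE, charge, ngrid, MPI_DOUBLE, MPI_SUM, domain_comm);
}
```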
Main Computing Platform: NERSC's IBM SP Seaborg • 416 x 16-processor SMP nodes (with 64 GB, 32 GB, or 16 GB memory) • 380 compute nodes (6,080 processors) • 375 MHz POWER3+ processors with 1.5 GFlops/sec peak per processor
CRAY X1 at ORNL • 512 multi-streaming vector processors (MSPs) • 12.8 GFlops/sec peak performance per MSP • Currently being upgraded to X1E (1,024 MSPs at 18 GFlops/sec each)
Earth Simulator • 5,120 vector processors • 8 GFlops/sec per processor • 40 TFlops/sec peak • Collaboration with Dr. Leonid Oliker of LBL/NERSC • Only US team doing a performance study on the ES • Many thanks to Dr. Sato
Optimization Challenges • The "gather-scatter" operation in PIC codes • The particles are randomly distributed in the simulation volume (grid) • Particle charge deposition on the grid leads to indirect addressing in memory • Not cache friendly • Needs to be tuned differently depending on the architecture • [Figure: scatter operation from the particle array to the grid array]
Vectorization Work • Main challenge: charge deposition (scatter) • Need to avoid memory dependencies • Solved with the work-vector method • Each element in the processor's vector register has a private copy of the local grid • ES: minimize memory bank conflicts • Use the "duplicate" directive (thanks to David Parks…) • X1: streaming + vectorization • Straightforward since GTC already had loop-level parallelism
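A schematic of the work-vector idea in C-like form (the production implementation lives in the Fortran source with compiler directives; NLANE, the data layout, and all names here are assumptions):

```c
/* Sketch of the "work-vector" trick for vectorizing the scatter: each of the
 * NLANE vector lanes gets a private copy of the (local) grid, so the deposits
 * in one vectorized pass never write to the same address; the copies are then
 * reduced onto the shared grid. */
#include <stdlib.h>

#define NLANE 256    /* e.g. the vector length of the target machine */

void scatter_workvector(int np, const int *cell, const double *w,
                        int ng, double *grid)
{
    double *copies = calloc((size_t)NLANE * ng, sizeof *copies);
    if (!copies) return;

    for (int p0 = 0; p0 < np; p0 += NLANE) {
        int nlane = (np - p0 < NLANE) ? np - p0 : NLANE;
        /* Inner loop: no two lanes touch the same copy, so there is no
         * memory dependence and the compiler can vectorize the deposit.  */
        for (int lane = 0; lane < nlane; lane++) {
            int p = p0 + lane;
            copies[(size_t)lane * ng + cell[p]] += w[p];
        }
    }

    /* Reduce the private copies back onto the shared grid. */
    for (int lane = 0; lane < NLANE; lane++)
        for (int ig = 0; ig < ng; ig++)
            grid[ig] += copies[(size_t)lane * ng + ig];

    free(copies);
}
```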
GTC Performance 3.7 Teraflops achieved on the Earth Simulator with 2,048 processors using 6.6 billion particles!!
Device-Size Scans: ITER-Size Simulations • ITER-size simulation using 1 billion particles (guiding centers), 125 million spatial grid points, and 7,000 time steps, leading to important (previously inaccessible) new results • Made possible by the mixed-model MPI-OpenMP version on Seaborg
Continuous Improvements in GTC Bring New Computational Challenges • Recent fully kinetic electron simulations of the electron temperature gradient instability required 8 billion particles! • The electron-wave interaction has sharp resonances that require higher phase-space resolution • The fully electromagnetic version requires a new solver (multigrid)
Look for more GPS-related work during this conference • Scientific accomplishments with enhanced versions of GTC (Z. Lin et al., presented by G. Rewoldt) • Shaped plasma device simulations with the general-geometry GTC (W. Wang) • New electromagnetic solver for the kinetic electron capability (M. Adams) • Visualization techniques (K.-L. Ma) • Data management and workflows (S. Klasky)
Conclusions • Simulating fusion experiments is very challenging • It involves multiscale physics • Gyrokinetic particle-in-cell simulation is a very powerful method for studying plasma microturbulence • The GTC code can efficiently use the available computing power • New and exciting discoveries are continuously being made with GTC through advanced computing