
High Power Mercury Jet Targets Simulation Using SPH

This presentation explores the benefits and capabilities of Smoothed Particle Hydrodynamics (SPH) for simulating the interaction of high-power mercury jet targets with proton beams. A new SPH code was developed in C++ with advanced features such as adaptivity, scalability, and robust handling of material interfaces. 3D and 2D simulations were performed to analyze energy deposition, pressure, temperature, sound speed, and velocity. Current work focuses on parallelization for GPUs and integration of an MHD module to improve simulation efficiency.



Presentation Transcript


  1. September 19, 2011. Simulation of High Power Mercury Jet Targets Using Smoothed Particle Hydrodynamics. Roman Samulyak, Tongfei Guo, AMS Department, Stony Brook University, and Computational Science Center, Brookhaven National Laboratory

  2. Benefits of SPH
  • Exact conservation of mass (Lagrangian code)
  • Natural (continuously self-adjusting) adaptivity to density changes (see the sketch after this slide)
  • Capable of simulating extremely large non-uniform domains
  • Robustly handles material interfaces of any complexity
  • Scalability on modern multicore supercomputers
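The "natural adaptivity" bullet is the standard SPH property that the smoothing length h tracks the local particle spacing. Below is a minimal 1D sketch, not the authors' code, assuming the common cubic spline kernel and the conventional adaptation rule h_i = η m/ρ_i: where particles are sparse, the summed density drops and h grows automatically.

```cpp
// Sketch: SPH density summation with an adaptive smoothing length in 1D.
// Kernel choice and the rule h = eta * m / rho are assumed conventions.
#include <cmath>
#include <cstdio>
#include <vector>

// 1D cubic spline kernel W(r, h) with support 2h, normalized in 1D.
double cubicSpline1D(double r, double h) {
    double q = std::fabs(r) / h, sigma = 2.0 / (3.0 * h);
    if (q < 1.0) return sigma * (1.0 - 1.5 * q * q * (1.0 - 0.5 * q));
    if (q < 2.0) return sigma * 0.25 * std::pow(2.0 - q, 3);
    return 0.0;
}

int main() {
    const double m = 1.0, eta = 1.3;               // equal particle masses
    std::vector<double> x = {0.0, 0.5, 1.0, 1.5, 2.0, 4.0, 6.0}; // non-uniform spacing
    std::vector<double> h(x.size(), 1.0), rho(x.size(), 0.0);
    // rho and h are coupled, so iterate: rho_i = sum_j m W(x_i - x_j, h_i).
    for (int it = 0; it < 20; ++it) {
        for (size_t i = 0; i < x.size(); ++i) {
            rho[i] = 0.0;
            for (size_t j = 0; j < x.size(); ++j)
                rho[i] += m * cubicSpline1D(x[i] - x[j], h[i]);
            h[i] = eta * m / rho[i];               // h grows where the flow is rarefied
        }
    }
    for (size_t i = 0; i < x.size(); ++i)
        std::printf("x=%4.1f  rho=%.3f  h=%.3f\n", x[i], rho[i], h[i]);
}
```

Running this shows h roughly 1.3x the local spacing in the dense cluster and several times larger at the isolated particles, which is the self-adjusting behavior the slide refers to.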

  3. New Smoothed Particle Hydrodynamics (SPH) code
  • A new SPH code has been developed in our group with a modular C++ structure
  • Several smoothing kernels
  • Riemann solvers and MUSCL-based schemes (to be completed)
  • Time stepping (a minimal Verlet sketch follows this slide):
    • Predictor-corrector
    • Verlet scheme
    • Symplectic schemes
  • MHD and physics models related to high energy density physics are currently being added
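Of the time-stepping options listed, the Verlet scheme is the simplest to illustrate. The following is a self-contained kick-drift-kick sketch; the acceleration callback stands in for the SPH momentum equation, and none of these names reflect the actual code's interfaces.

```cpp
// Sketch: one symplectic velocity-Verlet (kick-drift-kick) step.
#include <cstdio>
#include <functional>
#include <vector>

struct State { std::vector<double> x, v; };

// Half kick, full drift, recompute accelerations, half kick.
void verletStep(State& s, double dt,
                const std::function<std::vector<double>(const std::vector<double>&)>& accel) {
    std::vector<double> a = accel(s.x);
    for (size_t i = 0; i < s.x.size(); ++i) s.v[i] += 0.5 * dt * a[i]; // kick
    for (size_t i = 0; i < s.x.size(); ++i) s.x[i] += dt * s.v[i];     // drift
    a = accel(s.x);
    for (size_t i = 0; i < s.x.size(); ++i) s.v[i] += 0.5 * dt * a[i]; // kick
}

int main() {
    State s{{1.0}, {0.0}};                          // one particle on a unit spring
    auto spring = [](const std::vector<double>& x) {
        return std::vector<double>{-x[0]};          // a = -x
    };
    for (int n = 0; n < 100; ++n) verletStep(s, 0.1, spring);
    std::printf("x=%.4f v=%.4f\n", s.x[0], s.v[0]); // stays near the energy shell
}
```

The symplectic structure is why such schemes suit long Lagrangian runs: energy errors stay bounded instead of drifting, unlike a naive forward-Euler update.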

  4. SPH simulation of mercury jet dump (3D)

  5. SPH Simulations of Mercury Jet Interaction with Protons

  6. Energy Deposition for NF / MC (mW/g)
  • Image: 3D volume rendering
  • The number of incoming protons has been normalized to 4 MW
  • To obtain the correct pulse structure (see the sketch after this slide):
    • for MC, the energy is divided by 15
    • for NF, the energy is divided by 150
  • x-range: [-0.4, 0.3] cm; y-range: [-3, 2.9] cm; z-range: [-75.0, -0.2] cm
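The two divisors are mutually consistent with the peak values quoted on slide 8 (790 J/g for MC, 79 J/g for NF) if both are obtained from the same 4 MW-normalized peak of 11850 in the slides' units. A trivial sketch of that arithmetic; interpreting the divisors as pulses per second is an assumption, since the slide gives only the factors themselves.

```cpp
// Sketch: rescaling 4 MW-normalized energy deposition to per-pulse values
// using the divisors from the slide (15 for MC, 150 for NF).
#include <cstdio>

double perPulse(double eDep4MW, double divisor) { return eDep4MW / divisor; }

int main() {
    const double peak4MW = 11850.0; // implied by 790 * 15 = 79 * 150 (slides 6 and 8)
    std::printf("MC per pulse: %.0f J/g\n", perPulse(peak4MW, 15.0));  // 790 J/g
    std::printf("NF per pulse: %.0f J/g\n", perPulse(peak4MW, 150.0)); // 79 J/g
}
```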

  7. Energy Deposition for NF / MC. Slice of the data parallel to the y-z plane at x = -0.4 cm (containing the maximum energy deposition). Units: mW/g

  8. Energy Deposition for NF / MC. Slice of the data parallel to the x-y plane at z = -50.0 cm (containing the maximum energy deposition). Peak energy deposition values: NF: 79 J/g; MC: 790 J/g

  9. Increase of Pressure by Isochoric Energy Deposition (SESAME data)

  10. Increase of Temperature by Isochoric Energy Deposition (SESAME data)

  11. Increase of Sound Speed by Isochoric Energy Deposition (SESAME data)
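Slides 9-11 all follow from the same isochoric step: the proton pulse deposits energy much faster than the mercury can expand, so the density is held fixed while the specific internal energy jumps, and the new pressure, temperature, and sound speed are read back from the tabular equation of state. A minimal sketch of that lookup; the three-row isochore below is a placeholder standing in for the actual SESAME mercury tables.

```cpp
// Sketch: isochoric energy deposition via a tabular EOS lookup.
// Table values are illustrative placeholders, not SESAME data.
#include <cstdio>
#include <vector>

struct EosRow { double e, p, T, c; }; // specific energy -> pressure, temperature, sound speed

// Linear interpolation in e along one isochore (fixed-density slice) of the table.
EosRow lookup(const std::vector<EosRow>& iso, double e) {
    for (size_t i = 1; i < iso.size(); ++i)
        if (e <= iso[i].e) {
            double t = (e - iso[i - 1].e) / (iso[i].e - iso[i - 1].e);
            return { e,
                     iso[i - 1].p + t * (iso[i].p - iso[i - 1].p),
                     iso[i - 1].T + t * (iso[i].T - iso[i - 1].T),
                     iso[i - 1].c + t * (iso[i].c - iso[i - 1].c) };
        }
    return iso.back(); // clamp above the table
}

int main() {
    // Hypothetical isochore at mercury's normal density: e (J/g), p (Pa), T (K), c (m/s).
    std::vector<EosRow> iso = {
        {   0.0, 1.0e5,  300.0, 1450.0},
        { 100.0, 2.0e8,  500.0, 1600.0},
        { 800.0, 5.0e9, 1500.0, 2100.0},
    };
    double e0 = 0.0, dE = 790.0; // MC peak per-pulse deposition from slide 8, J/g
    EosRow after = lookup(iso, e0 + dE); // density unchanged, energy increased
    std::printf("p=%.3e Pa  T=%.0f K  c=%.0f m/s\n", after.p, after.T, after.c);
}
```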

  12. 2D simulation of the MC target. Density, g/cc. SPH simulations show surface spikes and the fine structure of the cavitation zone

  13. 2D simulation of the MC target. Velocity, × 10 m/s

  14. 2D simulation of the MC target

  15. 2D simulation of the NF target

  16. Summary
  • Developed a new smoothed particle hydrodynamics code for free-surface hydrodynamic flows (the MHD part will be completed soon)
  • Performed simulations for the target program of the MAP collaboration:
    • 3D simulations of the entrance of spent mercury jets into the mercury pool
    • 2D simulations of the interaction of mercury jets with protons
  • Mercury is accelerated to velocities of about 5.5 m/s (NF) and 95 m/s (MC)
  • The specific internal energy of the normal state was not added to the energy deposition in the reported simulations (more critical for NF than for MC)
  • Refined 3D simulations are currently running without issues, but progress is slow because the code is not yet parallel
  • Parallelization (for both clusters and GPUs) and the MHD module are the current main priorities
  • Parallelization for GPUs will enable fast simulations of large 3D problems on workstation-size GPU machines
