
Data Management Requirements: Computational Combustion and Astrophysics

This presentation discusses data management requirements for computational combustion and astrophysics, including the mathematical models, solvers, and software used in the simulations. It also covers the use of adaptive mesh refinement and the challenges of managing and analyzing the raw simulation data.


Presentation Transcript


  1. Data Management Requirements: Computational Combustion and Astrophysics
  John Bell
  Center for Computational Sciences and Engineering
  Lawrence Berkeley National Laboratory
  jbbell@lbl.gov
  http://seesar.lbl.gov
  December 11, 2006

  2. What we are trying to do
  • Combustion
    • Detailed analysis of premixed turbulent combustion
    • Lean premixed systems offer potentially high efficiency and low emissions
    • Design issues arise because premixed flames are inherently unstable
  • Astrophysics
    • Simulate a white dwarf from convection through explosion
    • Type Ia supernovae play a key role in modern cosmology, but the explosion mechanism is not understood

  3. Computational approach
  Components of a computational model:
  • Mathematical model: describing the science in a way that is amenable to representation in a computer simulation
  • Approximation / discretization: approximating an infinite number of degrees of freedom with a finite number
  • Solvers and software: developing algorithms for solving the discrete approximation efficiently on high-end architectures
  We attempt to exploit the special structure of the problems we are considering to compute more efficiently.

  4. Adaptive Mesh Refinement
  • Spatial discretization should exploit locality
  • Structured adaptive mesh refinement:
    • Hierarchical patches of data
    • Dynamically created and destroyed (see the data-structure sketch below)
  • A combination of new numerical methodologies reduces computational effort by several orders of magnitude
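To make the patch hierarchy concrete, here is a minimal sketch of the layout implied above: data lives on patches, patches are grouped by refinement level, and patches are created and destroyed at regrid time. The Patch/Level/Hierarchy names are hypothetical, not the framework's actual classes.

```python
# A minimal sketch of a block-structured AMR hierarchy; hypothetical names.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Patch:
    lo: Tuple[int, ...]                 # index-space lower corner
    hi: Tuple[int, ...]                 # index-space upper corner (inclusive)
    data: Dict[str, object] = field(default_factory=dict)  # field -> array

@dataclass
class Level:
    dx: float                           # cell spacing at this level
    patches: List[Patch] = field(default_factory=list)

@dataclass
class Hierarchy:
    levels: List[Level] = field(default_factory=list)

    def regrid(self, level: int, tagged_boxes: List[tuple]) -> None:
        """Rebuild `level` (>= 1) from freshly tagged index boxes.

        Old fine patches are discarded and new ones created, so the grid
        tracks the solution; a refinement ratio of 2 is assumed here.
        """
        fine = Level(dx=self.levels[level - 1].dx / 2.0,
                     patches=[Patch(lo, hi) for lo, hi in tagged_boxes])
        if level < len(self.levels):
            self.levels[level] = fine   # destroy old patches on this level
        else:
            self.levels.append(fine)    # create a brand-new finer level
```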

  5. V-flame
  • Simulate a turbulent V-flame
  • Strategy – independently characterize the nozzle and specify boundary conditions at the nozzle exit
  • 12 × 12 × 12 cm domain
  • Methane at φ = 0.7
  • DRM-19 mechanism: 20 species, 84 reactions
  • Mixture model for species diffusion
  • Mean inflow of 3 m/s
  • Turbulent inflow: ℓt = 3.5 mm, u′ = 0.18 m/s
  • Estimated η = 220 μm
  • No-flow condition to model the rod
  • Weak co-flow of air
  (These parameters are collected as a configuration record in the sketch below.)
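Collected for reference, the run parameters above amount to a small configuration record; a sketch with hypothetical key names, values as on the slide:

```python
# The V-flame run parameters as a configuration record; key names are
# hypothetical, values come from the slide above.
vflame_config = {
    "domain_cm": (12.0, 12.0, 12.0),      # 12 x 12 x 12 cm domain
    "fuel": "methane",
    "equivalence_ratio": 0.7,             # phi = 0.7
    "mechanism": {"name": "DRM-19", "species": 20, "reactions": 84},
    "species_diffusion": "mixture model",
    "mean_inflow_m_per_s": 3.0,
    "turbulence": {"l_t_mm": 3.5, "u_prime_m_per_s": 0.18},
    "kolmogorov_eta_um": 220.0,           # estimated
    "rod_model": "no-flow condition",
    "coflow": "weak co-flow of air",
}
```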

  6. Experimental comparisons
  [Figures: instantaneous flame surface animation, simulation vs. experiment; flame brush comparisons]
  Joint with M. Day, J. Grcar, M. Lijewski, R. Cheng, M. Johnson and I. Shepherd, PNAS, 2005

  7. Thermo-diffusive Effects
  Low swirl burner flames for different fuels
  • Experiments focused on the effect of different fuels on flame behavior
  • Identical fueling rate and turbulence
  • Nearly the same stabilization and nearly the same turbulent burning speed

  8. Local flame speed analysis
  • Construct a local coordinate system around the flame and integrate reaction data (a sketch of this integration follows below)
  • Other mathematical analysis paradigms:
    • Stochastic particles
    • Pathline analysis
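As a rough illustration of the integration step, the sketch below estimates a surface-weighted local consumption speed from a progress-variable gradient. The uniform grid, the field names, and the surface-density estimate are all assumptions for the sketch, not the exact published procedure (which works on AMR data and a fitted flame surface).

```python
# Estimate a mean local consumption speed from simulation fields.
import numpy as np

def consumption_speed(omega_fuel, progress, rho_u, Y_fuel_u, dx):
    """Mean local flame speed from the fuel consumption rate.

    omega_fuel : 3D fuel consumption rate, positive [kg/m^3/s]
    progress   : 3D progress variable (0 in reactants, 1 in products);
                 its gradient defines the local flame normal
    rho_u, Y_fuel_u : unburned density [kg/m^3] and fuel mass fraction
    dx         : grid spacing [m]
    """
    # Local coordinate system: the flame normal follows grad(progress).
    gx, gy, gz = np.gradient(progress, dx)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)

    # Flame surface area from the gradient magnitude (|grad c| integrates
    # to the surface area for a 0-to-1 progress variable).
    area = grad_mag.sum() * dx**3

    # Total fuel consumption through the flame, normalized by the area:
    consumption = omega_fuel.sum() * dx**3
    return consumption / (rho_u * Y_fuel_u * area)
```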

  9. Diffusion flames
  • Study the behavior of fuel-bound nitrogen characteristic of biomass fuels
  • What do experimentalists measure?
    • Exhaust gas composition
    • Planar laser-induced fluorescence (PLIF): temperature, NO concentration
  • NO measurements:
    • Illuminate the flame with a tuned laser sheet
    • NO absorbs a photon
    • Measure the emission
  • Problem – NO can lose the photon in a collision before it is emitted (quenching)
  Joint with P. Glarborg, A. Jensen, W. Bessler, C. Schulz

  10. NO measurement
  • The quenching correction requires knowledge of the local composition and temperature:
    • f_B,i – Boltzmann population term
    • g_λ,i – line shape profile
    • Q_k(p,T,X) – electronic quenching
  • Experimentalists typically guess the composition for the quenching correction
  • Generate synthetic PLIF images from the simulation instead (see the signal model sketched below)
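For orientation, these terms typically enter a LIF signal model of roughly the following form; this is a generic sketch of the standard expression, not necessarily the exact formula on the slide:

```latex
% A generic NO-LIF signal model in which the terms above appear.
S_{\mathrm{LIF}} \propto n_{\mathrm{NO}} \, f_{B,i}(T) \,
  g_{\lambda,i}(p,T) \, \frac{A}{A + Q_k(p,T,X)}
```

Here n_NO is the NO number density and A the spontaneous emission rate; the fluorescence yield A/(A + Q_k) is what the quenching rate suppresses, which is why the local composition X and temperature T must be known.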

  11. NO measurement – cont’d
  [Figures: NO A-X(0,0) excitation and NO A-X(0,2) excitation]

  12. NO – cont’d
  • Can use simulation data to compute the quenching correction to experimental data
  • Simulation also provides a more detailed picture of the nitrogen chemistry
  • Reaction path analysis gives a quantitative picture of the chemical behavior of the system
  Proc. Comb. Inst., 2002

  13. Type Ia Supernovae
  • Thermonuclear explosion of a C/O white dwarf
  • Brightness rivals that of the host galaxy, L ≈ 10^43 erg/s
  • Large amounts of radioactive ⁵⁶Ni are produced; the radioactivity powers the light curve
  • Light curve is robust
  • Standard candle in determining the expansion of the universe
  [Image: SN 1994D]
  Computational Astrophysics Consortium: Adaptive Methods

  14. Astrophysics issues . . . are about the same
  • Specialized treatment of fluid mechanics
  • Chemistry → nuclear physics
  • Complex diffusive transport → radiation
  • Simulations share a common software framework and AMR
  [Figure: turbulent spectrum] Astrophysical Journal, 2006

  15. Workflow
  • How do we extract “science” from the simulation data?
  • We typically don’t do visual analysis of the raw data
  • Our analyses typically start with some “mathematical” transformation of the data, but, to leading order, we can’t define a priori what this means
    • I can’t define requirements
    • Typically, we will build some prototype as part of defining what we want to look at
  • The data analysis tool needs to be able to ingest application-specific information (a sketch of one way to do this follows below)
  • Data analysis tool requirements:
    • Work with hierarchical data
    • Incorporate problem physics
    • Hardened versions of prototype tools
    • Integrated with visualization
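One way a tool could ingest application-specific information is a derived-field registry: user code registers the problem physics, and the tool evaluates it on demand, patch by patch, against the hierarchical data. All names here are hypothetical, not an actual tool's API.

```python
# A minimal sketch of demand-driven derived fields; hypothetical API.
import numpy as np

DERIVED_FIELDS = {}

def register_field(name, depends_on):
    """Decorator: register a derived field and the raw fields it needs."""
    def wrap(fn):
        DERIVED_FIELDS[name] = (depends_on, fn)
        return fn
    return wrap

@register_field("heat_release_per_mass", depends_on=["rho", "omega_fuel"])
def heat_release_per_mass(rho, omega_fuel, q_fuel=5.0e7):
    # Problem physics lives in user code, not in the tool: specific heat
    # release from the fuel consumption rate and a heat of combustion.
    return q_fuel * omega_fuel / rho

def evaluate(name, patch_data):
    """Demand-driven evaluation on one AMR patch (a dict of raw arrays)."""
    deps, fn = DERIVED_FIELDS[name]
    return fn(*(patch_data[d] for d in deps))

# Toy usage on a single 4x4 patch:
patch = {"rho": np.ones((4, 4)), "omega_fuel": np.full((4, 4), 1.0e-3)}
print(evaluate("heat_release_per_mass", patch).mean())
```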

  16. Data management
  • How does the data flow through this process?
  • Run simulation:
    • Dump data in plotfiles (reasonable I/O)
    • Tar plotfiles and archive in mass storage
    • We only store data at the end of coarse steps, and maybe not at each coarse time step
  • Analyze data:
    • Pull data from mass storage
    • Move data to the analysis platform
      • Analysis must be done in parallel
      • The machine used for computation may not be good for analysis
    • Untar plotfile data
    • Run analysis program (reasonable I/O; demand driven)
  • This fundamentally does not scale:
    • Have to move everything from mass storage to rotating disk
    • Need to shift data to appropriate platforms
  (The sketch below spells out this flow as a script.)
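The sketch below spells out the archive-to-analysis flow as a script, assuming a hypothetical archive host reachable by scp and hypothetical paths; the real mass-storage interface (e.g. HPSS) and platforms differ.

```python
# A minimal sketch of the pull / untar / analyze loop; hypothetical names.
import pathlib
import subprocess
import tarfile

ARCHIVE = "archive.example.gov:/home/user/run42"   # hypothetical
SCRATCH = pathlib.Path("/scratch/user/run42")

def fetch_plotfile(step: int) -> pathlib.Path:
    """Pull one tarred plotfile from mass storage to rotating disk."""
    tarname = f"plt{step:05d}.tar"
    SCRATCH.mkdir(parents=True, exist_ok=True)
    subprocess.run(["scp", f"{ARCHIVE}/{tarname}", str(SCRATCH)], check=True)
    with tarfile.open(SCRATCH / tarname) as tf:
        tf.extractall(SCRATCH)                      # untar plotfile data
    return SCRATCH / f"plt{step:05d}"

# Analyze only the coarse steps that were actually archived.
for step in range(0, 1000, 100):
    plotfile = fetch_plotfile(step)
    # Hand off to a (parallel) analysis program on the analysis platform.
    subprocess.run(["mpirun", "-n", "32", "./analyze", str(plotfile)],
                   check=True)
```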

  17. I/O
  • How do we do I/O? AMR plotfiles
  • The model used to be “each processor writes to disk”: levels × processors files
  • Now:
    • Identify the number of desired channels
    • Tell processors when it is their turn to write
    • levels × (desired channels) files (sketched below)
  • Is this scalable? Will something work better?
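A minimal sketch of the channel-limited write pattern, using mpi4py: at most NCHANNELS ranks write concurrently, and ranks sharing a channel take turns via a passed token. The file layout and names are hypothetical, not the production plotfile format.

```python
# Channel-limited parallel writes via token passing; run under mpirun.
import os
from mpi4py import MPI

NCHANNELS = 8                        # desired number of concurrent writers

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nprocs = comm.Get_size()

channel = rank % NCHANNELS           # output file this rank appends to
os.makedirs("plotfile/Level_0", exist_ok=True)

# Wait for the token from the previous rank on this channel, if any.
if rank >= NCHANNELS:
    comm.recv(source=rank - NCHANNELS, tag=channel)

with open(f"plotfile/Level_0/data_{channel:04d}", "ab") as f:
    f.write(rank.to_bytes(4, "little"))   # stand-in for real patch data

# Pass the token to the next rank sharing this channel.
if rank + NCHANNELS < nprocs:
    comm.send(True, dest=rank + NCHANNELS, tag=channel)
```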

  18. Visualization / Analytics
  • AMRVis (2D/3D):
    • Reads plotfiles directly
    • Main visualization is slices through the data
    • Limited contouring, vector field, and volume rendering functionality
    • Supports a data spreadsheet capability
  • Fancier visualization done with Tecplot; not particularly scalable
  • Adopt VisIt as the principal visualization tool:
    • Relation to AMRVis – replace? Need the spreadsheet capability
    • What functionality is missing?

  19. Requirements
  • Data management cartoon:
    • Simulate at ORNL – make large tar files
    • Move data to the NERSC archive (manually)
    • Iterate:
      • Pull data from the archive onto DaVinci (manually)
      • Run analysis programs (manually)
  • We need to develop a schema to facilitate analysis without so much manual data movement (a sketch follows below):
    • Automate data transfer
    • Organize archived data so we read only what we need and can stage retrieval from storage
    • Automate processing of large amounts of data once prototypes are operational
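One plausible shape for such a schema is a plotfile manifest that lets analysis decide what to stage from the archive before touching mass storage, and then read only the fields it needs. All names and numbers below are hypothetical.

```python
# A minimal sketch of a plotfile manifest ("schema"); hypothetical values.
import json

manifest = {
    "run": "run42",
    "steps": [
        {
            "step": 100,
            "time": 1.25e-3,
            "archive_path": "/archive/run42/plt00100.tar",
            "levels": 3,
            "fields": ["rho", "T", "Y(CH4)", "omega_fuel"],
            # Per-field byte ranges allow selective staging/extraction.
            "offsets": {"rho": [0, 1048576], "T": [1048576, 2097152]},
        },
    ],
}

with open("run42_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)

# Before any retrieval, a tool can work out which archived files and
# which byte ranges it actually needs:
wanted = {"T"}
for step in manifest["steps"]:
    if wanted & set(step["fields"]):
        print("stage", step["archive_path"],
              {k: v for k, v in step["offsets"].items() if k in wanted})
```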
