
Presentation Transcript


  1. Large Eddy Simulation of two-phase flow combustion in gas turbines: Predicting extreme combustion processes in real engines. Isabelle d’Ast - CERFACS

  2. CERFACS • Around 130 people in Toulouse (South-West France). • Goal: develop and improve numerical simulation methods and advanced scientific computing for real applications (CFD, climate, electromagnetism). • 30 to 40 A-class publications per year (international journals). • 10 PhDs per year. • Collaborations with industry and academia (France, Germany, Spain, USA, Italy).

  3. Scientific problem: the prediction of extinction in an industrial burner • In an industrial burner, due to fast changes in operating conditions, the fuel mass flux can vary much faster than the air mass flux, so engine extinction must be avoided. • Extinction is an unsteady phenomenon which has been widely studied in very academic configurations, but very little in complex industrial burners. • Purpose of the project: perform Large Eddy Simulation (LES) in an industrial combustion chamber to understand the mechanisms of extinction and evaluate the capability of LES to accurately predict the extinction limits. • [Figure: sector of the annular combustion chamber, showing the air and air + fuel injections, the outlet, and the unstructured mesh of ~9M cells.]

  4. B) Science Lesson • The AVBP code: • 3D compressible Navier-Stokes equations • Two-phase flows (Eulerian/Lagrangian liquid phase) • Reactive flows • Real thermodynamics (perfect and transcritical gases) • Moving meshes (piston engines) • Large Eddy Simulation • Scales larger than the mesh cells are fully resolved • Scales smaller than the mesh cells are modelled via a sub-grid stress tensor model (Smagorinsky / WALE; see the sketch below) • Unstructured grids: • Important for complex geometries • Explicit schemes (Taylor-Galerkin / Lax-Wendroff) • Contact: gabriel.staffelbach@cerfacs.fr for details
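
For reference, a minimal sketch of the classical Smagorinsky closure mentioned above (the WALE variant uses a different velocity-gradient operator, and the constant C_s is case-dependent and not specified in the slides):

```latex
% Smagorinsky sub-grid eddy viscosity (standard textbook form)
\nu_{t} = (C_s \Delta)^2 \, |\widetilde{S}|,
\qquad
|\widetilde{S}| = \sqrt{2\,\widetilde{S}_{ij}\widetilde{S}_{ij}},
\qquad
\widetilde{S}_{ij} = \frac{1}{2}\left(
   \frac{\partial \widetilde{u}_i}{\partial x_j}
 + \frac{\partial \widetilde{u}_j}{\partial x_i}\right)
```

Here Δ is the filter width, typically taken as the local cell size, which is why the resolved/modelled split follows the mesh resolution as stated above.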

  5. C) Parallel Programming Model • MPI code written in Fortran 77. • Library requirements: • ParMETIS (partitioning) • (p)HDF5 (I/O) • LAPACK • The code runs on any x86 / POWER / SPARC computer on the market so far (BullX, Blue Gene/P, Cray XT5, POWER6, SGI Altix). • Currently migrating to Fortran 90 (validation under way). • Introduction of OpenMP and OmpSs for fine-grain threading in progress (see the hybrid sketch below).
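
A minimal sketch, not AVBP source, of the hybrid MPI + OpenMP pattern this migration points at: MPI initialised with funneled thread support, an OpenMP-threaded cell loop inside each rank, and a single collective across ranks. All names and sizes are illustrative.

```fortran
! Minimal hybrid MPI + OpenMP sketch (illustrative only, not AVBP code).
! It initialises MPI with thread support and threads a cell loop with OpenMP,
! the kind of fine-grain threading the slide refers to.
program hybrid_sketch
  use mpi
  implicit none
  integer :: ierr, provided, rank, nranks, i
  integer, parameter :: ncell = 100000
  real(8) :: residual(ncell), local_sum, global_sum

  call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)

  residual  = 1.0d0
  local_sum = 0.0d0

  ! Fine-grain threading of a cell loop inside each MPI rank
  !$omp parallel do reduction(+:local_sum)
  do i = 1, ncell
     local_sum = local_sum + residual(i)**2
  end do
  !$omp end parallel do

  ! One collective across ranks instead of per-thread communication
  call MPI_Allreduce(local_sum, global_sum, 1, MPI_DOUBLE_PRECISION, &
                     MPI_SUM, MPI_COMM_WORLD, ierr)

  if (rank == 0) write(*,*) 'global residual norm^2 =', global_sum
  call MPI_Finalize(ierr)
end program hybrid_sketch
```

With funneled support only the master thread communicates, which keeps the MPI structure of a legacy Fortran 77 code largely untouched while loops are threaded incrementally.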

  6. E) I/O Patterns and Strategy • Two categories of I/O: • Small binary files (one file written by the master for progress monitoring). • Large HDF5 files, single file only, written by the master (serial HDF5 standard). • Collective single-file pHDF5 under study (parallel I/O handled via pHDF5 only); performance is erratic and variable. • Multiple master-slave I/O (a subset of ranks has I/O responsibilities), one file per master (about 1/100 of the core count files), under study; sketch-code performance is encouraging (see the sketch below). • Average HDF5 file size: 2 GB, depending on the mesh size (max today 15 GB per file, one file per dumped time step, usually 200 for a converged simulation); binary file: 100 MB. • Input I/O: 2 large HDF5 files. • Sequential master read. • Buffered alltoall / alltoallv distribution under validation.
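
A minimal sketch, with hypothetical names and sizes, of the multiple-master pattern described above: MPI_COMM_WORLD is split so that roughly one rank in a hundred becomes an I/O master, gathers the data of its group, and would write one file per master (the HDF5 write itself is omitted). This is not AVBP code, only an illustration of the communicator layout.

```fortran
! Sketch of the "multiple master" I/O pattern (illustrative, not AVBP code):
! about one rank in 100 becomes an I/O master, gathers the data of its group,
! and writes one file per master (HDF5 calls omitted here).
program io_groups_sketch
  use mpi
  implicit none
  integer, parameter :: ranks_per_master = 100
  integer, parameter :: nlocal = 1000
  integer :: ierr, rank, nranks, color, io_comm, io_rank, io_size
  real(8) :: local_field(nlocal)
  real(8), allocatable :: gathered(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)

  local_field = real(rank, 8)

  ! Ranks with the same color share one I/O master (local rank 0 of io_comm)
  color = rank / ranks_per_master
  call MPI_Comm_split(MPI_COMM_WORLD, color, rank, io_comm, ierr)
  call MPI_Comm_rank(io_comm, io_rank, ierr)
  call MPI_Comm_size(io_comm, io_size, ierr)

  if (io_rank == 0) then
     allocate(gathered(nlocal*io_size))
  else
     allocate(gathered(1))   ! unused on non-master ranks
  end if

  ! Each group funnels its data to its own master
  call MPI_Gather(local_field, nlocal, MPI_DOUBLE_PRECISION, &
                  gathered,    nlocal, MPI_DOUBLE_PRECISION, &
                  0, io_comm, ierr)

  if (io_rank == 0) then
     ! Here each master would open its own HDF5 file (one file per master)
     ! and write "gathered"; the HDF5 calls are omitted in this sketch.
     write(*,'(a,i0,a,i0,a)') 'I/O master of group ', color, &
          ' would write ', nlocal*io_size, ' values'
  end if

  call MPI_Finalize(ierr)
end program io_groups_sketch
```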

  7. F) Visualization and Analysis • Visualization uses 2 methods: • Translation of selected datasets to EnSight / FieldView / Tecplot formats; relies on the parallelisation of these tools. • XDMF format: XML indexing of the HDF5 file and direct read via ParaView / EnSight (no translation); see the example below. • 'Advanced user' methods available (not tested on INTREPID yet): • Single HDF5 file written in block format (per partition). • Indexed via XDMF. • Read and post-processed in parallel directly via pvserver (ParaView) on the cluster, automatically generating JPEG images. • Full migration to XDMF planned for the 3rd quarter of 2012, with generalisation of pvserver.
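
A minimal XDMF example of the indexing idea above, assuming a tetrahedral mesh and a node-centred temperature field; the file name, dataset paths and sizes are purely illustrative and do not reflect the actual AVBP layout.

```xml
<?xml version="1.0" ?>
<!-- Illustrative XDMF index pointing at datasets inside an HDF5 file -->
<Xdmf Version="2.0">
  <Domain>
    <Grid Name="mesh" GridType="Uniform">
      <Topology TopologyType="Tetrahedron" NumberOfElements="9000000">
        <DataItem Dimensions="9000000 4" NumberType="Int" Format="HDF">
          solution.h5:/mesh/connectivity
        </DataItem>
      </Topology>
      <Geometry GeometryType="XYZ">
        <DataItem Dimensions="2000000 3" NumberType="Float" Precision="8" Format="HDF">
          solution.h5:/mesh/coordinates
        </DataItem>
      </Geometry>
      <Attribute Name="Temperature" AttributeType="Scalar" Center="Node">
        <DataItem Dimensions="2000000" NumberType="Float" Precision="8" Format="HDF">
          solution.h5:/solution/temperature
        </DataItem>
      </Attribute>
    </Grid>
  </Domain>
</Xdmf>
```

ParaView or EnSight read such a .xmf file directly and fetch the heavy arrays straight from the HDF5 file, so no translation step is needed.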

  8. G) Performance • Performance analysis with: • Scalasca • TAU • Paraver / Dyninst • Current bottlenecks: • Master/slave scheme. • Extreme usage of allreduce: over 100 calls per iteration. • Hand-coded collective communications instead of alltoall / broadcast. • Cache misses: adaptive cache loop not implemented for nodes (only for cells). • Pure MPI implementation (instead of hybrid mode). • Current status and future plans for improving performance: • Parallelisation of the preprocessing task: sketch done (2 h -> 3 min; max memory 15 GB versus 50 MB); replacement of the current master/slave scheme in the 3rd quarter of 2012. • Buffered MPI_Reduce switch under way on the current version: 20% gain per iteration at 1024 cores; strong-scaling performance to be studied (see the packed-reduction sketch below). • OpenMP / OmpSs implementation to reduce communications.
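
A minimal sketch of the packed ("buffered") reduction idea mentioned above, one way to cut 100+ allreduce calls per iteration down to a single call; it is not the AVBP implementation, and all names are illustrative.

```fortran
! Sketch of the "buffered reduce" idea (illustrative, not AVBP code):
! instead of ~100 scalar MPI_Allreduce calls per iteration, pack the
! scalars into one array and reduce them in a single collective.
program packed_allreduce_sketch
  use mpi
  implicit none
  integer, parameter :: nscalars = 100       ! e.g. residuals, min/max monitors
  real(8) :: local_vals(nscalars), global_vals(nscalars)
  integer :: ierr, rank, i

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  do i = 1, nscalars
     local_vals(i) = real(rank + i, 8)       ! placeholder diagnostics
  end do

  ! One collective for the whole packed buffer instead of nscalars collectives
  call MPI_Allreduce(local_vals, global_vals, nscalars, &
                     MPI_DOUBLE_PRECISION, MPI_SUM, MPI_COMM_WORLD, ierr)

  if (rank == 0) write(*,*) 'first packed sum =', global_vals(1)
  call MPI_Finalize(ierr)
end program packed_allreduce_sketch
```

Packing trades a small amount of extra copying for far fewer collective latencies, which is typically where such per-iteration gains come from.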

  9. H) Tools • How do you debug your code? • Compiler flags: "-g -fbounds-check -Wuninitialized -O -ftrapv -fimplicit-none -fno-automatic -Wunused" • gdb / DDT • Current status and future plans for improved tool integration and support: • Debug verbosity level included in the next code release.

  10. I) Status and Scalability • How does your application scale now? • 92% scalability up to 8 racks on Blue Gene/P (dual mode). • Target: 128k cores by the end of 2012. • Currently 60% scalability on 64k cores.

  11. I) Status and Scalability • What are our top pains? • 1. Scalable I/O. • 2. Blocking allreduce. • 3. Scalable post-processing. • What did you change to achieve current scalability? • Buffered asynchronous partition communications (one Irecv/Isend pair per partition), previously one Irecv/Send per dataset; see the sketch below. • Current status and future plans for improving scalability: • Switch to ParMETIS 4 for improved performance and larger datasets. • PT-Scotch? (Zoltan?)
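
A minimal sketch of such a buffered non-blocking partition exchange, assuming the data for each neighbour has already been packed into one contiguous column of a buffer; argument names and the message tag are illustrative, not AVBP's.

```fortran
! Sketch of a buffered non-blocking partition exchange (illustrative, not
! AVBP code): all datasets for one neighbour are packed into a single buffer,
! so there is one Irecv/Isend pair per neighbour instead of one per dataset.
subroutine exchange_halos(nneigh, neighbour, buf_size, max_size, &
                          sendbuf, recvbuf, comm)
  use mpi
  implicit none
  integer, intent(in)    :: nneigh, max_size, comm
  integer, intent(in)    :: neighbour(nneigh), buf_size(nneigh)
  real(8), intent(in)    :: sendbuf(max_size, nneigh)  ! one packed column per neighbour
  real(8), intent(inout) :: recvbuf(max_size, nneigh)
  integer :: requests(2*nneigh), statuses(MPI_STATUS_SIZE, 2*nneigh)
  integer :: n, ierr

  ! Post all receives first, then all sends, then wait once for everything
  do n = 1, nneigh
     call MPI_Irecv(recvbuf(1, n), buf_size(n), MPI_DOUBLE_PRECISION, &
                    neighbour(n), 100, comm, requests(n), ierr)
  end do
  do n = 1, nneigh
     call MPI_Isend(sendbuf(1, n), buf_size(n), MPI_DOUBLE_PRECISION, &
                    neighbour(n), 100, comm, requests(nneigh + n), ierr)
  end do
  call MPI_Waitall(2*nneigh, requests, statuses, ierr)
end subroutine exchange_halos
```

Posting all receives before the sends and waiting once on the full request array avoids both the per-dataset message overhead and the serialisation of blocking Send/Recv pairs.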

  12. J) Roadmap • Where will your science take you over the next 2 years? • Currently we are able to predict instabilities, extinction and ignition of gas turbines. • Switch to larger problems and safety concerns: fires in buildings (submitted for consideration for 2013). • What do you hope to learn / discover? • Understanding flame propagation inside buildings and furnaces will greatly improve prediction models, and safety standards can be adapted accordingly. • Even larger datasets: for 2013, I/O is expected to reach 40 GB per snapshot. • Need to improve the workflow (fully parallel post-processing) and scalable I/O.
