

  1. Grid Resolution Study of a Drag Prediction Workshop Configuration using the NSU3D Unstructured Mesh Solver
     Dimitri J. Mavriplis, Department of Mechanical Engineering, University of Wyoming

  2. Motivation
     • Both AIAA Drag Prediction Workshops have shown large scatter in CFD results
       • Grid convergence not achieved
     • Both workshops have shown generally poor agreement of CFD with experimental results
       • CL vs. incidence
       • Moment predictions
     • Notable exceptions in both workshops
       • Not well understood

  3. Intricacies of DPW Cases
     • Generic/simple wing-body (nacelle) configurations
     • Substantial amounts of flow separation
       • Appears in sensitive areas (trailing edge)
       • Indicative of off-design conditions
       • Particularly challenging for CFD codes
     • DPW is a good test case for sensitivity studies in CFD solvers
       • More benign attached-flow cases should be straightforward for current RANS solvers

  4. Motivation
     • Follow-on studies:
       • E.M. Lee-Rausch et al.: AIAA 2003-3400
       • E.M. Lee-Rausch et al.: AIAA 2004-0554
     • General lack of grid convergence
       • In many cases, agreement between codes gets worse with increased grid refinement
     • Issues raised (running different codes on the same grids with the same turbulence model):
       • Modeling vs. discretization error
       • Effects due to:
         • Structured vs. unstructured grids
         • Cell-centered vs. vertex-based discretizations
         • Use of prisms or tetrahedra in boundary-layer regions
         • Upwind vs. artificial dissipation
         • Thin-layer vs. full Navier-Stokes terms
         • Distance function calculation

  5. Motivation
     • Follow up on DPW and previous studies to better understand sources of error
     • Determine dominant sources of error through sensitivity studies
       • Modeling error:
         • Distance function evaluation methods
         • Thin-layer vs. full Navier-Stokes terms
         • Dissipation levels (cf. upwind flux functions)
       • Grid resolution
     • Goal is not to match experimental data, but to understand the relative importance of the various sources of error and establish requirements for grid convergence
     • Turbulence and transition models left unchanged
       • SA model, fully turbulent

  6. Families of Grids
     • Grid convergence studies require families of grids derived by globally coarsening or refining an initial grid, to maintain similar relative variations in grid resolution
     • Family of 4 grids:
       • 1.1, 3.0, and 9.1 million points: original DPW2 grids (VGRIDns)
       • 72 million points: generated by uniform refinement of the 9.1 million point grid
     • Single grid of a different family:
       • 65 million points: generated courtesy of S. Pirzadeh, NASA Langley, using the 64-bit version of VGRIDns on the NASA Columbia supercomputer

  7. Baseline (3M pt) DPW2 Grid
     • 4 cells across the blunt trailing edge
     • 7e-06 chords spacing at the wall
     • y+ ≈ 1, with 26 cell layers in the boundary layer (a sanity check on this spacing is sketched below)
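For readers who want to sanity-check the quoted wall spacing: a common flat-plate estimate relates a target y+ to the first-cell height via the turbulent skin-friction correlation Cf = 0.026/Re_x^(1/7). This is a generic preliminary-sizing calculation, not the procedure used to build these grids:

```python
import math

def first_cell_height(re_chord, y_plus=1.0, x_over_c=1.0):
    """Flat-plate estimate of the first-cell wall spacing (in chords) that
    yields a target y+, via the turbulent skin-friction correlation
    Cf = 0.026 / Re_x^(1/7). A generic preliminary-sizing formula, not the
    procedure used to generate the DPW2 grids."""
    re_x = re_chord * x_over_c
    cf = 0.026 / re_x ** (1.0 / 7.0)
    u_tau_over_u = math.sqrt(0.5 * cf)       # u_tau / U_inf = sqrt(Cf / 2)
    # y+ = y * u_tau / nu  =>  y/c = y+ / (Re_c * u_tau/U_inf)
    return y_plus / (re_chord * u_tau_over_u)

print(f"{first_cell_height(3.0e6):.1e} chords")   # ~8e-06, same order as the 7e-06 above
```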

  8. Grid Specifications
     • NSU3D preprocessing converts tetrahedra into prisms in boundary-layer regions

  9. Grid Specifications (figures: 3.0 million pt grid vs. 72 million pt grid)

  10. Grid Specifications (figures: 65 million pt grid vs. 72 million pt grid)

  11. Grid Specifications (figures: 65 million pt grid vs. 72 million pt grid, continued)

  12. NSU3D Unstructured RANS Solver
     • Vertex-based discretization
     • Central difference + matrix dissipation
     • Multi-dimensional thin-layer Navier-Stokes
     • Implicit point/line solver in boundary-layer regions
     • Agglomeration multigrid
       • ~500 cycles to converge; 1000 cycles for higher-CL cases
     • Parallelization through MPI and/or OpenMP

  13. NSU3D Convergence and Scalability
     • Mach = 0.75, incidence = 0°, Reynolds number = 3 million
     • 3.0 million point grid: 2.5 hours on 16 Opterons

  14. NSU3D Convergence and Scalability
     • 3 Tflops on 2,008 CPUs of NASA Columbia
     • 20 minutes for the 72 million pt grid (benchmark case)

  15. NSU3D Convergence and Scalability
     • 3.0 million pts:
       • 2.5 hours on a cluster of 16 Opterons
       • 10 minutes on 128 CPUs of NASA Columbia
       • Most runs done on the Opteron cluster
     • 72 million pts:
       • 4.5 hours on 128 CPUs of NASA Columbia
       • 20 minutes on 2,008 CPUs of NASA Columbia
       • Most runs done on 128 CPUs of Columbia
     (The strong-scaling efficiency implied by these timings is computed below.)
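The same-machine timings above imply a strong-scaling efficiency that is easy to back out. A minimal sketch using only the numbers quoted on this slide; cross-machine comparisons (Opterons vs. Columbia) are not meaningful, so only the Columbia pair is used:

```python
# Strong-scaling efficiency from the 72M-pt Columbia timings above:
# efficiency = (t_ref * p_ref) / (t * p).
runs = [(128, 4.5 * 60.0), (2008, 20.0)]     # (CPUs, wall-clock minutes)
p0, t0 = runs[0]
for p, t in runs:
    speedup = t0 / t
    eff = speedup * p0 / p
    print(f"{p:5d} CPUs: {t:6.1f} min  speedup {speedup:5.2f}x  efficiency {eff:6.1%}")
```

Going from 128 to 2,008 CPUs, this works out to roughly 86% parallel efficiency, consistent with the near-linear scalability shown on the previous slide.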

  16. Sensitivity Studies
     • Sensitivity to distance function evaluation
     • Sensitivity to viscous term formulation
     • Sensitivity to artificial dissipation
     • Grid resolution effect on the above sensitivities
     • Mode of operation:
       • Fixed incidence (AoA = 0°): simpler for a grid convergence study
       • Mach = 0.75 and Mach = 0.3 (the subsonic case contains no discontinuities)
       • No change in turbulence/transition model

  17. Sensitivity to Distance Function
     • 3 distance function formulations:
       • Exhaustive search: slow but exact; may be discontinuous
       • Eikonal/Hamilton-Jacobi equation: used for level-set problems; fast sweeping algorithm; fast, accurate, smooth, but non-parallel
       • Poisson equation: fast, smooth, parallel; good accuracy near the surface, poor accuracy away from it
     (A minimal sketch of the fast sweeping idea follows.)
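To make the fast sweeping option concrete: below is a minimal 2D, structured-grid sketch of Zhao's fast sweeping method for the Eikonal equation |∇d| = 1, whose solution is the distance to the wall set. This illustrates the algorithm class only; the talk's unstructured-mesh implementation and data structures are not shown on the slide:

```python
import numpy as np

def fast_sweep_distance(wall_mask, h, n_sweeps=4):
    """Unsigned distance to the cells where wall_mask is True, computed by
    Zhao's fast sweeping method for the Eikonal equation |grad d| = 1 on a
    uniform 2D grid of spacing h (Gauss-Seidel passes in 4 orderings)."""
    ny, nx = wall_mask.shape
    big = 1.0e12
    d = np.full((ny + 2, nx + 2), big)       # pad so boundary cells need no tests
    d[1:-1, 1:-1][wall_mask] = 0.0           # distance is zero on the wall set
    for _ in range(n_sweeps):
        for sj in (1, -1):
            for si in (1, -1):
                for j in range(1, ny + 1)[::sj]:
                    for i in range(1, nx + 1)[::si]:
                        if wall_mask[j - 1, i - 1]:
                            continue
                        a = min(d[j, i - 1], d[j, i + 1])   # smaller x-neighbour
                        b = min(d[j - 1, i], d[j + 1, i])   # smaller y-neighbour
                        if abs(a - b) >= h:                 # update from one direction
                            cand = min(a, b) + h
                        else:                               # update from both
                            cand = 0.5 * (a + b + np.sqrt(2.0 * h * h - (a - b) ** 2))
                        if cand < d[j, i]:
                            d[j, i] = cand
    return d[1:-1, 1:-1]

# Example: distance above a wall along the bottom row of a 64x64 grid
mask = np.zeros((64, 64), dtype=bool)
mask[0, :] = True
dist = fast_sweep_distance(mask, h=1.0)
print(dist[5, 10])   # 5.0: five cells above the wall
```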

  18. Sensitivity to Distance Function
     • Mach = 0.75, Reynolds number = 3 million

  19. Sensitivity to Distance Function
     • Eikonal equation results very close to the exhaustive search, and much faster
     • Mach = 0.75, AoA = 0°, Reynolds number = 3 million

  20. Sensitivity to Viscous Term Formulation
     • NSU3D uses a multi-dimensional thin-layer approximation
       • Computed along mesh edges
     • NSU3D also implements full Navier-Stokes terms
       • Double loop over edges (gradients, then fluxes)
       • Extended stencil
       • Hybrid: normal second differences as above, cross terms using the double loop
       • 30% more expensive, less robust (inexact Jacobian)
     (The two-pass edge-loop pattern is sketched below.)
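The "double loop over edges" can be illustrated with a scalar-diffusion stand-in: one edge loop accumulates Green-Gauss nodal gradients, and a second edge loop evaluates face fluxes from the averaged gradients, which is where cross-derivative terms enter and the stencil widens. A sketch under those assumptions; the array names and weights are illustrative, not NSU3D's data structures:

```python
import numpy as np

def full_ns_viscous_residual(edges, face_area, face_normal, node_vol, u, mu):
    """Two-pass, edge-based pattern for 'full' viscous terms, with scalar
    diffusion standing in for the Navier-Stokes stress terms."""
    n_nodes = len(u)
    grad = np.zeros((n_nodes, 3))
    # Pass 1: edge loop -> accumulate Green-Gauss nodal gradients
    for (i, j), a, nrm in zip(edges, face_area, face_normal):
        uf = 0.5 * (u[i] + u[j])               # face value from the two endpoints
        grad[i] += uf * a * nrm / node_vol[i]
        grad[j] -= uf * a * nrm / node_vol[j]
    res = np.zeros(n_nodes)
    # Pass 2: edge loop -> viscous flux from the averaged gradient
    # (the averaged gradient carries the cross terms; a thin-layer scheme
    # would instead use only the difference u[j] - u[i] along the edge)
    for (i, j), a, nrm in zip(edges, face_area, face_normal):
        gf = 0.5 * (grad[i] + grad[j])
        flux = mu * a * float(np.dot(gf, nrm))
        res[i] += flux
        res[j] -= flux
    return res
```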

  21. Sensitivity to Viscous Term Formulation
     • Less than 0.5% variation in CL on the finer grids
     • Less than 2 counts variation in CD on the finer grids

  22. Grid Convergence and Dissipation Sensitivity
     • Sensitivity to dissipation:
       • Run with the nominal value of the dissipation parameter
       • Run with ½ the nominal value of the dissipation parameter
       • Provides an estimate of the influence of upwind/artificial dissipation: the effect of various flux functions and dissipation levels
     • Run on a sequence of grids (1M, 3M, 9M, 72M)
       • Fixed incidence (AoA = 0°)
       • Mach = 0.75 and Mach = 0.3 (no discontinuities)
     • Expect: CL and CD convergence, and a reduction in sensitivity to dissipation, as the grid is refined
     (A sketch of this run matrix follows.)
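The run matrix on this slide is a simple three-way sweep. A trivial driver sketch; the `run_case` stub is hypothetical, standing in for a solver launch:

```python
def run_case(grid, mach, aoa_deg, dissipation):
    """Hypothetical stand-in for launching one RANS solve."""
    print(f"grid={grid}  Mach={mach}  AoA={aoa_deg}  dissipation={dissipation}")

grids = ["1M", "3M", "9M", "72M"]            # the self-similar family
for mach in (0.75, 0.30):                    # transonic and subsonic points
    for grid in grids:
        for k in (1.0, 0.5):                 # nominal and half dissipation (normalized)
            run_case(grid, mach, aoa_deg=0.0, dissipation=k)
```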

  23. Grid Convergence and Dissipation
     • Drag is grid converging
     • Lift is somewhat erratic: better grid convergence at lower dissipation values
     • Sensitivity to dissipation decreases as expected

  24. Grid Convergence and Dissipation
     • Drag is grid converging
     • Sensitivity to dissipation decreases as expected
     • Friction drag is essentially grid converged
     (One standard way to quantify "grid converging" is sketched below.)
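One standard way to quantify grid convergence on a self-similar family is Richardson extrapolation: the observed order p and the extrapolated continuum value follow from three grids with a constant refinement ratio. Sketch below; the drag values are hypothetical placeholders, not the talk's data. For the 1.1M/9.1M/72M grids the point count grows by roughly 8x per level, so the effective spacing roughly halves, giving r ≈ 2:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed convergence order p and Richardson-extrapolated limit from
    three solutions on grids with a constant refinement ratio r in spacing."""
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    return p, f_exact

# Hypothetical placeholder CD values, NOT the talk's data.
p, cd_limit = observed_order(0.0300, 0.0290, 0.0287, r=2.0)
print(f"observed order p = {p:.2f}, extrapolated CD = {cd_limit:.4f}")
```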

  25. Additional Results: 65M pt Mesh
     • Add results from the 65M pt mesh
     • This mesh is not self-similar with the others
       • Different spacings… but approximately the same as the 72M pt mesh
       • 6-8 vs. 12 points across the trailing edge
       • No spanwise stretching
       • 976,828 vs. 474,926 points on the aircraft surface
     • Strictly, it should not be plotted on the same N^(-2/3) plot
       • But useful to see differences with the other grids (the abscissa is computed below)
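For reference, the abscissa of these grid-convergence plots: on a 3D mesh the effective spacing scales as h ~ N^(-1/3), so N^(-2/3) ~ h² and a second-order scheme should converge linearly in this variable. Computing it for the point counts listed in the talk:

```python
# N^(-2/3) abscissa for the grid-convergence plots; point counts are
# the ones listed in the talk.
grids = {"1.1M": 1.1e6, "3.0M": 3.0e6, "9.1M": 9.1e6,
         "65M (other family)": 65.0e6, "72M": 72.0e6}
for name, n in grids.items():
    print(f"{name:>20s}: N^(-2/3) = {n ** (-2.0 / 3.0):.3e}")
```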

  26. 65M pt Mesh Results
     • 10% drop in CL at AoA = 0°: closer to experiment
     • Drop in CD: further from experiment
     • Same trends at Mach = 0.3
     • Little sensitivity to dissipation

  27. 65M pt Mesh Results
     • Most of the change in CD comes from friction drag
       • Recall that CDV was "fully" converged
     • Similar results for Mach = 0.3; low sensitivity to dissipation

  28. 65M pt Mesh Results
     • Much better agreement with experiment (CL and CM)

  29. 65M pt Mesh Results
     • Low on drag
       • Transition effects remain
     • Note that CDP changes when comparing the 65M and 72M pt results at fixed CL

  30.-33. Surface CP Comparisons (four figure slides with the same observations)
     • Improved inboard, more trailing-edge separation
     • Outboard still shows discrepancies

  34. Conclusions
     • Surprising to see large differences at high resolution
       • 10% change in CL from 65M to 72M points
       • In the presence of apparent grid convergence
     • General trends reproduced by FUN3D
       • E.M. Lee-Rausch et al.: AIAA 2005-4842
     • Sensitivity to grid resolution/topology dominates the other modeling issues investigated herein
     • Implications for turbulence/transition modeling

  35. Conclusions
     • Speculative explanation:
       • The main difference between the fine grids is spanwise stretching at the trailing edge
       • Isotropic cells lead to increased trailing-edge separation
       • Reduces lift substantially when integrated along the span
       • Little effect of root separation (same resolution in the 2 grids)
     • Further work: generate a coarser family of meshes from the 65M pt mesh
       • Examine grid convergence between the 2 families of grids
     • State-of-the-art computers (NASA Columbia) have a large effect on advancing the state of the art in computational aerodynamics
