Overview of GNEP Fast Reactor Simulation Program Andrew Siegel, ANL Work sponsored by U.S. Department of Energy Office of Nuclear Energy, Science & Technology
Outline • Background/Approach • Overview of 2007 progress
Outline • Background/Approach
Motivating issues for simulation program • Predictive models are the backbone of reactor design/analysis • Core, overall plant design • Fuel performance • Integrated safety assessment • Certification • Question: Do existing tools/models meet GNEP needs? • If not, what improvements are needed? • How should these improvements be prioritized?
State of existing tools: short version • Most were first developed in the 1970s and 1980s • Targeted improvements through more recent programs (e.g., IFR) • Models based on assumptions about both computing power and solvers that are no longer true • ~1.E8 FLOPS: Cray-1 (late 1970s) • ~1.E15 FLOPS: BG/P, Cray (2008) • Thus, easy to identify "shortcomings": coarse discretizations/geometries underpinned by models based on experimentally measured correlations, e.g. • Subchannel models with rod bundle heat transfer correlations • Homogenized geometries with few-group transport • Key question, though: to what extent will improvements result in superior design, enhanced safety, quicker certification, etc.?
Modeling improvements for GNEP • Four classes of needs for GNEP • Reduction of uncertainties to support better-optimized designs • Improved efficiency of use via better software engineering • Detailed study of localized phenomena • Numerical experiments that complement traditional experiments • Specific examples • More accurate predictions of peak subassembly temperature • Reduction of "hot channel factors" • Much more seamless integration of tools • Major increases in efficiency • Reduction of modeling uncertainties for flux calculations • Used in fuel cycle analysis, heating calculations, reactivity coefficient calculations, control rod worth and shutdown margin evaluation, etc. • Understanding of detailed localized phenomena • E.g., thermal striping in the upper plenum, pipe flow, etc. • Empirical correlations for low breeding ratio designs (grid spacers, etc.) • Parameterization of wire-induced cross flow
Outline • Background/Approach • Overview of 2007 progress
GNEP Fast Reactor Simulation Program 2007 • Two simultaneous goals • Develop general advanced capability • Begin developing general advanced modeling for each physical process/enabling technology • Heat transfer, neutron transport, structural mechanics • Framework: meshing/geometry, visualization/analysis, solvers, coupling, data archiving • Small coupling demonstration • Ultimately, use for safety, optimized design, etc. • Early application of new codes to study specific issues • Predictions with lower uncertainties • See previous slide
07 Work Package Structure • Advanced Thermal Modeling: ANL, LANL • Advanced Neutronics Modeling: ANL, ORNL, LANL, INL • Advanced Structural Mechanics: LLNL • Code Framework Design: ANL, LLNL
Overview of thermal hydraulics modeling goals • Fischer talk • Code • Version 1: multi-resolution T-H (Nek) • DNS, LES: multi-pin -> several subassemblies • RANS: many subassemblies -> full core • Improved subchannel: full core • Coupled to other modules • Usable by designers: "validated", documented • Problems • Quantify effect of wire wrap on mixing in rod bundle geometry • Predictions of thermal striping at core outlet in upper plenum • Identify subassembly hot spots (reduce hot channel factors) • Lower uncertainties of assembly outlet temperature predictions
LES of 7-Pin Configuration • Time-averaged axial (top) and transverse (bottom) velocity distributions • Snapshot of axial velocity
Reactor Core TH Plan – desktop strategy Empirical subchannel codes: • Very fast – capable of whole core at pin level (400,000 pins) • Past practice: • Empirical, based on experimental data • Serial • Complex input decks • Current effort: • Empirical, data from experiment and from LES simulations • Parallel • Input based on same mesh framework as detailed TH/neutronics 217 pin velocity distribution from Nek5000 subchannel analysis
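To make the subchannel idea concrete, here is a minimal sketch of the kind of per-channel energy balance an empirical subchannel code marches axially. It is illustrative only; the function and variable names and the single lumped mixing coefficient `beta` are assumptions for this sketch, not the actual COBRA or SHARP implementation.

```python
def march_channel_temps(T_in, mdot, cp, q_lin, dz, neighbors, beta, n_steps):
    """Illustrative subchannel-style axial march.
    T_in: inlet temperature per channel [K]; mdot: per-channel flow [kg/s];
    cp: coolant heat capacity [J/(kg*K)];
    q_lin[i][k]: linear heat rate into channel i at axial step k [W/m];
    neighbors[i]: indices of channels adjacent to channel i;
    beta: empirical lateral mixing flow per unit length [kg/(m*s)]."""
    T = list(T_in)
    for k in range(n_steps):
        T_next = []
        for i, Ti in enumerate(T):
            # axial energy balance over one step: dT = q' dz / (mdot * cp)
            dT_axial = q_lin[i][k] * dz / (mdot[i] * cp)
            # empirical inter-channel mixing toward neighbor temperatures
            dT_mix = sum((beta * dz / mdot[i]) * (T[j] - Ti)
                         for j in neighbors[i])
            T_next.append(Ti + dT_axial + dT_mix)
        T = T_next
    return T  # outlet temperature per channel
```

In a real subchannel code the mixing term is where the empiricism lives; the "current effort" bullet above amounts to calibrating such coefficients from LES data as well as experiment.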
Status of 07 work • Fischer talk • Development of Nek code • Design of reactor core mesh/geometry • Analysis of 7-pin and 19-pin LES • 217-pin (single assembly) Large Eddy Simulations • Analysis of upper plenum thermal striping, comparison with CHAD • Star-CD inter-comparisons using RANS • Coupling with neutron transport • Comparison with COBRA • IAEA international benchmark on transient coolant behavior in the outlet plenum of Monju based on measurements made during a plant trip test performed in December 1995 • Open issues: Validation cases, computer time, transient coupling …
Summary of Neutronics 07 work scope • Yang talk • Code • Version 0 of UNIC (Unstructured Deterministic Neutron Transport) • General geometry capability using unstructured finite elements • First order form solution using method of characteristics • Second order form solution using even-parity flux formulation • Parallel capability for scaling to thousands of processors • Adjoint capability for sensitivity and uncertainty analysis • X-section generation • Calculations • An ABR full subassembly with fine structure geometrical description for coupling with thermal-hydraulics calculation • A whole ABR configuration with pin-by-pin description
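For reference, the second-order even-parity form mentioned above is, in its standard textbook statement (within-group notation, isotropic scattering and source assumed; not necessarily UNIC's exact discretized form):

```latex
% Even-parity (second-order, self-adjoint) transport equation
\psi^{\pm}(\mathbf{r},\boldsymbol{\Omega})
   = \tfrac{1}{2}\left[\psi(\mathbf{r},\boldsymbol{\Omega})
     \pm \psi(\mathbf{r},-\boldsymbol{\Omega})\right],
\qquad
-\,\boldsymbol{\Omega}\cdot\nabla\!\left[\frac{1}{\sigma_t}\,
   \boldsymbol{\Omega}\cdot\nabla\psi^{+}\right]
   + \sigma_t\,\psi^{+}
   = \frac{\sigma_s\,\phi + S}{4\pi},
\qquad
\phi(\mathbf{r}) = \int_{4\pi}\psi^{+}\,d\Omega,
\qquad
\psi^{-} = -\frac{1}{\sigma_t}\,\boldsymbol{\Omega}\cdot\nabla\psi^{+}.
```

The self-adjoint second-order form yields symmetric positive-definite systems, which is one reason it pairs naturally with unstructured finite elements and scalable parallel solvers.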
ABTR Whole-Core Calculations • Four benchmark problems are being analyzed • All require P7 or S8 angular order • 33, 100, and 230 groups are planned • 30º symmetry core with homogenized assemblies • ~40,000 spatial DOF • ~100 processors • 120º periodic core with homogenized assemblies • ~400,000 spatial DOF • ~500 processors • 30º symmetry core with homogenized pin cells • 1.7 million spatial DOF • ~1000 processors • Single assembly with explicit geometry • 2.2+ million spatial DOF • ~5000 processors
Structural mechanics • Ferencz talk • Code • Adaptation/application of LLNL Diablo code • Integrating/coupling with other physics modules • Calculations • Core restraint • Calculate structural response reliably to evaluate the reactivity effects during both long-term irradiation and transient conditions.
Framework Design
• Components: formalized interfaces, encapsulate physics, follow strict design rules, unit tests
• Framework: provides services to components, defines module structure, domain of CS
• Physics components: neutron transport (MC, MOL, direct), fuel, thermal hydraulics, structural mechanics, balance of plant
• Enabling technologies: visualization, geometry, mesh generation, coupling, high-performance I/O, ultra-scalable solvers, uncertainty
• Goals
• Develop/implement overall (lightweight) software architecture
• Visualization/analysis (Bradley talk): use/develop VisIt (LLNL)
• Parallel solver library: use/develop PETSc
• Mesh generation: use CUBIT from Sandia
• High-performance I/O: use HDF5 and pNetCDF
• Coupling/runtime meshing: use/develop MOAB
• Testing framework: custom, just beginning development (on critical path)
• Repository management: use svn + informal policing (need to improve)
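As an illustration of what "formalized interfaces" and "strict design rules" for a physics component could look like, here is a minimal hypothetical sketch. Names such as PhysicsComponent, get_field, and set_field are assumptions for illustration, not the actual SHARP API.

```python
from abc import ABC, abstractmethod

class PhysicsComponent(ABC):
    """Hypothetical component contract: each physics module (transport,
    thermal hydraulics, structural mechanics, ...) exposes the same
    lifecycle and field-exchange calls so the framework can drive it."""

    @abstractmethod
    def initialize(self, mesh, config):
        """Set up internal state on the framework-provided mesh."""

    @abstractmethod
    def solve(self, t, dt):
        """Advance this physics from time t to t + dt."""

    @abstractmethod
    def get_field(self, name):
        """Return a named field (e.g. 'power_density') on the shared mesh."""

    @abstractmethod
    def set_field(self, name, values):
        """Accept a named field computed by another component."""

    @abstractmethod
    def finalize(self):
        """Release resources and write any archival output."""
```

Encapsulating each physics behind a contract of this kind is what makes per-module unit tests and module substitution possible without touching the driver.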
Frameworks, cont. • Software design • Begun development of standards documentation • No work yet on Users Guide • Only informal coding standards -- portability challenges
Coupling data-flow diagram: the Driver mediates field exchange through MOAB among the UNIC, Nek, Depletion, Material Properties, and XSection modules. The heating source q computed by UNIC is mapped onto the thermal-hydraulics mesh for Nek; coolant and fuel temperatures (Tc, Tf) and coolant properties computed by Nek are mapped back onto the neutronics mesh (Lc(n)) and supplied to the depletion and cross-section modules.
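A rough sketch of the loop this data flow implies, written as a simple fixed-point (Picard-style) iteration in the driver. The objects and the transfer_field helper standing in for the MOAB-mediated mesh-to-mesh mapping are assumptions for illustration, not the actual driver code.

```python
def coupled_steady_state(unic, nek, xsec, transfer_field, tol=1e-4, max_iters=50):
    """Hypothetical driver loop: alternate neutronics (UNIC) and thermal
    hydraulics (Nek), exchanging fields until temperatures converge.
    transfer_field(field, src, dst) stands in for the MOAB-mediated
    mapping between the neutronics and TH meshes."""
    T_fuel, T_cool = nek.initial_temperatures()
    for it in range(max_iters):
        # update cross sections with current temperatures, then solve transport
        xsec.update(T_fuel, T_cool)
        q_neutronics = unic.solve_power_density(xsec)

        # map the heat source onto the TH mesh and solve thermal hydraulics
        q_th = transfer_field(q_neutronics, src="neutronics", dst="th")
        T_fuel_new, T_cool_new = nek.solve_temperatures(q_th)

        # check convergence of the exchanged temperature field
        change = max(abs(a - b) for a, b in zip(T_fuel_new, T_fuel))
        T_fuel, T_cool = T_fuel_new, T_cool_new
        if change < tol:
            break
    return T_fuel, T_cool
```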
Establishment of Software Process
• Co-located code team • Weekly meeting • Shared code repo • Internal Wiki • Automated test suite • …
• Mihai Anitescu (MCS staff): applied mathematician; uncertainty analysis
• Alvaro Caceres (MCS post-doc): physics Ph.D., HPC software engineer; SHARP code architecture
• Tom Fanning (NE staff): reactor safety; improvements to SAS based on SHARP
• Paul Fischer (MCS staff): CFD (higher-order methods); Nek development lead
• Dinesh Kaushik (MCS staff): computational scientist; parallel implementation of UNIC
• James Lottes (MCS pre-doc): parallel algorithms for CFD; Nek development
• Dave Pointer (NE staff): fluid dynamics/heat transfer; application/analysis of CFD tools
• Christian Rabiti (NE staff): computational neutronics; UNIC development/verification
• Barry Smith (MCS staff): computational mathematician; optimized parallel solvers for UNIC
• Mike Smith (NE staff): computational neutronics; UNIC development lead
• Tim Tautges (MCS staff): former CUBIT lead, advanced meshing; mesh generation, integration
• Won-sik Yang (NE staff): reactor design; problem definition, validation, …
Other collaborators
• Carlos Pantano (UIUC): joint INCITE award; subgrid-scale modeling of liquid metals in fast reactor core
• Elmer Lewis (Northwestern): advanced neutronics methods
• Jean Ragusa (Texas A&M): joint NERI award; advanced coupling methods
• Informal "steering committee": James Cahalan, Bob Hill, Hussein Khalil, Bob Rosner, Temitope Taiwo
Integration challenges • Major challenges being worked on • How to transition users from current to new tools • How much work to invest in improvement/integration of existing tools vs modern capabilities • How to integrate work done outside of ANL (e.g. SM) with main code suite • How to overlay code with GNEP milestones • How to handle fast transients in coupling framework
Computing time • ANL GNEP dedicated small cluster (coming) • Director’s allocations on ANL BG/L and ORNL Cray • Must compete otherwise • 1M hours: 2007 INCITE award “LES of Wire Wrapped Fuel Pins” • 50M hours: 2008 INCITE neutronics proposal • 20M hours: 2008 INCITE T-H proposal
Incorporation of Legacy Modules • Codes such as SASIVa, Cobra IV, Relap, etc., are trusted and familiar tools for reactor designers • All new codes will need to be benchmarked against these as a starting baseline (in addition to validation against new experiments, etc.) • Moreover, these codes are often fast: e.g., < 10 seconds on a workstation for a TH subchannel model of a 217-pin assembly vs. 4 hours on 100,000 processors for a first-principles (LES) solution • In turn, high-fidelity simulations feed improved subchannel models • SHARP will support legacy code interfaces to allow users and developers to: validate a given geometry/model against current tools, without changing the geometry definition; focus on testing/debugging a single high-fidelity module while retaining coupled physics at low cost
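One way the legacy-code interface described above might look in practice is a thin adapter that presents a legacy subchannel code through the same field-exchange calls as the high-fidelity module, so the rest of the coupled system is unchanged. This is a hedged sketch under that assumption; the class and method names are illustrative, not the SHARP API.

```python
class LegacySubchannelAdapter:
    """Hypothetical wrapper exposing a legacy subchannel TH code through the
    same field-exchange interface as the high-fidelity module, so a developer
    can debug another physics module with cheap coupled feedback."""

    def __init__(self, legacy_runner, geometry):
        self.run_legacy = legacy_runner   # e.g. a wrapped executable or library call
        self.geometry = geometry          # same geometry definition as the new tools
        self._power = None
        self._temps = {}

    def set_field(self, name, values):
        if name == "power_density":
            self._power = values          # heat source received from neutronics

    def solve(self, t, dt):
        # seconds on a workstation instead of hours on thousands of processors
        self._temps = self.run_legacy(self.geometry, self._power)

    def get_field(self, name):
        if name in ("fuel_temperature", "coolant_temperature"):
            return self._temps[name]
        raise KeyError(name)
```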