Pseudospectral Chebyshev Representation of Few-group Cross Sections on Sparse Grids *


  1. Pseudospectral Chebyshev Representation of Few-group Cross Sections on Sparse Grids* Pavel M. Bokov, Danniëll Botes, South African Nuclear Energy Corporation (Necsa), South Africa; Vyacheslav G. Zimin, National Research Nuclear University “MEPhI”, Russia. *Presented by Frederik Reitsma

  2. Outline • Introduction • Theory • Results • Conclusions

  3. Introduction

  4. Problem Statement The representation of few-group, homogenised neutron cross sections as they are passed from the cell or assembly code to the full core simulator: group collapsing and spatial homogenisation → cross section representation → full core simulation

  5. Homogenised Few-Group Cross Sections • Several cross sections are represented simultaneously • Result of a single transport calculation • Cross sections depend on several state parameters • Instantaneous and history parameters • Historically ~3 state parameters, ~5 or more in newer models • Makes the problem intrinsically multidimensional • Requires methods that are scalable with the number of dimensions • Smooth dependence on state parameters

  6. Representing Cross Sections • Three necessary, interdependent steps • Sampling strategy • Model selection • Converting the sampled data into the parameters of the model • Each step should be performed in a manner that is, in some way, optimal

  7. Sampling Strategies • Regular tensor product mesh • allows one to use interpolation and approximation techniques developed for one dimension • Low discrepancy grids • provide the most even coverage of the parameter space with a finite number of samples • Sparse grids • mitigate the curse of dimensionality for functions that satisfy certain smoothness criteria (see the point-count sketch below) [Figure: examples of a sparse grid, a tensor product grid and a low-discrepancy grid]
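
To make the point-count advantage concrete, here is a minimal sketch (not from the presentation) comparing a full tensor-product grid with a nested Clenshaw-Curtis sparse grid, using the 1D growth rule N = 2^l + 1 from slide 12:

```python
# Point counts: full tensor-product grid vs. Clenshaw-Curtis sparse grid.
# A sketch under the standard nested growth rule N(l) = 2^l + 1 (assumed).
import math
from itertools import product

def new_points_1d(level):
    # Points a nested 1D mesh adds at each level: 1, 2, 2, 4, 8, 16, ...
    if level == 0:
        return 1
    if level == 1:
        return 2
    return 2 ** (level - 1)

def sparse_grid_size(dim, level):
    # Sum over multi-indices with |i|_1 <= level of the product of the
    # numbers of new 1D points in each dimension.
    return sum(
        math.prod(new_points_1d(i) for i in idx)
        for idx in product(range(level + 1), repeat=dim)
        if sum(idx) <= level
    )

def tensor_grid_size(dim, level):
    return (2 ** level + 1) ** dim

for d in (2, 3, 4, 5):
    print(d, tensor_grid_size(d, 6), sparse_grid_size(d, 6))
# The sparse grid grows far more slowly with the dimension d.
```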

  8. Selecting the Model • Based on physical considerations • Choose state parameters and nominal conditions based on knowledge of the physical problem • Perturb parameters from nominal conditions and perform a Taylor series expansion • The expansion is most often second order • The problem-independent approach • Tensor products of one-dimensional basis functions or their linear combinations • Local support – done with linear functions or splines as basis functions • Global support – done with higher order polynomials
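
For reference, the second-order Taylor model mentioned above has the form (notation assumed, not from the slides, with p the vector of state parameters and p₀ the nominal conditions):

\[
\Sigma(\mathbf{p}) \approx \Sigma(\mathbf{p}_0)
+ \nabla\Sigma(\mathbf{p}_0)^{\top}(\mathbf{p}-\mathbf{p}_0)
+ \tfrac{1}{2}(\mathbf{p}-\mathbf{p}_0)^{\top} H(\mathbf{p}_0)\,(\mathbf{p}-\mathbf{p}_0),
\]

where H is the Hessian of the cross section Σ with respect to the state parameters, evaluated at the nominal conditions.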

  9. Fitting the Model to the Data • Traditionally done in two ways • Interpolation (usually requires a regular grid) • Linear (table representation) • Higher order polynomials • Approximation • Uses regression or quasi-regression • Can be done on an arbitrary grid
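
As an illustration of the approximation route (a generic sketch, not the authors' method), a low-order polynomial can be fitted by least squares on an arbitrary, non-regular grid:

```python
# Least-squares (regression) fit on an arbitrary grid: a generic sketch,
# with tanh standing in for a cross-section dependence (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 50)        # scattered, non-regular samples
y = np.tanh(x)                        # stand-in "cross section" values
A = np.vander(x, 3)                   # quadratic design matrix [x^2, x, 1]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)                           # fitted polynomial coefficients
```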

  10. Theory

  11. Our Method Global, hierarchical, multi-dimensional polynomial interpolation on a Clenshaw-Curtis sparse grid

  12. The Sampling Strategy Cross sections are sampled on a Clenshaw-Curtis sparse grid based on a 1D Chebyshev-Gauss-Lobatto mesh • Properties of the Clenshaw-Curtis sparse grid • Nested (points are re-used when the mesh is refined) • Mitigates the curse of dimensionality for smooth functions • Excellent convergence rate (interpolation, quadrature) • Chebyshev-Gauss-Lobatto mesh • Extrema and endpoints of the Chebyshev polynomials • Nested (points are re-used when the mesh is refined) • Mitigates Runge’s phenomenon (oscillations between interpolation nodes) • Number of points for level l is N = 2^l + 1 (sketch below)
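
A minimal sketch of the 1D mesh and its nesting (the level-0 centre-point convention is an assumption, inferred from the algorithm slide):

```python
# Chebyshev-Gauss-Lobatto nodes on [-1, 1] and a check of their nesting.
import numpy as np

def cgl_nodes(level):
    if level == 0:
        return np.array([0.0])            # level 0: single centre point (assumed)
    k = np.arange(2 ** level + 1)         # N = 2^l + 1 points
    return -np.cos(np.pi * k / 2 ** level)  # extrema/endpoints of T_{2^l}

# Nesting: every level-l node reappears in the level-(l+1) mesh.
coarse, fine = cgl_nodes(2), cgl_nodes(3)
assert all(np.isclose(fine, x).any() for x in coarse)
print(coarse)   # [-1. -0.707...  0.  0.707...  1.]
```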

  13. The Model • A linear combination of multi-dimensional basis functions • Basis functions are built as tensor products of univariate Lagrange cardinal polynomials • Based on Chebyshev-Gauss-Lobatto points • Infinitely differentiable • Limited amplitude on the interpolation interval • Formally global support but effectively local support (local features) • By construction, each basis function vanishes at every other node from the current and previous levels • Unknown coefficients in the linear combination are used for model fitting (see the sketch below)
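
A sketch of one such basis function (helper names assumed; a barycentric form would be preferred in practice for numerical stability):

```python
# Lagrange cardinal polynomial on a set of nodes, and a 2D tensor-product
# basis function built from two of them (illustrative helper names).
import numpy as np

def cardinal(nodes, j, x):
    # L_j(x): equals 1 at nodes[j] and 0 at every other node.
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    for i, xi in enumerate(nodes):
        if i != j:
            out *= (x - xi) / (nodes[j] - xi)
    return out

nodes = -np.cos(np.pi * np.arange(5) / 4)   # level-2 CGL nodes
basis_2d = lambda x, y: cardinal(nodes, 2, x) * cardinal(nodes, 1, y)
print(basis_2d(nodes[2], nodes[1]))         # 1.0 at its own node
print(basis_2d(nodes[0], nodes[1]))         # 0.0 at any other node
```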

  14. The Interpolation • Built hierarchically • At each iteration the correction to the interpolation from the previous iteration is calculated • Built directly from the samples • Since at any given level the basis functions are linearly independent • Each basis function is associated with one node • Hierarchical surpluses • describe the contribution of each basis function to the interpolation • are calculated as the function value at a given point minus the interpolation at the previous level
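
A tiny numeric illustration (an assumed example, with f(x) = x²): the level-0 interpolant is the constant f(0), so the level-1 surpluses at the new endpoint nodes are simply the samples there minus that constant:

```python
# Hierarchical surplus at a new node = sample value at the node minus the
# interpolation from the previous level (here, the level-0 constant).
f = lambda x: x ** 2
level0_interp = f(0.0)              # constant built from the centre sample
for x_new in (-1.0, 1.0):           # nodes that are new at level 1
    surplus = f(x_new) - level0_interp
    print(x_new, surplus)           # surplus = 1.0 at both endpoints
```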

  15. Error Control and Optimisation • Terms with small hierarchical surpluses make a small contribution to the interpolation • This can be used to estimate the maximal (L∞) error • The representation can be optimised by eliminating small terms • Improves representation size and reconstruction time • Should not affect the interpolation error if used carefully • After the model optimisation step, only significant terms are retained (sketch below)
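
A sketch of the pruning step (the data layout and the bound are assumptions): terms whose surplus falls below a tolerance are dropped, and the sum of the dropped magnitudes gives a rough estimate of the extra L∞ error, to the extent that the basis functions have limited amplitude on the interval, as slide 13 notes:

```python
# Prune interpolation terms with small hierarchical surpluses.
# `terms` is assumed to be a list of (node_id, surplus) pairs.
def prune(terms, tol):
    kept = [(node, s) for node, s in terms if abs(s) >= tol]
    # Sum of dropped |surplus| values: a rough estimate of the extra
    # max-norm error introduced by pruning (assumes the associated basis
    # functions stay close to unit amplitude on the interval).
    dropped = sum(abs(s) for _, s in terms if abs(s) < tol)
    return kept, dropped

kept, err_est = prune([("a", 2.0), ("b", 1e-9), ("c", -3e-10)], 1e-8)
print(len(kept), err_est)   # 1 term kept, ~1.3e-9 estimated extra error
```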

  16. The Algorithm • Start with level l = 0 and the constant function (with the value of the function sample at the centre of the problem domain) as the initial interpolation • Repeat until some stop criterion is reached: • Increase the level by one • For each node that is new to this level • Construct the associated basis function • Calculate the hierarchical surplus • Optimise the final representation by rejecting terms with small hierarchical surpluses (see the 1D sketch below)
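
Putting the pieces together, here is a self-contained 1D sketch of this loop (hypothetical helper names, not the authors' code; the actual method works in several dimensions on the sparse grid):

```python
# 1D hierarchical Lagrange interpolation on nested CGL meshes: a sketch.
import numpy as np

def cgl_nodes(level):
    if level == 0:
        return np.array([0.0])                  # centre point only
    k = np.arange(2 ** level + 1)               # N = 2^l + 1
    return -np.cos(np.pi * k / 2 ** level)

def lagrange(nodes, j, x):
    out = np.ones_like(np.asarray(x, dtype=float))
    for i, xi in enumerate(nodes):
        if i != j:
            out *= (x - xi) / (nodes[j] - xi)
    return out

def build(f, max_level, tol=0.0):
    terms = []                                  # (level, node index, surplus)

    def interpolant(x):
        s = np.zeros_like(np.asarray(x, dtype=float))
        for lvl, j, w in terms:
            s += w * lagrange(cgl_nodes(lvl), j, x)
        return s

    for lvl in range(max_level + 1):            # increase the level by one
        nodes = cgl_nodes(lvl)
        prev = cgl_nodes(lvl - 1) if lvl > 0 else np.array([])
        for j, x in enumerate(nodes):
            if np.isclose(prev, x).any():
                continue                        # nested: sampled earlier
            w = float(f(x) - interpolant(x))    # hierarchical surplus
            if abs(w) > tol:                    # reject negligible terms
                terms.append((lvl, j, w))
    return interpolant

g = build(np.tanh, 5)                           # 33-point nested mesh
xs = np.linspace(-1, 1, 101)
print(np.max(np.abs(g(xs) - np.tanh(xs))))      # small maximal error
```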

  17. Results

  18. The Example • Macroscopic two-group cross sections for a VVER-1000 pin cell • Xenon concentration at equilibrium from the start • Power density = 108 W/cm³

  19. The Calculation • The transport code that was used is UNK (KIAE, Russia) • Calculation done up to l = 6, which gives a total of 19313 points • Since sparse grids are nested, calculations for lower levels were done on the appropriate subset of the level 6 grid • Error checked on 4096 independent quasi-random points • Quasi-random and sparse grid points were all calculated in a single UNK run • All calculations therefore used the same burnup grid
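
A sketch of the accuracy check (names assumed): the maximal and mean relative errors are obtained by comparing interpolated values against the reference transport results at the independent quasi-random points:

```python
# Relative-error metrics over independent validation points (sketch).
import numpy as np

def rel_errors(reference, interpolated):
    # Assumes strictly positive reference values (cross sections).
    rel = np.abs(interpolated - reference) / np.abs(reference)
    return rel.max(), rel.mean()    # delta_max and delta_mean

# usage: d_max, d_mean = rel_errors(xs_transport, xs_interpolated)
```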

  20. Cross Section Behaviour • Dependence on burnup (other state parameters set to nominal) • 20% variation in thermal absorption • Dependence on burnup with other state parameters varied within their ranges • 80% variation in thermal absorption

  21. Error Decay • Accuracy of approximation for k∞ • Target accuracy for k∞: δmax = 0.05% (50 pcm); δmean = 0.01% to 0.02% (10 to 20 pcm) • Level 4 (801 samples) meets the target accuracy for the mean error, but not for the maximum error • Level 5 (2433 samples) meets the target accuracy for both the mean and the maximum error

  22. Result for 801 Samples

  23. Result for 2433 Samples

  24. Conclusions

  25. Conclusions • A constructive method for hierarchical Lagrange interpolation on a Clenshaw-Curtis sparse grid has been developed and implemented • The method combines the efficiency of Chebyshev interpolation with the low calculation and storage requirements of sparse grid methods • The method also provides conservative error control and a means of model optimisation

  26. Conclusions (continued) • The method was used to represent the two-group homogenised neutron cross sections (VVER-1000 pin cell, standard state parameters) • A few hundred samples lead to a representation with accuracy sufficient for practical applications: • 0.1–0.2% in terms of maximal relative error, and • 0.01–0.02% in terms of mean relative error • The optimisation of the representation leads to an improvement of: • between 8 and 50 times in terms of the number of terms used, and • between 4 and 65 times in terms of reconstruction speed-up
