
Probabilistic R5V2/3 Assessments

This presentation introduces the purpose of probabilistic assessments for R5V2/3. It discusses the limitations of deterministic assessments and the need for probabilistic approaches, and outlines the agenda and methodology for the assessment.




  1. Probabilistic R5V2/3 Assessments Rick Bradford Peter Holt 17th December 2012

  2. Introduction & Purpose • Why probabilistic R5V2/3? • If deterministic R5V2/3 gives a lemon • If you want to know about lifetime / reality • The trouble with ‘bounding’ data is: • It’s not bounding • It’s arbitrary • Most of the information is not used

  3. What have we done? (All 316H) • 2009/10: HPB/HNB Bifs – R5V4/5 (Peter Holt) • 2011: HYA/HAR Bifs – Creep Rupture (BIFLIFE) • 2012: HYA/HAR Bifs – R5V2/3 (BIFINIT) • Today only R5V2/3

  4. Psychology Change • Best estimate rather than conservative • Including best estimate of error / scatter • The conservatism comes at the end… • ..in what “failure” (initiation) probability is regarded as acceptable… • …and this may depend upon the application (safety case v lifetime)

  5. So what is acceptable? • Will vary – consult Customer • For “frequent” plant might be ~0.2 failures per reactor year (e.g., boiler tubes) • For HI/IOGF plant might be 10^-7 pry to 10^-5 pry (maybe) • But the assessment would be the same!

  6. What’s the Downside? • MINOR: Probabilistic assessment is more work than deterministic • MAJOR: Verification • The only way of doing a meaningful verification of a Monte Carlo assessment is to do an independent Monte Carlo assessment! • Learning Points: Can be counterintuitive • Acceptance by others • Brainteaser

  7. Aim of Today • Probabilistic “How to do it” guide • For people intending to apply – I hope • Knowledge of R5V2/3 assumed • We’ve only done it once • So everything based on HAR bifurcations experience (316H)

  8. Limitation • Only the crack initiation part of R5V2/3 addressed • Not the “precursor” assessments • Primary stress limits • Stress range limit • Shakedown • Cyclically enhanced creep • Complete job will need to address these separately

  9. Agenda • Introduction 9:30 – 10:00 • Computing platform 10:00 – 10:15 • Methodology • Deterministic 10:15 – 10:45 • Probabilistic 10:45 – 12:30 • Lunch 12:30 – 13:00 • Input Distributions • Materials Data (316) 13:00 – 14:30 • Loading / Stress 14:30 – 14:45 • Plant History 14:45 – 15:00

  10. Computing Platform • So far we’ve used Excel • Latin Hypercube add-ons available • RiskAmp / “@Risk” ?? • Doing most of the coding in VBA is essential • Minimise output to spreadsheet during execution • Matlab might be a natural platform • I expect Latin Hypercube add-ons would also be available – but not checked • Develop facility within R-CODE/DFA?

  11. Run Times • Efficient coding crucial • Typically 50,000 – 750,000 trials • (Trial = assessment of whole life of one component with just one set of randomly sampled variables) • Have achieved run times of 0.15 to 0.33 seconds per trial on standard PCs (~260 load cycles) • Hence 2 hours to 3 days per run

  12. Methodology • We shall assume Monte Carlo • Monte Carlo is just deterministic assessment done many times • So the core of the probabilistic code is the deterministic assessment

  13. Deterministic Methodology • R5V2/3 Issue 3 (Revision 1 2013 ?) • Will include new weldments Appendix A4 • BUT when used for 316H probabilistics, we advise revised rules for primary reset • NB: Deterministic assessments should continue to use the ‘old’ Manus O’Donnell rules E/REP/ATEC/0027/AGR/01

  14. Hysteresis Cycle Construction • R5V2/3 Appendix A7 • Always sketch what the generic cycle will look like for your application • Helpful to write down the intended algorithm in full as algebra • Recall that the R5 hysteresis cycle construction is all driven by the elastically calculated stresses • Example – HANDOUT (from BIFINIT-RB) • Remember that the dwell stress cannot be less than the rupture reference stress

  15. Talking to the HANDOUT • A brief run-through of the elements of the R5V2/3 Appendix A7 hysteresis cycle construction methodology, using the hysteresis cycle on the next slide as the basis of the illustration

  16. Illustrative Hysteresis Cycle (figure: hysteresis loop with points labelled A, B, C, D, E, F, G, H, J)

  17. Weldments • R5V2/3 Appendix A4 • Initially use parent stress-strain data • WSEF used in hysteresis cycle construction – not FSRF • WER – leave nucleation cycles out of parent fatigue endurance • Factor dwell stress by ratio of weld:parent cyclic strength (unless replaced by rupture reference stress) • For 2mm thick we assumed weld = parent

  18. Creep Dwell • Creep relaxation, and hence damage, by integration of forward creep law • Prohibits relaxation below rupture reference stress • Strain hardening – both terms evaluated at the same strain • Both evaluated at same point in scatter, h
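The dwell treatment on this slide can be sketched numerically. This is a minimal illustration only: the Norton forward creep law and every parameter value below (E, the follow-up factor Z, A, n, the rupture reference stress) are illustrative assumptions, not R5 equations or data.

```python
# Hedged sketch: stress relaxation during a creep dwell by forward
# integration of a creep law, with the relaxed stress floored at the
# rupture reference stress, as the slide describes. All numbers are
# illustrative placeholders.

def dwell_relaxation(sigma0, t_dwell, sigma_ref_rupture,
                     E=180e3, Z=3.0, A=1e-20, n=7.0, steps=10000):
    """Return (final stress, accumulated creep strain) after a dwell.

    sigma0: stress at start of dwell (MPa)
    t_dwell: dwell duration (hours)
    sigma_ref_rupture: rupture reference stress (MPa) - relaxation floor
    E: Young's modulus (MPa); Z: elastic follow-up factor
    A, n: assumed Norton law  eps_dot = A * sigma**n  (per hour)
    """
    dt = t_dwell / steps
    sigma = sigma0
    eps_creep = 0.0
    for _ in range(steps):
        rate = A * sigma ** n           # forward creep rate at current stress
        eps_creep += rate * dt
        sigma -= (E / Z) * rate * dt    # relaxation moderated by follow-up
        if sigma <= sigma_ref_rupture:  # no relaxation below rupture ref stress
            sigma = sigma_ref_rupture
    return sigma, eps_creep
```

The accumulated creep strain per dwell would then feed a ductility-exhaustion damage sum, with scatter in the creep law and ductility sampled per trial.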

  19. Primary Reset Issue: 316H • Is creep strain reset to zero at the start of each dwell – so as to regenerate the initial fast primary creep rate? • Existing advice is unchanged for deterministic assessments… • Reset primary creep above 550°C • Do not reset primary creep at or below 550°C… • …use continuous hardening instead (creep strain accumulates over cycles)

  20. Primary Reset Issue: 316H • For probabilistic assessments we advise the use of primary reset at all temperatures • But with two alleviations, • Application of the zeta factor, ζ • Only reset primary creep if the previous unload caused significant reverse plasticity • “significant” plasticity in this context has been taken as >0.01% plastic strain, though 0.05% may be OK

  21. The zeta factor

  22. Probabilistics • Is it all just normal distributions? • No • Also Log-normal, also… • All sorts of weird & wonderful pdfs • Or just use random sampling of a histogram…

  23. Normal and Log-Normal PDFs • Normal pdf • Log-normal is the same with z replaced by ln(z) • Integration measure is then d(ln(z))=dz/z
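Written out, the two pdfs referred to above are (standard definitions, with μ and σ the mean and standard deviation of z or of ln z respectively):

```latex
% Normal pdf
f(z) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(z-\mu)^2}{2\sigma^2}\right)

% Log-normal pdf: z replaced by \ln z, with the integration measure
% \mathrm{d}(\ln z) = \mathrm{d}z / z folded into the prefactor
f(z) = \frac{1}{z\,\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(\ln z-\mu)^2}{2\sigma^2}\right)
```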

  24. A Brief Reminder of the Basics: PDFs versus cumulative distributions, definitions of mean, median, variance, standard deviation, CoV, correlation coefficient. Illustrative graphs. DO VIA HANDOUT • Illustrate number of standard deviations against probability for normal distribution, e.g., 1.65 sigma = 95%, etc.

  25. Non-Standard Distribution: Elastic Follow-Up

  26. Non-Standard Distribution: Overhang

  27. Non-Standard Distribution: Thermal Transient Factor wrt Reference Trip

  28. How Many Distributed Variables • Generally – lots! • If a quantity is significantly uncertain… • …and you have even a very rough estimate of its uncertainty… • …then include it as a distributed variable. • The Latin Hypercube can handle it

  29. How Many Distributed Variables Here are those used for the HYA/HAR bifurcations (PJH distribution types given)… Bifurcation inlet sol inner radius (PERT) Bifurcation inlet sol outer radius (PERT) Boiler tube sol inner radius (PERT) Boiler tube sol outer radius (PERT) Inner radius oxidation metal loss k parameter (Log-Normal) Off-Load IGA at bore (Log-Normal) Chemical clean IGA at bore (Log-Normal or Uniform) Outer radius metal loss (Log-Normal)

  30. How Many Distributed Variables Deadweight moment (Normal) Bifurcation thermal moment (Normal) Unrestricted MECT (not used by PJH) Gas temperature (Normal) Steam temperature (Normal) Metal temperature interpolation parameter (Normal) Follow-up factor, Z (PERT) Carburisation allowance (Log-Normal of Sub-Set) Number of restricted tubes post-clean (not used by PJH)

  31. How Many Distributed Variables Restriction after 3rd clean (not used by PJH) Overhang distribution (actual overhangs used) Tube thermal moment (Normal) Bifurcation 0.2% proof stress (Log-Normal) Tube 0.2% proof stress (Log-Normal) Zeta (ζ) (Log-Normal) Ramberg-Osgood A parameter (Log-Normal) Young's modulus (Log-Normal)

  32. How Many Distributed Variables The weld fatigue endurance (Log-Normal) The bifurcation parent fatigue endurance (Log-Normal) The boiler tube fatigue endurance (Log-Normal) The minimum differential pressure in hot standby (Uniform distribution of a set of cases) The start-up peak thermal stress (Ditto) The trip peak thermal stress (Ditto) The minimum temperature during hot standby (Ditto)

  33. How Many Distributed Variables Creep strain rate (weld) (Log-Normal) Creep strain rate (bifurcation) (Log-Normal) Creep strain rate (boiler tube) (Log-Normal) Creep ductility (weld) (Log-Normal) Creep ductility (bifurcation) (Log-Normal) Creep ductility (boiler tube) (Log-Normal)

  34. Where are the pdfs? • “But what if no one has given me a pdf for this variable”, I hear you cry. • Ask yourself, “Is it better to use an arbitrary single figure – or is it better to guestimate a mean and an error?” • If you have a mean and an error then any vaguely reasonable pdf is better than assuming a single deterministic value

  35. How is Probabilistics Done? • (Monte Carlo) probabilistics is just deterministic assessment done many times • This means random sampling (i.e. each distributed variable is randomly sampled and these values used in a trial calculation) • But how are the many results weighted?
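As a concrete illustration of that structure, here is a minimal sketch. The two-variable "damage model" and every parameter value are placeholders, not the R5V2/3 calculation; the point is only the loop shape: sample all distributed variables, run one deterministic assessment, repeat.

```python
# Minimal Monte Carlo skeleton: each trial is one deterministic
# whole-life assessment with every distributed variable sampled once.
import random

def deterministic_assessment(proof_stress, creep_ductility):
    """Placeholder deterministic model returning a damage-like number."""
    return 100.0 / proof_stress + 0.01 / creep_ductility

def monte_carlo(n_trials, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        # Sample each distributed variable (log-normals, per the slides;
        # the mu/sigma values here are invented for illustration)
        proof_stress = rng.lognormvariate(5.0, 0.1)   # ~exp(5) = 148 MPa
        ductility = rng.lognormvariate(-3.0, 0.5)     # ~5% strain
        damage = deterministic_assessment(proof_stress, ductility)
        if damage >= 1.0:
            failures += 1
    # With equal-probability sampling, every trial carries equal weight
    return failures / n_trials

print(monte_carlo(10000))
```

With unequal trial probabilities the final division by n_trials would not be valid, which is the weighting question the slide raises.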

  36. Options for Sampling: (1) Exhaustive (Numerical Integration) • Suppose we want +/-3 standard deviations sampled at 0.25 sd intervals • That's 25 values, each of different probability. • Say 41 distributed variables • That's 25^41 ≈ 2 x 10^57 combinations • Not feasible – by a massive factor
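The slide's arithmetic can be checked directly:

```python
# Combinatorial explosion check: 25 sample values per variable,
# 41 distributed variables.
combos = 25 ** 41
print(f"{combos:.1e}")             # ~2e57 combinations
# Even at an absurdly optimistic 1e12 trials per second this is
# ~6.6e37 years, so exhaustive sampling really is out of the question.
years = combos / 1e12 / 3.154e7    # ~3.154e7 seconds per year
```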

  37. Options for Sampling: (2) Unstructured Combination • Each trial has a different probability • Range of probabilities is enormous • Out of 50,000 trials you will find that one or two have far greater probability than all the others • So most trials are irrelevant • Hence grossly inefficient

  38. Options for Sampling: (3) Random but Equal Probability • Arrange for all trials to have the same probability • Split all the pdfs into “bins” of equal area (= equal probability) – say P • Then every random sample has the same probability, P^N, where N = number of variables

  39. Equal Area “Bins” Illustrated for 10 Bins (More Likely to Use 10,000 Bins)

  40. Bins v Sampling Range • 10 bins = +/- 1.75 standard deviations (not adequate) • 300 bins = +/- 3 standard deviations (may be adequate) • 10,000 bins = +/- 4 standard deviations (easily adequate for “frequent”; not sure for “HI/IOGF/IOF”)
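The bins-versus-range figures above can be reproduced if each equal-area bin is represented by its conditional mean (centroid). That convention is an assumption on my part, but it recovers the quoted numbers; a standard-library sketch:

```python
# Representative value of each equal-probability bin of N(0, 1),
# taking the bin centroid (conditional mean) as the representative.
from math import exp, pi, sqrt
from statistics import NormalDist

def bin_centroids(B):
    """Conditional mean of each of B equal-area bins of N(0, 1)."""
    inv_cdf = NormalDist().inv_cdf
    pdf = lambda z: exp(-z * z / 2) / sqrt(2 * pi)   # pdf(+/-inf) -> 0.0
    edges = ([float("-inf")]
             + [inv_cdf(i / B) for i in range(1, B)]
             + [float("inf")])
    # For the standard normal, E[Z | a < Z < b] = (pdf(a) - pdf(b)) / (1/B)
    return [(pdf(edges[i]) - pdf(edges[i + 1])) * B for i in range(B)]

print(round(bin_centroids(10)[-1], 3))    # ~1.75 sd for 10 bins
print(round(bin_centroids(300)[-1], 2))   # ~3.0 sd for 300 bins
```

The outermost centroid for 10,000 bins comes out near 4 sd, matching the slide.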

  41. Optimum Trial Sampling Strategy • Have now chosen the bins for each variable • Bins are of equal probability • So we want to sample all bins for all variables with equal likelihood • How can we ensure that all bins of all variables are sampled in the smallest number of trials? • (Albeit not in all combinations)

  42. Answer: Latin Hypercube • N-dimensional cube • N = number of distributed variables • Each side divided into B bins • Hence B^N cells • Each cell defines a particular randomly sampled value for every variable • i.e., each cell defines a trial • All trials are equally probable

  43. Latin Hypercube • A Latin Hypercube consists of B cells chosen from the possible B^N cells such that no cell shares a row, column, rank,… with any other cell. • For N = 2 and B = 8 an example of a Latin Hypercube is a chess board containing 8 rooks none of which are en prise. • Any Latin Hypercube defines B trials which sample all B bins of every one of the N variables.

  44. Example – The ‘Latin Square’ • N=2 Variables and B=4 Samples per Variable • B cells are randomly occupied such that each row and column contains only one occupied cell. • The occupied cells then define the B trial combinations. (Figure: 4×4 grid with rows and columns labelled 1–4.)

  45. Generation of the Latin Square • A simple way to generate the square/hypercube • Assign the variable samples in random order to each row and column. • Occupy the diagonal to specify the trial combinations. • These combinations are identical to the ones on the previous slide. (Figure: grid with row samples ordered 3, 4, 2, 1 and column samples ordered 3, 1, 4, 2; diagonal cells occupied.)
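The generation scheme on slides 42-45 amounts to taking an independent random permutation of the B bins for each variable and reading the trials off the diagonal. A minimal sketch of that idea:

```python
# Latin Hypercube sampling sketch: for each of N variables, randomly
# permute the B equal-probability bins; trial j then takes bin perm[j]
# of every variable, so across the B trials every bin of every
# variable is sampled exactly once (though not in all combinations).
import random

def latin_hypercube(n_vars, n_bins, seed=0):
    """Return n_bins trials; each trial is a list of n_vars bin indices."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        perm = list(range(n_bins))
        rng.shuffle(perm)          # random bin order for this variable
        columns.append(perm)
    return [[columns[v][trial] for v in range(n_vars)]
            for trial in range(n_bins)]

trials = latin_hypercube(n_vars=3, n_bins=8)
# Each variable's column is a permutation: every bin appears once.
```

Each bin index would then be converted to a sampled value via that variable's inverse cdf, as in the equal-area bin construction earlier.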

  46. Range of Components • Modelling just one item – or a family of items? • Note that distributed variables do not just cover uncertainties but can also cover item to item differences, • Temperature • Load • Geometry • Metal losses

  47. Plant History • A decision is required early on… • Model on the basis of just a few idealised load cycles… • …or use the plant history to model the actual load cycles that have occurred • Can either random sample to achieve this • Or can simply model every major cycle in sequence if you have the history (reactor and boiler cycles) • Reality is that all cycles are different

  48. Cycle Interaction • Even if load cycles are idealised, if one or more parameters are randomly sampled every cycle will be different • Hence a cycle interaction algorithm is obligatory • And since all load cycles differ, the hysteresis cycles will not be closed, even in principle • This takes us beyond what R5 caters for • Hence need to make up a procedure
