Center for Radiative Shock Hydrodynamics Fall 2011 Review

  1. Center for Radiative Shock Hydrodynamics Fall 2011 Review. Assessment of predictive capability. Derek Bingham

  2. CRASH has required innovations to most UQ activities • Experiment design • Screening (identifying most important inputs) • Emulator construction • Prediction • Calibration/tuning (solving inverse problems) • Confidence/prediction interval estimation • Analysis of multiple simulators We will focus on the framework in which we can quantify uncertainties in predictions and the impact of the sources of variability

  3. The predictive modeling approach is often called model calibration* where: • model or system inputs • system response • simulator response • calibration parameters • observational error *Kennedy and O’Hagan (2001); Higdon et al. (2004)
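The slide's equation did not survive the transcript; as a hedged reconstruction, assuming the standard Kennedy and O'Hagan (2001) formulation (the symbols below are the usual ones, not necessarily the deck's), the bullets correspond to a model of the form

$$
y(x_i) = \eta(x_i, \theta) + \delta(x_i) + \epsilon_i,
$$

with x the model or system inputs, y the system response, \eta the simulator response, \theta the calibration parameters, and \epsilon_i the observational error; \delta, the model discrepancy, is part of the KOH formulation even though it is not in the bullet list.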

  5. Gaussian process models are used for the unknown functions in this framework (other models are being looked at)

  6. The goal is to estimate the unknown calibration parameters and also to make predictions of the physical system

  7. The Gaussian process model specification links the simulations and observations through the covariance • The observations and simulations are stacked into a single data vector
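A sketch of that joint structure, assuming the Higdon et al. (2004) formulation (symbols ours, not the deck's):

$$
D = \begin{pmatrix} y \\ \eta \end{pmatrix} \sim N(\mu, \Sigma),
\qquad
\Sigma = \Sigma_{\eta} + \begin{pmatrix} \Sigma_{\delta} + \sigma^2 I & 0 \\ 0 & 0 \end{pmatrix},
$$

where \Sigma_\eta is the simulator GP covariance evaluated at both the field inputs (with the calibration parameters filled in) and the simulation inputs, \Sigma_\delta is the discrepancy covariance at the field inputs, and \sigma^2 I is the observational error; the shared \Sigma_\eta block is what links simulations and observations.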

  8. We have used 2-D CRASH simulations and observations to build and explore the predictive model for shock location and breakout time • Experiment data: • 2008 and 2009 experiments • Experiment variables: Be thickness, Laser energy, Xe fill pressure, Observation time • Response: Shock location (2008) and shock breakout time (2009) • 2-D CRASH Simulations • 104 simulations, varied over 5 inputs • Experiment variables: Be thickness, Laser energy, Observation time • Calibration parameters: Electron flux limiter, Be gamma, Wall opacity

  9. Can sample from the joint posterior distribution of the calibration parameters [Figures: breakout time calibration, shock location calibration, and joint calibration]
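The sampling itself is standard MCMC; as a minimal sketch, here is a random-walk Metropolis sampler applied to a toy log posterior that stands in for the CRASH calibration posterior (the centers 0.06, 1.4, 0.8 are illustrative placeholders, not the review's estimates):

```python
import numpy as np

# Toy stand-in for the (unnormalized) log posterior of the three
# calibration parameters; the real analysis would involve the GP
# emulator, discrepancy, and observation likelihood.
def log_post(theta):
    center = np.array([0.06, 1.4, 0.8])  # illustrative values only
    return -0.5 * np.sum((theta - center) ** 2 / 0.01)

def metropolis(log_post, theta0, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis over the calibration parameters."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    draws = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept w.p. min(1, ratio)
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

draws = metropolis(log_post, theta0=[0.05, 1.5, 1.0])
print(draws.mean(axis=0))  # posterior means of the three parameters
```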

  10. A look at the posterior marginal distributions of the calibration parameters

  11. The statistical model can be used to evaluate the sensitivity of the codes or system to the inputs [Figure: 2-D CRASH shock breakout time sensitivity plots]
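One common route from a fitted emulator to such sensitivity plots is variance-based indices. A minimal sketch, with a toy function standing in for the fitted GP predictor (the functional form is illustrative only), using the Saltelli estimator of first-order Sobol indices:

```python
import numpy as np

# Toy emulator mean function standing in for the fitted GP predictor
# of shock breakout time.
def emulator(x):  # x: (n, 4) array of inputs scaled to [0, 1]
    return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

rng = np.random.default_rng(1)
n, d = 20000, 4
A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = emulator(A), emulator(B)
var = np.var(fA)

for j in range(d):
    ABj = A.copy()
    ABj[:, j] = B[:, j]  # resample only input j
    # Saltelli estimator of the first-order Sobol index for input j
    Sj = np.mean(fB * (emulator(ABj) - fA)) / var
    print(f"input {j}: first-order Sobol index ~ {Sj:.3f}")
```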

  12. The statistical model is used to predict shock breakout time incorporating sources of uncertainty

  13. The statistical model is used to predict shock location incorporating sources of uncertainty

  14. We developed a new statistical model for combining outputs from multi-fidelity simulators • Have simulations from 1-D and 2-D models • 2-D model runs come at a higher computational cost • Would like to use all simulations, and experiments, to make predictions

  15. We developed a new statistical model for combining outputs from multi-fidelity simulators • 1-D CRASH Simulations • 1024 simulations • Experiment variables: Be thickness, Laser energy, Xe fill pressure, Observation time • Calibration parameters: Electron flux limiter, Laser energy scale factor • 2-D CRASH Simulations • 104 simulations • Experiment variables: Be thickness, Laser energy, Xe fill pressure, Observation time • Calibration parameters: Electron flux limiter, Wall opacity, Be gamma

  16. The available shock information comes from models and experiments, where: • model or system inputs • system response • simulator response • vectors of calibration parameters • Modeling approach in the spirit of Kennedy and O'Hagan (2000), Kennedy and O'Hagan (2001), and Higdon et al. (2004) • 1-D simulator: calibration parameters are adjusted • 2-D simulator: calibration parameters are adjusted • Experiments: calibration parameters are fixed and unknown

  17. Calibrate the lower fidelity code to the higher fidelity code • The idea is that the 1-D code does not match the 2-D code for two reasons

  18. Link the simulator responses and observations through a joint model and discrepancies
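A hedged sketch of one way the linked model can be written, in the spirit of Kennedy and O'Hagan (2000) (symbols ours, not the deck's):

$$
\eta_2(x, t_2) = \eta_1\left(x, t_1^{*}\right) + \delta_1(x),
\qquad
y(x) = \eta_2\left(x, t_2^{*}\right) + \delta_2(x) + \epsilon,
$$

where t_1^{*} is the value of the 1-D calibration parameters at which the 1-D code best matches the 2-D code, \delta_1 is the code-to-code discrepancy, \delta_2 is the model-to-reality discrepancy, and \epsilon is replication error. This makes the two reasons of slide 17 explicit: the 1-D code can disagree with the 2-D code because t_1 is set to the wrong value, and because of the structural discrepancy \delta_1.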

  21. Link the simulator responses and observations through a joint model and discrepancies • Comments: • For deciding which variables belong in the discrepancy, one can ask "what is fixed at this level?" • The interpretation of the calibration parameters changes somewhat • Discrepancies are almost guaranteed for this specification

  22. Gaussian process models are placed on the linked simulator responses and discrepancies

  23. Need to specify prior distributions • The approach is Bayesian • Inverted-gamma priors for the variance components • Beta priors for the correlation parameters • Log-normal priors for the calibration parameters
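In symbols (the hyperparameters a, b, \alpha, \beta, m_j, s_j are illustrative placeholders, not values from the deck):

$$
\sigma^2 \sim \text{Inv-Gamma}(a, b),
\qquad
\rho_k \sim \text{Beta}(\alpha, \beta),
\qquad
\theta_j \sim \text{Lognormal}(m_j, s_j^2).
$$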

  24. Can illustrate using a simple example [Figure: low fidelity model, high fidelity model, and true model plus replication error]

  27. How would this work in practice? • Evaluate each computer model at different input settings • We evaluated the low fidelity (LF) model 20 times with inputs (x, t1, tf) chosen according to a Latin hypercube design • The high fidelity (HF) model was evaluated 5 times with inputs (x, t2, tf) chosen according to a Latin hypercube design • The experimental data were generated by evaluating the true model 3 times and adding replication error drawn from a N(0, 0.2) distribution
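A minimal sketch of generating such designs, assuming SciPy's qmc module and illustrative input ranges (the actual ranges are not given in the slides):

```python
from scipy.stats import qmc

# 20-run Latin hypercube over (x, t1, tf) for the LF model and a
# 5-run design over (x, t2, tf) for the HF model, in the unit cube.
lf_unit = qmc.LatinHypercube(d=3, seed=0).random(n=20)
hf_unit = qmc.LatinHypercube(d=3, seed=1).random(n=5)

# Scale from the unit cube to illustrative input ranges.
lf_design = qmc.scale(lf_unit, l_bounds=[0.0, 0.0, 0.0], u_bounds=[1.0, 3.0, 3.0])
hf_design = qmc.scale(hf_unit, l_bounds=[0.0, 0.0, 0.0], u_bounds=[1.0, 3.0, 3.0])
print(lf_design.shape, hf_design.shape)  # (20, 3) (5, 3)
```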

  28. Observations and response functions at the true value of the calibration parameters

  29. We can construct 95% posterior prediction intervals at the observations
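With posterior predictive draws in hand, the intervals are just quantiles of the draws. A sketch, where post_draws is a hypothetical array standing in for MCMC output:

```python
import numpy as np

# post_draws: hypothetical (n_draws, n_obs) posterior predictive draws
# at the observation inputs; placeholder noise stands in for MCMC output.
post_draws = np.random.default_rng(2).normal(size=(4000, 3))
lo, hi = np.percentile(post_draws, [2.5, 97.5], axis=0)  # 95% limits per observation
print(lo, hi)
```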

  30. Comparison of predicted response surfaces

  31. New methodology applied to CRASH for breakout time

  32. Observations • Able to build a statistical model that appears to predict the observations well • Prediction error is on the order of the experimental uncertainty • Care must be taken choosing priors for the variances of the GPs

  33. Developing a new statistical model for combining simulations and experiments • An approach to combine outputs from experiments and several different computer models • Experiments: the mean function is just one of many possible response functions • View computer model evaluations as biased versions of this "super-reality"

  34. Super-reality model for prediction and calibration • Experiments: • Computer model: • Each computer model will be calibrated directly to the observations • Information for estimating an individual unknown calibration parameter comes from the observations and from the models with that parameter as an input
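The equations on this slide were lost in the transcript; a hedged reconstruction consistent with the surrounding prose (the symbols \zeta and \delta_k are ours) is

$$
y(x) = \zeta(x) + \epsilon,
\qquad
\eta_k(x, \theta_k) = \zeta(x) + \delta_k(x), \quad k = 1, \dots, K,
$$

where \zeta is the "super-reality" response, each computer model \eta_k is a biased version of it with bias \delta_k, and \epsilon is observational error.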

  35. Have deployed state-of-the-art UQ techniques to leverage CRASH codes and experiments • Use the model calibration framework to perform a variety of tasks, such as exploring the simulation response surfaces, making predictions for experiments, and performing sensitivity analysis • Developed a new statistical model for calibration of multi-fidelity computer models with field data • Can make predictions with associated uncertainty informed by multi-fidelity models • Developing a model to combine several codes (not necessarily ranked by fidelity) and observations

  36. Allocation of computational budget • The goal is to use available simulations and experiments to evaluate the allocation of the computational budget to computational models • Since prediction is our goal, we will use the reduction in the integrated mean squared error (IMSE) • This measures the prediction variance, averaged across the input space • The optimal set of simulations is the one that maximizes the expected reduction in the IMSE (formalized below)
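In symbols (notation ours), the criterion averages the posterior prediction variance over the input space X and scores a candidate set of new runs D_new by the expected drop:

$$
\text{IMSE} = \int_{\mathcal{X}} \operatorname{Var}\big[\hat{y}(x) \mid \text{data}\big]\, dx,
\qquad
\Delta(D_{\text{new}}) = \mathbb{E}\big[\text{IMSE}_{\text{current}} - \text{IMSE}_{\text{new}}\big].
$$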

  37. Criterion can be evaluated in the current statistical framework • Can compute an estimate of the mean squared error at any potential input, conditional on the model parameters • Would like a new trial to improve the prediction everywhere in the input region • This criterion is difficult to optimize

  38. A quick illustration – CRASH 1-D using shock location • Can use the 1-D predictive calibration model to evaluate the value of adding new trials • Suppose we wish to conduct 10 new field trials • Which 10? What do we expect to gain?

  39. Expected reduction in IMSE for up to 10 new experiments [Figure: expected reduction in IMSE vs. number of follow-up experiments]

  40. Can compare the value of new experiments to simulations • One new field trial yields an expected reduction in the IMSE of about 5% • The optimal IMSE design with 200 new 1-D computer trials yields an expected reduction of about 3% • The value of an experiment is substantially more than that of a computer trial • Can do the same exercise when there are multiple codes

  41. Fin
