Center for Radiative Shock Hydrodynamics Fall 2011 Review: Assessment of predictive capability. Derek Bingham
CRASH has required innovations to most UQ activities • Experiment design • Screening (identifying most important inputs) • Emulator construction • Prediction • Calibration/tuning (solving inverse problems) • Confidence/prediction interval estimation • Analysis of multiple simulators We will focus on the framework where we can quantify uncertainties in predictions and the impact of the sources of variability
The predictive modeling approach is often called model calibration* where: • model or system inputs • system response • simulator response • calibration parameters • observational error
The simulator response is modeled with a Gaussian process (other models are also being examined); the goal is to estimate the unknown calibration parameters and also make predictions of the physical system.
*Kennedy and O'Hagan (2001); Higdon et al. (2004)
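For reference, the standard formulation in the cited papers (Kennedy and O'Hagan 2001; Higdon et al. 2004) ties these quantities together as below; the symbols are the conventional ones and are assumed here rather than taken from the slide, and the discrepancy term δ(x) may or may not have appeared in the original presentation.

```latex
% y: system response   \eta: simulator response   \theta: calibration parameters
% \delta: model discrepancy   \varepsilon: observational error
y(x_i) \;=\; \eta(x_i, \theta) \;+\; \delta(x_i) \;+\; \varepsilon_i,
\qquad \varepsilon_i \sim N(0, \sigma^2_{\varepsilon})
```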
The Gaussian process model specification links the simulations and observations through the covariance • The vector of observations and simulations is modeled jointly
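A hedged sketch of what this linkage typically looks like in the Kennedy-O'Hagan/Higdon setting (the exact blocks used in the CRASH analysis may differ): stack the n field observations and m simulator runs into a single vector and model it as multivariate normal, with the emulator covariance shared by both blocks and the discrepancy covariance and observational error attached only to the field block.

```latex
z = \big(y(x_1),\dots,y(x_n),\,\eta(x^{*}_1,t^{*}_1),\dots,\eta(x^{*}_m,t^{*}_m)\big)^{\top}
\sim N(\mu,\ \Sigma),
\qquad
\Sigma \;=\; \Sigma_{\eta} \;+\;
\begin{pmatrix} \Sigma_{\delta} + \sigma^{2}_{\varepsilon} I_n & 0 \\ 0 & 0 \end{pmatrix}
```

Here Σ_η is the emulator covariance evaluated at the field inputs (paired with the calibration parameters) and the simulator inputs, Σ_δ is the discrepancy covariance at the field inputs, and σ²_ε is the observational error variance.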
We have used 2-D CRASH simulations and observations to build and explore the predictive model for shock location and breakout time • Experiment data: • 2008 and 2009 experiments • Experiment variables: Be thickness, Laser energy, Xe fill pressure, Observation time • Response: Shock location (2008) and shock breakout time (2009) • 2-D CRASH Simulations • 104 simulations, varied over 5 inputs • Experiment variables: Be thickness, Laser energy, Observation time • Calibration parameters: Electron flux limiter, Be gamma, Wall opacity
Can sample from the joint posterior distribution of the calibration parameters [Figure: posterior samples for breakout time calibration, shock location calibration, and joint calibration]
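A minimal sketch of how such posterior samples could be drawn with a random-walk Metropolis algorithm; log_posterior below is a hypothetical placeholder standing in for the actual GP-based likelihood and priors used in the CRASH analysis.

```python
import numpy as np

def log_posterior(theta):
    """Hypothetical log-posterior of the calibration parameters.
    In practice this would combine the GP model likelihood of the field
    observations and simulations with the priors on theta."""
    # Placeholder: a simple quadratic log-density, for illustration only.
    return -0.5 * np.sum((theta - 0.5) ** 2 / 0.1 ** 2)

def metropolis(theta0, n_samples=5000, step=0.05, seed=0):
    """Random-walk Metropolis sampler for the calibration parameters."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    draws = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:  # accept/reject step
            theta, logp = proposal, logp_prop
        draws[i] = theta
    return draws

# Example: three calibration parameters (e.g., electron flux limiter,
# Be gamma, wall opacity), started from the middle of the unit cube.
samples = metropolis(theta0=[0.5, 0.5, 0.5])
print(samples.mean(axis=0))
```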
A look at the posterior marginal distributions of the calibration parameters
The statistical model can be used to evaluate the sensitivity of the codes or the system to inputs [Figure: 2-D CRASH shock breakout time sensitivity plots]
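One common way such sensitivity plots are produced is via main-effect curves from the emulator: vary one input over a grid while averaging the emulator output over the remaining inputs. A minimal sketch, with emulator_mean a hypothetical stand-in for the fitted GP prediction.

```python
import numpy as np

def emulator_mean(X):
    """Hypothetical emulator mean on the unit cube; replace with GP predictions."""
    return np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * X[:, 2]

def main_effect(dim, n_grid=25, n_mc=500, n_inputs=3, seed=0):
    """Main-effect curve for one input, averaging over the other inputs."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, n_grid)
    effects = np.empty(n_grid)
    for i, g in enumerate(grid):
        X = rng.uniform(size=(n_mc, n_inputs))  # Monte Carlo over other inputs
        X[:, dim] = g
        effects[i] = emulator_mean(X).mean()
    return grid, effects

for d in range(3):
    grid, eff = main_effect(d)
    print(f"input {d}: main-effect range = {eff.max() - eff.min():.3f}")
```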
The statistical model is used to predict shock breakout time incorporating sources of uncertainty
The statistical model is used to predict shock location incorporating sources of uncertainty [Figure: prediction plots; axis units in μs]
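Conditional on the GP hyperparameters and the calibration parameters, such predictions follow from the usual Gaussian conditioning formulas; a sketch is below, with the full analysis additionally averaging over the posterior of the parameters.

```latex
\hat{y}(x^{*}) \mid z \;\sim\;
N\!\Big( \mu(x^{*}) + k(x^{*})^{\top}\Sigma^{-1}(z - \mu),\;\;
         k(x^{*},x^{*}) - k(x^{*})^{\top}\Sigma^{-1}k(x^{*}) + \sigma^{2}_{\varepsilon} \Big)
```

Here z is the joint vector of observations and simulations, Σ its covariance, and k(x*) the covariance between the new input and the elements of z.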
We developed a new statistical model for combining outputs from multi-fidelity simulators • Have simulations from 1-D and 2-D models • 2-D model runs come at a higher computational cost • Would like to use all simulations, and experiments, to make predictions • 1-D CRASH Simulations • 1024 simulations • Experiment variables: Be thickness, Laser energy, Xe fill pressure, Observation time • Calibration parameters: Electron flux limiter, Laser energy scale factor • 2-D CRASH Simulations • 104 simulations • Experiment variables: Be thickness, Laser energy, Xe fill pressure, Observation time • Calibration parameters: Electron flux limiter, Wall opacity, Be gamma
The available shock information comes from models and experiments, where: • model or system inputs • system response • simulator response • vectors of calibration parameters • The modeling approach is in the spirit of Kennedy and O'Hagan (2000), Kennedy and O'Hagan (2001), and Higdon et al. (2004) • 1-D simulator: calibration parameters are adjusted • 2-D simulator: calibration parameters are adjusted • Experiments: calibration parameters are fixed and unknown
Calibrate the lower fidelity code to the higher fidelity code • The idea is that the 1-D code does not match the 2-D code for two reasons
Link the simulator responses and observations through a joint model and discrepancies; the simulator responses and discrepancies are modeled with Gaussian processes
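A hedged reading of the hierarchy described here (the exact form used in CRASH may differ): the 2-D code is treated as the calibrated 1-D code plus a between-code discrepancy, and the experiments as the calibrated 2-D code plus a code-to-reality discrepancy and observational error.

```latex
\eta_{2}(x, \theta_{2}) = \eta_{1}(x, \theta_{1}) + \delta_{1}(x),
\qquad
y(x) = \eta_{2}(x, \theta_{2}) + \delta_{2}(x) + \varepsilon
```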
Comments: • For deciding which variables belong in the discrepancy, one can ask "what is fixed at this level?" • The interpretation of the calibration parameters changes somewhat • Discrepancies are almost guaranteed under this specification
Need to specify prior distributions • Approach is Bayesian • Inverted-gamma priors for variance components • Beta priors for the correlation parameters • Log-normal priors for the calibration parameters
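In symbols, with all hyperparameter values below purely illustrative placeholders rather than the ones actually used:

```latex
\sigma^{2}_{k} \sim \text{Inverse-Gamma}(a_{k}, b_{k}),
\qquad
\rho_{kj} \sim \text{Beta}(\alpha, \beta),
\qquad
\theta_{j} \sim \text{Lognormal}(m_{j}, \tau^{2}_{j})
```

for the variance components, the correlation parameters of the GP covariance functions, and the calibration parameters, respectively.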
Can illustrate using a simple example • Low fidelity model • High fidelity model • True model + replication error
How would this work in practice? • Evaluate each computer model at different input settings • We evaluated the low fidelity (LF) model 20 times with inputs (x, t1, tf) chosen according to a Latin hypercube design • The high fidelity (HF) model was evaluated 5 times with inputs (x, t2, tf) chosen according to a Latin hypercube design • The experimental data were generated by evaluating the true model 3 times and adding replication error drawn from N(0, 0.2)
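A minimal sketch of how such a setup could be generated with SciPy's Latin hypercube sampler; the unit-cube bounds and the true_model function are illustrative placeholders, not the ones used in this example.

```python
import numpy as np
from scipy.stats import qmc  # requires scipy >= 1.7

# 20 low-fidelity runs over (x, t1, tf) and 5 high-fidelity runs over (x, t2, tf),
# both from Latin hypercube designs on the unit cube (placeholder bounds).
lf_design = qmc.LatinHypercube(d=3, seed=0).random(n=20)
hf_design = qmc.LatinHypercube(d=3, seed=1).random(n=5)

def true_model(x):
    """Hypothetical stand-in for the true response function."""
    return np.sin(2 * np.pi * x)

# Three field observations: true model plus N(0, 0.2) replication error
# (0.2 taken here as the standard deviation).
rng = np.random.default_rng(2)
x_obs = rng.uniform(size=3)
y_obs = true_model(x_obs) + rng.normal(0.0, 0.2, size=3)
print(lf_design.shape, hf_design.shape, y_obs)
```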
Observations and response functions at the true value of the calibration parameters
We can construct 95% posterior prediction intervals at the observations
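A minimal sketch of how such 95% intervals can be read off from posterior predictive draws; the array below is a random placeholder for the actual draws at the three observations.

```python
import numpy as np

# Placeholder: 4000 posterior predictive draws at each of 3 observation points.
pred_draws = np.random.default_rng(0).normal(size=(4000, 3))

# Central 95% posterior prediction interval at each observation point.
lower, upper = np.percentile(pred_draws, [2.5, 97.5], axis=0)
print(np.column_stack([lower, upper]))
```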
Observations • Able to build a statistical model that appears to predict the observations well • Prediction error is on the order of the experimental uncertainty • Care must be taken choosing priors for the variances of the GPs
Developing a new statistical model for combining simulations and experiments • An approach to combine outputs from experiments and several different computer models • Experiments: the mean function is just one of many possible response functions • View computer model evaluations as biased versions of this "super-reality"
Super-reality model for prediction and calibration • Each computer model will be calibrated directly to the observations • Information for estimating the individual unknown calibration parameters comes from the observations and from the models with that parameter as an input
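A hedged sketch of the structure described above, writing the super-reality as ζ(x) (notation assumed here, not taken from the slides): the experiments observe ζ with error, and each computer model is a biased version of ζ with its own discrepancy and calibration parameters.

```latex
\text{Experiments: } y(x) = \zeta(x) + \varepsilon,
\qquad
\text{Computer model } k\text{: } \eta_{k}(x, \theta_{k}) = \zeta(x) + \delta_{k}(x)
```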
Have deployed state-of-the-art UQ techniques to leverage CRASH codes and experiments • Use the model calibration framework to perform a variety of tasks, such as exploring the simulation response surfaces, making predictions for experiments, and performing sensitivity analysis • Developed a new statistical model for calibration of multi-fidelity computer models with field data • Can make predictions with associated uncertainty informed by multi-fidelity models • Developing a model to combine several codes (not necessarily ranked by fidelity) and observations
Allocation of computational budget • The goal is to use available simulations and experiments to evaluate the allocation of the computational budget to computational models • Since prediction is our goal, we will use the reduction in the integrated mean square error (IMSE) • This measures the prediction variance, averaged across the input space • The optimal set of simulations is the one that maximizes the expected reduction in the IMSE
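For reference, a standard way to write the IMSE criterion over an input region X, consistent with the description above:

```latex
\mathrm{IMSE} = \int_{\mathcal{X}} \operatorname{Var}\!\big[\hat{y}(x) \mid \text{data}\big]\, dx,
\qquad
S^{\mathrm{opt}} = \arg\max_{S}\; E\big[\mathrm{IMSE}_{\mathrm{current}} - \mathrm{IMSE}(S)\big]
```

where S is a candidate set of new runs and the expectation is taken over their as-yet-unobserved responses.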
Criterion can be evaluated in the current statistical framework • Can compute an estimate of the mean square error at any potential input, conditional on the model parameters • Would like a new trial to improve the prediction everywhere in the input region • This criterion is difficult to optimize
A quick illustration – CRASH 1-D using shock location • Can use the 1-D predictive calibration model to evaluate the value of adding new trials • Suppose we wish to conduct 10 new field trials • Which 10? What do we expect to gain?
Expected reduction in IMSE for up to 10 new experiments [Figure: expected reduction in IMSE vs. number of follow-up experiments]
Can compare the value of new experiments to simulations • One new field trial yields an expected reduction in the IMSE of about 5% • The optimal IMSE design with 200 new 1-D computer trials yields an expected reduction of about 3% • The value of an experiment is substantially more than that of a computer trial • Can do the same exercise when there are multiple codes