
Nonparametric Bootstrap Inference on the Characterization of a Response Surface

This presentation outlines a new technique for response surface methodology, focusing on nonparametric bootstrap inference. The method allows for estimation of unknown parameters in the presence of non-normal errors, extending previous linear combination analysis. Using bootstrap resampling, the technique provides tighter confidence intervals and increased efficiency compared to traditional methods.


Presentation Transcript


  1. Nonparametric Bootstrap Inference on the Characterization of a Response Surface • Robert Parody, Center for Quality and Applied Statistics, Rochester Institute of Technology • 2009 QPRC, June 4, 2009

  2. Presentation Outline • Introduction • Previous Work • New Technique • Example • Simulation Study • Conclusion and Future Research

  3. Introduction • Response Surface Methodology (RSM) • Identify the relationship between a set of k predictor variables and the response variable y • Typically, the goal of the experiment is to optimize E(Y) • The natural variables ξi are transformed into coded variables xi
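The coding step above can be sketched as follows. The slide's original formula did not survive the transcript, so the standard center-and-scale coding onto [-1, 1] is assumed here, and the design levels in the example are illustrative.

```python
import numpy as np

def code_variable(xi, low, high):
    """Code a natural variable onto [-1, 1] given its low/high design levels.

    Assumption: the usual RSM coding (subtract the center of the design
    range, divide by the half-range); the talk's exact formula is unknown.
    """
    center = (high + low) / 2.0
    half_range = (high - low) / 2.0
    return (np.asarray(xi) - center) / half_range

# Example: three temperature runs with design levels 160 and 200 degrees
print(code_variable([160, 180, 200], low=160, high=200))  # → [-1.  0.  1.]
```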

  4. The Model • A second-order model is fit to the data: y = β0 + Σi βi xi + Σi βii xi² + Σi&lt;j βij xi xj + Σu δu wu + ε • where: • βi, βii, and βij are unknown parameters • ε ~ F(0, σ²) and independent • the wu are other effects, such as block effects and covariates, which do not interact with the xi

  5. Equivalently, in matrix form, y = β0 + x′b + x′Bx + w′δ + ε, where b holds the first-order coefficients and B is the symmetric matrix with the βii on its diagonal and the βij/2 off the diagonal

  6. Background: Canonical Analysis • Rotate the axis system so that the new axes lie parallel to the principal axes of the surface • P is the matrix of eigenvectors of B, where P′P = PP′ = I • The rotated variables and parameters: • w = P′x • θ = P′b • Λ = P′BP = diag(λi)
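The rotation can be sketched with an eigendecomposition of B. The coefficient values below are hypothetical stand-ins, not the talk's example; `numpy.linalg.eigh` handles the symmetric eigenproblem.

```python
import numpy as np

# Hypothetical fitted second-order coefficients for k = 2 predictors:
# b holds the first-order terms; B is symmetric with the pure-quadratic
# coefficients on the diagonal and half the interactions off-diagonal.
b = np.array([1.0, -0.5])
B = np.array([[-2.0, 0.3],
              [ 0.3, -1.5]])

# Eigendecomposition gives the rotation P and the canonical eigenvalues λi
lam, P = np.linalg.eigh(B)      # columns of P are eigenvectors of B
Lambda = P.T @ B @ P            # = diag(lam), up to rounding
theta = P.T @ b                 # rotated first-order parameters: θ = P′b

print(np.round(lam, 4))         # both negative → stationary point is a maximizer
```

Since both eigenvalues here are negative, this stand-in surface has ellipsoidal contours around a maximum, matching the classification on the next slide.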

  7. Types of Surfaces • If all λi &lt; 0 (&gt; 0), the stationary point is a maximizer (minimizer); contours are ellipsoidal • If the λi have different signs, the stationary point is a minimax (saddle) point, with complicated hyperbolic contours

  8. Standard Errors for the λi • Carter, Chinchilli, and Campbell (1990) • Found standard errors and covariances for the λi by way of the delta method • Bisgaard and Ankenman (1996) • Simplified this with the creation of the Double Linear Regression (DLR) method

  9. Previous Work • Edwards and Berry (1987) • Simulated a critical point for a prespecified linear combination of the parameters • The natural pivotal quantity for constructing simultaneous intervals for these linear combinations of the parameters is

  10. Shortcoming • The technique on the previous slide is only valid when • The errors are i.i.d. normal with constant variance • The set of linear combinations of interest is prespecified

  11. Research Goal • Employ a nonparametric bootstrap based on a pivotal quantity to extend the previously mentioned work to situations where: • The set of linear combinations of interest is not prespecified • The error distribution assumption is relaxed

  12. Bootstrap Idea • Resample from the original data, either directly or via a fitted model, to create replicate datasets • Use these replicate datasets to build distributions for the parameters of interest • Consider the nonparametric version by utilizing the empirical distribution

  13. Empirical Distribution • The empirical distribution is the one in which equal probability 1/N is given to each sample value yi • The corresponding estimate of the cdf F is the empirical distribution function (EDF) F̂, defined as the sample proportion F̂(y) = #{yi ≤ y}/N
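The EDF as a sample proportion can be sketched in a few lines (the data values below are illustrative):

```python
import numpy as np

def edf(sample, y):
    """Empirical distribution function: the proportion of sample values <= y."""
    return np.mean(np.asarray(sample) <= y)

data = [2.0, 3.5, 3.5, 5.0]
print(edf(data, 3.5))   # → 0.75, since three of the four values are <= 3.5
```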

  14. New Technique • The pivotal quantity for simultaneous inference on the λi:

  15. Bootstrap Equivalent • Replace the parameters with their estimates, and the estimates with the bootstrap estimates, to get:

  16. Bootstrap Parameter Estimation • Find the model fits • Resample from the modified residuals N times with replacement • Add these values to the fits and use them as observations • Fit the new model and determine the bootstrap parameter estimates
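The four steps above amount to a residual bootstrap for the fitted model. The sketch below uses a plain linear least-squares fit and a tiny one-predictor design as hypothetical stand-ins (the talk's model is the full second-order surface); it also resamples the raw residuals rather than the modified residuals defined on the next slide.

```python
import numpy as np

rng = np.random.default_rng(2009)

def residual_bootstrap(X, y, m=500):
    """Residual bootstrap for y = X·beta + e (a sketch, not the talk's code):
    fit the model, resample residuals with replacement, add them back to the
    fits as new observations, and refit for each bootstrap replicate."""
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    fits = X @ beta_hat
    resid = y - fits
    boot = np.empty((m, X.shape[1]))
    for j in range(m):
        e_star = rng.choice(resid, size=len(y), replace=True)
        boot[j], *_ = np.linalg.lstsq(X, fits + e_star, rcond=None)
    return beta_hat, boot

# Hypothetical design: intercept plus one coded predictor on [-1, 1]
x = np.linspace(-1, 1, 20)
X = np.column_stack([np.ones_like(x), x])
y = 3.0 + 2.0 * x + rng.normal(0, 0.5, size=20)

beta_hat, boot = residual_bootstrap(X, y, m=500)
print(beta_hat.round(2))            # fitted (intercept, slope)
print(boot.std(axis=0).round(2))    # bootstrap standard errors
```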

  17. An Adjustment • We usually assume at least that the errors are i.i.d. from a distribution with mean 0 and constant variance σ² • The residuals, on the other hand, come from a common distribution with mean 0 and variance σ²(1 − hii) • So the modified residuals become ri = ei / √(1 − hii)
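The leverage adjustment above can be sketched directly from the hat matrix; the slide's formula image did not survive, so the standard adjustment ri = ei/√(1 − hii) is assumed, and the four-point design below is illustrative.

```python
import numpy as np

def modified_residuals(X, y):
    """Leverage-adjusted residuals r_i = e_i / sqrt(1 - h_ii), which share a
    common variance sigma^2 (raw residuals have variance sigma^2 (1 - h_ii))."""
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta_hat
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # hat-matrix diagonal
    return e / np.sqrt(1.0 - h)

# Illustrative straight-line fit through four points
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([1.1, 1.9, 3.2, 3.8])
print(modified_residuals(X, y).round(3))
```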

  18. Critical Point Procedure • Create nonparametric bootstrap estimates for the unknown parameters in Q* • Find Q* by maximizing over the j elements • Repeat this process for a large number m of bootstrap samples and take the (m+1)(1−α)th order statistic as the critical point
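The order-statistic step can be sketched as below. The max-statistics here are drawn from |N(0,1)| as an illustrative stand-in for the real bootstrap Q* values; only the order-statistic selection reflects the slide's procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

def critical_point(Q_star, alpha=0.05):
    """Critical point as the (m+1)(1-alpha)-th order statistic of the m
    bootstrap max-statistics (a sketch of the procedure on this slide)."""
    m = len(Q_star)
    k = int(round((m + 1) * (1 - alpha)))   # 1-based order-statistic index
    return np.sort(Q_star)[k - 1]           # convert to 0-based indexing

# Stand-in for Q*: the max over j = 5 elements per bootstrap replicate
m = 79999                                   # so that (m + 1) = 80000, as in the talk
Q_star = np.abs(rng.standard_normal((m, 5))).max(axis=1)
print(round(float(critical_point(Q_star)), 2))
```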

  19. Bootstrap Simulation Size • Edwards and Berry (1987) showed that the conditional coverage probability of 95% simulation-based bounds will be within ±0.002 of nominal for 99% of the simulation runs when (m+1) = 80,000

  20. Example • Chemical process experiment with k=5 from Box (1954) • Goal: Maximize percentage yield

  21. Parameter Estimates

  22. Parameter Estimates

  23. Critical Point • Using a=0.05 and (m+1)=80000, we get

  24. Estimates and 95% Simultaneous Confidence Intervals

  25. Estimates and 95% Simultaneous Confidence Intervals

  26. Relative Efficiency • Comparison of critical points • For the example, we would only need ~88% of the sample size for the simulation method as compared to traditional simultaneous methods • Computer Time • Approximately 2 minutes on an Intel Core 2 Duo computer

  27. Simulation Study • 10 critical points were created • For each critical point, 10000 confidence intervals were created by bootstrapping the residuals • This was done 100 times for each point

  28. Simulation Results

  29. Conclusions • New technique yields tighter bounds • Works for linear combinations not prespecified • Relaxes normality assumption on the error terms • Simulation study yields adequate coverage

  30. Future Research • Relax model assumptions further to include nonhomogeneous error variances • Apply to other situations where we are unable to prespecify the combinations, such as ridge analysis
