
Sequential Experimental Designs For Sensitivity Experiments
NIST GLM Conference, April 18-20, 2002

Explore strategies for sensitivity experiments using sequential designs. Learn about non-sequential, group-sequential, and fully sequential approaches to data collection for estimating the dart weight at which a specified proportion of samples fail. Discover Bayesian methods and priors in experimental design.


Presentation Transcript


  1. Sequential Experimental Designs For Sensitivity Experiments
  NIST GLM Conference, April 18-20, 2002
  Joseph G. Voelkel, Center for Quality and Applied Statistics, College of Engineering, Rochester Institute of Technology

  2. Sensitivity Experiments
  • ASTM method D 1709–91
  • Impact resistance of plastic film by the free-falling dart method

  3. Objectives
  • Engineer: Specify a probability of failure g (0.50, 0.10, …). Find the dart weight x = d such that Prob(F; d) = g.
  • Statistician: Find a strategy for selecting the weights {xi} so that d is estimated as precisely as possible.
  • Darts are dropped one at a time; the weight of the ith dart may depend on the results obtained up to that point.

  4. Data Collection Possibilities
  • Non-sequential: Specify n and all the {xi} before any pass-fail data {Yi} are obtained. Example: find the dose d of a drug at which 5% of mice develop tumors.
  • Group-sequential (example: two-stage): Specify n1 and the {x1i}; obtain data {Y1i}. Use this information to specify n2 and the {x2i}; obtain data {Y2i}. Same mice example, but with more time.
  • (Fully) sequential: Use all prior knowledge: x1 → Y1 → x2 → Y2 → x3 → Y3 → x4 → Y4 → … Dart-weight example: one machine, one run at a time (a skeleton of this loop is sketched below).
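To make the fully sequential scheme concrete, here is a minimal, hypothetical skeleton of such a loop. The helper names (choose_next_weight, run_dart_test, update_posterior) are placeholders, not functions from the talk; they only mark where the design rule, the physical test, and the Bayesian update fit.

```python
# Hypothetical skeleton of a fully sequential sensitivity experiment.
# The three callables are placeholders, not part of the original talk.

def run_sequential_experiment(prior, n_runs,
                              choose_next_weight, run_dart_test, update_posterior):
    """Run n_runs one at a time; each setting may use everything seen so far."""
    posterior = prior
    history = []                                      # list of (weight, pass/fail) pairs
    for _ in range(n_runs):
        x = choose_next_weight(posterior, history)    # design rule (e.g., an AII-type criterion)
        y = run_dart_test(x)                          # 1 = failure, 0 = pass
        posterior = update_posterior(posterior, x, y) # Bayesian update
        history.append((x, y))
    return posterior, history
```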

  5. Model and Objectives
  • Objective (example): estimate the weight d at which 10% of the samples fail.
  • So, try to set the {xi} to minimize the uncertainty in the resulting estimate of d (a model sketch follows below).
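The model formula on this slide is not reproduced in the transcript. Assuming the logistic tolerance distribution used later in the deck (slide 29 works with d = a - 2.2b for g = 0.10), the model and the target quantile can be written as:

```latex
\[
  \Pr(F \mid x) \;=\; \frac{1}{1 + \exp\{-(x - a)/b\}},
  \qquad
  d_\gamma \;=\; a + b\,\ln\!\frac{\gamma}{1-\gamma}.
\]
% For gamma = 0.10: d = a + b ln(1/9) = a - 2.197 b, i.e. approximately a - 2.2 b,
% which matches the d = a - 2.2b used on slide 29.
```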

  6. Our Interest
  • (Fully) sequential experiments.
  • Estimating the d corresponding to a given g, e.g. 0.10.
  • The real problem: g = 0.50? g = 0.001?

  7. A Quick Tour of Some Past Work
  • Up-Down method, Dixon and Mood (1948): only for g = 0.50.
  • Robbins-Monro (1951): wanted the {xi} to converge to d. Like Up-Down, but with decreasing increments (a sketch of the recursion follows below).
  • When g is far from 0.50, convergence is too slow.
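For reference, a minimal sketch of the Robbins-Monro recursion applied to this quantile-finding problem. The step constant c, the starting weight, and the simulated logistic response are illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_prob(x, a=8.0, b=1.82):
    """Assumed logistic failure probability at dart weight x (illustrative 'truth')."""
    return 1.0 / (1.0 + np.exp(-(x - a) / b))

def robbins_monro(gamma=0.10, x0=10.0, c=5.0, n_runs=100):
    """x_{n+1} = x_n - (c/n) * (y_n - gamma): drifts toward the gamma-quantile d."""
    x = x0
    for n in range(1, n_runs + 1):
        y = rng.random() < failure_prob(x)   # 1 = failure, 0 = pass
        x = x - (c / n) * (y - gamma)        # decreasing increments
    return x

print(robbins_monro())   # should drift toward d = 8 - 2.2*1.82, roughly 4.0
```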

  8. A Quick Tour of Some Past Work
  • Wu's (1985) sequential-solving method.
  • Similar in spirit to the R-M procedure.
  • Collect some initial data to get estimates of a and b.
  • Better than R-M, much better than Up-Down.
  • Performance depends somewhat heavily on the initial runs.
  • Asymptotically optimal, in a certain sense.

  9. Some Non-Sequential Bayesian Results
  • Tsutakawa (1980)
    • How to create a design for estimation of d for a given g.
    • Certain priors on d and b.
    • Some approximations.
    • Assumed a constant number of runs made at equally spaced settings.
  • Chaloner and Larntz (1989)
    • Includes how to create a design for estimation of d for a given g.
    • Some reasonable approximations used.
    • Not restricted to a constant number of runs or equally spaced settings.


  12. This Talk: Bayesian Sequential Design
  • A way to specify priors.
  • Measures of what we are learning about a, b, and d: AII and Information.
  • Specifying the next setting, with some insights.
  • Some examples and comparisons.
  • Rethinking the priors.

  13. Specifying Priors
  • Consider the related tolerance-distribution problem.
  • The r.v. Xi represents the (unobservable) speed at which the ith sample of film would have failed. Say it comes from a location-scale family (e.g., logistic, normal, …).

  14. Specifying Priors: (a, b)
  • Two-parameter distribution.
  • Could specify priors on (a, b), (a, d), or (b, d).
  • For simplicity, want to assume independence, so we only need to specify the marginals of each parameter.

  15. Specifying Priors: (a, a-d)
  • Instead of (a, b) …
  • Consider the g = 0.10 example.
  • Consider: a, and the distance from d to a, which is 2.2b.
  • Easier for the engineer to understand.

  16. Specifying Priors: (a, b)
  • Ask the engineer for:
    • Best guess and 95% range for a: 5.0 ± 3.0
    • Best guess and 95% range for the a-d distance: 6.6 / 2.0
  • Translate a-d = 2.2b into b terms: 3.0 / 2.0
  • Translate these into normal, independent priors on a and ln(b).
  • We used a discrete set of 15 × 15 = 225 values as the prior distribution of (a, b); a sketch of this construction follows below.
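A sketch of how the elicited values could be turned into normal priors on a and ln(b) and a 15 × 15 discrete grid. The slide does not give these details, so the reading of "/ 2.0" as a times-or-divided-by 95% range, the 1.96 normal multiplier, and the grid spacing are all assumptions.

```python
import numpy as np
from scipy.stats import norm

# Engineer's inputs from slide 16 (g = 0.10 example).
a_guess, a_range95 = 5.0, 3.0        # a: 5.0 +/- 3.0
ad_guess, ad_factor95 = 6.6, 2.0     # a - d: 6.6 times-or-divided-by 2.0 (assumed reading)

# Normal prior on a: a 95% range of +/- 3.0 corresponds to SD = 3.0 / 1.96.
mu_a, sd_a = a_guess, a_range95 / 1.96

# a - d = 2.2 b, so b has best guess 6.6 / 2.2 = 3.0 with the same multiplicative range.
# Normal prior on ln(b): a 95% factor of 2.0 corresponds to SD = ln(2.0) / 1.96.
mu_lnb, sd_lnb = np.log(ad_guess / 2.2), np.log(ad_factor95) / 1.96

# Discrete 15 x 15 grid prior on (a, ln b): equally spaced points, weights from the
# normal densities, renormalized to sum to 1 (the grid construction is an assumption).
a_grid   = np.linspace(mu_a   - 3 * sd_a,   mu_a   + 3 * sd_a,   15)
lnb_grid = np.linspace(mu_lnb - 3 * sd_lnb, mu_lnb + 3 * sd_lnb, 15)
A, LNB = np.meshgrid(a_grid, lnb_grid, indexing="ij")
weights = norm.pdf(A, mu_a, sd_a) * norm.pdf(LNB, mu_lnb, sd_lnb)
weights /= weights.sum()             # 225 prior probabilities for (a, b = exp(LNB))
```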

  17. Specifying Priors: (a, d)
  • More natural for the engineer to think about priors on a and d. We let the engineer do this as follows.
  • We created 27 combinations of prior distributions:
    • a best guess: 10
    • a uncertainty (95% limits): ± 2, 4, 6
    • a-d best guess: 1, 3, 5
    • a-d uncertainty (95% limits): / 2, 4, 6
  • We graphed these in terms of (a, d).

  18. Example of Prior Distributions of d: a = 10 ± 4
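Plots of this kind can be reproduced, at least in spirit, by simulating from the priors and transforming to d. The sketch below uses the g = 0.10 relation d = a - 2.2b and the a = 10 ± 4, a-d = 5 / 6 setting that also appears on slide 26; reading "/ 6" as a multiplicative 95% factor is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 100_000

# One of the 27 prior settings (also used on slide 26): a = 10 +/- 4, a-d = 5 "/ 6".
a  = rng.normal(10.0, 4.0 / 1.96, size=m)                          # normal prior on a
ad = np.exp(rng.normal(np.log(5.0), np.log(6.0) / 1.96, size=m))   # lognormal prior on a - d
d  = a - ad                                                        # implied prior draws of d

print(f"prior on d: mean {d.mean():.2f}, 2.5%/97.5% = "
      f"{np.percentile(d, 2.5):.2f} / {np.percentile(d, 97.5):.2f}")
```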

  19. Finding the next setting xn+1 to run

  20. AII Measure
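The AII formula itself is not reproduced in the transcript. The summary slide describes AII as combining the shift in the posterior mean with the probability that a failure occurs at x, so the sketch below implements one plausible reading (the predictive-probability-weighted absolute shift in the posterior mean of d) on a discrete (a, b) grid. It is an illustration of the mechanics, not the author's exact definition.

```python
import numpy as np

def logistic_p(x, a, b):
    """Failure probability at weight x under a logistic tolerance distribution."""
    return 1.0 / (1.0 + np.exp(-(x - a) / b))

def aii_like(x, a_grid, b_grid, w, gamma=0.10):
    """One plausible AII-style score: expected |shift| in the posterior mean of d
    when running at x, weighting the two outcomes by their predictive probabilities.
    (Illustrative reading only; the talk's exact formula is not in the transcript.)"""
    d = a_grid + b_grid * np.log(gamma / (1.0 - gamma))   # d = a - 2.2 b for gamma = 0.10
    d_mean0 = np.sum(w * d)
    p = logistic_p(x, a_grid, b_grid)
    p_fail = np.sum(w * p)                                # predictive P(failure at x)

    score = 0.0
    for y, py in ((1, p_fail), (0, 1.0 - p_fail)):
        like = p if y == 1 else 1.0 - p
        w_post = w * like
        w_post /= w_post.sum()
        score += py * abs(np.sum(w_post * d) - d_mean0)   # outcome-weighted shift in E[d]
    return score

# Tiny illustration on a coarse, uniform grid prior.
a_g, b_g = np.meshgrid(np.linspace(5, 11, 7), np.linspace(1, 3, 5), indexing="ij")
w = np.full(a_g.size, 1.0 / a_g.size)
scores = {x: aii_like(x, a_g.ravel(), b_g.ravel(), w) for x in range(1, 12)}
```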

  21. Simple Example
  • Objective: find the d corresponding to g = 0.10

  22. Simple Example

  23. Simple Example
  • Finding the AII for various x settings

  24. How AII “Thinks”

  25. First Simulation
  • Setting increment = 1
  • (a, b) = (8, 1.82), which makes d = 4.0
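A sketch of how such a simulation could be set up: responses generated from the "true" (a, b) = (8, 1.82), settings restricted to an increment of 1, and a grid posterior updated after each run. The next-setting rule here (run at the current posterior mean of d, rounded to the increment) is a simple stand-in, not the AII rule used in the talk, and the prior grid is an assumption loosely modeled on the a = 10 ± 4 style priors.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
GAMMA, TRUE_A, TRUE_B = 0.10, 8.0, 1.82           # true d = 8 - 2.2*1.82, roughly 4.0

def p_fail(x, a, b):
    return 1.0 / (1.0 + np.exp(-(x - a) / b))

# Discrete grid prior on (a, ln b), loosely matching the diffuse priors in the deck.
a_grid   = np.linspace(4.0, 16.0, 15)
lnb_grid = np.linspace(np.log(0.5), np.log(6.0), 15)
A, B = np.meshgrid(a_grid, np.exp(lnb_grid), indexing="ij")
A, B = A.ravel(), B.ravel()
w = norm.pdf(A, 10.0, 4.0 / 1.96) * norm.pdf(np.log(B), np.log(5.0 / 2.2), np.log(6.0) / 1.96)
w /= w.sum()

D = A + B * np.log(GAMMA / (1.0 - GAMMA))         # d = a - 2.2 b on the grid

for run in range(60):
    x = round(float(np.sum(w * D)))               # stand-in rule: run at rounded E[d]
    y = rng.random() < p_fail(x, TRUE_A, TRUE_B)  # simulated pass/fail
    like = p_fail(x, A, B) if y else 1.0 - p_fail(x, A, B)
    w = w * like
    w /= w.sum()

print(f"posterior mean of d after 60 runs: {np.sum(w * D):.2f} (true d is about 4.0)")
```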

  26. Example with a More Diffuse Prior
  • Prior: a = 10 ± 4, a-d = 5 / 6
  • Simulation again done with a = 8, a-d = 4

  27. Example with a More Diffuse Prior

  28. Behavior of AII after 0, 2, 10, 20, 60 runs

  29. Information on a, b, and d = a-2.2b
  • A serious problem: all the information on d was obtained through b.
  • The simulation trusted the relatively tight prior on a …
  • Another problem: more objective methods of estimation, such as MLE, will likely not work well.
  • Are there other ways to specify priors that might be better? Two methods…

  30. Equal-Contribution Priors
  • For d = a-2.2b, restrict the original prior so that Var0(a) = Var0(2.2b); see the sketch below.
  • Results of another simulation.
  • Problem: fails for the case d = a (g = 0.50): Var0(a) = Var0(0·b)?
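The slide does not say how the restriction Var0(a) = Var0(2.2b) is imposed. One plausible way, sketched below purely under that assumption, is to keep the best guess (median) of b fixed and solve numerically for the spread of ln(b) that makes the two prior variances equal.

```python
import numpy as np
from scipy.optimize import brentq

def equal_contribution_sd_lnb(sd_a, b_median, coef=2.2):
    """Find the SD t of ln(b) so that Var(coef * b) equals Var(a) = sd_a**2,
    assuming ln(b) ~ Normal(ln(b_median), t**2), i.e. b is lognormal."""
    var_a = sd_a ** 2
    def gap(t):
        var_b = (np.exp(t**2) - 1.0) * np.exp(2.0 * np.log(b_median) + t**2)
        return coef**2 * var_b - var_a
    return brentq(gap, 1e-6, 5.0)   # gap is negative at 0+ and grows without bound

# Example using the slide-16 numbers: a = 5.0 +/- 3.0, b best guess 3.0.
print(equal_contribution_sd_lnb(sd_a=3.0 / 1.96, b_median=3.0))
```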

  31. Relative Priors
  • Consider the tolerance-distribution problem.
  • The r.v. Xi represents the (unobservable) speed at which the ith sample of film would have failed. Say it comes from a location-scale family (e.g., logistic, normal, …).

  32. Relative Priors
  • We observe only the (x, Yx)’s.
  • If we could observe the X’s, the problem would be a simple one-sample problem of finding the 100g-th percentile of a distribution.
  • Assume the distribution of the X’s has a finite fourth moment.

  33. Relative Priors
  • After m runs, observing X1, X2, …, Xm, we have the sample mean and the sample standard deviation s.
  • Use the delta method to find Var(s), replacing m-1 by m.
  • So, to a good approximation … (see the standard result sketched below).
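The formulas from this slide are not reproduced in the transcript. The following is the standard large-sample result the bullets appear to invoke, stated here as a hedged reconstruction rather than the slide's exact content:

```latex
\[
  \operatorname{Var}(\bar{X}) = \frac{\sigma^2}{m},
  \qquad
  \operatorname{Var}(s) \approx \frac{\sigma^2(\kappa - 1)}{4m},
  \qquad
  \kappa = \frac{\mathrm{E}(X-\mu)^4}{\sigma^4},
\]
% so the ratio Var(s)/Var(X-bar) is approximately (kappa - 1)/4, which depends only on
% the shape of the tolerance distribution (hence the finite-fourth-moment assumption).
```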

  34. Relative Priors
  • Now assume the tolerance distribution is symmetric and its shape is known, e.g. logistic. Then …
  • So, with k1 and k2 known, … (see the logistic constants sketched below).
  • So, in this sense it is defensible to specify only the prior precision with which a is known, and to base the prior precision of b upon it.
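For the logistic in particular, standard shape facts (variance pi^2 b^2 / 3 and kurtosis 4.2) tie the precision for b to the precision for a. The constants k1 and k2 on the slide presumably play this role, though that correspondence is an assumption:

```latex
\[
  \frac{\operatorname{Var}(s)}{\operatorname{Var}(\bar{X})} \approx \frac{\kappa - 1}{4} = 0.8,
  \qquad
  b = \frac{\sqrt{3}}{\pi}\,\sigma
  \;\Longrightarrow\;
  \operatorname{SD}(\hat{b}) \approx \frac{\sqrt{3}}{\pi}\sqrt{0.8}\,\frac{\sigma}{\sqrt{m}}
  \approx 0.49\,\operatorname{SD}(\hat{a}),
\]
% i.e. once the precision for a is specified, the known logistic shape pins down a
% matching precision for b, which is the sense in which the prior precision of b can
% be based on the prior precision of a.
```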

  35. Logistic Example

  36. Summary
  • AII as a useful measure of the value of making the next run at x: a combination of the shift in the posterior mean and the probability that a failure will occur at x.
  • Informal comparison to non-Bayesian methods: the Bayesian x-strategy is more subtle.
  • Danger of simply using any prior, and a recommended way to set priors.
