A Bayesian χ² test for goodness of fit (10/23/09, Multilevel RIT)


Presentation Transcript


  1. A Bayesian χ² test for goodness of fit. 10/23/09. Multilevel RIT

  2. Overview • Talk about the basic χ² test; review with some examples. • Talk about the paper, with examples.

  3. Basic χ² test. [Slide graphic: observations y1, y2, y3, y4, y5, …, yn.] • The χ² test is used to test if a sample of data came from a population with a specific distribution. • An attractive feature of the χ² goodness-of-fit test is that it can be applied to any univariate distribution for which you can calculate the CDF.

  4. The value of the χ² statistic depends on how you partition the support, and the sample size needs to be sufficiently large for the asymptotic approximation to be valid.

  5. The χ² statistic, in the case of a simple hypothesis, is $X^2 = \sum_{k=1}^{K} \frac{(m_k - n p_k)^2}{n p_k}$, which converges to a χ² distribution with K − 1 degrees of freedom as n → ∞. Here m_k is the number of observations within the kth bin, K is the number of partitions or bins specified over the sample space, n is the sample size, and p_k is the probability assigned by the null model to the kth interval.
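A minimal sketch of this statistic (my own illustration, not from the slides), assuming a fully specified N(0, 1) null and K = 10 equiprobable bins; the sample, seed, and bin count are arbitrary choices:

```python
# Simple-hypothesis Pearson chi-square test of H0: y ~ N(0, 1),
# using K equiprobable bins on the CDF scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(size=1000)                    # data to be tested
n, K = y.size, 10
p = np.full(K, 1.0 / K)                      # null bin probabilities p_k

# Map the data through the null CDF, then count how many fall in each
# equiprobable cell of [0, 1].
u = stats.norm.cdf(y)
m, _ = np.histogram(u, bins=np.linspace(0.0, 1.0, K + 1))

X2 = np.sum((m - n * p) ** 2 / (n * p))      # Pearson statistic
pval = stats.chi2.sf(X2, df=K - 1)           # K - 1 df for a simple hypothesis
print(f"X^2 = {X2:.2f}, p-value = {pval:.3f}")
```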

  6. 4 examples. We generate 4 sets of random variables: • 1000 normal • 1000 double exponential • 1000 t distribution with 3 degrees of freedom • 1000 lognormal. We use the χ² test to see if each of the data sets fits a normal distribution. H0: the data come from a normal distribution.
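One way these four checks could be coded (a sketch under my own choices: K = 10 equiprobable bins, plug-in estimates of μ and σ rather than the grouped-data MLE, arbitrary seed), using the composite-hypothesis degrees of freedom from the next slide:

```python
# Chi-square test of normality for four simulated samples.  Estimating mu and
# sigma from the data (s = 2 parameters) gives K - s - 1 reference degrees of
# freedom.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, K, s = 1000, 10, 2
samples = {
    "normal": rng.normal(size=n),
    "double exponential": rng.laplace(size=n),
    "t (3 df)": rng.standard_t(df=3, size=n),
    "lognormal": rng.lognormal(size=n),
}

for name, y in samples.items():
    mu, sigma = y.mean(), y.std(ddof=1)          # plug-in estimates
    u = stats.norm.cdf(y, loc=mu, scale=sigma)   # fitted-normal CDF of the data
    m, _ = np.histogram(u, bins=np.linspace(0.0, 1.0, K + 1))
    expected = n / K                             # expected count per bin
    X2 = np.sum((m - expected) ** 2 / expected)
    pval = stats.chi2.sf(X2, df=K - s - 1)
    print(f"{name:20s}  X^2 = {X2:7.1f}  p = {pval:.4f}")
```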

  7. The χ² statistic, in the case of a composite hypothesis, is $R^g(\hat\theta) = \sum_{k=1}^{K} \frac{(m_k - n\hat{p}_k)^2}{n\hat{p}_k}$, which converges to a χ² distribution with K − s − 1 degrees of freedom as n → ∞. The $\hat{p}_k$ are the estimates of the bin probabilities based on either the MLE for the grouped data or on the minimum χ² method, and s is the dimension of the underlying parameter vector θ.

  8. [Worked example from the slide: the computed value of the statistic is 5.73.]

  9. The MLE for the grouped data means maximizing the grouped (multinomial) likelihood $\prod_{k=1}^{K} p_k(\theta)^{m_k}$ with respect to θ, while minimum χ² estimation involves finding the value of θ that minimizes a function related to $R^g$.
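A sketch of the grouped-data MLE for a normal model (my own illustration; the fixed sample-quantile bins, seed, and K = 10 are assumptions, not choices made in the talk):

```python
# Grouped-data MLE for a normal model: with fixed bin edges, the counts m_k
# are fixed and the bin probabilities p_k(theta) vary smoothly, so we maximize
# the multinomial log-likelihood sum_k m_k * log p_k(theta).
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
y = rng.normal(loc=1.0, scale=2.0, size=1000)
K = 10
interior = np.quantile(y, np.linspace(0.0, 1.0, K + 1)[1:-1])  # K - 1 cut points
m = np.bincount(np.searchsorted(interior, y), minlength=K)     # grouped counts m_k

def neg_grouped_loglik(params):
    mu, log_sigma = params                       # log-sigma keeps sigma positive
    cdf = stats.norm.cdf(interior, loc=mu, scale=np.exp(log_sigma))
    p = np.diff(np.concatenate(([0.0], cdf, [1.0])))  # bin probabilities p_k(theta)
    return -np.sum(m * np.log(p + 1e-300))

res = optimize.minimize(neg_grouped_loglik, x0=[y.mean(), np.log(y.std())],
                        method="Nelder-Mead")
print("grouped-data MLE of (mu, sigma):", res.x[0], np.exp(res.x[1]))
```

Minimum χ² estimation would instead minimize $\sum_k (m_k - n p_k(\theta))^2 / (n p_k(\theta))$ over θ.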

  10. A Bayesian χ² statistic. Let y1, …, yn (= y) denote scalar-valued, continuous, identically distributed, conditionally independent observations drawn from the pdf f(y|θ), where f is indexed by an s-dimensional parameter vector θ ∈ Θ ⊂ R^s. We want to generate a sampled value θ̃ from the posterior p(θ | y). To do that, we can apply the inverse of the probability integral transform method.

  11. Set up these integrals and then solve for the θ̃'s: for j = 1, …, s, draw u_j ~ Uniform(0, 1) and solve $\int_{-\infty}^{\tilde\theta_j} p(\theta_j \mid \tilde\theta_1, \ldots, \tilde\theta_{j-1}, y)\, d\theta_j = u_j$ for θ̃_j. Generally, in practice, the θ̃'s are calculated using the Gibbs sampler.
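For a one-parameter posterior with a closed-form CDF, the inverse transform amounts to pushing a uniform draw through the posterior quantile function. A sketch under an assumed conjugate model of my own choosing (exponential data, Gamma prior), not an example from the talk:

```python
# Inverse probability-integral-transform draw from a posterior.
# Model (assumed for illustration): y_i ~ Exponential(rate = lambda),
# prior lambda ~ Gamma(a0, rate b0)  =>  posterior Gamma(a0 + n, rate b0 + sum(y)).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.exponential(scale=1 / 2.0, size=50)   # simulated data, true rate 2
a0, b0 = 1.0, 1.0                             # prior hyperparameters (assumed)
a_post, b_post = a0 + y.size, b0 + y.sum()

u = rng.uniform()                             # u ~ Uniform(0, 1)
lam_tilde = stats.gamma.ppf(u, a=a_post, scale=1 / b_post)  # solve F(lambda) = u
print("posterior draw of lambda:", lam_tilde)
```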

  12. Notation considerations: θ̃ denotes a value of θ sampled from the posterior distribution based on y; θ̂ denotes the MLE of θ.

  13. This is interesting because if you contrast R^B with R^g(θ̂), we see that R^g(θ̂) has K − s − 1 degrees of freedom, while R^B, defined as $R^B(\tilde\theta) = \sum_{k=1}^{K} \frac{[m_k(\tilde\theta) - n p_k]^2}{n p_k}$ with bin counts m_k(θ̃) formed using the sampled value θ̃, has K − 1 degrees of freedom. The reference distribution of R^B is independent of the number of parameters.

  14-20. The process is: • Have data y1, …, yn. • Generate θ̃ from data y1, …, yn (by the inverse probability integral transform or the Gibbs sampler). • Create the bin counts m_k(θ̃). • Calculate R^B(θ̃). • Repeat steps 2 to 4 to get many R^B values. • By the LLN, the proportion of simulated R^B values exceeding a fixed threshold converges to the posterior probability that R^B exceeds that threshold.
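A schematic sketch of this loop (my own illustration). The helper names draw_posterior and cdf are placeholders for the two model-specific pieces you must supply; equiprobable bins are assumed:

```python
# Generate many R^B values from posterior draws, then compare them to the
# chi-square(K - 1) reference distribution.
import numpy as np
from scipy import stats

def bayes_chisq_draws(y, draw_posterior, cdf, K=10, n_draws=1000, seed=0):
    """draw_posterior(y, rng) -> one sampled theta; cdf(y, theta) -> model CDF at y."""
    rng = np.random.default_rng(seed)
    n = y.size
    edges = np.linspace(0.0, 1.0, K + 1)         # equiprobable bins on the CDF scale
    p = np.full(K, 1.0 / K)
    RB = np.empty(n_draws)
    for j in range(n_draws):
        theta = draw_posterior(y, rng)           # step 2: posterior draw theta~
        u = cdf(y, theta)                        # transform data with theta~
        m, _ = np.histogram(u, bins=edges)       # step 3: bin counts m_k(theta~)
        RB[j] = np.sum((m - n * p) ** 2 / (n * p))   # step 4: R^B(theta~)
    return RB

# Reporting (see the next slide): proportion of R^B draws beyond the
# chi-square(K - 1) 95th percentile.
# RB = bayes_chisq_draws(y, draw_posterior, cdf, K=10)
# print(np.mean(RB > stats.chi2.ppf(0.95, df=10 - 1)))
```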

  21. We can then report the proportion of R^B values that exceed the 95th percentile of the reference χ² distribution with K − 1 degrees of freedom. If the R^B values represented independent draws from that χ² distribution, then the proportion of values falling in the critical region of the test would exactly equal the size of the test. If the proportion is higher than expected, the excess can be attributed to dependence between the R^B values or to lack of fit.

  22-24. The statistic A is used in the event that formal significance tests must be performed to assess model adequacy. A is related to a commonly used quantity in signal detection theory: it represents the area under the ROC curve [e.g., Hanley and McNeil (1982)] for comparing the joint posterior distribution of R^B values to a χ² random variable with K − 1 degrees of freedom. The expected value of A, if taken with respect to the joint sampling distribution of y and the posterior distribution of θ given y, would be 0.5. Large deviations of A from 0.5, when the expectation is taken with respect to the posterior distribution of θ for a fixed value of y, indicate model lack of fit.
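Based on the ROC description above, A can be estimated from a set of R^B draws by averaging the χ²(K − 1) CDF at each draw, i.e. the probability that a χ²(K − 1) variate falls below it. A sketch (my own, reusing the RB array from the earlier loop):

```python
# Estimate the statistic A as the area under the ROC curve comparing R^B
# draws to a chi-square(K - 1) random variable.
import numpy as np
from scipy import stats

def statistic_A(RB, K):
    # mean over draws of P(chi2_{K-1} < R^B_j)
    return np.mean(stats.chi2.cdf(RB, df=K - 1))

# Usage: A = statistic_A(RB, K=10); values far from 0.5 suggest lack of fit.
```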

  25-28. Some things to keep in mind: • Unfortunately, approximating the sampling distribution of A can be a lot of trouble. • How do you decide how many bins to make and how to assign probabilities to these bins? Consistency of tests against general alternatives requires that K → ∞ as n → ∞. • Having too many bins can result in loss of power. • Mann and Wald suggested using 3.8(n − 1)^0.4 equiprobable cells.
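As a worked example of the Mann-Wald rule (my arithmetic, not from the slide): with n = 1000, 3.8 × 999^0.4 ≈ 3.8 × 15.8 ≈ 60 equiprobable cells.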

  29. Example. Let y = (y1, …, yn) denote a random sample from a normal distribution with unknown μ and σ². Let us assume a joint prior for (μ, σ²) proportional to 1/σ².

  30. For a given data vector y and posterior sample (μ̃, σ̃), bin counts m_k(μ̃, σ̃) are determined by counting the number of observations y_i that fall into the interval (σ̃ Φ⁻¹(a_{k−1}) + μ̃, σ̃ Φ⁻¹(a_k) + μ̃), where Φ⁻¹(·) denotes the standard normal quantile function and 0 = a_0 < a_1 < … < a_K = 1 are the cut points defining the bins. Based on these counts, R^B(μ̃, σ̃) is calculated according to $R^B(\tilde\mu, \tilde\sigma) = \sum_{k=1}^{K} \frac{[m_k(\tilde\mu, \tilde\sigma) - n(a_k - a_{k-1})]^2}{n(a_k - a_{k-1})}$.
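Putting the pieces together for this normal example (my own code, not from the talk; the simulated data, K = 10 equiprobable cut points, and number of posterior draws are arbitrary choices):

```python
# Bayesian chi-square check for a normal model with prior proportional to
# 1/sigma^2: draw (mu~, sigma~) from the posterior, bin the data using the
# standard normal quantile function, and compute R^B for each draw.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = rng.normal(loc=5.0, scale=3.0, size=200)      # simulated data for the demo
n, K = y.size, 10
a = np.linspace(0.0, 1.0, K + 1)                  # cut points a_0, ..., a_K
p = np.diff(a)                                    # bin probabilities a_k - a_{k-1}

ybar, s2 = y.mean(), y.var(ddof=1)
RB = np.empty(2000)
for j in range(RB.size):
    # Posterior under the 1/sigma^2 prior:
    #   sigma^2 | y ~ (n - 1) s^2 / chi2_{n-1},   mu | sigma^2, y ~ N(ybar, sigma^2 / n)
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    # Count the y_i falling in (sigma~ * Phi^{-1}(a_{k-1}) + mu~, sigma~ * Phi^{-1}(a_k) + mu~).
    edges = np.sqrt(sigma2) * stats.norm.ppf(a) + mu
    m = np.bincount(np.searchsorted(edges[1:-1], y), minlength=K)
    RB[j] = np.sum((m - n * p) ** 2 / (n * p))

print("share of R^B above the chi2(K-1) 95th percentile:",
      np.mean(RB > stats.chi2.ppf(0.95, df=K - 1)))
```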

  31. Power calculation. • The next figure displays the proportion of times, in 10,000 draws of t samples, that the test statistic A was larger than the 0.95 quantile of the sampled values of A^pp (A^pp comes from posterior-predictive observations of y).

  32. Main advantages: Goodness-of-fit tests based on the statistic R^B provide a simple way of assessing the adequacy of model fit in many Bayesian models. Essentially, the only requirement for their use is that observations be conditionally independent. From a computational perspective, such statistics can be calculated in a straightforward way using output from existing MCMC algorithms. Values of R^B generated from a posterior distribution may prove useful both as a convergence diagnostic for MCMC algorithms and for detecting errors in computer code written to implement those algorithms.

  33. There is a later paper, written in 2007, that applies the same methodology to censored data.
