
Non-Poisson Counting Uncertainty, or “What’s this J Factor All About?”


Presentation Transcript


  1. Keith D. McCroan US EPA National Air and Radiation Environmental Laboratory Radiobioassay and Radiochemical Measurements Conference October 29, 2009 Non-Poisson Counting Uncertainty, or “What’s this J Factor All About?”

  2. Counting uncertainty • Most rad-chemists learn early to estimate “counting uncertainty” as the square root of the count C. • They are likely to learn that this works because C has a “Poisson” distribution. • They may not learn why that statement is true, but they become comfortable with it.

  3. “The standard deviation of C equals its square root. Got it.”

  4. The Poisson distribution • What’s special about a Poisson distribution? • What is really unique is the fact that its mean equals its variance: μ = σ² • This is why we can estimate the standard deviation σ by the square root of the observed value – very convenient. • What other well-known distributions have this property? None that I can name.
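One quick way to convince yourself of this property is to simulate a Poisson variable and compare the sample mean with the sample variance. The sketch below is illustrative only; the mean of 40 counts is an arbitrary assumption.

```python
import numpy as np

# Illustrative check: the sample mean and sample variance of a Poisson
# variable should come out essentially equal.
rng = np.random.default_rng(1)
mu = 40.0                                  # assumed mean count (arbitrary)
counts = rng.poisson(mu, size=100_000)

print(f"mean     = {counts.mean():.2f}")   # ~ 40
print(f"variance = {counts.var():.2f}")    # ~ 40, i.e. essentially the mean
```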

  5. The Poisson distribution in Nature • How does Nature produce a Poisson distribution? • The Poisson distribution is just an approximation – like a normal distribution. • It can be a very good approximation of another distribution called a binomial distribution.

  6. Binomial distribution • You get a binomial distribution when you perform a series of N independent trials of an experiment, each having two possible outcomes (success and failure). • The probability of success p is the same for each trial (e.g., flipping a coin, p = 0.5). • If X is number of successes, it has the “binomial distribution with parameters N and p.” X ~ Bin(N, p)

  7. Poisson approximation • The mean of X is Np and the variance is Np(1 − p). • When p is tiny, the mean and variance are almost equal, because (1 − p) ≈ 1. • Example: N is number of atoms of a radionuclide in a source, p is probability of decay and counting of a particular atom during the counting period (assuming half-life isn’t short), and C is number of counts.
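A minimal numerical illustration of the approximation, using made-up values for N and p: with p tiny, the binomial mean Np and variance Np(1 − p) are essentially equal, and the binomial distribution itself is indistinguishable from Poisson(Np).

```python
import numpy as np
from scipy import stats

# Assumed (made-up) values: many atoms, tiny per-atom probability of decaying
# *and* being counted during the counting period.
N, p = 10_000_000, 2e-6

mean = N * p                    # binomial mean, Np
var = N * p * (1 - p)           # binomial variance, Np(1 - p)
print(mean, var)                # 20.0 vs 19.99996 -- essentially equal

# The binomial pmf is also numerically indistinguishable from Poisson(Np).
k = np.arange(0, 41)
max_diff = np.max(np.abs(stats.binom.pmf(k, N, p) - stats.poisson.pmf(k, mean)))
print(f"largest pmf difference: {max_diff:.2e}")
```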

  8. Poisson counting • In this case the mean of C is Np and the variance is also approximately Np. • We can consider C to be Poisson: C ~ Poi(μ) where μ = Np

  9. Poisson – Summary • In a nutshell, the Poisson distribution describes occurrences of very rare events (e.g., decay and counting of an unstable atom) • where significant numbers are observed only because the event has so many chances to occur (e.g., a very large number of these atoms in the source)

  10. Violating the assumptions • Imagine measuring 222Rn and progeny by scintillation counting – Lucas cell or LSC. • Assumptions for the binomial/Poisson distribution are violated. How? • First, the count time may not be short enough compared to the half-life of 222Rn. • The binomial probability p may not be small. • If you were counting just the radon, you might need the binomial distribution and not the Poisson approximation.

  11. More importantly... • We actually count radon + progeny. • We may start with N atoms of 222Rn in the source, but we don’t get a simple “success” or “failure” to record for each one. • Each atom might produce one or more counts as it decays. • C isn’t just the number of “successes.”

  12. Lucas 1964 • In 1964 Henry Lucas published an analysis of the counting statistics for 222Rn and progeny in a Lucas cell. • Apparently many rad-chemists either never heard of it or didn’t fully appreciate its significance. • You still see counting uncertainty for these measurements being calculated as √C.

  13. Radon decay • Slightly simplified decay chain: 222Rn → 218Po → 214Pb → 214Bi → 214Po → 210Pb • A radon atom emits three α-particles and two β-particles on its way to becoming 210Pb (not stable but relatively long-lived). • In a Lucas cell we count just the alphas – 3 of them in this chain.

  14. Thought experiment • Let’s pretend that for every 222Rn atom that decays during the counting period, we get exactly 3 counts (for the 3 α-particles that will be emitted). • What happens to the counting statistics?

  15. Non-Poisson counting • C is always a multiple of 3 (e.g., 0, 3, 6, 9, 12, ...). • That’s not Poisson – A Poisson variable can assume any nonnegative value. • More important question to us: What is the relationship between the mean and the variance of C?

  16. Index of dispersion, J • The ratio of the variance V(C) to the mean E(C) is called the index of dispersion. • Often denoted by D, but Lucas used J. • That’s why this factor is sometimes called a “J factor.” • For a Poisson distribution, J = 1. • What happens to J when you get 3 counts per decaying atom?

  17. Mean and variance • Say D is the number of radon atoms that decay during the counting period and C is the number of counts produced. • Assume D is Poisson, so V(D) = E(D). • By assumption, C = 3 × D. So: E(C) = 3 × E(D), V(C) = 9 × V(D), and J = V(C) / E(C) = 9 × V(D) / (3 × E(D)) = 3 × V(D) / E(D) = 3
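A short Monte Carlo version of the same argument (the mean number of decays per counting period below is an arbitrary assumption) gives an index of dispersion close to 3.

```python
import numpy as np

# Thought experiment: every decaying atom yields exactly 3 counts, so C = 3 * D.
rng = np.random.default_rng(2)
mean_decays = 100.0                          # assumed mean decays per count period
D = rng.poisson(mean_decays, size=200_000)   # Poisson number of decays
C = 3 * D                                    # exactly 3 counts per decay

J = C.var() / C.mean()
print(f"E(C) = {C.mean():.1f}, V(C) = {C.var():.1f}, J = {J:.2f}")   # J ~ 3
```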

  18. Index of dispersion • So, the index of dispersion for C is 3, not 1 which we’re accustomed to seeing. • This thought experiment isn’t realistic. • You don’t really get exactly 3 counts for each atom of analyte that decays. • It’s much trickier to calculate J correctly.

  19. Technique • Fortunately you really only have to consider a typical atom of the analyte (e.g., 222Rn) at the start of the analysis. • What is the index of dispersion J for the number of counts C that will be produced by this hypothetical atom as it decays? • Easiest approach involves a statistical technique called conditioning.

  20. Conditioning • Consider all the possible histories for the atom – i.e., all the different ways the atom can decay. • It is convenient to define the histories in terms of the states the atom is in at the beginning and end of the counting period. • Calculate the probability of each history • typically using Bateman equations

  21. Conditioning - Continued • For each history, calculate the conditional expected values of C and C² given that history (i.e., assuming it occurs). • Next calculate the overall expected values E(C) and E(C²) as probability-weighted averages of the conditional values. • Calculate V(C) = E(C²) − E(C)². • Finally, J = V(C) / E(C). • Details left to the reader.
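Here is a minimal worked example of the conditioning recipe for a deliberately simplified, hypothetical case: during the counting period a single atom either does not decay, or decays and emits two detectable radiations, each counted with efficiency ε. The decay probability and efficiency below are assumed values chosen only to show the bookkeeping.

```python
# Hypothetical two-history example of the conditioning recipe.
q = 0.05      # assumed probability the atom decays during the counting period
eps = 0.7     # assumed detection efficiency per radiation

# (probability of history, E[C | history], E[C^2 | history])
histories = [
    (1 - q, 0.0, 0.0),                                   # no decay: C = 0
    (q, 2 * eps, 2 * eps * (1 - eps) + (2 * eps) ** 2),  # decay: C ~ Bin(2, eps)
]

EC = sum(prob * ec for prob, ec, _ in histories)       # E(C)
EC2 = sum(prob * ec2 for prob, _, ec2 in histories)    # E(C^2)
VC = EC2 - EC ** 2                                     # V(C)
J = VC / EC                                            # index of dispersion
print(f"E(C) = {EC:.4f}, V(C) = {VC:.4f}, J = {J:.3f}")   # J > 1 here
```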

  22. Radium-226 • Sometimes you measure radon to quantify the parent 226Ra. • Let J be the index of dispersion for the number of counts produced by a typical atom of the analyte 226Ra – not radon. • Technique for finding J (conditioning) is the same, but the details are different. • Value of J is always > 1 in this case.

  23. Thorium-234 • If you beta-count a sample containing 234Th, you’re counting both 234Th and the short-lived decay product 234mPa. • With ~50 % beta detection efficiency, you have non-Poisson statistics here too. • The counts often come in pairs. • The value of J doesn’t tend to be as large as when counting radon in a Lucas cell or LSC (less than 1.5).
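A rough Monte Carlo sketch of this case under the simplest possible model (an assumption, not the full treatment): every decaying 234Th atom contributes two betas within the counting period, each detected independently with ε = 0.5. Under that model the index of dispersion comes out near 1 + ε = 1.5.

```python
import numpy as np

# Simplified pair-counting model for 234Th / 234mPa (assumed: both betas fall
# inside the counting period, each detected independently with efficiency eps).
rng = np.random.default_rng(3)
eps = 0.5                                          # assumed beta efficiency
mean_decays = 500.0                                # assumed mean decays per count

D = rng.poisson(mean_decays, size=100_000)         # decaying atoms per period
C = rng.binomial(2 * D, eps)                       # detected betas, 2 tries per decay

print(f"J = {C.var() / C.mean():.3f}")             # ~ 1 + eps = 1.5 under this model
```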

  24. Gross alpha/beta? • If you don’t know what you’re counting, how can you estimate J? • You really can’t. • Probably most methods implicitly assume J = 1. • But who really knows?

  25. Simplification • Assume every radiation of the decaying atom has detection efficiency ε or 0. Then J = 1 + ε × (m2 − m1 − m1²) / m1, where • m1 is the expected number of detectable radiations from an atom of analyte during the counting interval • m2 is the expected square of this number

  26. Bounds for J • m1 ≤ m2 ≤ N × m1, where N is the maximum number of counts per atom. So, 1 − ε × m1 ≤ J ≤ 1 + ε × (N − m1 − 1) • In many situations m1 is very small. Then 1 ≤ J ≤ 1 + ε × (N − 1). • E.g., for 226Ra measured by 222Rn in a Lucas cell, N = 3. So, 1 ≤ J ≤ 1 + 2ε
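A quick numerical spot-check (with made-up values of ε, N, and m1) that the simplified expression for J from the previous slide stays within these bounds as m2 ranges from m1 up to N × m1.

```python
# Spot-check of the bounds 1 - eps*m1 <= J <= 1 + eps*(N - m1 - 1) using the
# simplified expression J = 1 + eps*(m2 - m1 - m1^2)/m1.  All values assumed.
eps = 0.8         # assumed detection efficiency
N = 3             # maximum counts per atom (e.g., 3 alphas per radon atom)
m1 = 0.02         # assumed expected detectable radiations per analyte atom

lower = 1 - eps * m1
upper = 1 + eps * (N - m1 - 1)
for m2 in (m1, 1.5 * m1, N * m1):                 # m2 must lie in [m1, N*m1]
    J = 1 + eps * (m2 - m1 - m1 ** 2) / m1
    print(f"m2 = {m2:.3f}: J = {J:.3f}  (bounds {lower:.3f} .. {upper:.3f})")
```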

  27. Remember • Suspect non-Poisson counting if: • One atom can produce more than one count (N > 1) as it decays through a series of short-lived states • Detection efficiency (ε) is high • Together these effects tend to give you on average more than one count per decaying atom. • In many cases, 1 ≤ J ≤ 1 + ε × (N − 1).

  28. Questions?

  29. Reference • Lucas, H.F., Jr., and D.A. Woodward. 1964. Journal of Applied Physics 35:452.

  30. Testing for J > 1 • You can test for J > 1 with a χ² test, but you may need a lot of measurements.
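A sketch of one such test, assuming n replicate counts of the same source under identical conditions: under the Poisson hypothesis (J = 1) the statistic Σ(ci − c̄)² / c̄ is approximately chi-square with n − 1 degrees of freedom, and an unusually large value is evidence that J > 1. The replicate counts below are made up for illustration.

```python
import numpy as np
from scipy import stats

def dispersion_test(counts, alpha=0.05):
    """Chi-square index-of-dispersion test of the Poisson hypothesis (J = 1)
    against J > 1, given replicate counts of the same source."""
    counts = np.asarray(counts, dtype=float)
    n = counts.size
    mean = counts.mean()
    statistic = np.sum((counts - mean) ** 2) / mean   # ~ chi2(n - 1) if Poisson
    p_value = stats.chi2.sf(statistic, df=n - 1)      # one-sided, J > 1
    return statistic, p_value, p_value < alpha

# Made-up replicate counts, more scattered than Poisson would allow.
replicates = [95, 132, 88, 141, 76, 150, 102, 69, 138, 117]
stat, p, reject = dispersion_test(replicates)
print(f"statistic = {stat:.1f}, p = {p:.2e}, reject Poisson: {reject}")
```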
