
Lecture 2 Probability and Measurement Error, Part 1

Understand random variables, probability density functions, correlated errors, Gaussian distribution, mean, and variance estimation. Learn about joint probability density functions and uncorrelated cases. Explore statistical concepts for inverse problems.


Presentation Transcript


  1. Lecture 2 Probability and Measurement Error, Part 1

  2. Syllabus
  Lecture 01 Describing Inverse Problems
  Lecture 02 Probability and Measurement Error, Part 1
  Lecture 03 Probability and Measurement Error, Part 2
  Lecture 04 The L2 Norm and Simple Least Squares
  Lecture 05 A Priori Information and Weighted Least Squares
  Lecture 06 Resolution and Generalized Inverses
  Lecture 07 Backus-Gilbert Inverse and the Trade Off of Resolution and Variance
  Lecture 08 The Principle of Maximum Likelihood
  Lecture 09 Inexact Theories
  Lecture 10 Nonuniqueness and Localized Averages
  Lecture 11 Vector Spaces and Singular Value Decomposition
  Lecture 12 Equality and Inequality Constraints
  Lecture 13 L1, L∞ Norm Problems and Linear Programming
  Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
  Lecture 15 Nonlinear Problems: Newton’s Method
  Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
  Lecture 17 Factor Analysis
  Lecture 18 Varimax Factors, Empirical Orthogonal Functions
  Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon’s Problem
  Lecture 20 Linear Operators and Their Adjoints
  Lecture 21 Fréchet Derivatives
  Lecture 22 Exemplary Inverse Problems, incl. Filter Design
  Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
  Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

  3. Purpose of the Lecture
  review random variables and their probability density functions
  introduce correlation and the multivariate Gaussian distribution
  relate error propagation to functions of random variables

  4. Part 1: random variables and their probability density functions

  5. random variable, d: no fixed value until it is realized
  [Figure: before realization, d = ? (indeterminate); after realization, d = 1.04, d = 0.98]

  6. random variables have systematics: a tendency to take on some values more often than others

  7. [Figure: N realizations of data di plotted against data index i]

  8. [Figure: three panels (A), (B), (C)]

  9. the probability of the variable being between d and d+Δd is p(d)Δd
  [Figure: p(d) versus d, with a strip of width Δd]

  10. in general, probability is the integral
  P = ∫ p(d) dd, integrated from d1 to d2
  the probability that d is between d1 and d2
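
  In MatLab this integral can be approximated by a Riemann sum over a tabulated p.d.f. The sketch below is illustrative and not from the slides; the grid d, spacing Dd, and bounds d1, d2 are assumed for the example.
  % sketch: approximate P(d1 <= d <= d2) as a Riemann sum over a tabulated p.d.f.
  Dd = 0.01;                        % grid spacing (illustrative)
  d = -5:Dd:5;                      % grid of possible values
  p = exp(-0.5*d.^2)/sqrt(2*pi);    % example p.d.f.: standard Gaussian
  d1 = -1; d2 = 1;                  % integration bounds (illustrative)
  in = (d>=d1) & (d<=d2);           % samples inside the interval
  P = Dd*sum(p(in));                % approximately 0.68 for this example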

  11. the probability that d has some value is 100% or unity:
  ∫ p(d) dd = 1, integrated from dmin to dmax
  the probability that d is between its minimum and maximum bounds, dmin and dmax

  12. How do these two p.d.f.’s differ?
  [Figure: two p.d.f.’s p(d) on the interval 0 ≤ d ≤ 5]

  13. Summarizing a probability density function:
  typical value, the “center of the p.d.f.”
  amount of scatter around the typical value, the “width of the p.d.f.”

  14. Several possibilities for a typical value
  dML: the point beneath the peak, or “maximum likelihood point” or “mode”
  [Figure: p(d) with dML marked at the peak]

  15. dmedian: the point dividing the area in half, or “median”
  [Figure: p(d) with 50% of the area on each side of dmedian]

  16. <d>: the balancing point, or “mean” or “expectation”
  [Figure: p(d) with <d> marked at the balancing point]

  17. can all be different: dML ≠ dmedian ≠ <d>
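
  A minimal MatLab sketch (not from the slides) computing all three typical values from a tabulated p.d.f.; a skewed example p.d.f. is assumed so that the three values differ.
  % sketch: mode, median, and mean of a tabulated p.d.f.
  Dd = 0.01;
  d = 0:Dd:20;
  p = d.*exp(-d);                   % example skewed p.d.f. (unnormalized)
  p = p/(Dd*sum(p));                % normalize to unit area
  [pmax,imax] = max(p);             % maximum likelihood point (mode)
  dML = d(imax);                    % = 1 for this example
  Pc = Dd*cumsum(p);                % cumulative probability
  [~,imed] = min(abs(Pc-0.5));      % point dividing the area in half
  dmedian = d(imed);                % about 1.68 for this example
  dbar = Dd*sum(d.*p);              % balancing point (mean), = 2 here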

  18. formula for “mean” or “expected value” <d>

  19. step 1: usual formula for the mean of N data:
  <d> ≈ (1/N) Σi di
  step 2: replace the data with their histogram, with Ns counts in bins centered at ds:
  <d> ≈ Σs (Ns/N) ds
  step 3: replace the histogram with the probability distribution, since Ns/N ≈ P(ds):
  <d> ≈ Σs P(ds) ds

  20. If the data are continuous, use the analogous formula containing an integral:
  <d> = ∫ d p(d) dd

  21. quantifying width

  22. This function grows away from the typical value: q(d) = (d − <d>)²
  so the function q(d)p(d) is:
  small if most of the area is near <d>, that is, a narrow p(d)
  large if most of the area is far from <d>, that is, a wide p(d)
  so quantify width as the area under q(d)p(d)

  23. [Figure: six panels (A)–(F)]

  24. variance:
  σ² = ∫ (d − <d>)² p(d) dd, where <d> is the mean
  width is actually the square root of variance, that is, σ

  25. estimating mean and variance from data

  26. estimating mean and variance from data:
  usual formula for the “sample mean”: d̄ = (1/N) Σi di
  usual formula for the square of the “sample standard deviation”: σ² = (1/(N−1)) Σi (di − d̄)²

  27. MatLab scripts for mean and variance
  from tabulated p.d.f. p:
  dbar = Dd*sum(d.*p);
  q = (d-dbar).^2;
  sigma2 = Dd*sum(q.*p);
  sigma = sqrt(sigma2);
  from realizations of data:
  dbar = mean(dr);
  sigma = std(dr);
  sigma2 = sigma^2;
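
  A usage sketch for the tabulated-p.d.f. script above; the grid spacing Dd and the Gaussian p.d.f. here are assumptions chosen for illustration.
  % define the tabulated p.d.f. before running the script above
  Dd = 0.01;
  d = (0:Dd:10)';
  p = exp(-0.5*(d-5).^2)/sqrt(2*pi);   % Gaussian with <d> = 5 and sigma = 1
  dbar = Dd*sum(d.*p);                 % approximately 5
  q = (d-dbar).^2;
  sigma2 = Dd*sum(q.*p);               % approximately 1
  sigma = sqrt(sigma2);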

  28. two important probability density functions:
  uniform
  Gaussian (or Normal)

  29. uniform p.d.f.: box-shaped function
  p(d) = 1/(dmax − dmin) for dmin ≤ d ≤ dmax
  probability is the same everywhere in the range of possible values

  30. Gaussian (or “Normal”) p.d.f.: bell-shaped function
  p(d) = (1/(√(2π)σ)) exp( −(d − <d>)²/(2σ²) )
  large probability near the mean <d>; the variance is σ²
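
  A sketch tabulating both p.d.f.’s on a grid; all parameter values below are illustrative assumptions.
  % sketch: tabulate the uniform and Gaussian p.d.f.'s
  Dd = 0.01;
  d = 0:Dd:10;
  dmin = 2; dmax = 8;                                    % uniform bounds
  pu = (d>=dmin & d<=dmax)/(dmax-dmin);                  % uniform p.d.f.
  dbar = 5; sigma = 1;                                   % Gaussian parameters
  pg = exp(-0.5*((d-dbar)/sigma).^2)/(sqrt(2*pi)*sigma); % Gaussian p.d.f.
  % each integrates to unity: Dd*sum(pu) and Dd*sum(pg) are both about 1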

  31. Gaussian p.d.f.: probability that d falls between <d> ± nσ
  n = 1: 68.27%
  n = 2: 95.45%
  n = 3: 99.73%
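
  These percentages follow from the error function; a short MatLab check (a sketch using only the built-in erf):
  % probability that a Gaussian variable lies within n standard deviations
  for n = 1:3
      P = erf(n/sqrt(2));                     % P(|d - <d>| <= n*sigma)
      fprintf('n = %d: %.2f%%\n', n, 100*P);  % 68.27, 95.45, 99.73
  end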

  32. Part 2: correlated errors

  33. uncorrelated random variables: no pattern between values of one variable and values of another
  when d1 is higher than its mean, d2 is higher or lower than its mean with equal probability

  34. joint probability density function, uncorrelated case
  [Figure: joint p.d.f. p(d1,d2) with the means <d1> and <d2> marked]

  35. [Figure: same joint p.d.f. as slide 34] no tendency for d2 to be either high or low when d1 is high

  36. in the uncorrelated case, the joint p.d.f. is just the product of the individual p.d.f.’s:
  p(d1,d2) = p(d1) p(d2)
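
  In MatLab this product structure is an outer product of the two tabulated univariate p.d.f.’s; the Gaussian marginals in this sketch are assumed for illustration.
  % sketch: uncorrelated joint p.d.f. as the product p(d1)*p(d2)
  Dd = 0.1;
  d1 = (0:Dd:10)';                          % column vectors
  d2 = (0:Dd:10)';
  p1 = exp(-0.5*(d1-5).^2)/sqrt(2*pi);      % illustrative marginal p.d.f.'s
  p2 = exp(-0.5*(d2-5).^2)/sqrt(2*pi);
  P = p1*p2';                               % outer product: P(i,j) = p1(i)*p2(j)
  % Dd^2*sum(P(:)) is approximately 1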

  37. [Figure: a joint p.d.f. p(d1,d2) with the width 2σ1 marked]

  38. [Figure: same joint p.d.f. as slide 37] tendency for d2 to be high when d1 is high

  39. [Figure: the same joint p.d.f., with the inclination θ and widths 2σ1 and 2σ2 of the high-probability region marked]

  40. [Figure: schematic of the joint p.d.f. with the means <d1>, <d2> and width 2σ1 marked]

  41. [Figure: three joint p.d.f.’s, (A), (B), and (C), with differing degrees of correlation]

  42. formula for covariance:
  cov(d1,d2) = ∫∫ (d1 − <d1>)(d2 − <d2>) p(d1,d2) dd1 dd2
  + positive correlation: high d1 goes with high d2
  − negative correlation: high d1 goes with low d2
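
  A sketch approximating this double integral on a grid; the correlated bivariate Gaussian below is an assumed example constructed to have true covariance 0.5.
  % sketch: covariance from a tabulated joint p.d.f.
  Dd = 0.1;
  d1 = (0:Dd:10)'; d2 = (0:Dd:10)';
  C = [1.0 0.5; 0.5 1.0];                   % assumed covariance matrix
  [D2,D1] = meshgrid(d2-5, d1-5);           % deviations from means <d1> = <d2> = 5
  Q = (C(2,2)*D1.^2 - 2*C(1,2)*D1.*D2 + C(1,1)*D2.^2)/det(C);
  P = exp(-0.5*Q)/(2*pi*sqrt(det(C)));      % bivariate Gaussian on the grid
  C12 = Dd^2*sum(sum(D1.*D2.*P));           % approximately C(1,2) = 0.5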

  43. joint p.d.f.: the mean is a vector and the covariance is a symmetric matrix
  diagonal elements: variances
  off-diagonal elements: covariances

  44. estimating covariance from a table D of data
  Dki: realization k of data-type i
  in MatLab, C = cov(D)
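
  A usage sketch with synthetic data; the construction of the second column is an assumption chosen to give covariance 0.5.
  % sketch: estimate covariance from N realizations of two data types
  N = 10000;
  A = randn(N,2);                               % independent unit-variance draws
  D = [A(:,1), 0.5*A(:,1)+sqrt(0.75)*A(:,2)];   % second column correlated with first
  C = cov(D);                                   % C(1,2) is approximately 0.5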

  45. univariate p.d.f. formed from joint p.d.f.: p(d) → p(di)
  behavior of di irrespective of the other d’s: integrate over everything but di
  p(di) = ∫···∫ p(d) dd1 ··· ddi−1 ddi+1 ··· ddN

  46. [Figure: joint p.d.f. p(d1,d2); integrate over d2 to get p(d1), and integrate over d1 to get p(d2)]
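
  A sketch of this marginalization on a grid, with rows of P indexing d1 and columns indexing d2 (an assumed convention, with an illustrative uncorrelated Gaussian joint p.d.f.).
  % sketch: univariate p.d.f.'s by integrating the tabulated joint p.d.f.
  Dd = 0.1;
  d1 = (0:Dd:10)'; d2 = (0:Dd:10)';
  [D2,D1] = meshgrid(d2-5, d1-5);
  P = exp(-0.5*(D1.^2+D2.^2))/(2*pi);   % illustrative joint p.d.f.
  p1 = Dd*sum(P,2);                     % integrate over d2 to get p(d1)
  p2 = Dd*sum(P,1)';                    % integrate over d1 to get p(d2)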

  47. multivariate Gaussian (or Normal) p.d.f.
  p(d) = (2π)^(−N/2) |C|^(−1/2) exp[ −(1/2) (d − <d>)ᵀ C⁻¹ (d − <d>) ]

  48. where <d> is the mean vector and C is the covariance matrix
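
  A sketch evaluating this p.d.f. at a single point; the mean vector, covariance matrix, and evaluation point below are assumptions for illustration.
  % sketch: evaluate the multivariate Gaussian p.d.f. at a point
  dbar = [5; 5];                        % assumed mean vector
  C = [1.0 0.5; 0.5 1.0];               % assumed covariance matrix
  d = [5.5; 4.5];                       % evaluation point
  N = length(dbar);
  r = d - dbar;                         % deviation from the mean
  p = exp(-0.5*(r'*(C\r)))/sqrt((2*pi)^N*det(C));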

  49. Part 3: functions of random variables

  50. functions of random variables:
  data with measurement error → data analysis process → inferences with uncertainty
