
Errors in Experimental Measurements


Presentation Transcript


  1. Errors in Experimental Measurements • Sources of errors • Accuracy, precision, resolution • A mathematical model of errors • Confidence intervals • For means • For proportions • How many measurements are needed for desired error? Copyright 2004 David J. Lilja

  2. Why do we need statistics? 1. Noise, noise, noise, noise, noise! OK – not really this type of noise Copyright 2004 David J. Lilja

  3. Why do we need statistics? 2. Aggregate data into meaningful information. 445 446 397 226 388 3445 188 1002 47762 432 54 12 98 345 2245 8839 77492 472 565 999 1 34 882 545 4022 827 572 597 364 Copyright 2004 David J. Lilja

  4. What is a statistic? • “A quantity that is computed from a sample [of data].” Merriam-Webster → A single number used to summarize a larger collection of values. Copyright 2004 David J. Lilja

  5. What are statistics? • “A branch of mathematics dealing with the collection, analysis, interpretation, and presentation of masses of numerical data.” Merriam-Webster → We are most interested in analysis and interpretation here. • “Lies, damn lies, and statistics!” Copyright 2004 David J. Lilja

  6. Goals • Provide intuitive conceptual background for some standard statistical tools. • Draw meaningful conclusions in presence of noisy measurements. • Allow you to correctly and intelligently apply techniques in new situations. → Don’t simply plug and crank from a formula. Copyright 2004 David J. Lilja

  7. Goals • Present techniques for aggregating large quantities of data. • Obtain a big-picture view of your results. • Obtain new insights from complex measurement and simulation results. → E.g. How does a new feature impact the overall system? Copyright 2004 David J. Lilja

  8. Sources of Experimental Errors • Accuracy, precision, resolution Copyright 2004 David J. Lilja

  9. Experimental errors • Errors → noise in measured values • Systematic errors • Result of an experimental “mistake” • Typically produce constant or slowly varying bias • Controlled through skill of experimenter • Examples • Temperature change causes clock drift • Forget to clear cache before timing run Copyright 2004 David J. Lilja

  10. Experimental errors • Random errors • Unpredictable, non-deterministic • Unbiased → equal probability of increasing or decreasing measured value • Result of • Limitations of measuring tool • Observer reading output of tool • Random processes within system • Typically cannot be controlled • Use statistical tools to characterize and quantify Copyright 2004 David J. Lilja

  11. Example: Quantization → Random error Copyright 2004 David J. Lilja

  12. Quantization error • Timer resolution → quantization error • Repeated measurements of the same event report X ± Δ • Which of the values appears on any given run is completely unpredictable Copyright 2004 David J. Lilja
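
As an illustrative Python sketch (assuming a 10 ms timer resolution and a hypothetical 0.0873 s event, both made-up values): repeated measurements of the same event land on one of two adjacent timer ticks, and which one you get is unpredictable.

import random

DELTA = 0.010           # assumed timer resolution: 10 ms per tick
TRUE_DURATION = 0.0873  # hypothetical true duration of the event, in seconds

def measured(true_duration, delta=DELTA):
    # The event starts at a random phase within a clock tick, so the count of
    # elapsed ticks is either floor or ceil of the true duration in ticks.
    start_phase = random.uniform(0.0, delta)
    ticks = int((start_phase + true_duration) // delta)
    return ticks * delta

print([measured(TRUE_DURATION) for _ in range(10)])
# Values cluster on two adjacent multiples of DELTA (0.08 s and 0.09 s) even
# though the true duration never changes: the quantization error.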

  13. A Model of Errors Copyright 2004 David J. Lilja

  14. A Model of Errors Copyright 2004 David J. Lilja

  15. A Model of Errors Copyright 2004 David J. Lilja

  16. Probability of Obtaining a Specific Measured Value Copyright 2004 David J. Lilja

  17. A Model of Errors • With n independent error sources, each equally likely to add +E or −E to the measurement, Pr(X = xi) = Pr(measure xi) is proportional to the number of paths from the true value to xi • Pr(X = xi) ~ binomial distribution • As the number of error sources becomes large (n → ∞), binomial → Gaussian (Normal) • Thus, the bell curve Copyright 2004 David J. Lilja
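
A minimal Python simulation of this error model (the true value, error magnitude E, and number of error sources are arbitrary illustrative choices): each error source flips a fair coin between +E and −E, and the histogram of measured values comes out binomial and visibly bell-shaped.

import random
from collections import Counter

TRUE_VALUE = 100.0   # hypothetical true value being measured
E = 0.5              # hypothetical magnitude of each error source
N_SOURCES = 20       # number of independent error sources
N_MEASUREMENTS = 10_000

def one_measurement():
    # Each error source independently adds +E or -E with probability 1/2.
    return TRUE_VALUE + sum(random.choice((+E, -E)) for _ in range(N_SOURCES))

counts = Counter(round(one_measurement(), 1) for _ in range(N_MEASUREMENTS))
for value in sorted(counts):
    # Crude text histogram: binomial, centred on the true value, and
    # approaching a Gaussian as N_SOURCES grows.
    print(f"{value:7.1f}  {'*' * (counts[value] // 50)}")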

  18. Frequency of Measuring Specific Values • [Figure: histogram of measured values, annotated with the true value, the mean of the measured values, the accuracy (offset of that mean from the true value), the precision (spread of the measurements), and the resolution (spacing between adjacent measurable values)] Copyright 2004 David J. Lilja

  19. Accuracy, Precision, Resolution • Systematic errors → accuracy • How close mean of measured values is to true value • Random errors → precision • Repeatability of measurements • Characteristics of tools → resolution • Smallest increment between measured values Copyright 2004 David J. Lilja

  20. Quantifying Accuracy, Precision, Resolution • Accuracy • Hard to determine true accuracy • Relative to a predefined standard • E.g. definition of a “second” • Resolution • Dependent on tools • Precision • Quantify amount of imprecision using statistical tools Copyright 2004 David J. Lilja

  21. Confidence Interval for the Mean • [Figure: distribution of the sample mean with interval endpoints c1 and c2; probability 1−α between c1 and c2, and α/2 in each tail] Copyright 2004 David J. Lilja

  22. Normalize x • z = (x̄ − μ) / (s/√n), where x̄ is the sample mean, s the sample standard deviation, n the number of measurements, and μ the (unknown) true mean Copyright 2004 David J. Lilja

  23. Confidence Interval for the Mean • The normalized z follows a Student’s t distribution with (n−1) degrees of freedom • Area to the left of c2 = 1 − α/2 • Values of t are tabulated Copyright 2004 David J. Lilja

  24. Confidence Interval for the Mean • As n → ∞, the normalized distribution becomes Gaussian (Normal) Copyright 2004 David J. Lilja

  25. Confidence Interval for the Mean • c1,2 = x̄ ∓ t(1−α/2; n−1) · s/√n Copyright 2004 David J. Lilja

  26. An Example • n = 8 timed measurements • Sample mean x̄ = 7.94 s • Sample standard deviation s = 2.14 s Copyright 2004 David J. Lilja

  27. An Example (cont.) • Use the Student’s t distribution with n − 1 = 7 degrees of freedom to form confidence intervals for the mean Copyright 2004 David J. Lilja

  28. An Example (cont.) • 90% CI → 90% chance the actual value lies within the interval • 90% CI → α = 0.10 • 1 − α/2 = 0.95 • n = 8 → 7 degrees of freedom • From the t table, t(0.95; 7) = 1.895 Copyright 2004 David J. Lilja

  29. 90% Confidence Interval • c1,2 = 7.94 ∓ 1.895 · 2.14/√8 • 90% CI = [6.5, 9.4] Copyright 2004 David J. Lilja

  30. 95% Confidence Interval • t(0.975; 7) = 2.365 • c1,2 = 7.94 ∓ 2.365 · 2.14/√8 • 95% CI = [6.1, 9.7] Copyright 2004 David J. Lilja
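
A short Python sketch of the two intervals, computed from the summary statistics above (x̄ = 7.94 s, s = 2.14 s, n = 8) with the tabulated t values; the printed endpoints differ from [6.5, 9.4] and [6.1, 9.7] only because x̄ and s are themselves rounded.

from math import sqrt

x_bar, s, n = 7.94, 2.14, 8             # sample mean, sample std dev, sample size
t_table = {"90%": 1.895, "95%": 2.365}  # t(1 - alpha/2; 7 dof) from a standard t table

for level, t in t_table.items():
    half_width = t * s / sqrt(n)
    print(f"{level} CI: [{x_bar - half_width:.2f}, {x_bar + half_width:.2f}]")
# Prints roughly [6.51, 9.37] and [6.15, 9.73].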

  31. What does it mean? • 90% CI = [6.5, 9.4] • 90% chance the real value is between 6.5 and 9.4 • 95% CI = [6.1, 9.7] • 95% chance the real value is between 6.1 and 9.7 • Why is the interval wider when we are more confident? Copyright 2004 David J. Lilja

  32. Higher Confidence → Wider Interval? • [Figure: the 90% interval [6.5, 9.4] shown nested inside the wider 95% interval [6.1, 9.7]] Copyright 2004 David J. Lilja

  33. Key Assumption • Measurement errors are Normally distributed • Is this true for most measurements on real computer systems? Copyright 2004 David J. Lilja

  34. Key Assumption • Saved by the Central Limit Theorem: the sum (or mean) of a "large number" of values drawn from any distribution will be Normally (Gaussian) distributed • What is a "large number?" • Typically assumed to be at least about 6 or 7 Copyright 2004 David J. Lilja
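
A minimal Python check of this point (the exponential noise model and the sample size of 7 are arbitrary illustrative choices): individual measurements are deliberately skewed, yet means of only 7 of them already behave nearly like a Gaussian, which has about 68% of its mass within one standard deviation of its mean.

import random
import statistics

random.seed(1)

def one_measurement():
    # Deliberately non-Gaussian noise: true value 10.0 plus an
    # exponentially distributed delay (e.g. occasional OS interference).
    return 10.0 + random.expovariate(1.0)

def frac_within_one_sigma(values):
    mu, sigma = statistics.mean(values), statistics.stdev(values)
    return sum(mu - sigma < v < mu + sigma for v in values) / len(values)

raw = [one_measurement() for _ in range(20_000)]
means_of_7 = [statistics.mean([one_measurement() for _ in range(7)]) for _ in range(20_000)]

print(frac_within_one_sigma(raw))        # ~0.86: raw measurements are clearly not Gaussian
print(frac_within_one_sigma(means_of_7)) # ~0.69: close to the Gaussian value of 0.68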

  35. How many measurements? • Width of the interval is inversely proportional to √n • Want to minimize the number of measurements • Find a confidence interval for the mean with endpoints (1 ∓ e)·x̄, for a chosen relative error e, such that: • Pr(actual mean in interval) = (1 − α) Copyright 2004 David J. Lilja

  36. How many measurements? • Set (1 ∓ e)·x̄ = x̄ ∓ t(1−α/2; n−1)·s/√n and solve for n: • n = ( t(1−α/2; n−1) · s / (e · x̄) )² Copyright 2004 David J. Lilja

  37. How many measurements? • But n depends on knowing mean and standard deviation! • Estimate s with small number of measurements • Use this s to find n needed for desired interval width Copyright 2004 David J. Lilja

  38. How many measurements? • Mean = 7.94 s • Standard deviation = 2.14 s • Want 90% confidence mean is within 7% of actual mean. Copyright 2004 David J. Lilja

  39. How many measurements? • Mean = 7.94 s • Standard deviation = 2.14 s • Want 90% confidence mean is within 7% of actual mean. • 1 − α = 0.90 • 1 − α/2 = 0.95 • Error = ± 3.5% • e = 0.035 Copyright 2004 David J. Lilja

  40. How many measurements? • 213 measurements → 90% chance true mean is within ± 3.5% interval Copyright 2004 David J. Lilja
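
A minimal Python sketch of that calculation, plugging the pilot-sample estimates into the formula from slide 36 (t = 1.895 is the tabulated value for the 7 degrees of freedom of the initial 8 measurements):

from math import ceil

x_bar, s = 7.94, 2.14  # estimates from the initial 8 measurements
e = 0.035              # desired half-width as a fraction of the mean (±3.5%)
t = 1.895              # t(0.95; 7 dof) from the pilot sample

n = ceil((t * s / (e * x_bar)) ** 2)
print(n)  # -> 213 measurements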

  41. Proportions • p = Pr(success) in n trials of binomial experiment • Estimate proportion: p = m/n • m = number of successes • n = total number of trials Copyright 2004 David J. Lilja

  42. Proportions • For large n, approximate the binomial with a Gaussian • Confidence interval: c1,2 = p ∓ z(1−α/2) · √( p(1−p)/n ) Copyright 2004 David J. Lilja

  43. Proportions • How much time does processor spend in OS? • Interrupt every 10 ms • Increment counters • n = number of interrupts • m = number of interrupts when PC within OS Copyright 2004 David J. Lilja

  44. Proportions • How much time does processor spend in OS? • Interrupt every 10 ms • Increment counters • n = number of interrupts • m = number of interrupts when PC within OS • Run for 1 minute • n = 6000 • m = 658 Copyright 2004 David J. Lilja

  45. Proportions • 95% confidence interval for the proportion: 0.1097 ∓ 1.96 · √( 0.1097 · 0.8903 / 6000 ) = [0.102, 0.118] • So 95% certain the processor spends 10.2-11.8% of its time in the OS Copyright 2004 David J. Lilja
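
The same calculation as a short Python sketch (z = 1.96 is the standard tabulated value for a 95% confidence level):

from math import sqrt

n, m = 6000, 658  # total interrupts, interrupts whose PC was inside the OS
z = 1.96          # z(0.975) for a 95% confidence level

p = m / n
half_width = z * sqrt(p * (1 - p) / n)
print(f"p = {p:.4f}, 95% CI = [{p - half_width:.4f}, {p + half_width:.4f}]")
# -> p = 0.1097, 95% CI = [0.1018, 0.1176], i.e. 10.2% to 11.8% of time in the OS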

  46. Number of measurements for proportions • Want the interval endpoints to be (1 ∓ e)·p for a chosen relative error e • Solving as before: n = z(1−α/2)² · p(1−p) / (e·p)² Copyright 2004 David J. Lilja

  47. Number of measurements for proportions • How long to run OS experiment? • Want 95% confidence • Within ± 0.5% of p Copyright 2004 David J. Lilja

  48. Number of measurements for proportions • How long to run OS experiment? • Want 95% confidence • Within ± 0.5% of p • e = 0.005 • p = 0.1097 Copyright 2004 David J. Lilja

  49. Number of measurements for proportions • 10 ms interrupts → 3.46 hours Copyright 2004 David J. Lilja
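
A minimal Python sketch of how the 3.46-hour figure follows, using the sample-size formula from slide 46 with relative error e and the 10 ms sampling period:

from math import ceil

p = 0.1097      # proportion estimated from the one-minute pilot run
e = 0.005       # desired half-width as a fraction of p (within ±0.5% of p)
z = 1.96        # z(0.975) for 95% confidence
period = 0.010  # sampling interrupt period: 10 ms

n = ceil(z**2 * p * (1 - p) / (e * p) ** 2)
print(n, round(n * period / 3600, 2))  # -> about 1.25 million interrupts, roughly 3.46 hours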

  50. Important Points • Use statistics to • Deal with noisy measurements • Aggregate large amounts of data • Errors in measurements are due to: • Accuracy, precision, resolution of tools • Other sources of noise → Systematic, random errors Copyright 2004 David J. Lilja
