
  1. Chapter 4 Basic Probability and Probability Distributions

  2. Probability Terminology
  • Classical Interpretation: Notion of probability based on equal likelihood of individual possibilities (a coin toss has a 1/2 chance of Heads; a card draw has a 4/52 chance of an Ace). Origins in games of chance.
  • Outcome: Distinct result of a random process (N = # of outcomes)
  • Event: Collection of outcomes (Ne = # of outcomes in the event)
  • Probability of event E: P(event E) = Ne/N
  • Relative Frequency Interpretation: If an experiment were conducted repeatedly, the fraction of the time the event of interest would occur (based on empirical observation)
  • Subjective Interpretation: Personal view (possibly based on external information) of how likely a one-shot experiment is to end in the event of interest

  3. Obtaining Event Probabilities
  • Classical Approach
  • List all N possible outcomes of the experiment
  • List all Ne outcomes corresponding to the event of interest (E)
  • P(event E) = Ne/N
  • Relative Frequency Approach
  • Define the event of interest
  • Conduct the experiment repeatedly (often using a computer)
  • Measure the fraction of the time event E occurs
  • Subjective Approach
  • Obtain as much information on the process as possible
  • Consider the different outcomes and their likelihood
  • When possible, monitor your skill (e.g. stocks, weather)
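The relative frequency approach lends itself to simulation. A minimal Python sketch (the card-drawing setup and the helper name are illustrative, not from the slides):

```python
import random

def relative_frequency(event, experiment, trials=100_000, seed=1):
    """Estimate P(event) as the fraction of repeated experiments
    in which the event occurs (relative frequency approach)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if event(experiment(rng)))
    return hits / trials

# Draw a card index from a 52-card deck; call indices 0-3 the Aces,
# so the classical probability of the event is 4/52.
p_hat = relative_frequency(lambda c: c < 4, lambda rng: rng.randrange(52))
```

With 100,000 trials the estimate should land close to the classical answer 4/52 ≈ .077.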

  4. Basic Probability and Rules
  • A, B: Events of interest
  • P(A), P(B): Event probabilities
  • Union: Event that either A or B occurs (A ∪ B)
  • Mutually Exclusive: A, B cannot occur at the same time
  • If A, B are mutually exclusive: P(either A or B) = P(A) + P(B)
  • Complement of A: Event that A does not occur (Ā)
  • P(Ā) = 1 - P(A), that is: P(A) + P(Ā) = 1
  • Intersection: Event that both A and B occur (A ∩ B, or AB)
  • General addition rule: P(A ∪ B) = P(A) + P(B) - P(AB)
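The addition and complement rules can be checked numerically; here with the standard 52-card deck (the Ace/Heart choice of events is an assumed illustration):

```python
# Standard 52-card deck: A = "draw an Ace", B = "draw a Heart".
p_A, p_B = 4/52, 13/52
p_AB = 1/52                  # intersection: the Ace of Hearts
p_union = p_A + p_B - p_AB   # general addition rule
p_not_A = 1 - p_A            # complement rule
```

Here P(A ∪ B) = 16/52, since the Ace of Hearts would otherwise be counted twice.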

  5. Conditional Probability and Independence
  • Unconditional/Marginal Probability: Frequency with which an event occurs in general (given no additional information): P(A)
  • Conditional Probability: Probability an event (A) occurs given knowledge that another event (B) has occurred: P(A|B) = P(AB)/P(B)
  • Independent Events: Events whose unconditional and conditional (given the other) probabilities are the same: P(A|B) = P(A)

  6. John Snow London Cholera Death Study
  • 2 water companies (let D be the event of death):
  • Southwark & Vauxhall (S): 264,913 customers, 3,702 deaths
  • Lambeth (L): 171,363 customers, 407 deaths
  • Overall: 436,276 customers, 4,109 deaths
  Note that the probability of death is almost 6 times higher for S&V customers (3,702/264,913 ≈ .0140) than for Lambeth customers (407/171,363 ≈ .0024); this was important in showing how cholera spread.
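The conditional death rates quoted above can be verified directly from the customer and death counts:

```python
# John Snow cholera data: customers and deaths by water company
sv_customers, sv_deaths = 264_913, 3_702
l_customers, l_deaths = 171_363, 407

p_d_given_sv = sv_deaths / sv_customers    # P(D | S&V) ≈ .0140
p_d_given_l = l_deaths / l_customers       # P(D | L)   ≈ .0024
risk_ratio = p_d_given_sv / p_d_given_l    # ≈ 5.9, "almost 6 times higher"
```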

  7. John Snow London Cholera Death Study Contingency Table with joint probabilities (in body of table) and marginal probabilities (on edge of table)

  8. John Snow London Cholera Death Study
  Tree diagram: joint probabilities obtained by the multiplication rule, P(A and B) = P(A) P(B|A)
  • S&V branch, P(S&V) = .6072: P(D|S&V) = .0140 gives joint .0085; P(DC|S&V) = .9860 gives joint .5987
  • Lambeth branch, P(L) = .3928: P(D|L) = .0024 gives joint .0009; P(DC|L) = .9976 gives joint .3919

  9. Bayes’s Rule - Updating Probabilities
  • Let A1,…,Ak be a set of events that partition the sample space (mutually exclusive and exhaustive):
  • each event has known P(Ai) > 0 (each event can occur)
  • for any two events Ai and Aj, P(Ai and Aj) = 0 (events are disjoint)
  • P(A1) + … + P(Ak) = 1 (each outcome belongs to exactly one of the events)
  • If C is an event such that
  • 0 < P(C) < 1 (C can occur, but will not necessarily occur)
  • and we know the probability C will occur given each event Ai: P(C|Ai)
  • Then we can compute the probability of Ai given that C occurred:
  P(Ai|C) = P(C|Ai)P(Ai) / [P(C|A1)P(A1) + … + P(C|Ak)P(Ak)]
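Bayes' rule for a partition is a few lines of code; the two-event numbers at the end are an assumed illustration:

```python
def bayes_posteriors(priors, likelihoods):
    """Posterior P(A_i | C) from priors P(A_i) and likelihoods P(C | A_i),
    for events A_1, ..., A_k that partition the sample space."""
    joints = [p * l for p, l in zip(priors, likelihoods)]  # P(A_i and C)
    p_c = sum(joints)                                      # law of total probability
    return [j / p_c for j in joints]

# Illustrative two-event partition (assumed numbers):
post = bayes_posteriors([0.3, 0.7], [0.9, 0.2])
```

The posteriors always sum to 1, since the Ai are exhaustive and C must have occurred with one of them.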

  10. Northern Army at Gettysburg
  • Regiments: partition of soldiers (A1,…,A9). Casualty: event C
  • P(Ai) = (size of regiment) / (total soldiers) = (Col 3)/95,369
  • P(C|Ai) = (# casualties) / (regiment size) = (Col 4)/(Col 3)
  • P(C|Ai) P(Ai) = P(Ai and C) = (Col 5)×(Col 6)
  • P(C) = sum(Col 7)
  • P(Ai|C) = P(Ai and C) / P(C) = (Col 7)/.2416

  11. Example - OJ Simpson Trial
  • Given information on a blood test (T+/T-):
  • Sensitivity: P(T+|Guilty) = 1
  • Specificity: P(T-|Innocent) = .9957, so P(T+|I) = 1 - .9957 = .0043
  • Suppose you have a prior belief of guilt: P(G) = p*
  • What is the “posterior” probability of guilt after seeing evidence that the blood matches, P(G|T+)?
  Source: B. Forst (1996). “Evidence, Probabilities and Legal Standards for Determination of Guilt: Beyond the OJ Trial”, in Representing OJ: Murder, Criminal Justice, and the Mass Culture, ed. G. Barak, pp. 22-28. Harrow and Heston, Guilderland, NY.
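This is Bayes' rule with a two-event partition {Guilty, Innocent}. A sketch using the test characteristics quoted above (the prior of 0.5 at the end is an arbitrary illustration, not from the source):

```python
def posterior_guilt(prior, sens=1.0, p_pos_given_innocent=0.0043):
    """P(G | T+) via Bayes' rule: sensitivity * prior over the total
    probability of a positive test."""
    num = sens * prior
    return num / (num + p_pos_given_innocent * (1 - prior))

post = posterior_guilt(0.5)   # a 50/50 prior, for illustration; ≈ .9957
```

Even a modest prior yields a high posterior here, because false positives are rare (.0043).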

  12. Random Variables/Probability Distributions
  • Random Variable: Outcome characteristic that is not known prior to the experiment/observation
  • Qualitative Variables: Characteristics that are non-numeric (e.g. gender, race, religion, severity)
  • Quantitative Variables: Characteristics that are numeric (e.g. height, weight, distance)
  • Discrete: Takes on only a countable set of possible values
  • Continuous: Takes on values along a continuum
  • Probability Distribution: Numeric description of the outcomes a random variable takes on and their corresponding probabilities (discrete) or densities (continuous)

  13. Discrete Random Variables
  • Discrete RV: Can take on a finite (or countably infinite) set of possible outcomes
  • Probability Distribution: List of the values a random variable can take on and their corresponding probabilities
  • Individual probabilities must lie between 0 and 1
  • Probabilities sum to 1
  • Notation:
  • Random variable: Y
  • Values Y can take on: y1, y2, …, yk
  • Probabilities: P(Y=y1) = p1, …, P(Y=yk) = pk, with p1 + … + pk = 1
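The two requirements on a discrete probability distribution are easy to encode as a check (the helper name is my own):

```python
def is_valid_distribution(probs, tol=1e-9):
    """A list of probabilities is a valid discrete distribution if each
    p_i lies in [0, 1] and they sum to 1."""
    return all(0.0 <= p <= 1.0 for p in probs) and abs(sum(probs) - 1.0) < tol
```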

  14. Example: Wars Begun by Year (1482-1939)
  • Distribution of the number of wars started by year
  • Y = # of wars started in a randomly selected year
  • Levels: y1=0, y2=1, y3=2, y4=3, y5=4
  • Probability Distribution:

  15. Masters Golf Tournament 1st Round Scores

  16. Means and Variances of Random Variables
  • Mean: Long-run average a random variable will take on (also the balance point of the probability distribution)
  • Expected Value is another term, though we do not really expect that a realization of X will necessarily be close to its mean. Notation: E(X)
  • Mean and variance of a discrete random variable:
  μ = E(Y) = Σ yi pi   σ² = V(Y) = Σ (yi - μ)² pi
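The definitional sums μ = Σ yi pi and σ² = Σ (yi - μ)² pi translate directly to code; a fair six-sided die serves as a check (its mean 3.5 and variance 35/12 are standard results):

```python
def mean_and_variance(values, probs):
    """mu = sum(y_i * p_i); sigma^2 = sum((y_i - mu)^2 * p_i)."""
    mu = sum(y * p for y, p in zip(values, probs))
    var = sum((y - mu) ** 2 * p for y, p in zip(values, probs))
    return mu, var

# Fair six-sided die:
mu, var = mean_and_variance([1, 2, 3, 4, 5, 6], [1/6] * 6)
```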

  17. Rules for Means
  • Linear transformations: a + bY (where a and b are constants): E(a+bY) = μa+bY = a + bμY
  • Sums of random variables: X + Y (where X and Y are random variables): E(X+Y) = μX+Y = μX + μY
  • Linear functions of random variables: E(a1Y1 + … + anYn) = a1μ1 + … + anμn, where E(Yi) = μi

  18. Example: Masters Golf Tournament
  • Mean by round (note the ordering): μ1=73.54  μ2=73.07  μ3=73.76  μ4=73.91
  • Mean score per hole (18) for round 1: E((1/18)X1) = (1/18)μ1 = (1/18)(73.54) = 4.09
  • Mean score versus par (72) for round 1: E(X1-72) = μ1 - 72 = 73.54 - 72 = +1.54 (1.54 over par)
  • Mean difference (round 1 - round 4): E(X1-X4) = μ1 - μ4 = 73.54 - 73.91 = -0.37
  • Mean total score: E(X1+X2+X3+X4) = μ1 + μ2 + μ3 + μ4 = 73.54 + 73.07 + 73.76 + 73.91 = 294.28 (6.28 over par)

  19. Variance of a Random Variable
  • General rule: V(aX + bY) = a²σX² + b²σY² + 2abρσXσY, where ρ is the correlation between X and Y
  • Special cases:
  • X and Y independent (the outcome of one does not alter the distribution of the other): ρ = 0, so the last term drops out
  • a = b = 1 and ρ = 0: V(X+Y) = σX² + σY²
  • a = 1, b = -1 and ρ = 0: V(X-Y) = σX² + σY²
  • a = b = 1 and ρ ≠ 0: V(X+Y) = σX² + σY² + 2ρσXσY
  • a = 1, b = -1 and ρ ≠ 0: V(X-Y) = σX² + σY² - 2ρσXσY
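The rule V(aX + bY) = a²σX² + b²σY² + 2abρσXσY, from which each special case follows, can be sketched and checked against those cases (σX = 3, σY = 4 are assumed example values):

```python
def var_linear_combo(a, b, sd_x, sd_y, rho):
    """V(aX + bY) = a^2*sd_x^2 + b^2*sd_y^2 + 2*a*b*rho*sd_x*sd_y."""
    return a * a * sd_x**2 + b * b * sd_y**2 + 2 * a * b * rho * sd_x * sd_y
```

Note that with ρ = 0 the variances of X+Y and X-Y are identical: variances add even when the variables are subtracted.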

  20. Examples - Wars & Masters Golf
  • Wars begun by year: μ = 0.67
  • Masters round 1 scores: μ = 73.54

  21. Binomial Distribution for Sample Counts
  • A binomial “experiment”:
  • Consists of n trials or observations
  • Trials/observations are independent of one another
  • Each trial/observation can end in one of two possible outcomes, often labelled “Success” and “Failure”
  • The probability of success, p, is constant across trials/observations
  • The random variable, Y, is the number of successes observed in the n trials/observations
  • Binomial distributions: Family of distributions for Y, indexed by the success probability (p) and number of trials/observations (n). Notation: Y ~ B(n,p)

  22. Binomial Distributions and Sampling
  • Problem when sampling from a finite population: the probability of Success changes after observing earlier individuals (sampling without replacement)
  • When the population is much larger than the sample (say at least 20 times as large), the effect is minimal and we say Y is approximately binomial
  • Obtaining probabilities: P(Y=k) = [n!/(k!(n-k)!)] p^k (1-p)^(n-k), for k = 0, 1, …, n

  23. Example - Diagnostic Test
  • A test claims to have a sensitivity of 90% (among people with the condition, the probability of testing positive is .90)
  • 10 people who are known to have the condition are identified; Y is the number that correctly test positive
  • Table obtained in EXCEL with the function BINOMDIST(k, n, p, FALSE)
  • (the TRUE option gives the cumulative distribution function, P(Y ≤ k))
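The same table can be produced without Excel; a Python sketch of the binomial probability mass function and its cumulative form, applied to the n = 10, p = 0.9 sensitivity example:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(Y = k) for Y ~ B(n, p) (BINOMDIST with FALSE)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    """P(Y <= k), the cumulative form (BINOMDIST with TRUE)."""
    return sum(binom_pmf(j, n, p) for j in range(k + 1))

# Sensitivity example: n = 10 known positives, p = 0.9 per test
p_all_ten = binom_pmf(10, 10, 0.9)   # = 0.9**10 ≈ 0.349
```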

  24. Binomial Mean & Standard Deviation
  • Let Si = 1 if the ith individual was a success, 0 otherwise
  • Then P(Si=1) = p and P(Si=0) = 1-p
  • So E(Si) = μS = 1(p) + 0(1-p) = p
  • Note that Y = S1 + … + Sn and that the trials are independent
  • So E(Y) = μY = nμS = np
  • V(Si) = E(Si²) - μS² = p - p² = p(1-p)
  • So V(Y) = σY² = np(1-p)

  25. Continuous Random Variables
  • The variable can take on any value along a continuous range of numbers (an interval)
  • The probability distribution is described by a smooth density curve
  • Probabilities of ranges of values for Y correspond to areas under the density curve
  • The curve must lie on or above the horizontal axis
  • The total area under the curve is 1
  • Special case: normal distributions

  26. Normal Distribution
  • Bell-shaped, symmetric family of distributions
  • Classified by 2 parameters: mean (μ) and standard deviation (σ), representing location and spread
  • Random variables that are approximately normal have the following properties with respect to individual measurements:
  • Approximately half (50%) fall above (and below) the mean
  • Approximately 68% fall within 1 standard deviation of the mean
  • Approximately 95% fall within 2 standard deviations of the mean
  • Virtually all fall within 3 standard deviations of the mean
  • Notation when Y is normally distributed with mean μ and standard deviation σ: Y ~ N(μ, σ)
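The 68/95/"virtually all" percentages can be verified from the standard normal CDF, which the math module's error function gives directly (Φ(z) = (1 + erf(z/√2))/2 is a standard identity):

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

within_1 = normal_cdf(1) - normal_cdf(-1)   # ≈ 0.6827
within_2 = normal_cdf(2) - normal_cdf(-2)   # ≈ 0.9545
within_3 = normal_cdf(3) - normal_cdf(-3)   # ≈ 0.9973
```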

  27. Two Normal Distributions

  28. Normal Distribution

  29. Example - Heights of U.S. Adults
  • Female and male adult heights are well approximated by normal distributions: YF ~ N(63.7, 2.5), YM ~ N(69.1, 2.6)
  Source: Statistical Abstract of the U.S. (1992)

  30. Standard Normal (Z) Distribution
  • Problem: Unlimited number of possible normal distributions (-∞ < μ < ∞, σ > 0)
  • Solution: Standardize the random variable to have mean 0 and standard deviation 1: Z = (Y - μ)/σ ~ N(0, 1)
  • Probabilities of certain ranges of values and specific percentiles of interest can be obtained through the standard normal (Z) distribution

  31. Standard Normal (Z) Distribution (figure: the table gives the area below z; the area above z is 1 - table area)

  32. Z-table, left-hand page (rows: integer part & 1st decimal of z; columns: 2nd decimal place)

  33. Z-table, right-hand page (rows: integer part & 1st decimal of z; columns: 2nd decimal place)

  34. Finding Probabilities of Specific Ranges
  • Step 1 - Identify the normal distribution of interest (e.g. its mean (μ) and standard deviation (σ))
  • Step 2 - Identify the range of values whose probability you wish to determine (yL, yU), where often the upper or lower bound is ∞ or -∞
  • Step 3 - Transform yL and yU into Z-values: zL = (yL - μ)/σ, zU = (yU - μ)/σ
  • Step 4 - Obtain P(zL ≤ Z ≤ zU) from the Z-table

  35. Example - Adult Female Heights
  • What is the probability a randomly selected female is 5’10” or taller (70 inches)?
  • Step 1 - Y ~ N(63.7, 2.5)
  • Step 2 - yL = 70.0, yU = ∞
  • Step 3 - zL = (70.0 - 63.7)/2.5 = 2.52
  • Step 4 - P(Y ≥ 70) = P(Z ≥ 2.52) = 1 - P(Z ≤ 2.52) = 1 - .9941 = .0059 (≈ 1/170)
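The four steps above can be reproduced with the error-function form of the normal CDF, replacing the Z-table lookup:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1 + erf(z / sqrt(2)))

z = (70.0 - 63.7) / 2.5          # Step 3: z = 2.52
p_70_or_taller = 1 - normal_cdf(z)   # Step 4: ≈ .0059
```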

  36. Finding Percentiles of a Distribution
  • Step 1 - Identify the normal distribution of interest (e.g. its mean (μ) and standard deviation (σ))
  • Step 2 - Determine the percentile of interest, 100p% (e.g. the 90th percentile is the cut-off where 90% of scores are below and 10% are above)
  • Step 3 - Find p in the body of the z-table and its corresponding z-value (zp) on the outer edge:
  • If 100p < 50, use the left-hand page of the table
  • If 100p ≥ 50, use the right-hand page of the table
  • Step 4 - Transform zp back to the original units: yp = μ + zp σ

  37. Example - Adult Male Heights
  • Above what height do the tallest 5% of males lie?
  • Step 1 - Y ~ N(69.1, 2.6)
  • Step 2 - We want the 95th percentile (p = .95)
  • Step 3 - P(Z ≤ 1.645) = .95, so z.95 = 1.645
  • Step 4 - y.95 = 69.1 + (1.645)(2.6) = 73.4 inches (6’ 1.4”)
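The reverse table lookup of Step 3 can be sketched as a bisection over the normal CDF (an illustrative inverse-CDF routine, not a library-grade one):

```python
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def normal_ppf(p, lo=-8.0, hi=8.0):
    """Invert the standard normal CDF by bisection (the CDF is
    strictly increasing, so bisection converges)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

z95 = normal_ppf(0.95)       # Step 3: ≈ 1.645
height = 69.1 + z95 * 2.6    # Step 4: ≈ 73.4 inches
```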

  38. Assessing Normality and Transformations
  • Obtain a histogram and see if it is mound-shaped
  • Obtain a normal probability plot:
  • Order the data from smallest to largest and rank them (1 to n)
  • Obtain a percentile for each: pct = (rank - 0.375)/(n + 0.25)
  • Obtain the z-score corresponding to each percentile
  • Plot the observed data versus the z-scores and see if the points fall (approximately) on a straight line
  • Transformations that can achieve approximate normality: commonly the logarithm, square root, or reciprocal of the original variable
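The plotting positions for a normal probability plot follow directly from the pct = (rank - 0.375)/(n + 0.25) formula above; a sketch that pairs each ordered observation with its z-score (the five-value sample is assumed for illustration):

```python
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def normal_ppf(p, lo=-8.0, hi=8.0):
    """Invert the standard normal CDF by bisection."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def normal_plot_points(data):
    """(z-score, ordered value) pairs using pct = (rank - .375)/(n + .25)."""
    n = len(data)
    pairs = []
    for rank, y in enumerate(sorted(data), start=1):
        pct = (rank - 0.375) / (n + 0.25)
        pairs.append((normal_ppf(pct), y))
    return pairs

points = normal_plot_points([5.1, 4.8, 5.6, 4.9, 5.3])  # assumed sample
```

Plotting the second coordinate against the first and eyeballing for a straight line completes the check.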

  39. Sampling Distributions
  • Distribution of a Sample Statistic: The probability distribution of a sample statistic obtained from a random sample or a randomized experiment
  • What values can a sample mean (or proportion) take on, and how likely are ranges of values?
  • Population Distribution: Set of values of a variable for a population of individuals. Conceptually equivalent to a probability distribution in the sense of selecting an individual at random and observing their value of the variable of interest

  40. Sampling Distribution of a Sample Mean
  • Obtain a sample of n independent measurements of a quantitative variable, Y1,…,Yn, from a population with mean μ and standard deviation σ
  • Averages will be less variable than the individual measurements
  • The sampling distribution of the average becomes more like a normal distribution as n increases (regardless of the shape of the population of individual measurements)

  41. Central Limit Theorem
  • When random samples of size n are selected from any population with mean μ and finite standard deviation σ, the sampling distribution of the sample mean will, for large n, be approximately normal: Ȳ ~ N(μ, σ/√n)
  • The Z-table can be used to approximate probabilities of ranges of values for sample means, as well as percentiles of their sampling distribution
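A short simulation makes the σ/√n behavior visible. Here the population is Uniform(0, 1), with mean 0.5 and standard deviation √(1/12) ≈ 0.2887, so means of samples of size 30 should cluster around 0.5 with spread ≈ 0.2887/√30 ≈ 0.0527 (the sample sizes and seed are arbitrary choices):

```python
import random
import statistics

rng = random.Random(7)
n, reps = 30, 4000

# Draw many samples of size n from Uniform(0, 1) and record each mean
sample_means = [statistics.fmean(rng.random() for _ in range(n))
                for _ in range(reps)]

center = statistics.fmean(sample_means)   # ≈ 0.5 (the population mean)
spread = statistics.stdev(sample_means)   # ≈ sqrt(1/12) / sqrt(30) ≈ 0.0527
```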

  42. Sample Proportions
  • Counts of successes (Y) are rarely reported directly, due to their dependence on the sample size (n)
  • More common is to report the sample proportion of successes: p̂ = Y/n

  43. Sampling Distributions for Counts & Proportions
  • For samples of size n, counts (and thus proportions) can take on only n + 1 distinct possible outcomes (0, 1, …, n)
  • As the sample size n gets large, so does the number of possible values, and the sampling distribution begins to approximate a normal distribution. Common rule of thumb: np ≥ 10 and n(1-p) ≥ 10 to use the normal approximation

  44. Sampling Distribution for Y~B(n=1000,p=0.2)

  45. Using the Z-Table for Approximate Probabilities
  • To find the probabilities of certain ranges of counts or proportions, we can use the fact that sample counts and proportions are approximately normally distributed for large sample sizes:
  • Define the range of interest
  • Obtain the mean of the sampling distribution
  • Obtain the standard deviation of the sampling distribution
  • Transform the range of interest into a range of Z-values
  • Obtain the (approximate) probabilities from the Z-table
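The steps above can be sketched for the Y ~ B(n=1000, p=0.2) case from slide 44, where the rule of thumb (np = 200, n(1-p) = 800, both ≥ 10) clearly holds (the cutoff of 220 is an assumed example value):

```python
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

# Approximate P(Y <= 220) for Y ~ B(n = 1000, p = 0.2)
n, p = 1000, 0.2
mu = n * p                       # mean of the sampling distribution: 200
sigma = sqrt(n * p * (1 - p))    # standard deviation: ≈ 12.65
z = (220 - mu) / sigma           # transform the cutoff to a Z-value: ≈ 1.58
approx = normal_cdf(z)           # ≈ 0.94
```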
