Probability
• classical approach
• P(event E) = Ne/N, where N = total number of possible outcomes and Ne = number of outcomes in event E
• assumes equally likely outcomes
• examples: various coin-tossing experiments…
• relative frequency approach
• this is the empirical approach: do the experiment, count the number of times the event occurs, and divide by the total number of times the experiment is done; that ratio is the approximate value of the probability of the event
• use R to do the experiment “toss a penny and a dime” (see the follow-up sketch below):
penny=sample(c("H","T"),size=100,replace=T)
dime=sample(c("H","T"),size=100,replace=T)
expt=paste(penny,dime,sep=""); table(expt)
• subjective approach
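To connect the simulated experiment back to the classical answer, here is a short follow-up sketch; it assumes the penny, dime, and expt vectors from the code above have already been created, and the relative frequency of “HH” in 100 tosses should land near the classical value 1/4:
table(expt)/length(expt)    #relative frequencies of the outcomes HH, HT, TH, TT
mean(expt=="HH")            #empirical estimate of P(both coins land Heads)
1/4                         #classical answer, assuming equally likely outcomes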
Rules for events A, B
• 0 ≤ P(A) ≤ 1, for every event A
• P(not A) = 1 − P(A)
• P(A or B) = P(A) + P(B), if A, B are mutually exclusive
• P(A and B) = P(A)P(B), if A, B are independent
Random variables: Discrete
• # of possible values is countable
• the distribution is a list of the values along with their corresponding probabilities (see the sketch after this list for example (i))
• examples: (i) # of Heads in 3 tosses of a fair coin (ii) # of voters out of 25 randomly sampled who approve of the job President Bush is doing (iii) # of health workers testing positive for TB in an office of 50 workers
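As a quick illustration of example (i), here is a minimal R sketch (the names tosses and x are just illustrative) that lists the 8 equally likely outcomes of 3 tosses and tabulates the distribution of the number of Heads:
#all 2^3 = 8 equally likely outcomes of tossing a fair coin 3 times
tosses=expand.grid(t1=c("H","T"),t2=c("H","T"),t3=c("H","T"),stringsAsFactors=FALSE)
x=rowSums(tosses=="H")      #X = number of Heads in each outcome
table(x)/nrow(tosses)       #the distribution: P(X=0), P(X=1), P(X=2), P(X=3)
The probabilities come out 1/8, 3/8, 3/8, 1/8.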
Random variables: Continuous
• possible values fall over an interval of numbers
• examples: (i) measurements of lengths, weights, volumes, times, etc.; (ii) scores on exams
• the distribution is a density curve
• always lies above the horizontal axis
• total area under the curve is 1
• areas correspond to relative frequencies and probabilities (see the sketch below)
• examples: (i) Normal (ii) t (iii) Chi-square (iv) F (see Figure 4.8 on p. 140)
• Now we’ll look at an important example of each of these types of random variables: the binomial is discrete and the normal is continuous.
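A small sketch of the “area = probability” idea for the standard Normal curve, using base R functions (the cutoff 1.5 is just an arbitrary example):
curve(dnorm(x),from=-4,to=4)         #draw the standard Normal density curve
integrate(dnorm,-Inf,Inf)$value      #total area under the curve: essentially 1
pnorm(1.5)                           #area to the left of 1.5 = P(Z < 1.5)
1-pnorm(1.5)                         #area to the right of 1.5 = P(Z > 1.5)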
Binomial R.V.
• arises as a count of “successes” in n independent trials, where each trial has only two possible outcomes
• X = # of Heads in 10 tosses of a fair coin
• Y = # of seeds in a pack of 25 that germinate
• Z = # of O+ donors waiting in a line of 14 people at the Red Cross Blood Center
• Knowledge of the distribution of X (or Y or Z) lets us answer questions about how likely its various values are (see the sketch below).
• One of the best-known examples of a binomial experiment is the political poll: can we estimate the percentage of adults in the U.S. who have a favorable rating of the job President Bush is doing?
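For the first example, a minimal sketch, assuming the coin is fair so the success probability is 0.5:
dbinom(5,size=10,prob=.5)       #P(X = 5 Heads in 10 tosses)
pbinom(3,size=10,prob=.5)       #P(X <= 3)
plot(0:10,dbinom(0:10,size=10,prob=.5),type="h")   #the whole B(10, .5) distribution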
Check out the file of approval ratings of the President since he took office, based on polls taken by the Pew Research Center:
bush=read.csv(file=file.choose(),header=T)
bush[1:5,]   #note the variable named "approve"
attach(bush); approve[1:100]
plot(approve[100:1],type="b",ylim=c(0,100))
#plot the vector backwards since the file runs from most recent to earliest in time…
• This poll gives percent approval - the binomial is the number of approvals out of n, where n is the size of the sample of adults taken. In most national polls, n is around 1000. Look at smaller values of n to develop the binomial distribution (see the rough sketch below)…
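To see the binomial hiding in the percentages, here is a rough sketch; the sample size n = 1000 is only an assumption (actual Pew sample sizes vary from poll to poll), and it reuses the approve vector attached above:
n=1000                        #assumed number of adults sampled in one poll
x=round(approve[1]/100*n)     #approximate binomial count of approvals in the latest poll
x/n                           #back to the sample proportion of approvals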
Try Example 4.7 on page 132… Can you tell what n and π are? Let’s use R to compute these probabilities… try
help.start()   #and then search for Binomial
#the binom function has several types… try them all
#start with this one…
plot(0:20,dbinom(0:20,size=20,prob=.85))
#to see the B(20, .85) distribution - notice the skewness to the left!
The mean and standard deviation of a B(n, π) r.v. are given by the formulas: mean = nπ and standard deviation = sqrt(nπ(1 − π)).
Use R to do some simulation of the binomial variable… try rbinom (see the sketch below).
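A minimal simulation sketch for the B(20, π = .85) example (the number of replications, 10000, is arbitrary):
sims=rbinom(10000,size=20,prob=.85)   #10000 simulated B(20, .85) counts
mean(sims); sd(sims)                  #compare to n*pi = 17 and sqrt(n*pi*(1-pi)) ≈ 1.6
table(sims)/length(sims)              #simulated distribution - skewed to the left
20*.85; sqrt(20*.85*.15)              #the formula values for the mean and sd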
HW (discrete/binomial): #4.5, 4.6, 4.10, 4.16, 4.22-23 (use R), 4.28 (use R)
• HW (continuous/normal): #4.33-44, 4.46, 4.48