Test of Significance Vikash R Keshri Moderator: Mr. M. S. Bharambe
Outline • Introduction: • Important terminologies. • Tests of Significance: • Z test. • t test. • F test. • Chi-square test. • Fisher’s Exact test. • Significance test for correlation coefficients. • One-Way Analysis of Variance (ANOVA). • Conclusion.
Introduction: • All scientific work looks for answers to the following questions: • How probable is it that the difference between the observed and expected results arose by chance alone? • Is the difference statistically significant?
Important Terminologies: • Population & Sample: A population is any collection of elements, i.e. individuals, items, observations, etc. A sample is a part or subset of the population; the basic problem with a sample is generalization to the population. • Parameter & Statistic: A parameter is a constant describing a population. A statistic is a quantity describing the sample, i.e. a function of the observations.
Sampling Distribution: • The distribution of the values of a statistic arising from all possible samples is called the sampling distribution.
Standard Error (SE): • The standard deviation of the sampling distribution is called the standard error. It estimates how far the estimated value is likely to be from the true value.
Confidence Limits: Confidence limits define the range within which the possible sample means will lie. • Population mean ± 1 SE covers 68.27% of sample mean values. • Population mean ± 1.96 SE covers 95.0% of sample mean values. • Population mean ± 2.58 SE covers 99% of sample mean values. • Population mean ± 3.29 SE covers 99.9% of sample mean values. • The interval itself is the confidence interval.
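A minimal Python sketch (not from the original slides) of how these multipliers turn into confidence limits around a sample mean; the mean, SD and n below are made-up illustrative values.

```python
import math
from scipy import stats

# Hypothetical summary data (illustrative only)
mean, sd, n = 20.0, 1.25, 100
se = sd / math.sqrt(n)                    # standard error of the mean

for conf in (0.6827, 0.95, 0.99, 0.999):
    z = stats.norm.ppf(0.5 + conf / 2)    # 1.00, 1.96, 2.58, 3.29
    print(f"{conf:.4f}: mean ± {z:.2f} SE = "
          f"({mean - z * se:.3f}, {mean + z * se:.3f})")
```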
Hypothesis: A statistical hypothesis is a statement about a parameter (the form of the population), e.g. x1 = x2, x = µ, p1 = p2 or p = P. • Null Hypothesis (H0): the hypothesis of no difference between the two outcome variables. • Alternative Hypothesis (H1): there is a difference between the two variables under study. • Hypotheses are always about parameters of populations, never about statistics from samples. • Test of Significance: testing the null hypothesis.
Parametric vs. Non-parametric tests: • Parametric tests: based on the assumption that the data follow a normal distribution (or a member of the normal family); they estimate the parameters of the underlying normal distribution, and the significance of the difference is known. • Non-parametric tests: used when the variable under study does not follow a normal distribution (or any distribution of the normal family); association can still be estimated.
P – Value: • The P value measures the degree of evidence against the null hypothesis. • P values derived from statistical tests depend on the size and direction of the effect. • P < 0.05 = significant, corresponding to 1.96 SE and a 95% confidence interval. • P < 0.01 or P < 0.001 = highly significant, corresponding to 99% and 99.9% confidence intervals. • A non-significant departure does not provide positive evidence in favour of the null hypothesis. • P values depend on sample size. • If P > α, calculate the power: • If power < 80%, the difference could not be detected; repeat the study with a larger sample size. • If power ≥ 80%, the difference between groups is not statistically significant.
One-Sided (One-tailed) vs. Two-Sided (Two-tailed): • Two-sided test: a sufficiently large departure from the null hypothesis in either direction is judged significant. • One-sided test: used when we are interested in a departure in only one particular direction. • A one-sided test at level P is the same as a two-sided test at level 2P. • Example: a test comparing the population means of two groups, A and B. • Alternative hypothesis mean of A > mean of B: one-tailed test. • Alternative hypothesis mean of A ≠ mean of B (either could be larger): two-tailed test.
STEPS: • Define the research question. • Null hypothesis (H0): there is no difference between the groups. • Alternative hypothesis (H1): there is some difference between the groups. • Select the appropriate test. • Calculate the test criterion (c). • Decide the acceptable level of significance (α), usually 0.05 (5%). • Compare the test criterion with its theoretical value at α. • Accept the null hypothesis or the alternative hypothesis. • Draw the inference.
Common concerns: • Sample mean and population mean. • Two or more sample means. • Sample proportion (percentage) vs. population proportion (percentage). • Two or more sample proportions (percentages). • Sample correlation coefficient vs. population correlation coefficient. • Two sample correlation coefficients.
Why tests of significance? • We test a SAMPLE and comment on the POPULATION. • Are two different SAMPLES (group means) from the same POPULATION or from different POPULATIONS? • Is the difference obtained TRUE, or due to chance alone? • Would another set of samples also differ? • Significance testing deals with the answers to these questions.
Standard Normal Deviate (Z) test • Assumptions: • Samples are selected randomly. • Quantitative data. • The variable follows a normal distribution in the population. • Sufficiently large sample size.
The steps: • Identify the problem and the question to be answered. • State the null hypothesis (H0) and the alternative hypothesis (H1). • Calculate the standard error. • Calculate the critical ratio. • Fix the critical level of significance (α). • Compare the calculated critical ratio with its theoretical value. • Draw the inference.
Comparison of the Means of Two Samples: • Zc = (x̄1 − x̄2) / SE(x̄1 − x̄2). • SE(x̄1 − x̄2) = √(SE1² + SE2²) • SE(x̄1 − x̄2) = [SD1²/n1 + SD2²/n2]½ • Example: compare the arm circumference of Indian and American children from the given data and draw an inference.
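A short sketch of this two-sample Z test in Python; the arm-circumference figures themselves are not reproduced on the slide, so the summary statistics below are placeholders.

```python
import math
from scipy import stats

def two_sample_z(mean1, sd1, n1, mean2, sd2, n2):
    """Z test for the difference between two independent sample means."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    z = (mean1 - mean2) / se
    p = 2 * stats.norm.sf(abs(z))          # two-sided P value
    return z, p

# Placeholder summary statistics (the slide's actual data are not shown)
z, p = two_sample_z(mean1=14.2, sd1=1.4, n1=120, mean2=15.1, sd2=1.6, n2=150)
print(f"Z = {z:.2f}, two-sided P = {p:.4f}")
```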
Interpreting the Z value (area under the curve): • Z0.05 = 1.96 • Z0.01 = 2.58 • Z0.001 = 3.29 • If the calculated Z value (Zc) > Z0.05, Z0.01 or Z0.001, the null hypothesis is rejected and the alternative hypothesis is accepted at that level.
Comparing a Sample Mean with the Population Mean: • Z = (difference between sample and population mean) / SE of the sample mean. • SE of the sample mean = sample standard deviation / √n. • Example: the weight of the population follows a normal distribution. Does the mean weight of 17.8 kg in 100 children, with a standard deviation of 1.25 kg, differ from the population mean weight of 20 kg?
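The slide's example worked through in Python as a one-sample Z test (a minimal sketch using the figures given above).

```python
import math
from scipy import stats

# Slide example: sample mean 17.8 kg, SD 1.25 kg, n = 100, population mean 20 kg
x_bar, sd, n, mu = 17.8, 1.25, 100, 20.0
se = sd / math.sqrt(n)                 # SE of the sample mean = 0.125
z = (x_bar - mu) / se                  # ≈ -17.6
p = 2 * stats.norm.sf(abs(z))
print(f"Z = {z:.1f}, two-sided P = {p:.3g}")   # |Z| >> 1.96, so H0 is rejected
```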
Difference between Two Sample Proportions: • Z = (difference in proportions) / SE(difference in proportions) • Z = (p1 − p2) / [PQ (1/n1 + 1/n2)]½ • Here p1 = proportion in sample 1, p2 = proportion in sample 2, • P = (p1n1 + p2n2) / (n1 + n2) and Q = 1 − P. • Example: the given table provides the prevalence of overweight and obesity in India and the USA. Can we conclude that the prevalence of overweight and obesity is the same in the two countries?
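A sketch of the two-proportion Z test with a pooled proportion; the prevalence table on the slide is not reproduced here, so the counts below are placeholders.

```python
import math
from scipy import stats

def two_proportion_z(x1, n1, x2, n2):
    """Z test for the difference between two sample proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    P = (x1 + x2) / (n1 + n2)        # pooled P = (p1*n1 + p2*n2) / (n1 + n2)
    Q = 1 - P
    se = math.sqrt(P * Q * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * stats.norm.sf(abs(z))

# Placeholder counts (the slide's prevalence data are not shown)
z, p = two_proportion_z(x1=220, n1=1000, x2=340, n2=1000)
print(f"Z = {z:.2f}, P = {p:.4f}")
```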
Comparison of a Sample Proportion with the Population Proportion: • Zc = (difference between sample proportion and population proportion) / SE of that difference. • Zc = (p − P) / [PQ/n]½ • p = sample proportion, P = population proportion, Q = 1 − P, n = sample size. • Example: in a school health survey, the prevalence of nutritional dwarfism among school-age children in class 10 was 18.3%, with a sample size of 250. Is this consistent with 20% of school-age children being nutritional dwarfs?
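The slide's example as a one-proportion Z test in Python (a minimal sketch using the figures above).

```python
import math
from scipy import stats

# Slide example: observed prevalence 18.3% in n = 250, hypothesised value 20%
p_hat, P, n = 0.183, 0.20, 250
Q = 1 - P
se = math.sqrt(P * Q / n)              # SE under H0 ≈ 0.0253
z = (p_hat - P) / se                   # ≈ -0.67
p_value = 2 * stats.norm.sf(abs(z))
print(f"Z = {z:.2f}, P = {p_value:.3f}")   # |Z| < 1.96: consistent with 20%
```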
Variance Ratio Test (F test): • Developed by Fisher and Snedecor. • Compares the variance of two groups (samples). • Involves the F distribution. • Applied when the variances SD1² and SD2² of the two samples are known. • If SD1² > SD2², then SD1²/SD2² follows the F distribution with n1 − 1 and n2 − 1 degrees of freedom. • F = SD1² / SD2² • Example: the variance of height (SD1²) in 25 adult males is 5.0; the variance (SD2²) in 25 adult females is 9.0. Can we conclude that the variance in height is the same in male and female adults?
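A sketch of this variance-ratio test in Python, using the slide's figures and the F distribution from scipy.

```python
from scipy import stats

# Slide example: variance of height 5.0 in 25 males, 9.0 in 25 females
var_m, n_m = 5.0, 25
var_f, n_f = 9.0, 25

# Larger variance (females) in the numerator so that F >= 1
F = var_f / var_m                      # 9.0 / 5.0 = 1.8
df1, df2 = n_f - 1, n_m - 1            # 24 and 24
p_one_sided = stats.f.sf(F, df1, df2)
print(f"F = {F:.2f} on ({df1}, {df2}) df, one-sided P = {p_one_sided:.3f}")
```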
t – test: • Introduced by Prof. W. S. Gosset (under the pen name "Student"). • Difference between the normal and t distributions: • Very small samples do not follow the normal distribution; they follow the t distribution. • Both are bell-shaped and symmetrical, but the t distribution has heavier tails.
Prerequisites: Unpaired data: • Sample size is small (usually < 30). • Population variance is not known. • Two separate groups of samples drawn from two separate population groups. • These two groups may also be cases and controls. Paired data: • Applied only when each individual gives a pair of observations, e.g. a study of the accuracy of two instruments, or the weight of one individual on two different occasions.
Assumptions: • Samples are randomly selected. • Quantitative data. • The variable under study follows a distribution of the normal family. • Sample variances are approximately equal in the two groups. • Sample size is small (usually < 30).
Unpaired t test: • Compares the means of two independent samples. • Example: • Mean birth weight with standard deviation is given by socio-economic status. • The samples are small and randomly selected, and the variances are approximately equal, so the t test can be applied.
Steps: • State the null hypothesis (H0): x̄1 = x̄2. • Alternative hypothesis (H1): H0 is not true. • Test criterion: t = (mean difference between the two samples) / SE(mean difference) • t = (x̄1 − x̄2) / SE(x̄1 − x̄2). • SE(x̄1 − x̄2) = SD [1/n1 + 1/n2]½, where the pooled SD = {[(n1 − 1)SD1² + (n2 − 1)SD2²] / (n1 + n2 − 2)}½ • Calculate df = (n1 − 1) + (n2 − 1) = n1 + n2 − 2. • Compare the calculated t value with the table values t0.05, t0.01, t0.001 at n1 + n2 − 2 df. • Inference: if the calculated value is greater than or equal to the theoretical value, the null hypothesis is rejected.
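These steps, implemented from summary statistics with a pooled SD; the birth-weight table on the slide is not reproduced, so the numbers below are placeholders.

```python
import math
from scipy import stats

def unpaired_t(mean1, sd1, n1, mean2, sd2, n2):
    """Unpaired t test from summary statistics, using a pooled SD."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    t = (mean1 - mean2) / se
    df = n1 + n2 - 2
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

# Placeholder birth-weight summaries (the slide's table is not shown)
t, df, p = unpaired_t(mean1=2.8, sd1=0.4, n1=15, mean2=3.1, sd2=0.45, n2=14)
print(f"t = {t:.2f} on {df} df, P = {p:.3f}")
```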
Difference between a sample mean and the population mean: • t = (x̄ − µ) / SE • t = (x̄ − µ) / (SD/√n) • Degrees of freedom: n − 1. • Example: the mean Hb level of 25 school children is 10.6 g% with an SD of 1.15 g/dl. Is it significantly different from the reference mean of 11.0 g%?
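The haemoglobin example as a one-sample t test in Python (a minimal sketch using the figures above).

```python
import math
from scipy import stats

# Slide example: mean Hb 10.6 g% (SD 1.15) in 25 children vs reference 11.0 g%
x_bar, sd, n, mu = 10.6, 1.15, 25, 11.0
se = sd / math.sqrt(n)               # 0.23
t = (x_bar - mu) / se                # ≈ -1.74
df = n - 1
p = 2 * stats.t.sf(abs(t), df)
print(f"t = {t:.2f} on {df} df, P = {p:.3f}")   # P > 0.05: not significant
```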
For the difference between two small-sample proportions: • t = (p1 − p2) / [PQ (1/n1 + 1/n2)]½ • P = (p1n1 + p2n2) / (n1 + n2), Q = 1 − P • df = n1 + n2 − 2. • Example: the proportion of infants with frequent diarrhoea is given by type of feeding. Is there a significant difference in the incidence of frequent diarrhoea between exclusively breastfed (EBF) and non-EBF babies?
Paired t test: • Prerequisites: • Each individual provides a pair of results. • The pairs of results are correlated. • t = (mean d − 0) / SE(d) = mean d / (SD/√n) • SE = SD/√n = [SD²/n]½ • SD² = Σ(d − mean d)² / (n − 1) • Σ(d − mean d)² = Σd² − (Σd)²/n
Example: The fat fold at the triceps was recorded in 12 children before the commencement and at the end of a feeding programme. Is there any significant change in the triceps fat fold at the end of the programme?
t = (mean d − 0)/SE(d) = mean d / (SD/√n) • Σ(d − mean d)² = Σd² − (Σd)²/n = 27 − 81/12 = 27 − 6.75 = 20.25 • SD² = 20.25/11 = 1.84 • SE = [SD²/n]½ = [1.84/12]½ = [0.1533]½ = 0.3917 • t = 0.75/0.3917 = 1.92 • df = n − 1 = 11 • The calculated t value is < t0.05 at 11 df, so the difference is not statistically significant.
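A paired t test sketch in Python. The 12 individual readings are not shown on the slide, so the differences below are hypothetical values chosen only to reproduce the slide's Σd = 9 and Σd² = 27.

```python
import numpy as np
from scipy import stats

# Hypothetical before/after differences matching sum(d) = 9, sum(d**2) = 27
d = np.array([2, 2, 2, 2, 2, 2, -1, -1, -1, 0, 0, 0], dtype=float)

# A paired t test is a one-sample t test on the differences against 0
t, p = stats.ttest_1samp(d, 0.0)
print(f"mean d = {d.mean():.2f}, t = {t:.2f} on {len(d) - 1} df, P = {p:.3f}")
# t ≈ 1.92 < t0.05 at 11 df (≈ 2.20): the change is not statistically significant
```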
Chi-square (χ²) test: Underlying theory: if the two variables are not associated, the observed and expected frequencies should be close to each other, and any discrepancy should be due to chance alone. • Non-parametric test. • Tests statistical significance in bivariate tabular analysis. • Evaluates differences between experimental (observed) data and expected (hypothetical) data.
χ² Assumptions: 1. Quantitative data. 2. One or more categories. 3. Independent observations. 4. Adequate sample size. 5. Simple random sample. 6. Data in frequency form.
Contingency table: • A frequency table in which the sample is classified according to two different attributes. • A contingency table may be a 2 × 2 table or an r × c table. • Marginal totals = (a + b), (a + c), (c + d), (b + d). • Grand total N = a + b + c + d. • Expected value E = (R × C) / N, where R = row total, C = column total and N = grand total.
Calculation: χ² = Σ (O − E)² / E • Degrees of freedom: df = (r − 1)(c − 1) • For a 2 × 2 table: χ² = (ad − bc)² N / [(a + b)(c + d)(a + c)(b + d)], with 1 df.
In the given example the expected values are: Ea = 10 × 100/200 = 5, Eb = 10 × 100/200 = 5, Ec = 190 × 100/200 = 95, Ed = 190 × 100/200 = 95; in every cell |O − E| = 1, so (O − E)² = 1. • χ² = Σ(O − E)²/E = 1/5 + 1/5 + 1/95 + 1/95 ≈ 0.42 at 1 df. • The calculated χ² is less than the table value at 0.05 for 1 df (3.84), so the difference is not statistically significant.
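The same calculation in Python. The observed counts are not printed on the slide, so the table below is a reconstruction consistent with the expected values (5, 5, 95, 95) and |O − E| = 1 shown above.

```python
import numpy as np
from scipy import stats

# Reconstructed 2x2 table: row totals 10 and 190, column totals 100 and 100
observed = np.array([[6, 4],
                     [94, 96]])

chi2, p, dof, expected = stats.chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.2f} on {dof} df, P = {p:.3f}")
print("expected:\n", expected)
# chi-square ≈ 0.42 < 3.84 (the 0.05 critical value for 1 df): not significant.
# Passing correction=True applies Yates's continuity correction (next slide).
```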
Yates's continuity correction: • Described by F. Yates. • When the counts in a 2 × 2 table are fairly small, a correction for continuity is required. • There is no precise rule for when the Yates correction must be applied; generally it is applied if the grand total is < 100 or an expected value is < 5 in any cell. • χ² = [|ad − bc| − N/2]² N / [(a + b)(c + d)(a + c)(b + d)]
Exact Probability Test or Fisher's Exact Test: Cochran's criteria (recommended by W. G. Cochran in 1954): • Fisher's exact test should be used if: • n < 20, or • 20 < n < 40 and the smallest expected value is less than 5. • For contingency tables with more than 1 df, the criterion is an expected value < 5 in more than 20% of the cells. • What if the observed value is 0 in one cell? The chi-square test can still be applied if the expected-value criteria above are met.
Fisher's Exact Test……. • Devised by Fisher, Yates and Irwin. • Example: survival after two different types of treatment. • Is the difference in survival statistically significant? • The number of tables possible with the given marginal totals is 4 (= lowest marginal total + 1).
Exact probability for each table: P = (a+b)! (c+d)! (a+c)! (b+d)! / (N! a! b! c! d!) • The P values for the four tables are 0.071, 0.429, 0.429 and 0.071. • Table 2 is the observed table. • Final P value: • Conventional approach: P = P of the observed table + more extreme value = 0.429 + 0.071 = 0.5. • Mid-P approach (Armitage and Berry): P = 0.5 × observed P + extreme value = 0.2145 + 0.071 = 0.286. • The exact probability is essentially one-sided; for a two-sided test, double the P value.
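A sketch of Fisher's exact test in Python; the slide's survival counts are not reproduced, so the 2 × 2 table below is hypothetical (note that scipy's two-sided P is computed by summing all tables at least as extreme, not by doubling).

```python
from scipy import stats

# Hypothetical small 2x2 survival table (illustrative only):
#                 survived  died
# treatment A         3       0
# treatment B         2       4
table = [[3, 0],
         [2, 4]]

odds_ratio, p_two_sided = stats.fisher_exact(table, alternative="two-sided")
_, p_one_sided = stats.fisher_exact(table, alternative="greater")
print(f"two-sided P = {p_two_sided:.3f}, one-sided P = {p_one_sided:.3f}")
```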
Significance test for Correlation Coefficients: • Compares a sample correlation coefficient (r) with a population correlation coefficient of 0. • Is the sample correlation coefficient r drawn from a population with correlation coefficient 0? • Valid if at least one variable follows a normal distribution. • Null hypothesis H0: ρ = 0 (the population correlation coefficient is zero). • SE(r) = [(1 − r²)/(n − 2)]½ • For small samples: t = (r − 0)/SE(r) = r/SE(r) with n − 2 df.
Example: • The correlation coefficient between calorie and protein intake in adults is 0.8652, with a sample size of 12. Is this r value statistically significant? • SE(r) = [(1 − 0.8652²)/10]½ = 0.1585 • t = r/SE(r) = 0.8652/0.1585 = 5.458 • df = n − 2 = 10 • The t value exceeds the table value at 0.001 for 10 df, so r is highly significant.
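The same significance test for r in Python (a minimal sketch using the slide's figures).

```python
import math
from scipy import stats

# Slide example: r = 0.8652 between calorie and protein intake, n = 12
r, n = 0.8652, 12
se = math.sqrt((1 - r**2) / (n - 2))   # ≈ 0.159
t = r / se                             # ≈ 5.46
df = n - 2
p = 2 * stats.t.sf(abs(t), df)
print(f"t = {t:.2f} on {df} df, P = {p:.5f}")   # P < 0.001: highly significant
```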
Two independent correlation coefficients: • r1 and r2 are two independent correlation coefficients based on sample sizes n1 and n2. • First apply Fisher's z transformation: Z1 = ½ ln[(1 + r1)/(1 − r1)] and Z2 = ½ ln[(1 + r2)/(1 − r2)]. • For small samples a t test is used: t = (Z1 − Z2) / [1/(n1 − 3) + 1/(n2 − 3)]½ with n1 + n2 − 6 df. • For large samples: Z = (Z1 − Z2) / [1/(n1 − 3) + 1/(n2 − 3)]½ • This Z value follows the normal distribution.
Example: The correlation coefficients between protein and calorie intake calculated from two samples of 1200 and 1600 are 0.8912 and 0.8482 respectively. Do the two estimates differ significantly? n1 = 1200, n2 = 1600, r1 = 0.8912, r2 = 0.8482 • From Fisher's table, Z1 = 1.4276 and Z2 = 1.2496 • Z = (Z1 − Z2) / [1/(n1 − 3) + 1/(n2 − 3)]½ = 4.659 • The calculated Z exceeds the critical Z at the 0.001 level. • The difference in correlation between the two samples is highly significant.
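The comparison of the two correlation coefficients in Python, using the Fisher z transformation and the slide's figures.

```python
import math
from scipy import stats

def fisher_z(r):
    """Fisher's z transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

# Slide example: r1 = 0.8912 (n1 = 1200) vs r2 = 0.8482 (n2 = 1600)
r1, n1, r2, n2 = 0.8912, 1200, 0.8482, 1600
z1, z2 = fisher_z(r1), fisher_z(r2)                 # ≈ 1.428 and 1.250
se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
z = (z1 - z2) / se                                  # ≈ 4.66
p = 2 * stats.norm.sf(abs(z))
print(f"Z = {z:.2f}, P = {p:.2g}")                  # P < 0.001: highly significant
# With the small samples of the next slide (n1 = 12, n2 = 16), the same statistic
# is referred to a t distribution on n1 + n2 - 6 df, as described above.
```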
Effect of Sample Size: • Suppose the sample sizes were 12 and 16 instead: n1 = 12, n2 = 16, r1 = 0.8912, r2 = 0.8482. • From Fisher's table, Z1 = 1.4276 and Z2 = 1.2496. • t = (Z1 − Z2) / [1/(n1 − 3) + 1/(n2 − 3)]½ = 0.41 • df = n1 + n2 − 6 = 22 • Calculated t < t0.05, so P > 0.05. • The difference between the correlation coefficients is not statistically significant.
Conclusion: What is the significance of a test of significance? • What is the strength of the association? • Is the result meaningful in a practical sense? • A result that fails the test of significance does not mean there is no relationship between the two variables. • Significance only relates to the probability of the result arising by chance. • Results may be statistically significant yet have no clinical or biochemical significance. • Assumptions for a test of significance: • The groups are equal in all respects other than the factor under study. • Patients are randomly selected for each group. • Situations where significance tests are not foolproof: • small sample size, • matching.