
PSICOMETRÍA

PSICOMETRÍA. Unit 6.2: EVALUATION OF THE MEASURING INSTRUMENT: VALIDITY II. Salvador Chacón Moscoso, Susana Sanduvete Chaves.



  1. PSICOMETRÍA. Unit 6.2: EVALUATION OF THE MEASURING INSTRUMENT: VALIDITY II. Salvador Chacón Moscoso, Susana Sanduvete Chaves

  2. Unit 6.2: EVALUATION OF THE MEASURING INSTRUMENT: VALIDITY II
1. VALIDATION WITH SEVERAL PREDICTORS AND ONE INDICATOR OF THE CRITERION
1.1. Multiple validity coefficient
1.2. The multiple linear regression model
1.2.1. Regression equations
1.2.2. Residual (error) variance and standard error of multiple estimation
1.3. Interpretation of the multiple validity coefficient:
1.3.1. Coefficient of multiple determination
1.3.2. Coefficient of multiple alienation
1.3.3. Coefficient of multiple predictive value
1.4. Methods for selecting the most adequate predictor variables
2. VALIDITY AND USEFULNESS OF THE DECISIONS
2.1. Validity indexes
2.2. Where to place the cutoff
2.3. Example
2.4. Selection models
2.5. How to determine the effectiveness of a selection
3. FACTORS INFLUENCING THE VALIDITY COEFFICIENT
3.1. Variability of the sample
3.2. Reliability of the test scores and the criterion
3.3. Validity and length
4. VALIDITY GENERALIZATION
5. Bibliography

  3. CRITERION-REFERRED VALIDATION: SEVERAL PREDICTORS AND ONE INDICATOR OF THE CRITERION

  4. Validation with several predictors and a single indicator of the criterion. Rarely is a single predictor used to make decisions: in practice the simple linear regression model is insufficient, which poses problems for the prediction and interpretation of results. The problem is that the predictors will not only be related to the criterion; we can be sure they are also interrelated. Example: selecting car salespeople. Criterion = number of sales (Y). Predictors: extraversion (X1) and verbal fluency (X2). The two predictors are probably correlated with each other, so to what extent is the variability in Y due to X1, to X2, or to their interaction?

  5. Validation with several predictors and a single indicator of the criterion. One way to control for this is through the partial and semipartial correlations. 1. Partial correlation: interprets the correlation between the criterion variable (Y) and one of the predictor variables, first removing the effect that other variables may be exerting on that correlation. Example: the correlation between the number of sales (Y) and extraversion (X1), removing the influence that verbal fluency (X2) exerts on this correlation.

  6. Validation with several predictors and a single indicator of the criterion. 2. Semipartial correlation: interprets the correlation between the criterion variable (Y) and one of the predictor variables, first removing the effect that other variables may be exerting on that predictor. Example: the correlation between the number of sales (Y) and extraversion (X1), removing from X1 the influence of verbal fluency (X2).
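The two correlations just described can be computed directly from the three pairwise correlations. A minimal sketch in Python, using the standard first-order formulas; the function names are ours, and the numbers are the correlations used in the worked example later in the deck:

```python
import math

def partial_corr(r_y1, r_y2, r_12):
    """Correlation of Y with X1, removing X2 from both Y and X1."""
    return (r_y1 - r_y2 * r_12) / math.sqrt((1 - r_y2**2) * (1 - r_12**2))

def semipartial_corr(r_y1, r_y2, r_12):
    """Correlation of Y with X1, removing X2 from X1 only."""
    return (r_y1 - r_y2 * r_12) / math.sqrt(1 - r_12**2)

# Sales (Y), extraversion (X1), verbal fluency (X2):
# r_y1 = 0.79, r_y2 = 0.30, r_12 = 0.65
print(round(partial_corr(0.79, 0.30, 0.65), 2))      # 0.82
print(round(semipartial_corr(0.79, 0.30, 0.65), 2))  # 0.78
```

These reproduce the 0.82 and 0.78 reported in the example slides.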

  7. MULTIPLE VALIDITY COEFFICIENT. Several predictors and an indicator of the criterion: • The validity coefficient is given by the correlation between the predictor(s) and the indicator of the criterion. • When we have only one predictor and one criterion, we use the simple correlation. • When we have several predictors (two, for example), the analogue is the multiple correlation.
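For the two-predictor case, the multiple correlation can be obtained from the three pairwise correlations. A sketch using the standard R for two predictors; the values plugged in are the example's correlations:

```python
import math

def multiple_R(r_y1, r_y2, r_12):
    """Multiple correlation of Y with X1 and X2 from pairwise correlations."""
    return math.sqrt((r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12)
                     / (1 - r_12**2))

print(round(multiple_R(0.79, 0.30, 0.65), 2))  # 0.84
```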

  8. Multiple linear regression model. Several predictors and an indicator of the criterion: in short, it is common to work with multiple predictors, so we use multiple linear regression, a generalization of the simple model. The goal is to obtain a prediction equation that adequately weights each of the predictors, removing the predictors that provide little information: Y = a + b1X1 + b2X2 + ... + bnXn + ε, where: a = intercept; b1, b2, ..., bn = regression coefficients; ε = random error.

  9. Multiple linear regression model. Regression equations: usually expressed in matrix notation, due to the large volume of operations that must be performed. In this case, y = Xb + ε, where: y = vector of the N participants' scores on the dependent variable or criterion (N×1); b = vector of the (p+1) regression coefficients; X = matrix of scores on the p predictor variables, with a first column of ones; ε = vector of random errors (N×1).
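The matrix equation y = Xb + ε can be fitted by ordinary least squares. A runnable sketch with numpy; only the criterion values 13, 10 and 15 and the first seller's predictor scores (4 and 8) come from the slides, and the remaining numbers are made up to complete the example:

```python
import numpy as np

# Criterion: cars sold by six sellers (13, 10, 15 from the slides).
y = np.array([13., 10., 15., 9., 12., 14.])
# Predictors: extraversion (X1) and verbal fluency (X2); first row (4, 8)
# is from the slides, the rest are hypothetical.
X = np.array([[4., 8.],
              [2., 5.],
              [5., 7.],
              [1., 6.],
              [4., 3.],
              [5., 6.]])

X1 = np.column_stack([np.ones(len(y)), X])   # first column of ones
b, *_ = np.linalg.lstsq(X1, y, rcond=None)   # least-squares coefficients
e = y - X1 @ b                               # random error (residual) vector
```

`np.linalg.lstsq` returns the b that minimizes the squared length of the error vector e; the residuals are then orthogonal to every column of X.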

  10. 1. The vector of observed criterion scores equals the matrix of observed predictor scores times the regression coefficient vector, plus the random error vector. 2. For example, one seller has sold 13 cars, another 10 and another 15; that vector equals the matrix of the subjects' scores on extraversion and verbal fluency (first seller: 4 and 8), times the regression coefficients, plus the error vector. Multiple linear regression is a generalization of simple linear regression.

  11. Residual variance (error variance) and standard error of multiple estimation. The residual variance indicates the effectiveness of the set of predictor variables for estimating the criterion (recall that, with one predictor, the validity coefficient indicates the effectiveness of that predictor for estimating the criterion). It is the variance of the errors (Y − Y'); its square root is the standard error of multiple estimation.

  12. Confidence intervals • Because of estimation errors, interval estimates are preferable to point estimates. • Assuming that the errors are normally distributed: • Set the confidence level and its associated standard score (e.g., for a 99% confidence level, Zc = 2.58). • Compute the standard error of estimate (the larger the error, the wider the intervals). • Compute the maximum error we are willing to accept (Emax = Zc · Sy.x). • Apply the regression equation to obtain the predicted score (Y' = a + b1X1 + b2X2). • Establish the confidence interval: Y' ± Emax.
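The steps above reduce to a couple of lines. A sketch; both inputs (the predicted score and the standard error of estimate) are hypothetical values for illustration:

```python
def confidence_interval(y_pred, s_yx, z=2.58):
    """Interval for a predicted criterion score.
    s_yx: standard error of estimate; z = 2.58 for a 99% level."""
    e_max = z * s_yx          # maximum error accepted: Emax = Zc * Sy.x
    return y_pred - e_max, y_pred + e_max
```

For the deck's later prediction Y' = 4.52 and a hypothetical Sy.x = 1, the 99% interval is 4.52 ± 2.58, i.e., (1.94, 7.10).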

  13. Interpretation of the evidence obtained on the predictive capacity of the variables. As in simple regression, the total variance of the criterion scores can be decomposed into the variance of the predicted scores plus the error variance. Dividing every term by the variance of the criterion scores, the standard error of estimate can be obtained from the variance of the criterion scores and the multiple validity coefficient.

  14. VALIDITY REFERRED TO THE CRITERION. Interpretation of the multiple validity coefficient: 1. Coefficient of multiple determination (CD): equal to the squared validity coefficient, it represents the proportion of the variance of the subjects' scores on the criterion that can be predicted from the set of predictor variables. The CD is bounded in the interval [0, 1]. When the error variance is small, the predicted values are close to the real ones; the standard error of estimate is then small and the CD approaches 1. It expresses the proportion of change in Y that is linked to, determined by, explained by, or predictable from the predictor variables.

  15. 2. Coefficient of multiple alienation (CA): the ratio of the standard error of multiple estimation to the standard deviation of the scores on the criterion. The CA is bounded in the interval [0, 1]. When the error variance is high, the predicted values are far from the real ones; the standard error of estimate is then high and the CA approaches 1, reflecting the random uncertainty affecting the forecasts. It expresses the proportion of change in Y that is not linked to, determined by, explained by, or predictable from the predictor variables.

  16. 3. Coefficient of multiple predictive value (CVP): complementary to the CA, it is another way of expressing the test's ability to predict the criterion. The CVP is bounded in the interval [0, 1]. The higher the CA, the lower the CVP and the lower the test's ability to predict the criterion.
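The three coefficients are simple functions of the multiple correlation R. A sketch under the definitions above (CD = R², CA = Sy.x/Sy = √(1 − R²), CVP = 1 − CA):

```python
import math

def determination(R):
    """CD = R^2: proportion of criterion variance that is predicted."""
    return R ** 2

def alienation(R):
    """CA = S_yx / S_y = sqrt(1 - R^2): share tied to error."""
    return math.sqrt(1 - R ** 2)

def predictive_value(R):
    """CVP = 1 - CA: complement of the alienation coefficient."""
    return 1 - alienation(R)
```

For example, with R = 0.8: CD = 0.64, CA = 0.6 and CVP = 0.4, all inside [0, 1] as stated.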

  17. Example: we want to find out whether verbal fluency and extraversion are variables that favor the number of car sales. A sample of six salespeople has been selected; they have been evaluated with two tests measuring extraversion (X1) and verbal fluency (X2), respectively.

  18. 1. We calculate the intercorrelations between the variables. 2. Next, we have all the data needed to calculate the multiple correlation coefficient, i.e., the validity coefficient. Since the maximum value of the validity coefficient is 1, we can say that X1 and X2 have good predictive capacity.

  19. 3. Thirdly, we calculate the partial and semipartial correlations. Partial correlation between extraversion and sales: where the correlation between extraversion and sales was 0.79, it rises to 0.82 once the influence of verbal fluency is removed; since it has increased, verbal fluency may be affecting it negatively. Partial correlation between verbal fluency and sales: where the correlation between verbal fluency and sales was 0.30, it becomes −0.46; extraversion was thus impacting the correlation positively.

  20. Semipartial correlation between extraversion and sales: where the correlation between extraversion and sales was 0.79, it becomes 0.78 once the effect of verbal fluency on extraversion is removed. Semipartial correlation between verbal fluency and sales: where the correlation between verbal fluency and sales was 0.30, it becomes −0.28. This indicates that, as far as possible, we should avoid high correlations between predictors (extraversion with verbal fluency: r = 0.65).

  21. 4. Calculate the regression equation: Y' = a + b1X1 + b2X2; Y' = 4.64 + 0.66X1 − 0.36X2. For X1 = 2 and X2 = 4: Y' = 4.64 + 0.66·2 − 0.36·4 = 4.52.
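The fitted equation can be checked in one line (coefficients taken from the slide):

```python
def predict_sales(x1, x2):
    """Y' = 4.64 + 0.66*X1 - 0.36*X2 (the slide's fitted equation)."""
    return 4.64 + 0.66 * x1 - 0.36 * x2

print(round(predict_sales(2, 4), 2))  # 4.52
```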

  22. Once the regression equation is built, we can predict, for each value of X, the score the subject would obtain on the criterion (Y).

  23. Given that Cov(e, Y') = 0.

  24. Methods for selecting the most appropriate predictor variables. According to Thorndike and Hagen (1989), predictors must be: a) Relevant: to what extent does the indicator correspond to the criterion? b) Free of bias: avoid selecting variables that affect groups differentially. c) Reliable: the measured data must be accurate and remain stable over time. d) Accessible.

  25. Methods for selecting the most appropriate predictor variables. The more good predictors there are, the larger R becomes. However, we must pay special attention to MULTICOLLINEARITY: the existence of high correlations between the predictors (a predictor can be explained by a linear combination of the others) produces redundant information, which leads to over-estimating the coefficient of determination.

  26. Methods for selecting the most adequate predictor variables. "Forward" method: 1. Calculate the intercorrelations between the variables. 2. Select the predictor variable whose correlation with the criterion is highest and construct the regression equation. 3. Add the remaining variables to the equation, one by one, in order of their contribution according to the semipartial correlation. 4. Each time a variable is entered, compute the increase in the percentage of explained variance and test whether it is statistically significant. The process stops when the increase is not significant.
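A greedy sketch of the forward procedure, with one simplification: a fixed R² gain threshold stands in for the significance test of step 4 (a real application would use an F test). All names and data are ours:

```python
import numpy as np

def forward_select(X, y, min_gain=0.01):
    """Forward selection: repeatedly add the predictor (column of X)
    that most increases R^2; stop when the gain falls below min_gain."""
    n, p = X.shape
    chosen, r2 = [], 0.0
    while len(chosen) < p:
        best_gain, best_j = 0.0, None
        for j in range(p):
            if j in chosen:
                continue
            # Fit with the already-chosen predictors plus candidate j.
            Xd = np.column_stack([np.ones(n), X[:, chosen + [j]]])
            b, *_ = np.linalg.lstsq(Xd, y, rcond=None)
            resid = y - Xd @ b
            r2_new = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
            if r2_new - r2 > best_gain:
                best_gain, best_j = r2_new - r2, j
        if best_j is None or best_gain < min_gain:
            break                      # no significant improvement: stop
        chosen.append(best_j)
        r2 += best_gain
    return chosen, r2
```

The backward method on the next slide is the mirror image: start from all predictors and drop the one whose removal reduces R² least, stopping when the reduction becomes significant.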

  27. "Backward" method: • 1. Calculate the squared multiple correlation between the criterion and the full set of predictor variables. • 2. Remove the less important variables, one by one, computing the reduction produced in the coefficient of determination. • 3. Unlike the previous case, the process stops when the observed reduction is significant.

  28. VALIDITY AND UTILITY OF THE DECISIONS; CRITERION-REFERENCED TEST

  29. This section covers methods that analyze the validity of the decisions made from test scores in relation to a dichotomous criterion. In this case, the test scores are dichotomized (ill / not ill; fit / unfit; etc.). We distinguish between: - Cutoff on the test: the test score separating the subjects above it from those below it (fit/unfit, clinical/non-clinical, ...). - Cutoff on the criterion: the criterion score above which the result is considered a success (or a clinical case, for example). When test and criterion coincide in their classifications (cells A and D), the decisions are correct; when they do not coincide (cells B and C), the decisions are incorrect. The aim is that as many decisions made with the test as possible be correct.

  30. VALIDITY INDEXES: from the data, it is necessary to obtain some measure of validity. 1. Kappa coefficient: evaluates the consistency of the classifications, i.e., to what extent the agreements between the test and criterion classifications go beyond chance. Where: Fc = number of cases where test and criterion coincide (A + D); Fa = number of coincidences expected by chance.

  31. 2. Proportion of correct classifications: the degree to which the test and criterion classifications agree. 3. Sensitivity (true positive rate): the degree to which the test is good at detecting those who actually have the disorder.

  32. 4. Specificity (true negative rate): the degree to which the test is good at excluding those who do not actually have the disorder. 5. Effectiveness ratio: the degree to which the test is good at selecting subjects who have the disorder.
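Indexes 1-4 can all be computed from the four cells of the decision table. A sketch; the cell names follow the usual true/false positive/negative convention rather than the slides' A-D labels:

```python
def classification_indices(tp, fp, fn, tn):
    """2x2 decision table: tp/tn = test and criterion agree,
    fp/fn = they disagree. Returns the four validity indexes."""
    n = tp + fp + fn + tn
    p_correct = (tp + tn) / n                 # proportion of correct classifications
    sensitivity = tp / (tp + fn)              # true positive rate
    specificity = tn / (tn + fp)              # true negative rate
    # Cohen's kappa: agreement corrected for chance agreement
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_correct - p_chance) / (1 - p_chance)
    return p_correct, sensitivity, specificity, kappa
```

For instance, a balanced table with 40 agreements in each diagonal cell and 10 disagreements in each off-diagonal cell gives 0.80 correct classifications, 0.80 sensitivity and specificity, and kappa = 0.60 once chance agreement (0.50) is discounted.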

  33. Other indexes are related to SELECTION: • Suitability ratio: proportion of subjects who pass the cutoff on the criterion. • Selection ratio: proportion of subjects who pass the cutoff on the test.

  34. Where do we place the cutoff? Since the scores must be dichotomized, where the cutoff is placed will have consequences for the decisions made with the test. (Figure: 2×2 table crossing test decisions (rejected / accepted at the test cutoff) with criterion outcomes (rejected / accepted), cells A, B, C and D.)

  35. VALIDITY AND UTILITY OF THE DECISIONS. Where to place the cutoff? Consequences of moving the test cutoff toward a stricter position (to the right): - Positive effect: the false positive rate decreases. - Negative effect: the false negative rate increases.

  36. VALIDITY AND UTILITY OF THE DECISIONS. Where to place the cutoff? Consequences of tightening the criterion cutoff: - Positive effect: the false negative rate decreases. - Negative effect: the false positive rate increases.

  37. Selection models: how to combine all the information about an individual (tests, interviews, attitude, ...) to make a decision. 1. Compensatory: an additive model in which the subject is assigned a single overall score; a low score on one test can therefore be compensated by a high score on another. This does not always make sense, since the absence of a skill does not have to be compensated by another. 2. Conjunctive: subjects must reach a previously set minimum on each of the tests. 3. Disjunctive: subjects are only required to exceed a certain level on any one of the tests used.

  38. 2-1, conjunctive-compensatory: first the conjunctive model is applied (pass all the tests), and then an overall (compensatory) score is estimated. 3-1, disjunctive-compensatory: a first selection is made using the disjunctive model (pass at least one of the tests), and then the compensatory model (overall score) is applied.

  39. FACTORS AFFECTING THE VALIDITY COEFFICIENT

  40. FACTORS AFFECTING THE VALIDITY COEFFICIENT. The validity coefficient (the correlation between the test scores and the criterion) is very sensitive to certain aspects of the variables used. Schmidt and Hunter (1990) reported 11 aspects that can alter the sizes of the correlations: 1. Sampling error, or the difference between the sample and population correlation coefficients. 2. Measurement error (absence of perfect reliability) in the predictor variable. 3. Measurement error in the criterion variable. 4. Use of overly simplified criteria, reduced to two values (pass/fail).

  41. 5. Dichotomization of the predictor variable. 6. Changes in the variability of the criterion variable in other samples or conditions. 7. Changes in the variability of the predictor variable in other samples or conditions. 8. Incorrect definition of the construct in the predictor variable. 9. Incorrect definition of the construct in the criterion. 10. Coding and calculation errors, etc. 11. Extraneous factors related to the characteristics of the sample (experience, age, ...).

  42. Factors affecting the validity coefficient: • Variability • Reliability • Length

  43. 1. Sample variability. (Figure: total sample of applicants vs. selected sample.) Assumptions for estimating the validity under different sample variability: 1. The slope of the regression line is the same in the applicant group (R) as in the selected group (r). 2. By the homoscedasticity principle, the standard errors of estimate are equal in the two groups.
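Under those two assumptions, the validity observed in the restricted (selected) sample can be corrected to estimate what it would be in the full applicant sample. This is the standard range-restriction correction; the function and variable names are ours:

```python
import math

def correct_range_restriction(r, S_unrestricted, s_restricted):
    """Estimate the validity in the full applicant group from the
    validity r observed in the selected group, given the predictor's
    standard deviations in both groups. Assumes equal regression
    slopes and homoscedasticity, as stated above."""
    k = S_unrestricted / s_restricted
    return r * k / math.sqrt(1 - r**2 + r**2 * k**2)
```

If the applicants' SD is twice that of the selected group, an observed r = 0.50 corrects upward to about 0.76; if the SDs are equal, the validity is unchanged.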

  44. Reliability of the scores. 2. Reliability of the test and criterion scores: the test and criterion scores are affected by measurement errors that may be affecting the estimated validity coefficient. Spearman (1904) proposed formulas that correct for attenuation, i.e., the decrease in the validity coefficient due to measurement errors. 2.1. Estimated validity coefficient if both the test and the criterion had perfect reliability: calculate the correlation between the true test scores and the true criterion scores.

  45. 2.2. Estimated validity coefficient if the TEST had perfect reliability: calculate the correlation between the true test scores and the empirical criterion scores.

  46. 2.3. Estimated validity coefficient if the CRITERION had perfect reliability: calculate the correlation between the empirical test scores and the true criterion scores. The validity coefficient increases in all these cases. However, these are hypothetical assumptions, because it is impossible to eliminate the errors; we can only try to reduce them by improving the reliability coefficients. For this, we have the following situations:
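The three Spearman corrections (2.1-2.3) are one-liners. A sketch, with r_xy the observed validity and r_xx, r_yy the reliability coefficients of the test and the criterion:

```python
import math

def correct_both(r_xy, r_xx, r_yy):
    """Validity if both test and criterion were perfectly reliable."""
    return r_xy / math.sqrt(r_xx * r_yy)

def correct_test(r_xy, r_xx):
    """Validity if only the test were perfectly reliable."""
    return r_xy / math.sqrt(r_xx)

def correct_criterion(r_xy, r_yy):
    """Validity if only the criterion were perfectly reliable."""
    return r_xy / math.sqrt(r_yy)
```

With r_xy = 0.50, r_xx = 0.80 and r_yy = 0.90, the corrected validities are about 0.59, 0.56 and 0.53, respectively, illustrating that the coefficient increases in all three cases.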

  47. 2.4. Estimated validity coefficient in the event that the reliability of both the TEST and the CRITERION were improved.

  48. 2.5. Estimated validity coefficient in the event that the reliability of the TEST were improved.

  49. 2.6. Estimated validity coefficient in the event that the reliability of the CRITERION were improved.

  50. The maximum validity coefficient is less than or equal to the reliability index. We know that the validity coefficient obtained when measurement errors are eliminated is at most 1. 1. Assuming it actually equals 1, it follows that the validity coefficient is less than or equal to the product of the square roots of the reliability coefficients of the test and the criterion. 2. Assuming, in addition, that the maximum value of the criterion's reliability coefficient is 1, the validity coefficient is less than or equal to the square root of the test's reliability coefficient. 3. And since the square root of the reliability coefficient is the reliability index, the validity coefficient is less than or equal to the reliability index.
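The chain of inequalities in the three steps above can be written compactly (standard notation: ρ_xy = validity, ρ_xx and ρ_yy = reliability coefficients of test and criterion):

```latex
\frac{\rho_{xy}}{\sqrt{\rho_{xx}\,\rho_{yy}}} \le 1
\;\Longrightarrow\;
\rho_{xy} \le \sqrt{\rho_{xx}\,\rho_{yy}} \le \sqrt{\rho_{xx}}
```

where √ρ_xx is precisely the reliability index of the test.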
