Standardized Testing • Norm-referenced tests: are standardized, have information about reliability and validity, and can be used to compare an individual’s or group’s performance on a test to individuals or groups in the standardization population; often called “formal” tests. • Criterion-referenced tests: are not standardized, often lack reliability and validity data, and compare a person or group to a fixed standard; often labeled “informal” tests.
Basic Measurement Terms • Scales of Measurement: nominal, ordinal, interval, ratio • Measures of Central Tendency: mean, median, mode • Measures of Dispersion: range, variance, standard deviation • Normal Curve: a common type of distribution
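To make the central tendency and dispersion terms concrete, here is a minimal Python sketch; the raw scores are made up purely for illustration:

```python
# Minimal sketch: central tendency and dispersion for a small,
# hypothetical set of raw test scores (illustrative data only).
import statistics

scores = [82, 85, 85, 90, 78, 95, 88]       # hypothetical raw scores

mean = statistics.mean(scores)              # arithmetic average
median = statistics.median(scores)          # middle value when sorted
mode = statistics.mode(scores)              # most frequent value
score_range = max(scores) - min(scores)     # simplest measure of spread
variance = statistics.pvariance(scores)     # average squared deviation from the mean
std_dev = statistics.pstdev(scores)         # square root of the variance

print(mean, median, mode, score_range, variance, std_dev)
```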
Basic Measurement Terms continued • Correlations: describe the degree of relationship between two variables, including both the strength and the direction of the relationship. • Multiple Correlation: a statistical technique for determining the relationship between one variable and several other variables.
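A correlation coefficient can be computed directly from paired scores. The sketch below applies the standard Pearson formula to two hypothetical score lists; the variable names and data are illustrative only:

```python
# Minimal sketch: Pearson correlation between two hypothetical variables,
# e.g. scores on a reading test (x) and a vocabulary test (y).
import math

x = [55, 60, 65, 70, 75, 80]   # hypothetical test scores
y = [50, 58, 62, 71, 74, 83]   # hypothetical second set of scores

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Covariance divided by the product of the standard deviations gives r,
# a value between -1 (perfect negative) and +1 (perfect positive).
cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sd_x = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
sd_y = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))

r = cov / (sd_x * sd_y)
print(round(r, 3))  # strength and direction of the relationship
```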
Basic Measurement Terms (3) • Types of Scores: Percentile ranks - derived scores that indicate an individual’s position relative to the norm sample. Standard scores - raw scores that have been transformed to have a designated mean and standard deviation. Grade-equivalent or age-equivalent scores - the average score obtained on a test by groups of children who differ in grade placement or age.
Basic Measurement Terms (4) Normal curve equivalents - standard scores with a mean of 50 and a standard deviation of 21.06. Stanines - a single-digit (1-9) scoring system with a mean of 5 and a standard deviation of 2.
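These derived scores are all rescalings of the same underlying z-score. The sketch below assumes a hypothetical norm-group mean and standard deviation and converts one raw score to a deviation-IQ-style standard score, a normal curve equivalent, a stanine, and (assuming normally distributed scores) a percentile rank:

```python
# Minimal sketch: converting a raw score into common derived scores.
# The raw-score mean and SD below are hypothetical norm-group statistics.
from statistics import NormalDist

raw_score = 62
raw_mean, raw_sd = 50, 10            # hypothetical norm-group mean and SD

z = (raw_score - raw_mean) / raw_sd  # z-score: mean 0, SD 1

# Standard scores are z-scores rescaled to a chosen mean and SD.
deviation_score = 100 + 15 * z       # e.g. mean 100, SD 15
nce = 50 + 21.06 * z                 # normal curve equivalent: mean 50, SD 21.06
stanine = min(9, max(1, round(5 + 2 * z)))  # single digits 1-9, mean 5, SD 2

# Percentile rank: percentage of the norm group scoring at or below this
# score, assuming the distribution of scores is normal.
percentile_rank = NormalDist().cdf(z) * 100

print(z, deviation_score, nce, stanine, round(percentile_rank, 1))
```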
Interpreting Test Scores • Reliability: the consistency of measurement. Test-retest reliability - consistency of scores on two separate administrations of the same test. Alternate-form reliability - consistency of scores on two equivalent forms of a test. Split-half reliability - consistency between scores on two halves of a single test, indicating whether the items measure the same abilities.
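As one illustration, split-half reliability is often estimated by correlating scores on the odd-numbered items with scores on the even-numbered items and then applying the Spearman-Brown correction for test length. The item-response matrix below is entirely hypothetical:

```python
# Minimal sketch of split-half reliability: correlate odd-item and even-item
# half scores, then apply the Spearman-Brown correction for the full test.
# Hypothetical item responses: 1 = correct, 0 = incorrect.
import math

responses = [                      # one row per examinee, one column per item
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 0, 1, 1],
]

odd_half = [sum(row[0::2]) for row in responses]   # items 1, 3, 5, 7
even_half = [sum(row[1::2]) for row in responses]  # items 2, 4, 6, 8

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

half_r = pearson_r(odd_half, even_half)
split_half_reliability = 2 * half_r / (1 + half_r)  # Spearman-Brown correction
print(round(split_half_reliability, 3))
```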
Interpreting Test Scores continued • True score: the hypothetical mean of all the scores that would be obtained if the test were administered many times. • Standard error of measurement: an estimate of the amount of error associated with an obtained score. • Confidence interval or precision range: the range within which the true score is likely to fall.
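The standard error of measurement follows directly from the test's standard deviation and reliability (SEM = SD x sqrt(1 - reliability)), and a confidence interval is then built around the obtained score. The numbers below are hypothetical:

```python
# Minimal sketch: standard error of measurement and a confidence interval
# around an obtained score. All values are hypothetical.
import math

obtained_score = 104
test_sd = 15            # standard deviation of the test's standard scores
reliability = 0.90      # e.g. a test-retest reliability coefficient

# SEM = SD * sqrt(1 - reliability): the expected spread of obtained scores
# around the (hypothetical) true score.
sem = test_sd * math.sqrt(1 - reliability)

# A 95% confidence interval (precision range) spans roughly +/- 1.96 SEM.
lower = obtained_score - 1.96 * sem
upper = obtained_score + 1.96 * sem

print(round(sem, 2), (round(lower, 1), round(upper, 1)))
```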
Interpreting Test Scores (3) • Validity: the extent to which a test measures what it is supposed to measure. Content validity - whether the items on a test represent the domain that the test is supposed to measure. Criterion-related validity - the relationship between test scores and some type of criterion or outcome.
Interpreting Test Scores (4) Concurrent validity and predictive validity - are test scores related to a current criterion or to future performance on a relevant criterion? Construct validity - the extent to which a test measures a psychological construct. • The relationship between reliability and validity: to be valid, a test must be reliable.
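One way to see why a test must be reliable to be valid: in classical test theory, the criterion-related validity coefficient cannot exceed the square root of the test's reliability coefficient. The sketch below uses hypothetical coefficients:

```python
# Minimal sketch: criterion-related validity has a ceiling set by reliability.
# In classical test theory, the test-criterion correlation (validity
# coefficient) cannot exceed the square root of the test's reliability.
# All coefficients below are hypothetical.
import math

test_reliability = 0.81     # e.g. a test-retest reliability estimate
observed_validity = 0.65    # correlation between test scores and a criterion

validity_ceiling = math.sqrt(test_reliability)   # 0.90 in this example
print(f"observed validity {observed_validity} <= ceiling {validity_ceiling:.2f}:",
      observed_validity <= validity_ceiling)
```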
Types of Standardized Tests Format: • Screening, Diagnostic, Placement • Group Versus Individual • Multiple Skill Versus Single Skill • Formal Versus Informal Function or Domain: • Achievement
Types of Standardized Tests continued • Aptitude - academic, vocational, leisure • Cognitive ability/intellectual ability • Social/Emotional - objective versus projective measures • Behavioral - rating scales, observations
Major Issues in Assessment • What is the difference between assessment and testing? • How are tests used in the United States: readiness, national progress, minimal competency, accreditation • Advantages in taking tests: test bias, culture-fair tests • Coaching and test-taking skills: special training, familiarity with procedures, study skills
New Directions in Assessment National, State and Local Levels: • Assessing learning potential: LPAD, ELP, etc. • Authentic assessment: performance based, portfolios, constructed response formats. • Curriculum-based assessment and measurement.