1. Measurement EDRS 6301
Summer 2001
Dr. Kielborn
2. Measurement All measures contain error
Random error leads to unreliability
Systematic error leads to invalidity
Obtained Score = True Score + Random Error (so the true score is the obtained score minus random error)
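The relation above can be illustrated with a short simulation. This is a minimal sketch, not taken from the slides, assuming NumPy; all names and numeric values are illustrative assumptions.

```python
# Sketch of Obtained = True + Error: random error adds noise around each
# true score, a systematic error shifts every obtained score by a constant.
import numpy as np

rng = np.random.default_rng(seed=0)

n_students = 200
true_scores = rng.normal(loc=70, scale=10, size=n_students)   # unchanging "true" ability

random_error = rng.normal(loc=0, scale=5, size=n_students)    # unsystematic noise -> unreliability
systematic_error = 4.0                                        # constant bias -> invalidity

obtained = true_scores + random_error + systematic_error

print("mean true score:    ", round(true_scores.mean(), 1))
print("mean obtained score:", round(obtained.mean(), 1))      # shifted by about 4 points
print("correlation with true scores:", round(np.corrcoef(true_scores, obtained)[0, 1], 2))
```

Random error largely washes out in the group mean but lowers the correlation between obtained and true scores (unreliability); the systematic shift survives averaging and biases every score (invalidity).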
3. True Scores and Error Scores A true score is the real and unchanging measure of the human characteristic The error score is a positive or negative value that results from uncontrolled and unrealized variability in the measurement
4. Error Error can be the result of the way we observe or test the individual
Observational (overly difficult items; a broad test with few items sampling each concept; items unclear to the participant)
Procedural (inconsistent administration, recording, scoring or interpretation)
5. Error continued Subject (individuals performing differently; the participant's reaction to the instrument or experiment)
6. Reliability Reliability is consistency in measurement
Consistency is specific to the group being assessed
If there is consistency, there is confidence in the results
7. Reliability The consistency of measurement
The extent to which observations or an experimental design can be replicated by an independent researcher
How consistently a data collection process measures whatever it measures
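One common way to express consistency is a test-retest correlation: the same true scores measured twice with independent random error. The sketch below is illustrative only, assuming NumPy; the function name and numbers are assumptions, not the slides' procedure.

```python
# Test-retest reliability sketch: correlate two administrations of the
# same "test", each contaminated by fresh random error.
import numpy as np

rng = np.random.default_rng(seed=1)
true_scores = rng.normal(70, 10, size=200)

def administer(true_scores, error_sd, rng):
    """One administration: obtained score = true score + fresh random error."""
    return true_scores + rng.normal(0, error_sd, size=true_scores.size)

test = administer(true_scores, error_sd=5, rng=rng)
retest = administer(true_scores, error_sd=5, rng=rng)

r = np.corrcoef(test, retest)[0, 1]
print("test-retest reliability estimate:", round(r, 2))
```

With a smaller error_sd the correlation climbs toward 1; with a larger one it falls, which is the sense in which random error produces unreliability.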
8. Sources of Unreliability Faulty items and observations (tricky, ambiguous, or confusing questions or format)
Excessively difficult elements of the data collection process (participants guess)
Excessively easy elements of the data collection process
Inadequate number of observations or items
9. Sources of Unreliability Accidentally focusing on multiple outcomes (not all test items or interview/survey questions refer to the same characteristic)
Faulty scoring
Characteristics of the respondents (inability to concentrate, mood)
Faulty administration (the room may be too hot, too cold, or full of distractions)
10. Validity Does it measure what we think it is measuring?
High validity means a high degree of accuracy
11. Validity Establish rapport
Minimize disruptions
Use unobtrusive methods for recording data
Triangulation - confirming results through more than one data source
12. Internal validity The extent to which the results of a study are supported by the methodology. A well-controlled study is said to have high internal validity and believable conclusions.
13. Threats to Internal Validity History - An event occurring between the pretest and the posttest
Maturation - A change that occurs because a participant has grown older or gained experience
Instrumentation - A change that occurs because the testing procedures are unreliable or have altered unintentionally
14. Threats to Internal Validity Testing - A change that occurs because the test has sensitized the participants to the nature of the research
Regression - The tendency of a very low score or a very high score to move toward the mean
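Statistical regression can be seen in a small simulation. This sketch is illustrative and not from the slides; it assumes NumPy, and the group size and score values are arbitrary assumptions.

```python
# Regression toward the mean: students selected for extreme pretest scores
# drift back toward the mean on the posttest with no intervention at all.
import numpy as np

rng = np.random.default_rng(seed=2)
true_scores = rng.normal(70, 10, size=1000)

pretest = true_scores + rng.normal(0, 8, size=1000)   # pretest = true + random error
posttest = true_scores + rng.normal(0, 8, size=1000)  # independent error on the posttest

lowest = np.argsort(pretest)[:50]                      # 50 lowest pretest scorers
print("selected group, pretest mean: ", round(pretest[lowest].mean(), 1))
print("selected group, posttest mean:", round(posttest[lowest].mean(), 1))
```

Because the low-scoring group was selected partly for unlucky random error, its posttest mean sits closer to the overall mean even though nothing about the students changed, which is how regression can masquerade as a treatment effect.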
15. References Marshall, J. (2001). Assessment for educational improvement workshop, UWG, Carrollton, GA, June 16, 2001.
Schloss, P. J., & Smith, M. A. (1999). Conducting research. Upper Saddle River, NJ: Merrill.
Vockell, E. L., & Asher, J. W. (1995). Educational research (2nd ed.). Upper Saddle River, NJ: Merrill.