
Reliability and Validity in Research



Presentation Transcript


  1. Reliability and Validity in Research Spring 2013, University of Missouri - St. Louis

  2. Believing what you read? • There is a need for reliable and valid data on student learning outcomes. • Reliability: the extent to which an assessment tool is consistent, or free from random error in measurement. • Validity: the extent to which an assessment tool measures what it is intended to measure; it concerns the degree to which inferences about students based on their test scores are warranted.

  3. Validity • Validity has been defined as referring to the appropriateness, correctness, meaningfulness, and usefulness of the specific inferences researchers make based on the data they collect. • It is the most important idea to consider when preparing or selecting an instrument. • Validation is the process of collecting and analyzing evidence to support such inferences.

  4. Evidence of Validity • There are 3 types of evidence a researcher might collect: • Content-related evidence of validity • Content and format of the instrument • Criterion-related evidence of validity • Relationship between scores obtained using the instrument and scores obtained using a second instrument (the criterion) • Construct-related evidence of validity • Psychological construct being measured by the instrument

  5. Content-related Evidence • A key element is the adequacy of the sampling of the domain it is supposed to represent. • The other aspect of content validation is the format of the instrument. • Attempts to obtain evidence that the items measure what they are supposed to measure typify the process of content-related evidence.

  6. Criterion-related Evidence • A criterion is a second test presumed to measure the same variable. • There are two forms of criterion-related validity: • Predictive validity: a time interval elapses between administering the instrument and obtaining criterion scores • Concurrent validity: instrument data and criterion data are gathered and compared at the same time • A correlation coefficient (r) indicates the degree of relationship that exists between the scores individuals obtain on two instruments.
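As a sketch of how a criterion-related validity coefficient might be computed, the Pearson r between instrument scores and criterion scores can be calculated directly. The scores below are hypothetical, invented purely for illustration:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: an aptitude test (the instrument) and later GPA
# (the criterion), as in a predictive-validity study.
test_scores = [52, 67, 74, 80, 91]
criterion = [2.1, 2.8, 3.0, 3.4, 3.9]
print(round(pearson_r(test_scores, criterion), 3))  # ≈ 0.997
```

A value of r near 1 would indicate a strong positive relationship between the two sets of scores; values near 0 would offer little criterion-related evidence.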

  7. Construct-related Evidence • Considered the broadest of the three categories. • There is no single piece of evidence that satisfies construct-related validity. • Researchers attempt to collect a variety of types of evidence, including both content-related and criterion-related evidence. • The more evidence researchers have from different sources, the more confident they become about the interpretation of the instrument.

  8. How can validity be established? • Quantitative studies: • measurements, scores, instruments used, research design • Qualitative studies: • ways that researchers have devised to establish credibility: member checking, triangulation, thick description, peer reviews, external audits

  9. Reliability • Refers to the consistency of scores or answers provided by an instrument. • Scores can be reliable without being valid. • An instrument should be both reliable and valid for the context in which it is used.

  10. Reliability, continued • In statistics or measurement theory, a measurement or test is considered reliable if it produces consistent results over repeated tests. • Refers to how well we are measuring whatever it is that is being measured (regardless of whether or not it is the right quantity to measure).

  11. Reliability, continued • Unlike the common understanding, in these contexts “reliability” does not imply a value judgment • Your car always starts/doesn’t start • Your friend is always/ never late

  12. Reliability of Measurement

  13. Errors of Measurement • Because errors of measurement are always present to some degree, variation in test scores is common. • This is due to: • Differences in motivation • Energy • Anxiety • Differences in the testing situation
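The idea that observed scores vary around a true score because of random error can be illustrated with a small simulation. The true score and the spread of the error term below are hypothetical choices, not values from the lecture:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def observed_score(true_score, error_sd=5.0):
    """Classical test theory view: observed score = true score + random error."""
    return true_score + random.gauss(0, error_sd)

# Five testings of the same student, whose (unknowable) true score is 75;
# motivation, energy, anxiety, and the testing situation all feed the error term.
scores = [round(observed_score(75), 1) for _ in range(5)]
print(scores)
```

No single administration reveals the true score; the observed scores merely scatter around it, which is why consistency across measurements matters.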

  14. Observational Studies • Some characteristics cannot be measured through a test • Unobtrusiveness • Multiple sources of error • Reliability depends on the extent to which observers agree
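Since observational reliability depends on the extent to which observers agree, a common index is simple percent agreement, optionally corrected for chance agreement with Cohen's kappa. A minimal sketch, using hypothetical codings by two observers:

```python
def percent_agreement(obs1, obs2):
    """Fraction of observations on which two observers agree."""
    matches = sum(a == b for a, b in zip(obs1, obs2))
    return matches / len(obs1)

def cohens_kappa(obs1, obs2):
    """Observer agreement corrected for chance (Cohen's kappa)."""
    n = len(obs1)
    categories = set(obs1) | set(obs2)
    po = percent_agreement(obs1, obs2)  # observed agreement
    # expected agreement if observers coded independently at their base rates
    pe = sum((obs1.count(c) / n) * (obs2.count(c) / n) for c in categories)
    return (po - pe) / (1 - pe)

# Hypothetical codings of ten classroom events ("on task" / "off task")
o1 = ["on", "on", "off", "on", "off", "on", "on", "off", "on", "on"]
o2 = ["on", "off", "off", "on", "off", "on", "on", "on", "on", "on"]
print(percent_agreement(o1, o2))        # 0.8
print(round(cohens_kappa(o1, o2), 2))   # 0.52
```

Kappa is lower than raw agreement because two observers coding mostly "on task" would agree fairly often by chance alone.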

  15. How can reliability be established? • Quantitative studies? • Assumption of repeatability • Qualitative studies? • Reframe as dependability and confirmability

  16. Reliability and Validity

  17. Reliability and Validity • Why do we bother? • The terms are used in conjunction with one another • Quantitative research: reliability and validity are treated as separate terms • Qualitative research: reliability and validity are often combined under a single, all-encompassing term • Semi-reciprocal relationship
