Content Validity of the Assistive Technology Outcome Measure (ATOM), v.2.0

M. Dharne1, OTR; J.A. Lenker1, PhD, OTR/L; F. Harris2, PhD; and S. Sprigle2, PT, PhD
1Department of Rehabilitation Science, School of Public Health and Health Professions, University at Buffalo
2Mobility RERC, Center for Assistive Technology & Environmental Access, Georgia Institute of Technology

Abstract
The assistive technology field needs device-specific outcome instruments to measure the usability, effectiveness, and impact of particular classes of assistive devices. The Assistive Technology Outcome Measure (ATOM) is a new tool that seeks to measure the impact of wheeled mobility systems. This study establishes the content validity of ATOM, version 2.0. Eight experts from the wheeled mobility and seating field were recruited to rate the relevance and clarity of the 18 items comprising the revised ATOM. The content validity index based on the experts' ratings was 0.881. In response to expert comments and suggestions, three items were reframed and minor changes were made to an additional 10 items. Given the success of the content validity phase, our next phase of research will evaluate the test-retest reliability and convergent validity of the ATOM.

Keywords: wheelchair, wheeled mobility device, outcome measurement, assistive technology

Introduction
The cost-effectiveness of assistive technology devices (ATDs), assessment and training strategies, and service delivery programs can be partially demonstrated by capturing follow-up data that measure ATD impact [1]. Unfortunately, there are few fully validated ATD outcome measurement tools. The diversity of ATDs suggests the need for tools that focus on particular classes of ATDs, so-called device-specific measures [2]. In the specialty area of wheeled mobility and seating, a clinically friendly outcome measurement tool is needed that will respond to increasing calls from third-party funding agencies for evidence of clinical effectiveness.
Literature Review
We examined 17 peer-reviewed articles, published between 1991 and 2004, that reported outcomes of wheeled mobility and seating device interventions. Across these studies, three approaches were employed to measure ATD impact: (a) eight authors used a standardized tool; (b) seven used tools created specifically for the study being reported; and (c) two used open-ended interviews as part of a qualitative methodology. Among the eight articles using a standardized tool, no single tool was used twice. Clearly, no measurement tool has emerged as the gold standard for capturing wheeled mobility and seating device outcomes.

The Assistive Technology Outcome Measure (ATOM) was conceived and originally pilot tested by researchers at Helen Hayes Hospital in West Haverstraw, NY. The goal was to establish an easily administered tool that would yield data appealing to consumers, clinicians, program administrators, and third-party funding agencies. The initial version of the ATOM had 28 items that embodied a range of wheeled mobility and seating impacts: usage in different environments, community participation, functional activity, assistance, comfort, and hassle. Pilot data were captured from 56 participants who had received a wheeled mobility and seating intervention.

Based on feedback from the participants and the principal interviewer involved in the pilot testing, the ATOM was recently revised. The wording was clarified for virtually all items. The response options for many items were reworked to present an intuitive range of choices for respondents. Several new questions were added, and others were deleted. The revised ATOM includes 18 items. Heretofore, its psychometric properties have not been evaluated. The purpose of the current study was to establish the content validity of ATOM v.2.0.
Method
Participants
The participants were a geographically dispersed convenience sample of eight wheeled mobility and seating experts, each with at least five years of experience in the field. All were identified through personal contacts of the co-authors. Each expert received a $40 honorarium in consideration of their time.

Instrument
A content validity scale (Appendix) was developed so that each expert could evaluate the ATOM according to identical criteria. Experts rated each item for its "relevance" and "clarity," both on a scale of 1 to 4. Space was provided for additional comments and suggestions.

Procedure
Fourteen prospective participants were contacted via email to solicit their participation. The first eight who replied were enrolled in the study. The content validity form, participant consent form, and a stamped reply envelope were mailed to each enrollee. All participants returned their completed consent forms and content validity ratings within four weeks. Subsequently, a teleconference was held with five of the eight experts and the first two authors to discuss modifications proposed by individual experts. In particular, items receiving lower clarity ratings were discussed in detail. Revisions to additional items were also considered based on qualitative user feedback. Revisions were then made in accordance with group consensus.

Analysis
Data analysis had three components. First, we evaluated the relevance of individual items. To be retained, an item needed to receive a relevance rating of "3" or "4" from at least five of the eight experts. Second, we evaluated the clarity of individual items. Revision was required for any item receiving clarity ratings of "3" or less from two or more experts. Qualitative suggestions for revising individual items were incorporated into the group teleconference discussion. Third, the Content Validity Index (CVI) [3] was used to quantify overall expert agreement on the relevance and clarity of the ATOM's 18 items.

Results
All 18 items on the ATOM v.2.0 received a relevance rating of "3" or "4" from at least five of eight experts; thus, no items were eliminated due to lack of relevance. Nine items received a clarity rating of "4" from at least six of eight experts. The other nine items received clarity ratings of "3" or less from at least two experts. These nine items were discussed in detail during the teleconference, and consensus was reached on a rewording of each. Four additional items were reworded as a byproduct of related teleconference discussions. The CVI for relevance was 0.771, the CVI for clarity was 0.849, and the overall CVI was 0.881.

Discussion
Expert evaluation confirmed the relevance of all 18 items on the ATOM v.2.0. In all, 15 items were revised to improve clarity: 12 received minor revisions and 3 were substantially modified. The order of items was also adjusted to present a more logical sequence of questions. Appendix C summarizes all changes that were made. The calculated values of the Content Validity Index indicate that the expert ratings were in substantial agreement with one another.

In summary, the content validity of ATOM v.2.0 has been established through a formal, two-stage process of expert review. The experts encouraged development of the ATOM; several expressed the sentiment that a tool of this nature was genuinely needed, and most indicated that they would be willing to serve as clinical test sites once the ATOM's basic psychometric properties have been established.

Future Research
We have recently completed data collection with a sample of 50 adult wheelchair users in the Buffalo, NY area. Our psychometric analysis of the data will include evaluation of test-retest and alternate-form reliability, concurrent validity (using the QUEST as a comparison tool), and scale analysis to ensure that the full range of the scale is being used for each item. Ultimately, we hope to establish a fully standardized version of the ATOM that can be disseminated and used clinically across a variety of populations and clinical settings.

Acknowledgments
The research reported in this article was supported in part by:
- Langeloth Foundation, NY, NY
- RERC on Wheeled Mobility, NIDRR, H133E030035
- Office of the Dean, School of Public Health & Health Professions, University at Buffalo

References
1. Gelderblom, G.J., & de Witte, L.P. (2002). The assessment of assistive technology outcomes, effects and costs. Technology and Disability, 14, 91-94.
2. Lenker, J., Scherer, M., Fuhrer, M., Jutai, J., & DeRuyter, F. (2005). Psychometric and administrative properties of measures used in assistive technology device outcomes research. Assistive Technology, 17(1), 7-22.
3. Waltz, C.F., Strickland, O., & Lenz, E. (2005). Measurement in nursing research (3rd ed.). Philadelphia: F.A. Davis.

Appendix
Content Validity Scale: Relevance
1. Item is not relevant
2. Item is somewhat relevant
3. Item is very relevant
4. Item is highly relevant

Content Validity Scale: Clarity
1. Item is unclear
2. Item needs substantial revision
3. Item needs minor revision
4. Item is very clear
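For readers reproducing a similar expert-review analysis, the item-retention rule and agreement index described in the Analysis section can be sketched in a few lines of Python. This is an illustrative sketch only: the study's exact CVI formula follows Waltz et al. [3] and is not reproduced in the text, so the common proportion-based convention below (share of experts rating an item "3" or "4," averaged across items) is an assumption, and the ratings shown are hypothetical.

```python
# Illustrative sketch of a content validity analysis on a 4-point
# expert rating scale (1 = lowest, 4 = highest). Assumed convention:
# item-level CVI = proportion of experts rating the item 3 or 4;
# scale-level CVI = mean of the item-level values.

def item_cvi(ratings, threshold=3):
    """Proportion of experts rating the item at or above `threshold`."""
    return sum(1 for r in ratings if r >= threshold) / len(ratings)

def scale_cvi(all_ratings):
    """Mean of the item-level CVIs across all items."""
    return sum(item_cvi(r) for r in all_ratings) / len(all_ratings)

def retain_item(ratings, min_agreement=5):
    """Retention rule used in the study: keep an item if at least
    `min_agreement` experts rate its relevance "3" or "4"."""
    return sum(1 for r in ratings if r >= 3) >= min_agreement

# Hypothetical relevance ratings from eight experts for two items
relevance = [
    [4, 4, 3, 4, 3, 4, 4, 3],  # item 1: all eight rate >= 3
    [4, 3, 3, 2, 4, 4, 3, 2],  # item 2: six of eight rate >= 3
]
print(item_cvi(relevance[0]))          # 1.0
print(retain_item(relevance[1]))       # True (6 >= 5)
print(round(scale_cvi(relevance), 3))  # 0.875
```

Under this convention, a scale-level CVI near the values reported above (0.77-0.88) indicates that most experts rated most items in the top two scale categories.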