Assessment Criteria Jessie Johncock Mod. 2 SPE 536 October 7, 2012
Assessment • “Assessment in special education is the process used to determine a student’s specific learning strengths and needs and to determine whether or not that student is eligible for special education services” (Pierangelo & Giuliani, 2008).
Descriptive Statistics • Summary statistics, such as percentages, averages, and measures of variability, that are calculated for a specific group of individuals. • Descriptive statistics are used to describe groups of numerical data, such as test scores, the number of hours of instruction, or the number of students enrolled in a particular classroom setting. (Farlex, 2012)
Dimensions of Descriptive Statistics
• Variable (x) - A measurable characteristic. Individual measurements of a variable are called variates, observations, or cases.
• Population (X) - All subjects or objects possessing some common specified characteristic. The population in a statistical investigation is arbitrarily defined by naming its unique properties.
• Parameter - A measurable characteristic of a population, such as the population mean or standard deviation.
• Sample - A smaller group of subjects or objects selected from a larger group (the population).
• Frequency graph - A picture depicting the number of times an event occurred.
• Bar graph or histogram - A frequency graph in which the number of blocks or the length of a bar represents the frequency of occurrence.
• Frequency polygon - A modification of the bar graph with lines connecting the midpoints of the tops of the bars.
• Frequency curve - A modification of a frequency polygon with the sharp corners rounded. The areas under the connecting line of the bar graph, the frequency polygon, and the frequency curve are equivalent and represent frequency of occurrence.
• Mean (µ or x̄) - The arithmetic mean: a number having an intermediate value among the numbers in the group from which it was derived and expressing their average value. It is the simple average formed by adding the numbers together and dividing by the number of numbers in the group. (Key, 1997)
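The mean and measures of variability defined above can be illustrated with a short sketch. The scores below are invented for illustration, not taken from the source:

```python
# Descriptive statistics for a hypothetical sample of test scores.
scores = [72, 85, 90, 68, 77, 85, 93, 80]  # measurements of the variable x

n = len(scores)
mean = sum(scores) / n  # arithmetic mean: sum divided by the number of scores

# Variance and standard deviation summarize variability around the mean
# (population formula: divide by n).
variance = sum((x - mean) ** 2 for x in scores) / n
std_dev = variance ** 0.5

print(f"n = {n}, mean = {mean:.2f}, std dev = {std_dev:.2f}")
```

If these eight scores were treated as a sample drawn from a larger population, the sample mean would serve as an estimate of the population parameter µ.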
Reliability • Reliability describes the overall consistency of a measure. A measure has high reliability if it produces similar results under consistent conditions.
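One common way to quantify "similar results under consistent conditions" is test-retest reliability: correlate scores from two administrations of the same test. The scores below and the use of a Pearson correlation are an illustrative sketch, not a method named in the source:

```python
# Hypothetical scores from two administrations of the same test.
first = [72, 85, 90, 68, 77, 85, 93, 80]
second = [70, 88, 89, 71, 75, 84, 95, 78]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(first, second)
print(f"test-retest reliability r = {r:.2f}")  # values near 1 indicate high consistency
```

A coefficient near 1.0 suggests the measure ranks examinees consistently across administrations; a coefficient near 0 would indicate poor reliability.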
Validity • Validity is the degree to which the evidence supports the interpretation of test scores. (AERA, 1999)
Norm-Referenced Tests (NRTs) vs. Criterion-Referenced Tests (CRTs) • These two types of test differ in their purpose, in how their content is selected, and in how their scores are interpreted.
Purpose
• NRT: To rank each student with respect to the achievement of others in broad areas of knowledge, and to discriminate between high and low achievers.
• CRT: To determine whether each student has achieved specific skills or concepts, and to establish each student’s initial background knowledge on specific content. The goal is to decide whether a student has demonstrated mastery of a certain skill or set of skills. (Huitt, 1996)
Content
• NRT: Measures broad skill areas sampled from a variety of textbooks, syllabi, and the judgments of curriculum experts.
• CRT: Measures the particular skills that make up a designated curriculum; these skills are identified by educators, and each skill is expressed as an instructional objective. (Huitt, 1996)
Score Interpretation
• NRT: Each individual is compared with other examinees and assigned a score, typically expressed as a percentile or a grade-equivalent score.
• CRT: Each student is compared with a specific standard for acceptable achievement; the performance of other examinees is not taken into consideration. Student achievement is reported for individual skills. (Huitt, 1996)
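The two interpretations can be contrasted with a toy example. The norm group, the student's score, and the mastery cutoff below are all invented for illustration:

```python
# Hypothetical norm group scores (sorted) and one student's score.
scores = [55, 62, 70, 74, 80, 85, 88, 91, 95, 98]
student_score = 85

# NRT view: percentile rank -- the share of the norm group scoring below the student.
percentile = 100 * sum(s < student_score for s in scores) / len(scores)

# CRT view: compare against a fixed standard for acceptable achievement,
# ignoring how other examinees performed.
MASTERY_CUTOFF = 80  # hypothetical cutoff
mastered = student_score >= MASTERY_CUTOFF

print(f"NRT percentile rank: {percentile:.0f}")
print(f"CRT mastery (cutoff {MASTERY_CUTOFF}): {mastered}")
```

Note that the NRT result would change if the norm group changed, while the CRT result depends only on the student's score and the fixed standard.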
Conclusion • To assess a student’s abilities and comprehension properly, teachers need to be aware of the different types of assessment, know how to interpret the results of an assessment, and be able to determine the appropriate response for the student from the evidence obtained (e.g., whether the student should receive special services).
References
• American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
• Huitt, W. (1996). Measurement and evaluation: Criterion- versus norm-referenced testing. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved October 1, 2012 from http://www.edpsycinteractive.org/topics/measeval/crnmref.html
• Key, J. P. (1997). Descriptive statistics. Research design in occupational education. Oklahoma City, OK: Oklahoma State University. Retrieved October 1, 2012 from http://www.okstate.edu/ag/agedcm4h/academic/aged5980a/5980/newpage22.htm
• Pierangelo, R., & Giuliani, G. A. (Eds.). (2008). Understanding assessment in the special education process: A step-by-step guide for educators. Thousand Oaks, CA: Corwin Press. ISBN-13: 9781412917919
• Popham, J. W. (1975). Educational evaluation. Englewood Cliffs, NJ: Prentice-Hall.