Peer reviewed
ERIC Number: ED544674
Record Type: Non-Journal
Publication Date: 2014-Jan
Pages: 33
Abstractor: ERIC
ISBN: N/A
ISSN: N/A
EISSN: N/A
Comparing Estimates of Teacher Value-Added Based on Criterion- and Norm-Referenced Tests. REL 2014-004
Stuit, David; Austin, Megan J.; Berends, Mark; Gerdeman, R. Dean
Regional Educational Laboratory Midwest
Recent changes to state accountability laws have prompted school districts to design teacher performance evaluation systems that incorporate student achievement (student growth) as a major component. As a consequence, some states and districts are considering teacher value-added models as part of teacher performance evaluations. Value-added models use statistical techniques to estimate teachers' (or schools') contributions to growth in student achievement over time. Designers of new performance evaluation systems need to understand the factors that can affect the validity and reliability of value-added results, or of other measures based on student assessment data, used to evaluate teacher performance.

This study provides new information on the degree to which teachers' value-added estimates differ depending on the assessment used to measure their students' achievement growth. The study used three analytic strategies to quantify the similarities and differences in estimates of teacher value-added from the Indiana Statewide Testing for Educational Progress Plus (ISTEP+) and the Measures of Academic Progress (MAP): correlations of value-added estimates based on the two assessments, comparisons of the quintile rankings of value-added estimates on the two assessments, and comparisons of the classifications of value-added estimates on the two assessments according to whether their 95 percent confidence intervals were above, below, or overlapping the sample average.

Overall, the findings indicate variability between the estimates of teacher value-added from two different tests administered to the same students in the same years. Specific sources of the variability across assessments could not be isolated because of limitations in the data and research design; however, the research literature points to measurement error as an important contributor. The findings indicate that incorporating confidence intervals for value-added estimates reduces the likelihood that teachers' performance will be misclassified because of measurement error.
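The three comparison strategies described in the abstract can be sketched in code. This is a minimal illustration, not the study's actual value-added model: the function names, the simulated estimates, and the standard errors are assumptions introduced here for demonstration only.

```python
import statistics

def pearson(x, y):
    """Pearson correlation between two lists of value-added estimates."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def quintile_ranks(values):
    """Assign each estimate to a quintile (1 = lowest fifth, 5 = highest)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, i in enumerate(order):
        ranks[i] = rank * 5 // len(values) + 1
    return ranks

def ci_classification(estimates, std_errors, z=1.96):
    """Classify each teacher's estimate relative to the sample average,
    using a 95 percent confidence interval (z = 1.96):
    'above' / 'below' if the whole interval clears the mean, else 'overlapping'."""
    mean = statistics.fmean(estimates)
    labels = []
    for est, se in zip(estimates, std_errors):
        lo, hi = est - z * se, est + z * se
        if lo > mean:
            labels.append("above")
        elif hi < mean:
            labels.append("below")
        else:
            labels.append("overlapping")
    return labels

# Hypothetical value-added estimates for five teachers on two tests.
test_a = [0.12, -0.30, 0.05, 0.40, -0.10]
test_b = [0.08, -0.25, 0.15, 0.30, -0.02]
errors = [0.10, 0.10, 0.12, 0.08, 0.11]

print("correlation:", round(pearson(test_a, test_b), 3))
print("quintiles A:", quintile_ranks(test_a))
print("quintiles B:", quintile_ranks(test_b))
print("CI classes A:", ci_classification(test_a, errors))
```

The third function illustrates the report's closing point: a teacher is labeled "above" or "below" average only when the entire confidence interval clears the sample mean, so wider intervals (more measurement error) push more teachers into the "overlapping" category rather than risking misclassification.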
Three appendices present: (1) Literature review; (2) About the data and the value-added model; and (3) Supplemental analysis of correlations of students' scores on the Indiana Statewide Testing for Educational Progress Plus and Measures of Academic Progress. (Contains 13 notes, 2 boxes, and 11 tables.) [For the summary of this report, see ED544673.]
Regional Educational Laboratory Midwest. Available from: American Institutes for Research. 1120 East Diehl Road Suite 200, Naperville, IL 60563. Tel: 866-730-6735; Tel: 630-649-6500; Fax: 630-649-6700; e-mail: relmidwest@air.org; Web site: http://www.relmidwest.org
Publication Type: Reports - Research
Education Level: Grade 4; Grade 5; Intermediate Grades; Elementary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: National Center for Education Evaluation and Regional Assistance (ED); Regional Educational Laboratory Midwest (ED)
Identifiers - Location: Indiana
Identifiers - Assessments and Surveys: Indiana Statewide Testing for Educational Progress Plus
IES Funded: Yes
Grant or Contract Numbers: N/A