ERIC Number: ED503295
Record Type: Non-Journal
Publication Date: 2007-May
Pages: 155
Abstractor: As Provided
School Improvement under Test-Driven Accountability: A Comparison of High- and Low-Performing Middle Schools in California. CSE Report 717
Mintrop, Heinrich; Trujillo, Tina
National Center for Research on Evaluation, Standards, and Student Testing (CRESST)
Based on in-depth data from nine demographically similar schools, the study asks five questions about key aspects of the improvement process that speak to the consequential validity of accountability indicators: Do schools that differ widely according to system performance criteria also differ in the quality of the educational experience they provide to students? Are schools that have posted high growth on the state's performance index more effective organizationally? Do high-performing schools respond more productively to the messages of their state accountability system? Do high- and low-performing schools exhibit different approaches to organizational learning and teacher professionalism? Is district instructional management in an aligned state accountability system related to performance? We report our findings in three results papers (Mintrop & Trujillo, 2007a, 2007b; Trujillo & Mintrop, 2007) and this technical report. In a nutshell, the results papers show that, across the nine case study schools, one positive performance outlier did indeed differ in quality of teaching, organizational effectiveness, response to accountability, and patterns of organizational learning. Across the other eight schools, however, the patterns blurred. We conclude that, apart from performance differences at the extreme positive and negative margins, the relationship between system-designated performance levels and improvement processes on the ground is uncertain and far from solid. The papers try to elucidate why this may be so. This final technical report summarizes the major components of the study design and methodology, including case selection, instrumentation, data collection, and data analysis techniques. We describe the context of the study as well as descriptive data on our cases and procedures.
(Appended are: (A) Initial Interview Protocol; (B) Follow-Up Interview Protocol; (C) First Impressions Sheet; (D) Teacher and Student Questionnaire Variables; (E) Student Questionnaire Scales; (F) Teacher Questionnaire Scales; (G) Classroom Observation Measures; (H) 7th- and 8th-Grade English Language Arts Student Writing Scoring Rubric; (I) P-Weights Applied to Stratified Student Questionnaire Data; (J) Student Questionnaire Items and Scales: Descriptive Statistics; (K) Student Perception Scales: Survey Regression Results; (L) Classroom Observation Data: Descriptive Statistics; (M) Classroom Observation Data: Wilcoxon-Mann-Whitney Test Results; (N) P-Weights Applied to Stratified Student Writing Sample Data; (O) English Language Arts Writing Samples: Descriptive Statistics--Estimated Mean Scores and Standard Errors; (P) Student Writing Sample Scores: Survey Regression Results; (Q) Individual Teacher Questionnaire Items: Descriptive Statistics; (R) Teacher Questionnaire Scales: Descriptive Statistics; (S) P-Weights Applied to Teacher Questionnaire Data; (T) Teacher Questionnaire Scales: Survey Regression Results; (U) School Background Facts: Wilcoxon-Mann-Whitney Results; and (V) Interview Codes. Contains 34 tables.)
National Center for Research on Evaluation, Standards, and Student Testing (CRESST). 300 Charles E Young Drive N, GSE&IS Building 3rd Floor, Mailbox 951522, Los Angeles, CA 90095-1522. Tel: 310-206-1532; Fax: 310-825-3883; Web site:
Publication Type: Reports - Evaluative
Education Level: Middle Schools
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: University of California, Los Angeles, Center for the Study of Evaluation
Grant or Contract Numbers: N/A