50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.


Showing 1 to 15 of 37 results
Peer reviewed | Direct link
Ketterlin-Geller, Leanne R.; Yovanoff, Paul; Jung, EunJu; Liu, Kimy; Geller, Josh – Educational Assessment, 2013
In this article, we highlight the need for a precisely defined construct in score-based validation and discuss the contribution of cognitive theories to accurately and comprehensively defining the construct. We propose a framework for integrating cognitively based theoretical and empirical evidence to specify and evaluate the construct. We apply…
Descriptors: Test Validity, Construct Validity, Scores, Evidence
Peer reviewed | Direct link
Bell, Courtney A.; Gitomer, Drew H.; McCaffrey, Daniel F.; Hamre, Bridget K.; Pianta, Robert C.; Qi, Yi – Educational Assessment, 2012
This article develops a validity argument approach for use on observation protocols currently used to assess teacher quality for high-stakes personnel and professional development decisions. After defining the teaching quality domain, we articulate an interpretive argument for observation protocols. To illustrate the types of evidence that might…
Descriptors: Teacher Effectiveness, Teacher Evaluation, Observation, Validity
Peer reviewed | Direct link
Meyer, J. Patrick; Cash, Anne H.; Mashburn, Andrew – Educational Assessment, 2011
Student-teacher interactions are dynamic relationships that change and evolve over the course of a school year. Measuring classroom quality through observations that focus on these interactions presents challenges when observations are conducted throughout the school year. Variability in observed scores could reflect true changes in the quality of…
Descriptors: Observation, Reliability, Teacher Student Relationship, Error of Measurement
Peer reviewed | Direct link
Young, John W. – Educational Assessment, 2009
In this article, I specify a conceptual framework for test validity research on content assessments taken by English language learners (ELLs) in U.S. schools in grades K-12. This framework is modeled after one previously delineated by Willingham et al. (1988), which was developed to guide research on students with disabilities. In this framework…
Descriptors: Test Validity, Evaluation Research, Achievement Tests, Elementary Secondary Education
Peer reviewed | Direct link
Banks, Kathleen – Educational Assessment, 2009
The purpose of this article is to describe and demonstrate a three-step process of using differential distractor functioning (DDF) in a post hoc analysis to understand sources of differential item functioning (DIF) in multiple-choice testing. The process is demonstrated on two multiple-choice tests that used complex alternatives (e.g., "No…
Descriptors: Test Bias, Multiple Choice Tests, Testing, Gender Differences
Peer reviewed | Direct link
Hur, Eun Hye; Glassman, Michael; Kim, Yunhwan – Educational Assessment, Evaluation and Accountability, 2013
This paper developed a Democratic Classroom Survey to measure students' perceived democratic environment of the classroom. Perceived democratic environment is one of the most important variables for understanding classroom activity and indeed any type of group activity, but actually measuring perceptions in an objective manner has been…
Descriptors: Classroom Environment, Test Construction, Program Validation, Democratic Values
Peer reviewed | Direct link
Ferrara, Steve – Educational Assessment, 2008
The No Child Left Behind Act of 2001 requires all states to assess the English proficiency of English language learners each school year. Under Title I and Title III of No Child Left Behind, states are required to measure the annual growth of students' English language development in reading, listening, writing, and speaking and in comprehension…
Descriptors: Speech Communication, Federal Legislation, Second Language Learning, Psychometrics
Peer reviewed | Direct link
Abedi, Jamal – Educational Assessment, 2008
This article discusses the status of existing English language proficiency (ELP) tests, and compares the content coverage and psychometric characteristics of ELP assessments that existed prior to the implementation of the No Child Left Behind Act (NCLB) with those developed after NCLB Title III guidelines were introduced. The article argues that…
Descriptors: Federal Legislation, Psychometrics, English (Second Language), Language Proficiency
Peer reviewed | Direct link
Amrein-Beardsley, Audrey; Barnett, Joshua H. – Educational Assessment, Evaluation and Accountability, 2012
Over the previous two decades, the era of accountability has amplified efforts to measure educational effectiveness more than Edward Thorndike, the father of educational measurement, likely would have imagined. Expressly, the measurement structure for evaluating educational effectiveness continues to rely increasingly on one sole…
Descriptors: Accountability, Educational Assessment, Educational Quality, Measurement
Peer reviewed | Direct link
Falk, Beverly; Ort, Suzanne Wichterle; Moirs, Katie – Educational Assessment, 2007
This article describes the findings of studies conducted on a large-scale, classroom-based performance assessment of literacy for the early grades designed to provide information that is useful for reporting, as well as teaching. Technical studies found the assessment to be a promising instrument that is reliable and valid. Follow-up studies of…
Descriptors: Program Effectiveness, Performance Based Assessment, Student Evaluation, Evaluation Research
Peer reviewed | Direct link
Baker, Eva L. – Educational Assessment, 2007
This article describes the history, evidence warrants, and evolution of the Center for Research on Evaluation, Standards, and Student Testing's (CRESST) model-based assessments. It considers alternative interpretations of scientific or practical models and illustrates how model-based assessment addresses both definitions. The components of the…
Descriptors: Educational Testing, Computer Assisted Testing, Validity, Test Construction
Peer reviewed | Direct link
Briggs, Derek C.; Alonzo, Alicia C.; Schwab, Cheryl; Wilson, Mark – Educational Assessment, 2006
In this article we describe the development, analysis, and interpretation of a novel item format we call Ordered Multiple-Choice (OMC). A unique feature of OMC items is that they are linked to a model of student cognitive development for the construct being measured. Each of the possible answer choices in an OMC item is linked to developmental…
Descriptors: Diagnostic Tests, Multiple Choice Tests, Cognitive Development, Item Response Theory
Peer reviewed | Direct link
Gearhart, Maryl; Nagashima, Sam; Pfotenhauer, Jennifer; Clark, Shaunna; Schwab, Cheryl; Vendlinski, Terry; Osmundson, Ellen; Herman, Joan; Bernbaum, Diana J. – Educational Assessment, 2006
This article reports findings on growth in 3 science teachers' expertise with interpretation of student work over 1 year of participation in a program. The program was designed to strengthen classroom assessment. Using a framework for classroom assessment expertise, we analyze patterns of teacher learning, and the roles of the professional program…
Descriptors: Elementary Secondary Education, Science Teachers, Knowledge Level, Instructional Materials
Peer reviewed | Direct link
Kelly, Anthony; Downey, Christopher – Educational Assessment, Evaluation and Accountability, 2010
Value-added measures can be used to allocate funding to schools, to identify those institutions in need of special attention and to underpin government guidance on targets. In England, there has been a tendency to include in these measures an ever-greater number of contextualising variables and to develop ever-more complex models that encourage…
Descriptors: School Effectiveness, Foreign Countries, Academic Achievement, Educational Finance
Peer reviewed | Direct link
Spillane, James P.; Pareja, Amber Stitziel; Dorner, Lisa; Barnes, Carol; May, Henry; Huff, Jason; Camburn, Eric – Educational Assessment, Evaluation and Accountability, 2010
In this paper we described how we mixed research approaches in a Randomized Control Trial (RCT) of a school principal professional development program. Using examples from our study we illustrate how combining qualitative and quantitative data can address some key challenges from validating instruments and measures of mediator variables to…
Descriptors: Research Methodology, Statistical Analysis, Qualitative Research, Principals
Previous Page | Next Page »
Pages: 1  |  2  |  3