50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC.

Audience: Teachers
Showing 1 to 15 of 34 results
Peer reviewed
Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary – Educational Assessment, 2014
Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…
Descriptors: Error of Measurement, Scores, Reports, Comparative Analysis
Peer reviewed
Cassady, Jerrell C.; Finch, W. Holmes – Educational Assessment, 2014
This study validated the factor structure of a popular assessment of learners' cognitive test anxiety. Following recent findings from a study of Argentinean students' use of the Spanish version of the Cognitive Test Anxiety Scale (CTAS), this study tested the factor structure using data from 742 students who completed the original…
Descriptors: Factor Structure, Test Anxiety, Cognitive Tests, Rating Scales
Peer reviewed
Hooper, Jay; Cowell, Ryan – Educational Assessment, 2014
There has been much research and discussion on the principles of standards-based grading, and there is a growing consensus of best practice. Even so, the actual process of implementing standards-based grading at a school or district level can be a significant challenge. There are very practical questions that remain unclear, such as how the grades…
Descriptors: True Scores, Grading, Academic Standards, Computation
Peer reviewed
Alavi, Sayyed Mohammad; Taghizadeh, Mahboubeh – Educational Assessment, 2014
Dynamic assessment is a procedure in which development is simultaneously assessed and improved with regard to the individual's or group's Zone of Proximal Development (ZPD; Lantolf & Poehner, 2004). This study aimed to follow dynamic assessment and investigate the impact of three types of implicit and explicit feedback on the essay…
Descriptors: Foreign Countries, Alternative Assessment, Writing Evaluation, Feedback (Response)
Peer reviewed
Sandberg Patton, Karen L.; Reschly, Amy L.; Appleton, James – Educational Assessment, 2014
With the concurrent emphasis on accountability, prevention, and early intervention, curriculum-based measurement of reading (R-CBM) is playing an increasingly important role in the educational process. This study investigated the differences in diagnostic accuracy and utility between commercial norms and local norms when making high-stakes, local…
Descriptors: Curriculum Based Assessment, Reading Tests, Test Norms, Local Norms
Peer reviewed
Suppes, Patrick; Holland, Paul W.; Hu, Yuanan; Vu, Minh-thien – Educational Assessment, 2013
Stanford University's Education Program for Gifted Youth (EPGY) conducted a randomized-treatment experiment during the 2006-2007 school year to test the efficacy, for Title I students, of the technological and individualized EPGY Kindergarten through Grade 5 Mathematics Course Sequence, modified for the Title I schools. Restricting attention…
Descriptors: Instructional Effectiveness, Individualized Instruction, Online Courses, Elementary School Mathematics
Peer reviewed
Sparfeldt, Jorn R.; Kimmel, Rumena; Lowenkamp, Lena; Steingraber, Antje; Rost, Detlef H. – Educational Assessment, 2012
Multiple-choice (MC) reading comprehension test items comprise three components: text passage, questions about the text, and MC answers. The construct validity of this format has been repeatedly criticized. In three between-subjects experiments, fourth graders (N₁ = 230, N₂ = 340, N₃ = 194) worked on three…
Descriptors: Test Items, Reading Comprehension, Construct Validity, Grade 4
Peer reviewed
Huffman, Loreen; Adamopoulos, Anthony; Murdock, Gwendolyn; Cole, AmyKay; McDermid, Robert – Educational Assessment, 2011
Accountability in higher education has increased, with more institutions requiring standardized tests. These tests are high stakes for institutions but low stakes for students, who seldom experience consequences for their performance. This study describes how one psychology department improved students' scores on the Psychology Area…
Descriptors: Student Motivation, Undergraduate Students, Program Evaluation, Standardized Tests
Peer reviewed
Taylor, Catherine S.; Lee, Yoonsun – Educational Assessment, 2011
This article presents a study of ethnic Differential Item Functioning (DIF) for 4th-, 7th-, and 10th-grade reading items on a state criterion-referenced achievement test. The tests, administered from 1997 to 2001, were composed of multiple-choice and constructed-response items. Item performance by focal groups (i.e., students from Asian/Pacific Island,…
Descriptors: Test Bias, Test Items, Pacific Islanders, American Indians
Peer reviewed
Liu, Ou Lydia; Lee, Hee-Sun; Linn, Marcia C. – Educational Assessment, 2010
To improve student science achievement in the United States, we need inquiry-based instruction that promotes coherent understanding, along with assessments that are aligned with that instruction. Instead, current textbooks often offer fragmented ideas, and most assessments tap only recall of details. In this study we implemented 10 inquiry-based science…
Descriptors: Inquiry, Active Learning, Science Achievement, Science Instruction
Peer reviewed
Fagioli, Loris P. – Educational Assessment, Evaluation and Accountability, 2014
This study compared a value-added approach to school accountability to the currently used metrics of accountability in California of Adequate Yearly Progress (AYP) and Academic Performance Index (API). Five-year student panel data (N = 53,733) from 29 elementary schools in a large California school district were used to address the research…
Descriptors: Accountability, Achievement Gains, Measurement, Measurement Techniques
Peer reviewed
Fulmer, Gavin W.; Polikoff, Morgan S. – Educational Assessment, Evaluation and Accountability, 2014
An essential component in school accountability efforts is for assessments to be well-aligned with the standards or curriculum they are intended to measure. However, relatively little prior research has explored methods to determine statistical significance of alignment or misalignment. This study explores analyses of alignment as a special case…
Descriptors: Alignment (Education), Educational Assessment, Academic Standards, Regression (Statistics)
Peer reviewed
Kim, Jinok; Herman, Joan L. – Educational Assessment, 2009
In this three-state study, the authors estimate the magnitudes of achievement gaps between English learner (EL) students and their non-EL peers, while avoiding typical caveats in cross-sectional studies. The authors further compare the observed achievement gaps across three distinct dimensions (content areas, grades, and states) and report…
Descriptors: English (Second Language), Second Language Learning, Achievement Gap, Academic Achievement
Peer reviewed
Finch, Holmes; Barton, Karen; Meyer, Patrick – Educational Assessment, 2009
The No Child Left Behind Act resulted in an increased reliance on large-scale standardized tests to assess the progress of individual students as well as schools. In addition, emphasis was placed on including all students, including those with disabilities, in testing programs. As a result, the role of testing accommodations has become more…
Descriptors: Test Bias, Testing Accommodations, Standardized Tests, Mathematics Tests
Peer reviewed
Banks, Kathleen – Educational Assessment, 2009
The purpose of this article is to describe and demonstrate a three-step process of using differential distractor functioning (DDF) in a post hoc analysis to understand sources of differential item functioning (DIF) in multiple-choice testing. The process is demonstrated on two multiple-choice tests that used complex alternatives (e.g., "No…
Descriptors: Test Bias, Multiple Choice Tests, Testing, Gender Differences