50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of innovation and enhancement.

Learn more about the history of ERIC (PDF).

Showing 91 to 105 of 520 results
Peer reviewed
Stone, Elizabeth; Cook, Linda; Cahalan-Laitusis, Cara; Cline, Frederick – Applied Measurement in Education, 2010
This validity study examined differential item functioning (DIF) results on large-scale state standards-based English-language arts assessments at grades 4 and 8 for students without disabilities taking the test under standard conditions and students who are blind or visually impaired taking the test with either a large print or braille form.…
Descriptors: Test Bias, Large Type Materials, Testing Accommodations, Language Arts
Peer reviewed
Cook, Linda; Eignor, Daniel; Sawaki, Yasuyo; Steinberg, Jonathan; Cline, Frederick – Applied Measurement in Education, 2010
This study compared the underlying factors measured by a state standards-based grade 4 English-Language Arts (ELA) assessment given to several groups of students. The focus of the research was to gather evidence regarding whether or not the tests measured the same construct or constructs for students without disabilities who took the test under…
Descriptors: Language Arts, Educational Assessment, Grade 4, State Standards
Peer reviewed
Hein, Serge F.; Skaggs, Gary E. – Applied Measurement in Education, 2009
Only a small number of qualitative studies have investigated panelists' experiences during standard-setting activities or the thought processes associated with panelists' actions. This qualitative study involved an examination of the experiences of 11 panelists who participated in a prior, one-day standard-setting meeting in which either the…
Descriptors: Focus Groups, Standard Setting, Cutting Scores, Cognitive Processes
Peer reviewed
Davis, Susan L.; Buckendahl, Chad W. – Applied Measurement in Education, 2009
In response to a Congressional mandate, an evaluation of the National Assessment of Educational Progress (NAEP) was undertaken beginning in 2004. The evaluation design included a series of studies that encompassed the breadth and selected areas of depth of the NAEP program. Studies were identified with input from key stakeholders and were…
Descriptors: National Competency Tests, Evaluation Methods, Evaluation Criteria, Test Results
Peer reviewed
Buckendahl, Chad W.; Plake, Barbara S.; Davis, Susan L. – Applied Measurement in Education, 2009
The National Assessment of Educational Progress (NAEP) program is a series of periodic assessments administered nationally to samples of students and designed to measure different content areas. This article describes a multi-year study that focused on the breadth of the development, administration, maintenance, and renewal of the assessments in…
Descriptors: National Competency Tests, Audits (Verification), Testing Programs, Program Evaluation
Peer reviewed
Wells, Craig S.; Baldwin, Su; Hambleton, Ronald K.; Sireci, Stephen G.; Karatonis, Ana; Jirka, Stephen – Applied Measurement in Education, 2009
Score equity assessment is an important analysis to ensure inferences drawn from test scores are comparable across subgroups of examinees. The purpose of the present evaluation was to assess the extent to which the Grade 8 NAEP Math and Reading assessments for 2005 were equivalent across selected states. More specifically, the present study…
Descriptors: National Competency Tests, Test Bias, Equated Scores, Grade 8
Peer reviewed
Noell, Jay; Ginsburg, Alan – Applied Measurement in Education, 2009
The report, "Evaluation of the National Assessment of Educational Progress", provides a number of recommendations for addressing validity concerns about NAEP. This article identifies actions that could be taken by the Congress, the National Center for Education Statistics, and the National Assessment Governing Board--which share responsibility for…
Descriptors: National Competency Tests, Federal Government, Public Agencies, Test Validity
Peer reviewed
Sireci, Stephen G.; Hauger, Jeffrey B.; Wells, Craig S.; Shea, Christine; Zenisky, April L. – Applied Measurement in Education, 2009
The National Assessment Governing Board used a new method to set achievement level standards on the 2005 Grade 12 NAEP Math test. In this article, we summarize our independent evaluation of the process used to set these standards. The evaluation data included observations of the standard-setting meeting, observations of advisory committee meetings…
Descriptors: Advisory Committees, Mathematics Tests, Standard Setting, National Competency Tests
Peer reviewed
Zenisky, April L.; Hambleton, Ronald K.; Sireci, Stephen G. – Applied Measurement in Education, 2009
How a testing agency approaches score reporting can have a significant impact on the perception of that assessment and the usefulness of the information among intended users and stakeholders. Too often, important decisions about reporting test data are left to the end of the test development cycle, but by considering the audience(s) and the kinds…
Descriptors: National Competency Tests, Scores, Test Results, Information Dissemination
Peer reviewed
Hambleton, Ronald K.; Sireci, Stephen G.; Smith, Zachary R. – Applied Measurement in Education, 2009
In this study, we mapped achievement levels from the National Assessment of Educational Progress (NAEP) onto the score scales for selected assessments from the Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA). The mapping was conducted on NAEP, TIMSS, and PISA Mathematics…
Descriptors: National Competency Tests, Mathematics Achievement, Mathematics Tests, Comparative Analysis
Peer reviewed
Zhang, Bo; Ohland, Matthew W. – Applied Measurement in Education, 2009
One major challenge in using group projects to assess student learning is accounting for the differences of contribution among group members so that the mark assigned to each individual actually reflects their performance. This research addresses the validity of grading group projects by evaluating different methods that derive individualized…
Descriptors: Monte Carlo Methods, Validity, Student Evaluation, Evaluation Methods
Peer reviewed
Thompson, James J.; Yang, Tong; Chauvin, Sheila W. – Applied Measurement in Education, 2009
In some professions, speed and accuracy are as important as acquired requisite knowledge and skills. The availability of computer-based testing now facilitates examination of these two important aspects of student performance. We found that student response times in a conventional non-speeded multiple-choice test, at both the global and individual…
Descriptors: Reaction Time, Test Items, Student Reaction, Multiple Choice Tests
Peer reviewed
Lane, Suzanne; Zumbo, Bruno D.; Abedi, Jamal; Benson, Jeri; Dossey, John; Elliott, Stephen N.; Kane, Michael; Linn, Robert; Paredes-Ziker, Cindy; Rodriguez, Michael; Schraw, Gregg; Slattery, Jean; Thomas, Veronica; Willhoft, Joe – Applied Measurement in Education, 2009
Given the changing landscape of educational accountability at the local, state, and national levels, and the changes in the uses of the National Assessment of Educational Progress (NAEP), including the evolving uses of NAEP as a policy tool to interpret state assessment and accountability systems, an explicit statement of the current and potential…
Descriptors: National Competency Tests, Academic Achievement, Accountability, Test Validity
Peer reviewed
Leighton, Jacqueline P.; Cui, Ying; Cor, M. Ken – Applied Measurement in Education, 2009
The objective of the present investigation was to compare the adequacy of two cognitive models for predicting examinee performance on a sample of algebra I and II items from the March 2005 administration of the SAT™. The two models included one generated from verbal reports provided by 21 examinees as they solved the SAT™ items, and the…
Descriptors: Test Items, Inferences, Cognitive Ability, Prediction
Peer reviewed
Osborn Popp, Sharon E.; Ryan, Joseph M.; Thompson, Marilyn S. – Applied Measurement in Education, 2009
Scoring rubrics are routinely used to evaluate the quality of writing samples produced for writing performance assessments, with anchor papers chosen to represent score points defined in the rubric. Although the careful selection of anchor papers is associated with best practices for scoring, little research has been conducted on the role of…
Descriptors: Writing Evaluation, Scoring Rubrics, Selection, Scoring