50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Showing 1 to 15 of 45 results
Peer reviewed
Clauser, Jerome C.; Clauser, Brian E.; Hambleton, Ronald K. – Applied Measurement in Education, 2014
The purpose of the present study was to extend past work with the Angoff method for setting standards by examining judgments at the judge level rather than the panel level. The focus was on investigating the relationship between observed Angoff standard setting judgments and empirical conditional probabilities. This relationship has been used as a…
Descriptors: Standard Setting (Scoring), Validity, Reliability, Correlation
Peer reviewed
Deunk, Marjolein I.; van Kuijk, Mechteld F.; Bosker, Roel J. – Applied Measurement in Education, 2014
Standard setting methods, like the Bookmark procedure, are used to assist education experts in formulating performance standards. Small group discussion is meant to help these experts set more reliable and valid cutoff scores. This study is an analysis of 15 small group discussions during two standard setting trajectories and their effect…
Descriptors: Cutting Scores, Standard Setting, Group Discussion, Reading Tests
Peer reviewed
Steedle, Jeffrey T. – Applied Measurement in Education, 2014
Possible lack of motivation is a perpetual concern when tests have no stakes attached to performance. Specifically, the validity of test score interpretations may be compromised when examinees are unmotivated to exert their best efforts. Motivation filtering, a procedure that filters out apparently unmotivated examinees, was applied to the…
Descriptors: College Outcomes Assessment, Student Motivation, Sampling, Validity
Peer reviewed
Leighton, Jacqueline P. – Applied Measurement in Education, 2013
The Standards for Educational and Psychological Testing indicate that multiple sources of validity evidence should be used to support the interpretation of test scores. In the past decade, examinee response processes, as a source of validity evidence, have received increased attention. However, there have been relatively few methodological studies…
Descriptors: Psychological Testing, Standards, Interviews, Protocol Analysis
Peer reviewed
Cho, Hyun-Jeong; Lee, Jaehoon; Kingston, Neal – Applied Measurement in Education, 2012
This study examined the validity of test accommodation in third-eighth graders using differential item functioning (DIF) and mixture IRT models. Two data sets were used for these analyses. With the first data set (N = 51,591) we examined whether item type (i.e., story, explanation, straightforward) or item features were associated with item…
Descriptors: Testing Accommodations, Test Bias, Item Response Theory, Validity
Peer reviewed
Wolf, Mikyung Kim; Kim, Jinok; Kao, Jenny – Applied Measurement in Education, 2012
Glossary and reading aloud test items are commonly allowed in many states' accommodation policies for English language learner (ELL) students for large-scale mathematics assessments. However, little research is available regarding the effects of these accommodations on ELL students' performance. Further, no research exists that examines how…
Descriptors: Testing Accommodations, Glossaries, Reading Aloud to Others, Validity
Peer reviewed
Lee, Hee-Sun; Liu, Ou Lydia; Linn, Marcia C. – Applied Measurement in Education, 2011
This study explores measurement of a construct called knowledge integration in science using multiple-choice and explanation items. We use construct and instructional validity evidence to examine the role multiple-choice and explanation items play in measuring students' knowledge integration ability. For construct validity, we analyze item…
Descriptors: Knowledge Level, Construct Validity, Validity, Scaffolding (Teaching Technique)
Peer reviewed
Leighton, Jacqueline P.; Heffernan, Colleen; Cor, M. Kenneth; Gokiert, Rebecca J.; Cui, Ying – Applied Measurement in Education, 2011
The "Standards for Educational and Psychological Testing" indicate that test instructions, and by extension item objectives, presented to examinees should be sufficiently clear and detailed to help ensure that they respond as developers intend them to respond (Standard 3.20; AERA, APA, & NCME, 1999). The present study investigates the use of…
Descriptors: Test Construction, Validity, Evidence, Science Tests
Peer reviewed
Kane, Michael T.; Mroch, Andrew A. – Applied Measurement in Education, 2010
In evaluating the relationship between two measures across different groups (i.e., in evaluating "differential validity") it is necessary to examine differences in correlation coefficients and in regression lines. Ordinary least squares (OLS) regression is the standard method for fitting lines to data, but its criterion for optimal fit (minimizing…
Descriptors: Least Squares Statistics, Regression (Statistics), Differences, Validity
Peer reviewed
Wise, Lauress L. – Applied Measurement in Education, 2010
The articles in this special issue make two important contributions to our understanding of the impact of accommodations on test score validity. First, they illustrate a variety of methods for collection and rigorous analyses of empirical data that can supplant expert judgment of the impact of accommodations. These methods range from internal…
Descriptors: Reading Achievement, Educational Assessment, Test Reliability, Learning Disabilities
Peer reviewed
Allen, Jeff; Robbins, Steven B.; Sawyer, Richard – Applied Measurement in Education, 2010
Research on the validity of psychosocial factors (PSFs) and other noncognitive predictors of college outcomes has largely ignored the practical benefits implied by the validity. We summarize evidence of the validity of PSF measures as predictors of college outcomes and then explain how this validity directly translates into improved identification…
Descriptors: Institutional Research, Academic Persistence, Validity, At Risk Students
Peer reviewed
Stone, Clement A.; Ye, Feifei; Zhu, Xiaowen; Lane, Suzanne – Applied Measurement in Education, 2010
Although reliability of subscale scores may be suspect, subscale scores are the most common type of diagnostic information included in student score reports. This research compared methods for augmenting the reliability of subscale scores for an 8th-grade mathematics assessment. Yen's Objective Performance Index, Wainer et al.'s augmented scores,…
Descriptors: Item Response Theory, Case Studies, Reliability, Scores
Peer reviewed
Abedi, Jamal; Kao, Jenny C.; Leon, Seth; Mastergeorge, Ann M.; Sullivan, Lisa; Herman, Joan; Pope, Rita – Applied Measurement in Education, 2010
This study explores factors that affect the accessibility of reading comprehension assessments for students with disabilities in grade 8 public school classrooms. The study consisted of assessing students using reading comprehension passages that were broken down into shorter "segments" or "chunks" in order to assess the validity and effectiveness…
Descriptors: Reading Achievement, Educational Strategies, Recall (Psychology), Reading Comprehension
Peer reviewed
Stone, Elizabeth; Cook, Linda; Cahalan-Laitusis, Cara; Cline, Frederick – Applied Measurement in Education, 2010
This validity study examined differential item functioning (DIF) results on large-scale state standards-based English-language arts assessments at grades 4 and 8 for students without disabilities taking the test under standard conditions and students who are blind or visually impaired taking the test with either a large print or braille form.…
Descriptors: Test Bias, Large Type Materials, Testing Accommodations, Language Arts
Peer reviewed
Buckendahl, Chad W.; Plake, Barbara S.; Davis, Susan L. – Applied Measurement in Education, 2009
The National Assessment of Educational Progress (NAEP) program is a series of periodic assessments administered nationally to samples of students and designed to measure different content areas. This article describes a multi-year study that focused on the breadth of the development, administration, maintenance, and renewal of the assessments in…
Descriptors: National Competency Tests, Audits (Verification), Testing Programs, Program Evaluation