50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing 1 to 15 of 153 results
Peer reviewed | Direct link
Penfield, Randall David – Educational Measurement: Issues and Practice, 2014
A polytomous item is one for which the responses are scored according to three or more categories. Given the increasing use of polytomous items in assessment practices, item response theory (IRT) models specialized for polytomous items are becoming increasingly common. The purpose of this ITEMS module is to provide an accessible overview of…
Descriptors: Item Response Theory, Test Items, Models, Equations (Mathematics)
Peer reviewed | Direct link
Feinberg, Richard A.; Wainer, Howard – Educational Measurement: Issues and Practice, 2014
Subscores are often used to indicate test-takers' relative strengths and weaknesses and so help focus remediation. But a subscore is not worth reporting if it is too unreliable to believe or if it contains no information that is not already contained in the total score. It is possible, through the use of a simple linear equation provided in…
Descriptors: Scores, Equations (Mathematics), Prediction, Reliability
Peer reviewed | Direct link
Gierl, Mark J.; Lai, Hollis – Educational Measurement: Issues and Practice, 2013
Changes to the design and development of our educational assessments are resulting in the unprecedented demand for a large and continuous supply of content-specific test items. One way to address this growing demand is with automatic item generation (AIG). AIG is the process of using item models to generate test items with the aid of computer…
Descriptors: Educational Assessment, Test Items, Automation, Computer Assisted Testing
Peer reviewed | Direct link
Templin, Jonathan; Hoffman, Lesa – Educational Measurement: Issues and Practice, 2013
Diagnostic classification models (aka cognitive or skills diagnosis models) have shown great promise for evaluating mastery on a multidimensional profile of skills as assessed through examinee responses, but continued development and application of these models has been hindered by a lack of readily available software. In this article we…
Descriptors: Classification, Models, Language Tests, English (Second Language)
Peer reviewed | Direct link
Liu, Jinghua; Dorans, Neil J. – Educational Measurement: Issues and Practice, 2013
We make a distinction between two types of test changes: inevitable deviations from specifications versus planned modifications of specifications. We describe how score equity assessment (SEA) can be used as a tool to assess a critical aspect of construct continuity, the equivalence of scores, whenever planned changes are introduced to testing…
Descriptors: Tests, Test Construction, Test Format, Change
Peer reviewed | Direct link
Williamson, David M.; Xi, Xiaoming; Breyer, F. Jay – Educational Measurement: Issues and Practice, 2012
A framework for evaluation and use of automated scoring of constructed-response tasks is provided that entails both evaluation of automated scoring as well as guidelines for implementation and maintenance in the context of constantly evolving technologies. Consideration of validity issues and challenges associated with automated scoring are…
Descriptors: Automation, Scoring, Evaluation, Guidelines
Peer reviewed | Direct link
Huggins, Anne C.; Penfield, Randall D. – Educational Measurement: Issues and Practice, 2012
A goal for any linking or equating of two or more tests is that the linking function be invariant to the population used in conducting the linking or equating. Violations of population invariance in linking and equating jeopardize the fairness and validity of test scores, and pose particular problems for test-based accountability programs that…
Descriptors: Equated Scores, Tests, Test Bias, Validity
Peer reviewed | Direct link
Finney, Sara J.; Pastor, Dena A. – Educational Measurement: Issues and Practice, 2012
To address the shortage of professionals in measurement, it is essential that we make young career-seekers aware that measurement is an option as a profession. In this paper, we discuss how creating a strong pipeline of students into our field involves personal interactions between faculty representing the graduate programs in measurement and…
Descriptors: Recruitment, Labor Market, Labor Supply, Supply and Demand
Peer reviewed | Direct link
Bakker, Steven – Educational Measurement: Issues and Practice, 2012
A particular trait of the educational system under socialist rule was accountability on the input side--appropriate facilities, a centrally decided curriculum, approved textbooks, and uniformly trained teachers--but no control of the output. It was simply assumed that the output met the agreed standards, which was, in turn, proven by the statistics…
Descriptors: Accountability, Social Problems, Ethics, Foreign Students
Peer reviewed | Direct link
Sireci, Stephen G.; Forte, Ellen – Educational Measurement: Issues and Practice, 2012
Current educational policies rely on educational assessments. However, the technical aspects of assessments are often unknown to policy makers, which is dangerous because sound assessment policy requires knowledge of the strengths and limitations of educational tests. In this article, we discuss the importance of informing policy makers of…
Descriptors: Educational Assessment, Psychometrics, Educational Policy, Educational Testing
Peer reviewed | Direct link
Zenisky, April L.; Hambleton, Ronald K. – Educational Measurement: Issues and Practice, 2012
Test scores matter these days. Test-takers want to understand how they performed, and test score reports, particularly those for individual examinees, are the vehicles by which most people get the bulk of this information. Historically, score reports have not always met the examinees' information or usability needs, but this is clearly changing…
Descriptors: Scores, Psychometrics, Test Results, Usability
Peer reviewed | Direct link
Allalouf, Avi; Alderoqui-Pinus, Diana – Educational Measurement: Issues and Practice, 2012
This article deals with a pioneering project currently being developed, namely, the Exhibition on Testing and Measurement. This interactive traveling exhibition will be presented in science museums in Israel, the United States, and other countries. It has been conceived as an innovative means of familiarizing the public with educational…
Descriptors: Museums, Exhibits, Measurement, Testing
Peer reviewed | Direct link
Camara, Wayne J.; Shaw, Emily J. – Educational Measurement: Issues and Practice, 2012
The measurement community needs to better understand how to interact with the media to effectively disseminate important findings from educational testing efforts. To this end, the current paper will review media coverage of educational testing and related issues and elaborate on areas of concern and opportunities for improved communication…
Descriptors: Test Results, Educational Testing, Measurement, Information Dissemination
Peer reviewed | Direct link
Myford, Carol M. – Educational Measurement: Issues and Practice, 2012
Over the last several decades, researchers have studied many and varied aspects of rater cognition. Those interested in pursuing basic research have focused on gaining an understanding of raters' thought processes as they score different types of performances and products, striving to understand how raters' mental representations and the cognitive…
Descriptors: Evidence, Validity, Cognitive Processes, Models
Peer reviewed | Direct link
Bejar, Isaac I. – Educational Measurement: Issues and Practice, 2012
The scoring process is critical in the validation of tests that rely on constructed responses. Documenting that readers carry out the scoring in ways consistent with the construct and measurement goals is an important aspect of score validity. In this article, rater cognition is approached as a source of support for a validity argument for scores…
Descriptors: Scores, Inferences, Validity, Scoring