Showing 166 to 180 of 1,104 results
Peer reviewed
PDF on ERIC Download full text
Ali, Usama S.; Walker, Michael E. – ETS Research Report Series, 2014
Two methods are currently in use at Educational Testing Service (ETS) for equating observed item difficulty statistics. The first method involves the linear equating of item statistics in an observed sample to reference statistics on the same items. The second method, or the item response curve (IRC) method, involves the summation of conditional…
Descriptors: Difficulty Level, Test Items, Equated Scores, Causal Models
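The first method described in this abstract is, at heart, a linear transformation that places observed item statistics on the scale of the reference statistics. Below is a minimal mean-sigma sketch of that idea; the function name, variable names, and data are illustrative, and the operational ETS procedure may differ in detail.

```python
# Illustrative mean-sigma linear equating of item difficulty statistics.
# NOTE: a generic sketch under stated assumptions, not ETS's operational code.
import statistics

def linear_equate(observed, reference):
    """Map observed item statistics onto the reference scale by matching
    the mean and standard deviation of the two sets of statistics."""
    slope = statistics.stdev(reference) / statistics.stdev(observed)
    intercept = statistics.mean(reference) - slope * statistics.mean(observed)
    return [slope * x + intercept for x in observed]

# Example: equated difficulties for five hypothetical common items
observed_deltas = [10.2, 11.5, 12.8, 13.1, 14.6]
reference_deltas = [10.0, 11.0, 12.5, 13.0, 14.0]
print(linear_equate(observed_deltas, reference_deltas))
```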
Peer reviewed
PDF on ERIC Download full text
Dorans, Neil J. – ETS Research Report Series, 2014
Simulations are widely used. Simulations produce numbers that are deductive demonstrations of what a model says will happen. They produce numerical results that are consistent with the premises of the model used to generate the numbers. These simulated numerical results are not empirical data that address aspects of the world that lies outside the…
Descriptors: Simulation, Equated Scores, Scores, Scientific Methodology
Peer reviewed
Direct link
Rutkowski, Leslie; Svetina, Dubravka – Educational and Psychological Measurement, 2014
In the field of international educational surveys, equivalence of achievement scale scores across countries has received substantial attention in the academic literature; however, only a relatively recent emphasis on scale score equivalence in nonachievement education surveys has emerged. Given the current state of research in multiple-group…
Descriptors: International Programs, Educational Assessment, Surveys, Measurement
Peer reviewed
Direct link
LaFlair, Geoffrey T.; Isbell, Daniel; May, L. D. Nicolas; Gutierrez Arvizu, Maria Nelly; Jamieson, Joan – Language Testing, 2017
Language programs need multiple test forms for secure administrations and effective placement decisions, but can they have confidence that scores on alternate test forms have the same meaning? In large-scale testing programs, various equating methods are available to ensure the comparability of forms. The choice of equating method is informed by…
Descriptors: Language Tests, Equated Scores, Testing Programs, Comparative Analysis
Peer reviewed
Direct link
Strietholt, Rolf; Rosén, Monica – Measurement: Interdisciplinary Research and Perspectives, 2016
Since the start of the new millennium, international comparative large-scale studies have become one of the most well-known areas in the field of education. However, the International Association for the Evaluation of Educational Achievement (IEA) has already been conducting international comparative studies for about half a century. The present…
Descriptors: Reading Tests, Comparative Analysis, Comparative Education, Trend Analysis
Peer reviewed
Direct link
Sadler, Philip M.; Sonnert, Gerhard; Coyle, Harold P.; Miller, Kelly A. – Educational Assessment, 2016
The psychometrically sound development of assessment instruments requires pilot testing of candidate items as a first step in gauging their quality, typically a time-consuming and costly effort. Crowdsourcing offers the opportunity for gathering data much more quickly and inexpensively than from most targeted populations. In a simulation of a…
Descriptors: Test Items, Test Construction, Psychometrics, Biological Sciences
Northwest Evaluation Association, 2016
Northwest Evaluation Association™ (NWEA™) is committed to providing partners with useful tools to help make inferences from Measures of Academic Progress® (MAP®) interim assessment scores. One important tool is the concordance table between MAP and state summative assessments. Concordance tables have been used for decades to relate scores on…
Descriptors: Tables (Data), Benchmarking, Scoring Formulas, Scores
Peer reviewed
Direct link
Humphry, Stephen M.; McGrane, Joshua A. – Australian Educational Researcher, 2015
This paper presents a method for equating writing assessments using pairwise comparisons which does not depend upon conventional common-person or common-item equating designs. Pairwise comparisons have been successfully applied in the assessment of open-ended tasks in English and other areas such as visual art and philosophy. In this paper,…
Descriptors: Writing Evaluation, Evaluation Methods, Comparative Analysis, Writing Tests
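Pairwise comparisons of the kind described in this abstract are commonly modeled with a Bradley-Terry-type model, in which each script receives a quality parameter estimated from the pattern of judges' preferences. The sketch below uses the standard MM (minorization-maximization) updates; it is a generic illustration, not the authors' Rasch-based implementation, and all names and data are hypothetical.

```python
# Illustrative Bradley-Terry estimation from pairwise comparison outcomes.
# NOTE: a generic sketch, not the equating method used in the cited study.
from collections import defaultdict

def bradley_terry(wins, n_iter=200):
    """wins[(i, j)] = number of times script i was preferred to script j."""
    scripts = {s for pair in wins for s in pair}
    strength = {s: 1.0 for s in scripts}
    total_wins = defaultdict(float)
    comparisons = defaultdict(float)
    for (i, j), w in wins.items():
        total_wins[i] += w
        comparisons[frozenset((i, j))] += w
    for _ in range(n_iter):
        new = {}
        for i in scripts:
            denom = sum(
                comparisons[frozenset((i, j))] / (strength[i] + strength[j])
                for j in scripts
                if j != i and comparisons[frozenset((i, j))] > 0
            )
            new[i] = total_wins[i] / denom if denom > 0 else strength[i]
        norm = sum(new.values()) / len(new)  # fix the scale of the parameters
        strength = {s: v / norm for s, v in new.items()}
    return strength

# Example: three scripts judged pairwise by multiple raters
print(bradley_terry({("A", "B"): 3, ("B", "A"): 1, ("A", "C"): 4, ("C", "B"): 2}))
```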
Peer reviewed
Direct link
Guo, Hongwen; Oh, Hyeonjoo J.; Eignor, Daniel – Journal of Educational Measurement, 2013
In operational equating situations, frequency estimation equipercentile equating is considered only when the old and new groups have similar abilities. The frequency estimation assumptions are investigated in this study under various situations from both the levels of theoretical interest and practical use. It shows that frequency estimation…
Descriptors: Equated Scores, Computation, Statistical Analysis, Test Items
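Frequency estimation equipercentile equating rests on the assumption that the conditional distribution of scores given the anchor score is the same in the old and new groups. In the standard textbook notation (generic here, and not necessarily the notation of the study):

\[
f_1(x \mid v) = f_2(x \mid v), \qquad g_1(y \mid v) = g_2(y \mid v),
\]

so that, for a synthetic population with weights \(w_1 + w_2 = 1\),

\[
f_s(x) = w_1 f_1(x) + w_2 \sum_{v} f_1(x \mid v)\, h_2(v), \qquad
g_s(y) = w_1 \sum_{v} g_2(y \mid v)\, h_1(v) + w_2\, g_2(y),
\]

and the equipercentile function is \(e_Y(x) = G_s^{-1}\bigl(F_s(x)\bigr)\). When the two groups differ markedly in ability, the conditional-equality assumption is the part most likely to fail, which is why the method is normally reserved for groups of similar ability.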
Peer reviewed
Direct link
Taherbhai, Husein; Seo, Daeryong – Educational Measurement: Issues and Practice, 2013
Calibration and equating is the quintessential necessity for most large-scale educational assessments. However, there are instances when no consideration is given to the equating process in terms of context and substantive realization, and the methods used in its execution. In the view of the authors, equating is not merely an exhibit of the…
Descriptors: Item Response Theory, Equated Scores, Measurement, Educational Assessment
Peer reviewed
Direct link
Huggins, Anne C.; Penfield, Randall D. – Educational Measurement: Issues and Practice, 2012
A goal for any linking or equating of two or more tests is that the linking function be invariant to the population used in conducting the linking or equating. Violations of population invariance in linking and equating jeopardize the fairness and validity of test scores, and pose particular problems for test-based accountability programs that…
Descriptors: Equated Scores, Tests, Test Bias, Validity
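Population invariance is commonly quantified with the root mean square difference (RMSD) between subgroup and total-group linking functions, in Dorans and Holland's formulation (shown here in generic notation, which may not match the exact index used by the authors):

\[
\mathrm{RMSD}(x) = \frac{\sqrt{\sum_{j} w_j \bigl[e_{Y_j}(x) - e_Y(x)\bigr]^{2}}}{\sigma_Y},
\]

where \(e_{Y_j}(x)\) is the linking function estimated in subgroup \(j\), \(e_Y(x)\) is the total-group linking function, \(w_j\) is the subgroup weight, and \(\sigma_Y\) is the standard deviation of scores on the reference form. Values near zero across the score range indicate an approximately population-invariant linking.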
Peer reviewed
Direct link
Antal, Judit; Proctor, Thomas P.; Melican, Gerald J. – Applied Measurement in Education, 2014
In common-item equating the anchor block is generally built to represent a miniature form of the total test in terms of content and statistical specifications. The statistical properties frequently reflect equal mean and spread of item difficulty. Sinharay and Holland (2007) suggested that the requirement for equal spread of difficulty may be too…
Descriptors: Test Items, Equated Scores, Difficulty Level, Item Response Theory
Kim, YoungKoung; DeCarlo, Lawrence T. – College Board, 2016
Because of concerns about test security, different test forms are typically used across different testing occasions. As a result, equating is necessary in order to get scores from the different test forms that can be used interchangeably. In order to assure the quality of equating, multiple equating methods are often examined. Various equity…
Descriptors: Equated Scores, Evaluation Methods, Sampling, Statistical Inference
Peer reviewed
Direct link
Brennan, Robert L. – Measurement: Interdisciplinary Research and Perspectives, 2015
Koretz, in his article published in this issue, provides compelling arguments that the high stakes currently associated with accountability testing lead to behavioral changes in students, teachers, and other stakeholders that often have negative consequences, such as inflated scores. Koretz goes on to argue that these negative consequences require…
Descriptors: Accountability, High Stakes Tests, Behavior Change, Student Behavior
Peer reviewed
PDF on ERIC Download full text
Guo, Hongwen; Puhan, Gautam; Walker, Michael – ETS Research Report Series, 2013
In this study we investigated when an equating conversion line is problematic in terms of gaps and clumps. We suggest using the conditional standard error of measurement (CSEM) to identify scale scores that are inappropriate in the overall raw-to-scale transformation.
Descriptors: Equated Scores, Test Items, Evaluation Criteria, Error of Measurement