50 Years of ERIC

The Education Resources Information Center (ERIC) is celebrating its 50th anniversary! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing 1 to 15 of 41 results
Peer reviewed
Baghaei, Purya; Aryadoust, Vahid – International Journal of Testing, 2015
Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that common test method can exert the same effect as common stimuli and violate the conditional independence assumption of item response theory models because, in general, subsets of items which have a…
Descriptors: Test Format, Item Response Theory, Models, Test Items
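The local-dependence problem the abstract describes can be made concrete with a small simulation. The sketch below is not from the article; the cluster structure and all parameters are invented. It generates Rasch responses with a shared method factor per item cluster and shows that residual correlations stay positive within clusters, violating conditional independence.

```python
# Minimal sketch, assuming two "test method" clusters of five items each.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 2000, 10
theta = rng.normal(0, 1, n_persons)          # latent ability
b = rng.normal(0, 1, n_items)                # item difficulties
method = np.repeat([0, 1], 5)                # two method clusters
u = rng.normal(0, 0.8, (n_persons, 2))       # method-specific nuisance factors

# Response probability: Rasch term plus a common method effect per cluster.
logit = theta[:, None] - b[None, :] + u[:, method]
x = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Residuals against the Rasch part alone; within-method residual
# correlations stay positive, violating conditional independence.
resid = x - 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
r = np.corrcoef(resid, rowvar=False)
within = r[:5, :5][np.triu_indices(5, 1)].mean()
between = r[:5, 5:].mean()
print(f"mean residual correlation within method:  {within:.3f}")
print(f"mean residual correlation between methods: {between:.3f}")
```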
Peer reviewed
Jurich, Daniel P.; Bradshaw, Laine P. – International Journal of Testing, 2014
The assessment of higher-education student learning outcomes is an important component in understanding the strengths and weaknesses of academic and general education programs. This study illustrates the application of diagnostic classification models, a burgeoning set of statistical models, in assessing student learning outcomes. To facilitate…
Descriptors: College Outcomes Assessment, Classification, Statistical Analysis, Models
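For readers unfamiliar with diagnostic classification models, here is a minimal sketch of one common DCM, the DINA model. The Q-matrix and slip/guess parameters are invented for illustration and are not taken from the study.

```python
# Hedged DINA sketch: an examinee answers correctly with probability
# 1 - slip if they have mastered every skill an item requires, else guess.
import numpy as np

# Q-matrix: which of 3 skills each of 4 items requires (invented).
Q = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0],
              [0, 1, 1]])
slip  = np.array([0.10, 0.15, 0.20, 0.10])   # P(wrong | all required skills)
guess = np.array([0.20, 0.25, 0.15, 0.20])   # P(right | a skill is missing)

def dina_prob(alpha):
    """P(correct) for each item given a skill-mastery pattern alpha."""
    eta = np.all(Q <= alpha, axis=1)         # has every required skill?
    return np.where(eta, 1 - slip, guess)

print(dina_prob(np.array([1, 1, 0])))        # masters skills 1 and 2 only
```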
Peer reviewed
Chu, Man-Wai; Babenko, Oksana; Cui, Ying; Leighton, Jacqueline P. – International Journal of Testing, 2014
The study examines the role that perceptions or impressions of learning environments and assessments play in students' performance on a large-scale standardized test. Hierarchical linear modeling (HLM) was used to test aspects of the Learning Errors and Formative Feedback model to determine how much variation in students' performance was…
Descriptors: Hierarchical Linear Modeling, Secondary School Students, Student Attitudes, Educational Environment
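As a rough illustration of the kind of two-level model involved, the sketch below fits a random-intercept HLM with students nested in schools using statsmodels. The variable names and data are hypothetical, not the study's.

```python
# Minimal random-intercept sketch: student scores vary around
# school-specific intercepts, with a student-level predictor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, n_per = 30, 40
school = np.repeat(np.arange(n_schools), n_per)
u = rng.normal(0, 3, n_schools)              # school-level random intercepts
perception = rng.normal(0, 1, n_schools * n_per)
score = 50 + 2.0 * perception + u[school] + rng.normal(0, 5, school.size)

df = pd.DataFrame({"score": score, "perception": perception, "school": school})
fit = smf.mixedlm("score ~ perception", df, groups=df["school"]).fit()
print(fit.summary())                         # fixed effect + variance components
```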
Peer reviewed
Lee, HyeSun; Geisinger, Kurt F. – International Journal of Testing, 2014
Differential item functioning (DIF) analysis is important in terms of test fairness. While DIF analyses have mainly been conducted with manifest grouping variables, such as gender or race/ethnicity, it has been recently claimed that not only the grouping variables but also contextual variables pertaining to examinees should be considered in DIF…
Descriptors: Test Bias, Gender Differences, Regression (Statistics), Statistical Analysis
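A common operationalization of this idea is logistic-regression DIF screening with a contextual covariate added alongside the manifest group indicator. The sketch below is a generic illustration, not the authors' procedure; the data, effect sizes, and variable names are invented.

```python
# Hedged sketch: does group or a contextual variable predict item success
# beyond the ability-matching criterion?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 3000
total = rng.normal(0, 1, n)                  # matching criterion (ability proxy)
group = rng.binomial(1, 0.5, n)              # manifest group (e.g., gender)
context = rng.normal(0, 1, n)                # contextual variable (invented)

# One studied item whose difficulty shifts with the contextual variable.
p = 1 / (1 + np.exp(-(1.2 * total + 0.5 * context - 0.2)))
item = rng.binomial(1, p)

df = pd.DataFrame({"item": item, "total": total, "group": group, "context": context})
base = smf.logit("item ~ total", df).fit(disp=0)
full = smf.logit("item ~ total + group + context", df).fit(disp=0)

# Likelihood-ratio test with 2 df for the two added terms.
print(f"LR statistic (2 df): {2 * (full.llf - base.llf):.2f}")
```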
Peer reviewed
DeMars, Christine E. – International Journal of Testing, 2013
This tutorial addresses possible sources of confusion in interpreting trait scores from the bifactor model. The bifactor model may be used when subscores are desired, either for formative feedback on an achievement test or for theoretically different constructs on a psychological test. The bifactor model is often chosen because it requires fewer…
Descriptors: Test Interpretation, Scores, Models, Correlation
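The bifactor structure can be summarized by its loading matrix: every item loads on a general factor plus exactly one orthogonal specific factor. The sketch below, with invented loadings, computes the model-implied correlation matrix and shows that items from different subscales correlate only through the general factor.

```python
# Small illustration (not from the tutorial) of a bifactor loading structure.
import numpy as np

# 6 items; columns = [general, specific1, specific2] (invented loadings).
L = np.array([[0.7, 0.4, 0.0],
              [0.6, 0.5, 0.0],
              [0.7, 0.3, 0.0],
              [0.6, 0.0, 0.4],
              [0.7, 0.0, 0.5],
              [0.5, 0.0, 0.3]])
psi = 1 - (L ** 2).sum(axis=1)               # unique variances
Sigma = L @ L.T + np.diag(psi)               # model-implied correlations

# Cross-subscale entries equal the product of general loadings alone,
# because the orthogonal specific factors absorb within-subset covariance.
print(np.round(Sigma, 2))
```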
Peer reviewed
Tay, Louis; Vermunt, Jeroen K.; Wang, Chun – International Journal of Testing, 2013
We evaluate the item response theory with covariates (IRT-C) procedure for assessing differential item functioning (DIF) without preknowledge of anchor items (Tay, Newman, & Vermunt, 2011). This procedure begins with a fully constrained baseline model, and candidate items are tested for uniform and/or nonuniform DIF using the Wald statistic.…
Descriptors: Item Response Theory, Test Bias, Models, Statistical Analysis
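As a simplified stand-in for the full IRT-C machinery, the sketch below fits a logistic model with a group main effect (uniform DIF) and a group-by-ability interaction (nonuniform DIF), then tests both coefficients jointly with a Wald statistic. The data are simulated and the setup is not the authors' implementation.

```python
# Hedged sketch of the Wald step on one candidate item.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 4000
theta = rng.normal(0, 1, n)
group = rng.binomial(1, 0.5, n)
p = 1 / (1 + np.exp(-(1.0 * theta + 0.4 * group)))  # uniform DIF built in
item = rng.binomial(1, p)

df = pd.DataFrame({"item": item, "theta": theta, "group": group})
fit = smf.logit("item ~ theta + group + theta:group", df).fit(disp=0)

# Joint Wald test of the two DIF terms.
names = fit.model.exog_names
R = np.zeros((2, len(names)))
R[0, names.index("group")] = 1
R[1, names.index("theta:group")] = 1
print(fit.wald_test(R))
```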
Peer reviewed
Lim, Gad S.; Geranpayeh, Ardeshir; Khalifa, Hanan; Buckendahl, Chad W. – International Journal of Testing, 2013
Standard setting theory has largely developed with reference to a typical situation, determining a level or levels of performance for one exam for one context. However, standard setting is now being used with international reference frameworks, where some parameters and assumptions of classical standard setting do not hold. We consider the…
Descriptors: Standard Setting (Scoring), Validity, Models, Language Tests
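For context, classical standard setting often rests on judgmental methods such as Angoff ratings. The toy computation below, with invented ratings, shows the basic arithmetic: each panelist estimates per-item success probabilities for a minimally competent candidate, and the cut score is the mean of the summed estimates.

```python
# Toy Angoff-style cut score (not from the article; ratings invented).
import numpy as np

# rows = panelists, columns = items (probability estimates).
ratings = np.array([[0.6, 0.7, 0.4, 0.8, 0.5],
                    [0.5, 0.8, 0.5, 0.7, 0.6],
                    [0.7, 0.6, 0.3, 0.9, 0.5]])
cut = ratings.sum(axis=1).mean()             # expected raw score at the standard
print(f"Angoff cut score: {cut:.2f} of {ratings.shape[1]} points")
```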
Peer reviewed
Evers, Arne – International Journal of Testing, 2012
In this article, the characteristics of five test review models are described. The five models are the US review system at the Buros Center for Testing, the German Test Review System of the Committee on Tests, the Brazilian System for the Evaluation of Psychological Tests, the European EFPA Review Model, and the Dutch COTAN Evaluation System for…
Descriptors: Program Evaluation, Test Reviews, Trend Analysis, International Education
Peer reviewed
Lindley, Patricia A.; Bartram, Dave – International Journal of Testing, 2012
In this article, we present the background to the development of test reviewing by the British Psychological Society (BPS) in the United Kingdom. We also describe the role played by the BPS in the development of the EFPA test review model and its adaptation for use in test reviewing in the United Kingdom. We conclude with a discussion of lessons…
Descriptors: Test Reviews, Professional Associations, Psychology, Global Approach
Peer reviewed
Gierl, Mark J.; Lai, Hollis – International Journal of Testing, 2012
Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…
Descriptors: Foreign Countries, Psychometrics, Test Construction, Test Items
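The two-step process is easy to illustrate: an item model is a template with constrained variables, and generation instantiates it programmatically. The sketch below uses an invented template, not one from the article.

```python
# Minimal automatic-item-generation sketch: template + constrained values.
import random

template = "A train travels {d} km in {t} hours. What is its average speed in km/h?"

def generate_items(n, seed=0):
    rng = random.Random(seed)
    items = []
    for _ in range(n):
        t = rng.randint(2, 5)
        d = t * rng.randint(20, 40)          # constraint: whole-number answer
        items.append({"stem": template.format(d=d, t=t), "key": d // t})
    return items

for item in generate_items(3):
    print(item["stem"], "->", item["key"])
```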
Peer reviewed
Zhang, Mo; Williamson, David M.; Breyer, F. Jay; Trapani, Catherine – International Journal of Testing, 2012
This article describes two separate, related studies that provide insight into the effectiveness of "e-rater" score calibration methods based on different distributional targets. In the first study, we developed and evaluated a new type of "e-rater" scoring model that was cost-effective and applicable under conditions of absent human rating and…
Descriptors: Automation, Scoring, Models, Essay Tests
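One simple form of distributional calibration is linearly rescaling machine scores to a target mean and standard deviation. The actual e-rater calibration methods are more involved; the sketch below is only a generic illustration with invented values.

```python
# Hedged sketch: match machine-score mean/SD to a target distribution.
import numpy as np

rng = np.random.default_rng(4)
machine = rng.normal(3.1, 0.7, 500)          # raw automated scores (invented)
target_mean, target_sd = 3.5, 0.9            # target distribution parameters

calibrated = (machine - machine.mean()) / machine.std() * target_sd + target_mean
print(f"calibrated mean={calibrated.mean():.2f}, sd={calibrated.std():.2f}")
```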
Peer reviewed
Lee, Young-Sun; Park, Yoon Soo; Taylan, Didem – International Journal of Testing, 2011
Studies of international mathematics achievement such as the Trends in Mathematics and Science Study (TIMSS) have employed classical test theory and item response theory to rank individuals within a latent ability continuum. Although these approaches have provided insights into comparisons between countries, they have yet to examine how specific…
Descriptors: Mathematics Achievement, Achievement Tests, Models, Cognitive Measurement
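To make the contrast concrete, the sketch below compares a classical sum score with a 2PL maximum-likelihood ability estimate for a single response pattern; the item parameters are invented, not TIMSS values.

```python
# Compact CTT-vs-IRT sketch: sum score vs. 2PL theta estimate.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([0.8, 1.2, 1.5, 1.0, 0.7])     # discriminations (invented)
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])   # difficulties (invented)
x = np.array([1, 1, 0, 1, 0])               # one examinee's responses

def neg_loglik(theta):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

theta_hat = minimize_scalar(neg_loglik, bounds=(-4, 4), method="bounded").x
print(f"sum score = {x.sum()}, 2PL theta-hat = {theta_hat:.2f}")
```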
Peer reviewed
D'Agostino, Jerome; Karpinski, Aryn; Welsh, Megan – International Journal of Testing, 2011
After a test is developed, most content validation analyses shift from ascertaining domain definition to studying domain representation and relevance because the domain is assumed to be set once a test exists. We present an approach that allows for the examination of alternative domain structures based on extant test items. In our example based on…
Descriptors: Expertise, Test Items, Mathematics Tests, Factor Analysis
Peer reviewed
Magis, David; Raiche, Gilles; Beland, Sebastien; Gerard, Paul – International Journal of Testing, 2011
We present an extension of the logistic regression procedure to identify dichotomous differential item functioning (DIF) in the presence of more than two groups of respondents. Starting from the usual framework of a single focal group, we propose a general approach to estimate the item response functions in each group and to test for the presence…
Descriptors: Language Skills, Identification, Foreign Countries, Evaluation Methods
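The core idea, with group entering the logistic DIF model as a categorical factor with more than two levels and its terms tested jointly, can be sketched as follows. This is a generic illustration with simulated data, not the authors' exact procedure.

```python
# Hedged multi-group logistic-regression DIF sketch (one reference group,
# two focal groups; effect sizes invented).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 3000
total = rng.normal(0, 1, n)                  # matching criterion
group = rng.integers(0, 3, n)                # three respondent groups
shift = np.array([0.0, 0.3, -0.4])[group]    # group-specific (uniform) DIF
item = rng.binomial(1, 1 / (1 + np.exp(-(total + shift))))

df = pd.DataFrame({"item": item, "total": total, "group": group})
base = smf.logit("item ~ total", df).fit(disp=0)
full = smf.logit("item ~ total + C(group)", df).fit(disp=0)

# Joint likelihood-ratio test of the two focal-group contrasts.
print(f"LR statistic (2 df): {2 * (full.llf - base.llf):.2f}")
```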
Peer reviewed
Svetina, Dubravka; Gorin, Joanna S.; Tatsuoka, Kikumi K. – International Journal of Testing, 2011
As a construct definition, the current study develops a cognitive model describing the knowledge, skills, and abilities measured by critical reading test items on a high-stakes assessment used for selection decisions in the United States. Additionally, in order to establish generalizability of the construct meaning to other similarly structured…
Descriptors: Reading Tests, Reading Comprehension, Critical Reading, Test Items
Previous Page | Next Page »
Pages: 1  |  2  |  3