50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15th, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Audience: Researchers
Showing 1 to 15 of 90 results
Peer reviewed
Direct link
Liang, Longjuan; Browne, Michael W. – Journal of Educational and Behavioral Statistics, 2015
If standard two-parameter item response functions are employed in the analysis of a test with some newly constructed items, it can be expected that, for some items, the item response function (IRF) will not fit the data well. This lack of fit can also occur when standard IRFs are fitted to personality or psychopathology items. When investigating…
Descriptors: Item Response Theory, Statistical Analysis, Goodness of Fit, Bayesian Statistics
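For readers unfamiliar with the model named in the abstract above, here is a minimal sketch of the standard two-parameter logistic item response function it refers to; the item parameters and ability grid are hypothetical values chosen only for illustration.

```python
import numpy as np

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function: probability of a
    correct response at ability theta, with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# A misfitting item is one whose observed proportions correct (binned by
# estimated ability) drift systematically away from a curve of this form.
theta_grid = np.linspace(-3, 3, 7)
print(irf_2pl(theta_grid, a=1.2, b=0.5).round(3))
```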
Peer reviewed
Direct link
Magis, David – Journal of Educational and Behavioral Statistics, 2015
The purpose of this note is to study the equivalence of observed and expected (Fisher) information functions with polytomous item response theory (IRT) models. It is established that observed and expected information functions are equivalent for the class of divide-by-total models (including partial credit, generalized partial credit, rating…
Descriptors: Item Response Theory, Models, Statistics, Computation
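The equivalence noted in the abstract above can be checked numerically for one divide-by-total model, the partial credit model: the observed information (the negative second derivative of the single-response log-likelihood) comes out the same whichever category was observed, and so it equals its expectation. The step parameters below are hypothetical.

```python
import numpy as np

def pcm_logprob(theta, deltas, k):
    """Log-probability of response category k (0..m) under the partial
    credit model with step parameters deltas (length m)."""
    exps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    return exps[k] - np.log(np.exp(exps).sum())

def observed_info(theta, deltas, k, h=1e-3):
    """Observed information: negative second derivative of the category-k
    log-likelihood with respect to theta, by central differences."""
    f = lambda t: pcm_logprob(t, deltas, k)
    return -(f(theta + h) - 2.0 * f(theta) + f(theta - h)) / h**2

# Hypothetical step parameters: the observed information is (numerically)
# identical for every possible observed category, which is the sense in
# which observed and expected (Fisher) information coincide for
# divide-by-total models such as the partial credit model.
deltas = [-0.5, 0.4, 1.1]
print([round(observed_info(0.3, deltas, k), 5) for k in range(4)])
```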
Peer reviewed
Direct link
Nydick, Steven W. – Journal of Educational and Behavioral Statistics, 2014
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
Descriptors: Probability, Item Response Theory, Models, Classification
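A minimal sketch of the SPRT stopping rule described above, assuming 2PL items and Wald's critical values; the indifference-region half-width `delta`, the error rates, and the item parameters are illustrative choices, not the article's settings.

```python
import numpy as np

def sprt_decision(responses, a, b, cut, delta=0.5, alpha=0.05, beta=0.05):
    """Sequential probability ratio test for an IRT-based classification
    test: compare the log-likelihood ratio of two ability points straddling
    the classification bound `cut` to Wald's critical values."""
    def loglik(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    llr = loglik(cut + delta) - loglik(cut - delta)
    if llr >= np.log((1 - beta) / alpha):
        return "classify above the cut"
    if llr <= np.log(beta / (1 - alpha)):
        return "classify below the cut"
    return "continue testing"

# Hypothetical 2PL item parameters and responses after four items.
a = np.array([1.0, 1.3, 0.8, 1.1])
b = np.array([-0.2, 0.4, 0.0, 0.6])
x = np.array([1, 1, 0, 1])
print(sprt_decision(x, a, b, cut=0.0))
```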
Peer reviewed
Direct link
Bennink, Margot; Croon, Marcel A.; Keuning, Jos; Vermunt, Jeroen K. – Journal of Educational and Behavioral Statistics, 2014
In educational measurement, responses of students on items are used not only to measure the ability of students, but also to evaluate and compare the performance of schools. Analysis should ideally account for the multilevel structure of the data, and school-level processes not related to ability, such as working climate and administration…
Descriptors: Academic Ability, Educational Assessment, Educational Testing, Test Bias
Peer reviewed
Direct link
Rijmen, Frank; Jeon, Minjeong; von Davier, Matthias; Rabe-Hesketh, Sophia – Journal of Educational and Behavioral Statistics, 2014
Second-order item response theory models have been used for assessments consisting of several domains, such as content areas. We extend the second-order model to a third-order model for assessments that include subdomains nested in domains. Using a graphical model framework, it is shown how the model does not suffer from the curse of…
Descriptors: Item Response Theory, Models, Educational Assessment, Computation
Peer reviewed
Direct link
Debeer, Dries; Buchholz, Janine; Hartig, Johannes; Janssen, Rianne – Journal of Educational and Behavioral Statistics, 2014
In this article, the change in examinee effort during an assessment, which we will refer to as persistence, is modeled as an effect of item position. A multilevel extension is proposed to analyze hierarchically structured data and decompose the individual differences in persistence. Data from the 2009 Programme for International Student Assessment…
Descriptors: Reading Tests, International Programs, Testing Programs, Individual Differences
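One way to picture the position effect described above is to let effective ability drift with item position; the sketch below does this with a simple 2PL response function. The parameterization and all values are illustrative, not the authors' model.

```python
import numpy as np

def p_correct(theta, persistence, position, a, b):
    """Illustrative item-position effect: the examinee's effective ability
    at a given position is theta + persistence * position, so a negative
    `persistence` captures declining effort over the course of the test."""
    eff_theta = theta + persistence * position
    return 1.0 / (1.0 + np.exp(-a * (eff_theta - b)))

# Success probability on the same item placed early vs. late in the booklet
# for an examinee whose effort declines across positions.
print(p_correct(0.5, -0.02, position=1, a=1.0, b=0.0))
print(p_correct(0.5, -0.02, position=40, a=1.0, b=0.0))
```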
Peer reviewed
Direct link
Wang, Chun – Journal of Educational and Behavioral Statistics, 2014
Many latent traits in social sciences display a hierarchical structure, such as intelligence, cognitive ability, or personality. Usually a second-order factor is linearly related to a group of first-order factors (also called domain abilities in cognitive ability measures), and the first-order factors directly govern the actual item responses.…
Descriptors: Measurement, Accuracy, Item Response Theory, Adaptive Testing
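A small simulation sketch of the hierarchical structure described above: a second-order general factor linearly related to first-order domain abilities, which in turn drive the item responses. The loadings, item difficulties, and Rasch-type response function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Second-order factor -> first-order domain abilities -> item responses.
n = 1000
general = rng.normal(size=n)                       # second-order factor
loadings = np.array([0.8, 0.7, 0.6])               # illustrative loadings
domains = loadings[None, :] * general[:, None] + \
          rng.normal(scale=np.sqrt(1 - loadings**2), size=(n, 3))

b = np.array([-0.5, 0.0, 0.5])                     # one item per domain
p = 1.0 / (1.0 + np.exp(-(domains - b)))           # Rasch-type IRFs
responses = (rng.uniform(size=(n, 3)) < p).astype(int)
print(responses.mean(axis=0).round(3))
```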
Peer reviewed
Direct link
Thissen-Roe, Anne; Thissen, David – Journal of Educational and Behavioral Statistics, 2013
Extreme response set, the tendency to prefer the lowest or highest response option when confronted with a Likert-type response scale, can lead to misfit of item response models such as the generalized partial credit model. Recently, a series of intrinsically multidimensional item response models has been hypothesized, wherein tendency toward…
Descriptors: Likert Scales, Responses, Item Response Theory, Models
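For reference, the generalized partial credit model named in the abstract above gives category probabilities of the following form; the discrimination and step parameters below are hypothetical.

```python
import numpy as np

def gpcm_probs(theta, a, bs):
    """Generalized partial credit model category probabilities for one item:
    discrimination a and step parameters bs (length m gives m+1 categories)."""
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(bs)))))
    num = np.exp(steps)
    return num / num.sum()

# Hypothetical Likert-type item: a respondent with an extreme response set
# would pick the endpoint categories more often than these model-implied
# probabilities predict, which shows up as misfit of the GPCM.
print(gpcm_probs(theta=0.2, a=1.1, bs=[-1.0, -0.2, 0.6, 1.3]).round(3))
```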
Peer reviewed
Direct link
Ranger, Jochen; Kuhn, Jorg-Tobias – Journal of Educational and Behavioral Statistics, 2013
It is common practice to log-transform response times before analyzing them with standard factor analytical methods. However, sometimes the log-transformation is not capable of linearizing the relation between the response times and the latent traits. Therefore, a more general approach to response time analysis is proposed in the current…
Descriptors: Item Response Theory, Simulation, Reaction Time, Least Squares Statistics
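The log-transformation mentioned above is easy to illustrate: if response times are lognormal in a latent speed trait, the log times are linear in that trait, which is what makes standard factor-analytic methods applicable. The simulated data below assume exactly that lognormal relation, the assumption the article argues does not always hold in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent speed trait and lognormal response times.
speed = rng.normal(size=500)
log_rt = 3.0 - 0.6 * speed + rng.normal(scale=0.4, size=500)
rt = np.exp(log_rt)

# On the log scale the trait-time relation is linear by construction, so
# the association with the trait is stronger there than on the raw scale.
print(np.corrcoef(speed, rt)[0, 1], np.corrcoef(speed, np.log(rt))[0, 1])
```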
Peer reviewed
Direct link
Jeon, Minjeong; Rijmen, Frank; Rabe-Hesketh, Sophia – Journal of Educational and Behavioral Statistics, 2013
The authors present a generalization of the multiple-group bifactor model that extends the classical bifactor model for categorical outcomes by relaxing the typical assumption of independence of the specific dimensions. In addition to the means and variances of all dimensions, the correlations among the specific dimensions are allowed to differ…
Descriptors: Test Bias, Generalization, Models, Item Response Theory
Peer reviewed
Direct link
Hung, Lai-Fa; Wang, Wen-Chung – Journal of Educational and Behavioral Statistics, 2012
In the human sciences, ability tests or psychological inventories are often repeatedly conducted to measure growth. Standard item response models do not take into account possible autocorrelation in longitudinal data. In this study, the authors propose an item response model to account for autocorrelation. The proposed three-level model consists…
Descriptors: Item Response Theory, Correlation, Models, Longitudinal Studies
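A sketch of what autocorrelated latent traits look like across repeated measurement occasions, using a first-order autoregressive process; the autocorrelation value and the AR(1) form are illustrative assumptions, not necessarily the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1_traits(n_persons, n_waves, rho=0.7, sd=1.0):
    """Latent traits over repeated waves where each wave's trait is an
    AR(1) function of the previous wave (stationary variance sd**2)."""
    theta = np.zeros((n_persons, n_waves))
    theta[:, 0] = rng.normal(scale=sd, size=n_persons)
    for t in range(1, n_waves):
        innovation = rng.normal(scale=sd * np.sqrt(1 - rho**2), size=n_persons)
        theta[:, t] = rho * theta[:, t - 1] + innovation
    return theta

theta = simulate_ar1_traits(1000, 4)
print(np.corrcoef(theta.T).round(2))   # lag-1 correlations near rho
```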
Peer reviewed
Direct link
Magis, David; Raiche, Gilles; Beland, Sebastien – Journal of Educational and Behavioral Statistics, 2012
This paper focuses on two likelihood-based indices of person fit: the index l_z and Snijders's modified index l_z*. The first one is commonly used in practical assessment of person fit, although its asymptotic standard normal distribution is not valid when true abilities are replaced by sample ability estimates. The…
Descriptors: Goodness of Fit, Item Response Theory, Computation, Ability
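A minimal sketch of the standardized log-likelihood person-fit index l_z for 2PL items, evaluated at an ability estimate; Snijders's l_z* additionally corrects for plugging in that estimate, which is not shown here. The item parameters and response pattern are hypothetical.

```python
import numpy as np

def lz_index(responses, theta_hat, a, b):
    """l_z = (l0 - E[l0]) / sqrt(Var[l0]), where l0 is the log-likelihood of
    the observed response pattern at theta_hat under the 2PL model."""
    p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    e_l0 = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    v_l0 = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - e_l0) / np.sqrt(v_l0)

# Hypothetical items and responses; strongly negative values flag aberrant
# (misfitting) response behavior.
a = np.array([1.2, 0.9, 1.0, 1.4, 0.8])
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])
x = np.array([1, 1, 0, 1, 0])
print(lz_index(x, theta_hat=0.1, a=a, b=b))
```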
Peer reviewed
Direct link
Andrich, David; Hagquist, Curt – Journal of Educational and Behavioral Statistics, 2012
The literature in modern test theory on procedures for identifying items with differential item functioning (DIF) among two groups of persons includes the Mantel-Haenszel (MH) procedure. Generally, it is not recognized explicitly that if there is real DIF in some items which favor one group, then as an artifact of this procedure, artificial DIF…
Descriptors: Test Bias, Test Items, Item Response Theory, Statistical Analysis
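The Mantel-Haenszel procedure referenced above pools 2x2 tables of item performance across matching-score strata into a common odds ratio; the sketch below computes that estimator for hypothetical counts.

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio across score-group strata.
    Each stratum is ((A, B), (C, D)): correct/incorrect counts for the
    reference group (A, B) and the focal group (C, D) on the studied item."""
    num = sum(A * D / (A + B + C + D) for (A, B), (C, D) in tables)
    den = sum(B * C / (A + B + C + D) for (A, B), (C, D) in tables)
    return num / den

# Hypothetical counts in three total-score strata; an MH odds ratio far
# from 1 flags the item for DIF. The article's point is that real DIF in
# some items can induce artificial DIF in others under this procedure.
tables = [((30, 10), (25, 15)), ((40, 20), (35, 25)), ((20, 30), (15, 35))]
print(mantel_haenszel_or(tables))
```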
Peer reviewed
Direct link
Andrich, David; Marais, Ida; Humphry, Stephen – Journal of Educational and Behavioral Statistics, 2012
Andersen (1995, 2002) proves a theorem relating variances of parameter estimates from samples and subsamples and shows its use as an adjunct to standard statistical analyses. The authors show an application where the theorem is central to the hypothesis tested, namely, whether random guessing on multiple choice items affects their estimates in the…
Descriptors: Test Items, Item Response Theory, Multiple Choice Tests, Guessing (Tests)
Peer reviewed
Direct link
Jeon, Minjeong; Rabe-Hesketh, Sophia – Journal of Educational and Behavioral Statistics, 2012
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Descriptors: Maximum Likelihood Statistics, Computation, Models, Factor Structure
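The profile-likelihood idea described above can be sketched on a toy regression problem: hold one parameter fixed at each of a grid of known constants, fit the resulting standard model, and keep the grid point with the best fit. The model, the data, and the choice of the residual variance as the profiled parameter are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 1.5 * x + rng.normal(scale=0.5, size=200)

def neg_loglik(beta, lam):
    """Negative log-likelihood of a normal regression with the residual
    variance fixed at the constant lam (up to an additive constant)."""
    resid = y - beta[0] - beta[1] * x
    return 0.5 * np.sum(resid**2) / lam + 0.5 * len(y) * np.log(lam)

# Profile over a grid of fixed residual variances: each fit is a standard
# estimation problem, and the grid minimum approximates the ML estimate.
grid = np.linspace(0.05, 1.0, 40)
profile = [minimize(neg_loglik, x0=np.zeros(2), args=(lam,)).fun for lam in grid]
print("profile-ML residual variance:", grid[int(np.argmin(profile))])
```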