Publication Date
| Option | Count |
| --- | --- |
| In 2015 | 2 |
| Since 2014 | 7 |
| Since 2011 (last 5 years) | 26 |
| Since 2006 (last 10 years) | 54 |
| Since 1996 (last 20 years) | 89 |
Descriptor
| Option | Count |
| --- | --- |
| Item Response Theory | 90 |
| Models | 38 |
| Simulation | 28 |
| Computation | 26 |
| Test Items | 21 |
| Foreign Countries | 15 |
| Maximum Likelihood Statistics | 15 |
| Bayesian Statistics | 14 |
| Monte Carlo Methods | 14 |
| Markov Processes | 13 |
Source
| Option | Count |
| --- | --- |
| Journal of Educational and… | 90 |
Author
| Option | Count |
| --- | --- |
| Junker, Brian W. | 5 |
| Patz, Richard J. | 4 |
| Wainer, Howard | 4 |
| De Boeck, Paul | 3 |
| Jeon, Minjeong | 3 |
| Rabe-Hesketh, Sophia | 3 |
| Sinharay, Sandip | 3 |
| von Davier, Matthias | 3 |
| Andrich, David | 2 |
| Bolt, Daniel M. | 2 |
Publication Type
| Option | Count |
| --- | --- |
| Journal Articles | 90 |
| Reports - Research | 40 |
| Reports - Descriptive | 29 |
| Reports - Evaluative | 20 |
| Opinion Papers | 2 |
| Speeches/Meeting Papers | 1 |
| Tests/Questionnaires | 1 |
Education Level
| Option | Count |
| --- | --- |
| Elementary Education | 7 |
| Grade 8 | 5 |
| Junior High Schools | 5 |
| Middle Schools | 5 |
| Secondary Education | 5 |
| Elementary Secondary Education | 3 |
| Higher Education | 3 |
| Grade 4 | 2 |
| Grade 5 | 2 |
| High Schools | 2 |
Audience
| Option | Count |
| --- | --- |
| Researchers | 1 |
Showing 1 to 15 of 90 results
Liang, Longjuan; Browne, Michael W. – Journal of Educational and Behavioral Statistics, 2015
If standard two-parameter item response functions are employed in the analysis of a test with some newly constructed items, it can be expected that, for some items, the item response function (IRF) will not fit the data well. This lack of fit can also occur when standard IRFs are fitted to personality or psychopathology items. When investigating…
Descriptors: Item Response Theory, Statistical Analysis, Goodness of Fit, Bayesian Statistics
Magis, David – Journal of Educational and Behavioral Statistics, 2015
The purpose of this note is to study the equivalence of observed and expected (Fisher) information functions with polytomous item response theory (IRT) models. It is established that observed and expected information functions are equivalent for the class of divide-by-total models (including partial credit, generalized partial credit, rating…
Descriptors: Item Response Theory, Models, Statistics, Computation
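The equivalence Magis establishes can be illustrated numerically for one divide-by-total model. The sketch below, using the partial credit model with hypothetical step parameters, compares a finite-difference observed information against the expected information Var(X | θ); the two agree for every response category. This is an illustrative check of the result, not the paper's derivation.

```python
import math

def pcm_probs(theta, deltas):
    """Category probabilities for a partial credit model item.
    deltas[k-1] is the step parameter for step k (k = 1..m)."""
    logits = [0.0]
    for k in range(1, len(deltas) + 1):
        logits.append(k * theta - sum(deltas[:k]))
    denom = sum(math.exp(l) for l in logits)
    return [math.exp(l) / denom for l in logits]

def expected_information(theta, deltas):
    """Expected (Fisher) item information: Var(X | theta)."""
    p = pcm_probs(theta, deltas)
    mean = sum(k * pk for k, pk in enumerate(p))
    return sum((k - mean) ** 2 * pk for k, pk in enumerate(p))

def observed_information(theta, deltas, x, h=1e-4):
    """Observed information: -d^2 log P(X = x | theta) / d theta^2,
    approximated by a central second difference."""
    def loglik(t):
        return math.log(pcm_probs(t, deltas)[x])
    return -(loglik(theta + h) - 2 * loglik(theta) + loglik(theta - h)) / h ** 2

deltas = [-0.5, 0.3, 1.1]   # hypothetical step parameters
theta = 0.4
ei = expected_information(theta, deltas)
# Observed information is the same whichever category x was observed:
for x in range(len(deltas) + 1):
    print(f"x={x}: observed={observed_information(theta, deltas, x):.5f}, expected={ei:.5f}")
```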
Nydick, Steven W. – Journal of Educational and Behavioral Statistics, 2014
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
Descriptors: Probability, Item Response Theory, Models, Classification
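The generic SPRT stopping rule the abstract describes can be sketched as follows. This assumes a 2PL item response function, a symmetric indifference region of width 2·delta around the classification bound, and Wald's critical values; it illustrates the general mechanism, not Nydick's specific procedure, and all parameter values are hypothetical.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function (assumed model for this sketch)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def sprt_decision(responses, items, cut, delta=0.3, alpha=0.05, beta=0.05):
    """Wald SPRT for classifying an examinee above/below `cut`.

    responses: 0/1 scored answers; items: (a, b) parameter pairs.
    Tests H0: theta = cut - delta vs H1: theta = cut + delta.
    Returns 'above', 'below', or 'continue'."""
    lo, hi = cut - delta, cut + delta
    llr = 0.0
    for u, (a, b) in zip(responses, items):
        p1, p0 = p_correct(hi, a, b), p_correct(lo, a, b)
        llr += math.log(p1 / p0) if u == 1 else math.log((1 - p1) / (1 - p0))
    upper = math.log((1 - beta) / alpha)   # classify above the cut
    lower = math.log(beta / (1 - alpha))   # classify below the cut
    if llr >= upper:
        return "above"
    if llr <= lower:
        return "below"
    return "continue"
```

The test keeps administering items while the log-likelihood ratio stays between the two critical values; correct answers on informative items near the cut push it toward the upper bound.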
Bennink, Margot; Croon, Marcel A.; Keuning, Jos; Vermunt, Jeroen K. – Journal of Educational and Behavioral Statistics, 2014
In educational measurement, responses of students on items are used not only to measure the ability of students, but also to evaluate and compare the performance of schools. Analysis should ideally account for the multilevel structure of the data, and school-level processes not related to ability, such as working climate and administration…
Descriptors: Academic Ability, Educational Assessment, Educational Testing, Test Bias
Rijmen, Frank; Jeon, Minjeong; von Davier, Matthias; Rabe-Hesketh, Sophia – Journal of Educational and Behavioral Statistics, 2014
Second-order item response theory models have been used for assessments consisting of several domains, such as content areas. We extend the second-order model to a third-order model for assessments that include subdomains nested in domains. Using a graphical model framework, it is shown how the model does not suffer from the curse of…
Descriptors: Item Response Theory, Models, Educational Assessment, Computation
Debeer, Dries; Buchholz, Janine; Hartig, Johannes; Janssen, Rianne – Journal of Educational and Behavioral Statistics, 2014
In this article, the change in examinee effort during an assessment, which we will refer to as persistence, is modeled as an effect of item position. A multilevel extension is proposed to analyze hierarchically structured data and decompose the individual differences in persistence. Data from the 2009 Programme for International Student Assessment…
Descriptors: Reading Tests, International Programs, Testing Programs, Individual Differences
Wang, Chun – Journal of Educational and Behavioral Statistics, 2014
Many latent traits in social sciences display a hierarchical structure, such as intelligence, cognitive ability, or personality. Usually a second-order factor is linearly related to a group of first-order factors (also called domain abilities in cognitive ability measures), and the first-order factors directly govern the actual item responses.…
Descriptors: Measurement, Accuracy, Item Response Theory, Adaptive Testing
Thissen-Roe, Anne; Thissen, David – Journal of Educational and Behavioral Statistics, 2013
Extreme response set, the tendency to prefer the lowest or highest response option when confronted with a Likert-type response scale, can lead to misfit of item response models such as the generalized partial credit model. Recently, a series of intrinsically multidimensional item response models have been hypothesized, wherein tendency toward…
Descriptors: Likert Scales, Responses, Item Response Theory, Models
Ranger, Jochen; Kuhn, Jörg-Tobias – Journal of Educational and Behavioral Statistics, 2013
It is common practice to log-transform response times before analyzing them with standard factor analytical methods. However, sometimes the log-transformation is not capable of linearizing the relation between the response times and the latent traits. Therefore, a more general approach to response time analysis is proposed in the current…
Descriptors: Item Response Theory, Simulation, Reaction Time, Least Squares Statistics
Jeon, Minjeong; Rijmen, Frank; Rabe-Hesketh, Sophia – Journal of Educational and Behavioral Statistics, 2013
The authors present a generalization of the multiple-group bifactor model that extends the classical bifactor model for categorical outcomes by relaxing the typical assumption of independence of the specific dimensions. In addition to the means and variances of all dimensions, the correlations among the specific dimensions are allowed to differ…
Descriptors: Test Bias, Generalization, Models, Item Response Theory
Hung, Lai-Fa; Wang, Wen-Chung – Journal of Educational and Behavioral Statistics, 2012
In the human sciences, ability tests or psychological inventories are often repeatedly conducted to measure growth. Standard item response models do not take into account possible autocorrelation in longitudinal data. In this study, the authors propose an item response model to account for autocorrelation. The proposed three-level model consists…
Descriptors: Item Response Theory, Correlation, Models, Longitudinal Studies
Magis, David; Raîche, Gilles; Béland, Sébastien – Journal of Educational and Behavioral Statistics, 2012
This paper focuses on two likelihood-based indices of person fit, the index "l_z" and Snijders's modified index "l_z*". The first one is commonly used in practical assessment of person fit, although its asymptotic standard normal distribution is not valid when true abilities are replaced by sample ability estimates. The…
Descriptors: Goodness of Fit, Item Response Theory, Computation, Ability
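For context, the uncorrected l_z index of Drasgow, Levine, and Williams (1985), the quantity this paper studies, can be sketched as below. Ability is taken as given, which is exactly the assumption whose violation motivates Snijders's corrected l_z*; the probabilities used in the usage test are hypothetical.

```python
import math

def lz_person_fit(responses, probs):
    """Standardized log-likelihood person-fit index l_z
    (Drasgow, Levine, & Williams, 1985), at a given ability.

    responses: 0/1 scored item responses.
    probs: model probabilities P_i of a correct response at that ability."""
    # Observed log-likelihood of the response pattern
    l0 = sum(u * math.log(p) + (1 - u) * math.log(1 - p)
             for u, p in zip(responses, probs))
    # Its expectation and variance under the model
    expected = sum(p * math.log(p) + (1 - p) * math.log(1 - p) for p in probs)
    variance = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2 for p in probs)
    return (l0 - expected) / math.sqrt(variance)
```

Large negative values flag aberrant response patterns (e.g., missing easy items while solving hard ones); referring l_z to N(0, 1) is only strictly valid at the true ability, hence the modified index.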
Andrich, David; Hagquist, Curt – Journal of Educational and Behavioral Statistics, 2012
The literature in modern test theory on procedures for identifying items with differential item functioning (DIF) among two groups of persons includes the Mantel-Haenszel (MH) procedure. Generally, it is not recognized explicitly that if there is real DIF in some items which favor one group, then as an artifact of this procedure, artificial DIF…
Descriptors: Test Bias, Test Items, Item Response Theory, Statistical Analysis
Andrich, David; Marais, Ida; Humphry, Stephen – Journal of Educational and Behavioral Statistics, 2012
Andersen (1995, 2002) proves a theorem relating variances of parameter estimates from samples and subsamples and shows its use as an adjunct to standard statistical analyses. The authors show an application where the theorem is central to the hypothesis tested, namely, whether random guessing to multiple choice items affects their estimates in the…
Descriptors: Test Items, Item Response Theory, Multiple Choice Tests, Guessing (Tests)
Jeon, Minjeong; Rabe-Hesketh, Sophia – Journal of Educational and Behavioral Statistics, 2012
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Descriptors: Maximum Likelihood Statistics, Computation, Models, Factor Structure
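The profile-likelihood idea (fix the nonstandard parameter at a constant, fit the resulting standard model, repeat over a grid, keep the best fit) can be sketched on a toy regression. The model, data, and exponent below are hypothetical illustrations of the general strategy, not the authors' application.

```python
def ols_rss(x, y):
    """Fit y = a + b*x by least squares; return the residual sum of squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

def profile_lambda(x, y, grid):
    """Profile likelihood for the exponent in y = a + b * x**lam.

    Fixing lam reduces the model to simple linear regression (the
    'standard model'), so each grid point needs only an OLS fit.
    Returns the grid value with the smallest RSS (highest likelihood)."""
    return min(grid, key=lambda lam: ols_rss([xi ** lam for xi in x], y))

# Hypothetical data generated with exponent 0.5 (no noise)
x = [float(i) for i in range(1, 21)]
y = [2.0 + 3.0 * xi ** 0.5 for xi in x]
grid = [k / 100 for k in range(10, 101)]   # candidate exponents 0.10 .. 1.00
print(profile_lambda(x, y, grid))          # recovers 0.5
```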