Showing 76 to 90 of 484 results
Peer reviewed
Aydin, Burak; Leite, Walter L.; Algina, James – Educational and Psychological Measurement, 2016
We investigated methods of including covariates in two-level models for cluster randomized trials to increase power to detect the treatment effect. We compared multilevel models that included either an observed cluster mean or a latent cluster mean as a covariate, as well as the effect of including Level 1 deviation scores in the model. A Monte…
Descriptors: Error of Measurement, Predictor Variables, Randomized Controlled Trials, Experimental Groups
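A minimal sketch of the covariate specification this abstract describes, splitting a Level-1 covariate into an observed cluster mean and deviation scores in a random-intercept model. Data, variable names, and effect sizes are illustrative, not from the article.

# Random-intercept model for a cluster randomized trial with the covariate
# split into an observed cluster mean (xbar) and Level-1 deviation scores (xdev).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
J, n = 40, 20                                # clusters, members per cluster
cluster = np.repeat(np.arange(J), n)
treat = np.repeat(rng.integers(0, 2, J), n)  # treatment assigned at cluster level
x = rng.normal(size=J * n)                   # Level-1 covariate
u = np.repeat(rng.normal(scale=0.5, size=J), n)
y = 0.3 * treat + 0.5 * x + u + rng.normal(size=J * n)

d = pd.DataFrame(dict(y=y, treat=treat, x=x, cluster=cluster))
d["xbar"] = d.groupby("cluster")["x"].transform("mean")  # observed cluster mean
d["xdev"] = d["x"] - d["xbar"]                           # Level-1 deviation score

m = smf.mixedlm("y ~ treat + xbar + xdev", d, groups=d["cluster"]).fit()
print(m.summary())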
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2016
The frequently neglected and often misunderstood relationship between classical test theory and item response theory is discussed for the unidimensional case with binary measures and no guessing. It is pointed out that popular item response models can be directly obtained from classical test theory-based models by accounting for the discrete…
Descriptors: Test Theory, Item Response Theory, Models, Correlation
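A worked sketch of the kind of link the abstract points to: dichotomizing a congeneric (classical test theory) measure at a threshold yields a two-parameter normal-ogive IRT model. Notation here is generic, not necessarily the article's.

% Congeneric CTT model for an underlying continuous response X*:
%   X* = lambda * eta + epsilon,  with epsilon ~ N(0, sigma^2).
% Observed binary item: X = 1 iff X* exceeds a threshold tau. Then
\[
P(X = 1 \mid \eta)
  = \Pr(\lambda\eta + \varepsilon > \tau)
  = \Phi\!\left(\frac{\lambda\eta - \tau}{\sigma}\right)
  = \Phi\bigl(a(\eta - b)\bigr),
\qquad a = \frac{\lambda}{\sigma},\; b = \frac{\tau}{\lambda},
\]
% i.e., a two-parameter normal-ogive model with no guessing parameter.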
Peer reviewed
DeMars, Christine E. – Educational and Psychological Measurement, 2016
Partially compensatory models may capture the cognitive skills needed to answer test items more realistically than compensatory models, but estimating the model parameters may be a challenge. Data were simulated to follow two different partially compensatory models, a model with an interaction term and a product model. The model parameters were…
Descriptors: Item Response Theory, Models, Thinking Skills, Test Items
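A minimal sketch of the product ("noncompensatory") form referenced in the abstract; the interaction-term variant differs, and the parameter values below are made up.

# Two-dimensional partially compensatory (product) item response function:
# the response probability is the product of per-dimension logistic terms,
# so a deficit on one skill cannot be fully offset by strength on the other.
import numpy as np

def product_model_p(theta, a, b):
    """theta, a, b: arrays of length K (one entry per skill dimension)."""
    theta, a, b = map(np.asarray, (theta, a, b))
    terms = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return float(np.prod(terms))

print(product_model_p(theta=[1.0, -1.0], a=[1.2, 0.8], b=[0.0, 0.0]))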
Peer reviewed
Matlock, Ki Lynn; Turner, Ronna – Educational and Psychological Measurement, 2016
When multiple test forms are constructed, they are often matched on the number of items and total test difficulty. Not all test developers, however, match the number of items and/or average item difficulty within subcontent areas. In this simulation study, six test forms were constructed having an equal number of items and average item difficulty overall.…
Descriptors: Item Response Theory, Computation, Test Items, Difficulty Level
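A toy sketch of one way to assemble forms with equal length and similar average difficulty; this is an illustration, not the article's assembly procedure.

# Toy parallel-forms assembly: sort items by difficulty, then deal them out
# to forms in serpentine order so each form gets the same number of items
# and a similar mean difficulty.
import numpy as np

rng = np.random.default_rng(0)
difficulty = rng.normal(size=60)            # illustrative IRT b-parameters
n_forms = 2
order = np.argsort(difficulty)
forms = [[] for _ in range(n_forms)]
for rank, item in enumerate(order):
    block, pos = divmod(rank, n_forms)
    form = pos if block % 2 == 0 else n_forms - 1 - pos   # serpentine dealing
    forms[form].append(item)

for f in forms:
    print(len(f), round(float(difficulty[f].mean()), 3))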
Peer reviewed
Wetzel, Eunike; Böhnke, Jan R.; Rose, Norman – Educational and Psychological Measurement, 2016
The impact of response styles such as extreme response style (ERS) on trait estimation has long been a matter of concern to researchers and practitioners. This simulation study investigated three methods that have been proposed for the correction of trait estimates for ERS effects: (a) mixed Rasch models, (b) multidimensional item response models,…
Descriptors: Response Style (Tests), Simulation, Methods, Computation
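A minimal descriptive check, not one of the three model-based corrections the study compares: the proportion of extreme category endorsements per respondent, a common raw ERS indicator.

# Raw extreme response style (ERS) indicator: for each respondent, the
# proportion of responses in the lowest or highest category of a 5-point scale.
import numpy as np

rng = np.random.default_rng(2)
responses = rng.integers(1, 6, size=(100, 20))        # 100 people x 20 items
ers_index = ((responses == 1) | (responses == 5)).mean(axis=1)
print(ers_index[:5])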
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2015
A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…
Descriptors: Correlation, Computation, Statistical Analysis, Hierarchical Linear Modeling
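For orientation, the latent-response intraclass correlation usually reported for binary two-level data, assuming a random-intercept logistic model; the article's latent variable modeling procedure and its interval estimation are more involved than this.

# Latent-response ICC under a random-intercept logistic model: the Level-1
# residual variance on the latent scale is pi^2 / 3, so
# ICC = tau2 / (tau2 + pi^2 / 3). (With a probit link it would be 1 instead.)
import math

tau2 = 0.40                         # illustrative random-intercept variance
icc = tau2 / (tau2 + math.pi ** 2 / 3)
print(round(icc, 3))                # ~0.108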
Peer reviewed
Lai, Emily R.; Wolfe, Edward W.; Vickers, Daisy – Educational and Psychological Measurement, 2015
This report summarizes an empirical study that addresses two related topics within the context of writing assessment--illusory halo and how much unique information is provided by multiple analytic scores. Specifically, we address the issue of whether unique information is provided by analytic scores assigned to student writing, beyond what is…
Descriptors: Writing Tests, Scores, Bias, Holistic Approach
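A toy version of the "unique information" question: compare the R-squared of a criterion regressed on the holistic score alone versus with analytic scores added. The data are simulated and the setup mirrors the question only, not the article's design.

# Incremental R^2 of analytic scores over a holistic score, simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
holistic = rng.normal(size=n)
analytic = holistic[:, None] * 0.8 + rng.normal(scale=0.6, size=(n, 3))
criterion = holistic + 0.3 * analytic[:, 0] + rng.normal(size=n)

r2_base = sm.OLS(criterion, sm.add_constant(holistic)).fit().rsquared
r2_full = sm.OLS(criterion,
                 sm.add_constant(np.column_stack([holistic, analytic]))).fit().rsquared
print(round(r2_base, 3), round(r2_full, 3))   # increment = unique information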
Peer reviewed
Reckase, Mark D.; Xu, Jing-Ru – Educational and Psychological Measurement, 2015
How to compute and report subscores for a test that was originally designed for reporting scores on a unidimensional scale has been a topic of interest in recent years. In the research reported here, we describe an application of multidimensional item response theory to identify a subscore structure in a test designed for reporting results using a…
Descriptors: English, Language Skills, English Language Learners, Scores
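The generic compensatory multidimensional 2PL that such analyses build on; the direction of each item's discrimination vector in the latent space is what suggests a subscore structure. This is the standard form, not necessarily the article's exact parameterization.

\[
P(X_{ij} = 1 \mid \boldsymbol{\theta}_j)
  = \frac{\exp\!\left(\mathbf{a}_i^{\top}\boldsymbol{\theta}_j + d_i\right)}
         {1 + \exp\!\left(\mathbf{a}_i^{\top}\boldsymbol{\theta}_j + d_i\right)}
\]
% Items whose discrimination vectors a_i point in similar directions are
% candidates for reporting on a common subscore dimension.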
Peer reviewed
Bishara, Anthony J.; Hittner, James B. – Educational and Psychological Measurement, 2015
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
Descriptors: Research Methodology, Monte Carlo Methods, Correlation, Simulation
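A compressed Monte Carlo in the spirit of the study: the Pearson r computed on skewed (lognormal) data falls well below the correlation of the underlying normal variables, while Spearman's rho is less affected. The true correlation, sample size, and distribution choice are arbitrary.

# Pearson vs. Spearman under lognormal marginals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
rho, n, reps = 0.5, 30, 2000
pearson, spearman = [], []
for _ in range(reps):
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    x, y = np.exp(z[:, 0]), np.exp(z[:, 1])          # induce skewness
    pearson.append(stats.pearsonr(x, y)[0])
    spearman.append(stats.spearmanr(x, y)[0])

print(round(float(np.mean(pearson)), 3),
      round(float(np.mean(spearman)), 3))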
Peer reviewed
Wetzel, Eunike; Xu, Xueli; von Davier, Matthias – Educational and Psychological Measurement, 2015
In large-scale educational surveys, a latent regression model is used to compensate for the shortage of cognitive information. Conventionally, the covariates in the latent regression model are principal components extracted from background data. This operational method has several important disadvantages, such as the handling of missing data and…
Descriptors: Surveys, Regression (Statistics), Models, Research Methodology
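A sketch of the conventional operational step the abstract refers to, extracting principal components from background variables for use as conditioning covariates in the latent regression. The data here are simulated placeholders.

# Principal components of background (context) variables as covariates.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
background = rng.normal(size=(1000, 50))     # e.g., dummy-coded questionnaire
pca = PCA(n_components=0.90)                 # keep components for 90% of variance
components = pca.fit_transform(background)
print(components.shape)                      # covariates for the latent regression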
Peer reviewed
Kam, Chester Chun Seng; Zhou, Mingming – Educational and Psychological Measurement, 2015
Previous research has found the effects of acquiescence to be generally consistent across item "aggregates" within a single survey (i.e., essential tau-equivalence), but it is unknown whether this phenomenon is consistent at the "individual item" level. This article evaluated the often assumed but inadequately tested…
Descriptors: Test Items, Surveys, Criteria, Correlation
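A common descriptive acquiescence index, shown for orientation rather than as the article's analysis: the mean response across a fully balanced item set before the negatively keyed half is reverse-coded, so content cancels and the agreement tendency remains.

# Acquiescence index from a balanced scale (5-point items, unrecoded).
import numpy as np

rng = np.random.default_rng(6)
pos = rng.integers(1, 6, size=(200, 10))   # positively keyed items
neg = rng.integers(1, 6, size=(200, 10))   # negatively keyed items, NOT recoded
acq = np.hstack([pos, neg]).mean(axis=1)
print((acq - 3).round(2)[:5])              # deviation from midpoint = acquiescence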
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos – Educational and Psychological Measurement, 2015
A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown prophecy and correction-for-attenuation formulas, as well as…
Descriptors: Psychometrics, Correlation, Validity, Reliability
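The two classical results the abstract names, both of which presuppose uncorrelated errors, in their standard form.

% Spearman-Brown prophecy (reliability of a test lengthened k-fold):
\[
\rho_{kk'} \;=\; \frac{k\,\rho_{xx'}}{1 + (k-1)\,\rho_{xx'}}
\]
% Correction for attenuation (correlation between true scores):
\[
\rho_{T_X T_Y} \;=\; \frac{\rho_{XY}}{\sqrt{\rho_{XX'}\,\rho_{YY'}}}
\]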
Peer reviewed
Deng, Lifang; Marcoulides, George A.; Yuan, Ke-Hai – Educational and Psychological Measurement, 2015
A certain degree of diversity among team members is beneficial to the growth of an organization. Multiple measures have been proposed to quantify diversity, although little is known about their psychometric properties. This article proposes several methods to evaluate the unidimensionality and reliability of three measures of diversity. To approximate the…
Descriptors: Likert Scales, Psychometrics, Cultural Differences, Measures (Individuals)
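For concreteness, one widely used diversity measure, Blau's index 1 - sum(p_k^2); the article evaluates three diversity measures, which may or may not include this one.

# Blau's index of diversity for a categorical attribute.
import numpy as np

def blau(categories):
    _, counts = np.unique(categories, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

print(round(blau(["US", "US", "CN", "IN", "IN", "DE"]), 3))   # 0.722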
Peer reviewed
Can, Seda; van de Schoot, Rens; Hox, Joop – Educational and Psychological Measurement, 2015
Because variables in the social and behavioral sciences are often correlated, multicollinearity can be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation…
Descriptors: Factor Analysis, Comparative Analysis, Maximum Likelihood Statistics, Bayesian Statistics
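A sketch of the kind of data generation such a simulation requires: two factor scores whose correlation is set separately at the between (cluster) and within level, with the ICC controlling how much variance sits between clusters. All values are illustrative.

# Two-level data with level-specific collinearity between two factors.
import numpy as np

rng = np.random.default_rng(7)
J, n, icc, r_b, r_w = 100, 10, 0.3, 0.8, 0.3
cov_b = icc * np.array([[1, r_b], [r_b, 1]])          # between-level covariance
cov_w = (1 - icc) * np.array([[1, r_w], [r_w, 1]])    # within-level covariance
between = np.repeat(rng.multivariate_normal([0, 0], cov_b, size=J), n, axis=0)
within = rng.multivariate_normal([0, 0], cov_w, size=J * n)
scores = between + within          # total correlation mixes the two levels
print(np.corrcoef(scores.T).round(2))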
Peer reviewed
Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas – Educational and Psychological Measurement, 2014
An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…
Descriptors: Sampling, Test Items, Effect Size, Scaling
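A minimal sketch of the pairwise scalability coefficient H from Mokken scale analysis, the quantity both AISP and the genetic algorithm try to keep above a lower bound when forming scales. Binary items; the data are simulated.

# Loevinger's H for an item pair: 1 minus the observed Guttman error rate
# over the error rate expected under independence.
import numpy as np

def h_pair(x, y):
    """Pairwise scalability coefficient for two binary item score vectors."""
    if x.mean() > y.mean():          # make x the harder (less popular) item
        x, y = y, x
    observed = np.mean((x == 1) & (y == 0))     # pass hard, fail easy
    expected = x.mean() * (1 - y.mean())        # same event under independence
    return 1.0 - observed / expected

rng = np.random.default_rng(8)
theta = rng.normal(size=1000)
x = (theta + rng.normal(size=1000) > 0.5).astype(int)
y = (theta + rng.normal(size=1000) > -0.5).astype(int)
print(round(h_pair(x, y), 3))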