Showing 1 to 15 of 3,659 results
Peer reviewed
Schweizer, Karl; Reiß, Siegbert; Troche, Stefan – Educational and Psychological Measurement, 2019
The article reports three simulation studies conducted to find out whether the effect of a time limit for testing impairs model fit in investigations of structural validity, whether representing the assumed source of the effect prevents the impairment of model fit, and whether it is possible to identify and discriminate this method effect from…
Descriptors: Timed Tests, Testing, Barriers, Testing Problems
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2019
This note discusses the merits of coefficient alpha and the conditions under which it performs well, in light of recent critical publications that overlook significant research findings from the past several decades. That earlier research has demonstrated the empirical relevance and utility of coefficient alpha under certain empirical circumstances. The article highlights…
Descriptors: Test Validity, Test Reliability, Test Items, Correlation
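Only the abstract is shown here, so the article's own derivations are unavailable; as background, the standard sample formula for coefficient alpha can be sketched as follows. The data-generating setup and variable names are illustrative, not taken from the article.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_persons, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# toy example: 4 items sharing one true score (hypothetical data)
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=1.0, size=(200, 4))
print(round(cronbach_alpha(items), 2))
```

With equal true-score and error variances per item, alpha for four items should land near the Spearman–Brown value of about 0.8.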
Peer reviewed
Biancarosa, Gina; Kennedy, Patrick C.; Carlson, Sarah E.; Yoon, HyeonJin; Seipel, Ben; Liu, Bowen; Davison, Mark L. – Educational and Psychological Measurement, 2019
Prior research suggests that subscores from a single achievement test seldom add value over a single total score. Such scores typically correspond to subcontent areas in the total content domain, but content subdomains might not provide a sound basis for subscores. Using scores on an inferential reading comprehension test from 625 third, fourth,…
Descriptors: Scores, Scoring, Achievement Tests, Grade 3
Peer reviewed
Nicewander, W. Alan – Educational and Psychological Measurement, 2019
This inquiry is focused on three indicators of the precision of measurement, conditional on fixed values of θ, the latent variable of item response theory (IRT). The indicators that are compared are (1) the traditional conditional standard errors, s(e_X|θ) = CSEM; (2) the IRT-based conditional standard errors, s_irt(e_X|θ) = C…
Descriptors: Measurement, Accuracy, Scores, Error of Measurement
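The abstract does not spell out its estimators, but one classical conditional standard error of measurement for a number-correct score is Lord's binomial-error estimate, CSEM(x) = sqrt(x(n − x)/(n − 1)); whether this is among the article's three indicators is an assumption. A minimal sketch:

```python
import math

def binomial_csem(x: int, n_items: int) -> float:
    """Lord's binomial-error conditional SEM for number-correct score x
    on a test of n_items items: sqrt(x * (n - x) / (n - 1))."""
    return math.sqrt(x * (n_items - x) / (n_items - 1))

# CSEM is zero at the score floor/ceiling and largest mid-range
for x in (0, 20, 40):
    print(x, round(binomial_csem(x, 40), 2))
```

This illustrates why precision must be reported conditionally: the error of measurement varies systematically across the score range.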
Peer reviewed
List, Marit Kristine; Köller, Olaf; Nagy, Gabriel – Educational and Psychological Measurement, 2019
Tests administered in studies of student achievement often have a certain amount of not-reached items (NRIs). The propensity for NRIs may depend on the proficiency measured by the test and on additional covariates. This article proposes a semiparametric model to study such relationships. Our model extends Glas and Pimentel's item response theory…
Descriptors: Educational Assessment, Item Response Theory, Multivariate Analysis, Test Items
Peer reviewed
Trafimow, David; Wang, Tonghui; Wang, Cong – Educational and Psychological Measurement, 2019
Two recent publications in "Educational and Psychological Measurement" advocated that researchers consider using the a priori procedure. According to this procedure, the researcher specifies, prior to data collection, how close she wishes her sample mean(s) to be to the corresponding population mean(s), and the desired probability of…
Descriptors: Statistical Distributions, Sample Size, Equations (Mathematics), Statistical Analysis
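For the simplest (known-sigma, normal) case, the sample size implied by the a priori procedure's two inputs can be sketched as below: if the researcher wants the sample mean within f standard deviations of the population mean with probability c, then n = (z/f)² with z the standard normal quantile at (1 + c)/2. Whether this matches the equations the article develops is an assumption; the function name is illustrative.

```python
import math
from statistics import NormalDist

def apriori_n(f: float, c: float) -> int:
    """Sample size so the sample mean falls within f population standard
    deviations of the population mean with probability c (normal case)."""
    z = NormalDist().inv_cdf((1 + c) / 2)
    return math.ceil((z / f) ** 2)

# e.g. within 0.2 SD of the population mean with 95% probability
print(apriori_n(0.2, 0.95))
```

Note the precision f is stated in standard-deviation units, so no estimate of sigma is needed before data collection.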
Peer reviewed
Yang, Yanyun; Xia, Yan – Educational and Psychological Measurement, 2019
When item scores are ordered categorical, categorical omega can be computed based on the parameter estimates from a factor analysis model using frequentist estimators such as diagonally weighted least squares. When the sample size is relatively small and thresholds are different across items, using diagonally weighted least squares can yield a…
Descriptors: Scores, Sample Size, Bayesian Statistics, Item Analysis
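Categorical omega is computed from the loadings of a factor model fitted to the ordinal items (via polychoric correlations and thresholds); that estimation step is what the frequentist-versus-Bayesian comparison concerns and is not reproduced here. The final formula applied to standardized unidimensional loadings is the standard omega expression, sketched below with illustrative loading values:

```python
import numpy as np

def omega_from_loadings(loadings: np.ndarray) -> float:
    """McDonald's omega from standardized unidimensional factor loadings:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)."""
    lam_sum_sq = loadings.sum() ** 2
    uniqueness = (1 - loadings ** 2).sum()
    return lam_sum_sq / (lam_sum_sq + uniqueness)

# hypothetical standardized loadings for a four-item scale
loadings = np.array([0.6, 0.7, 0.8, 0.7])
print(round(omega_from_loadings(loadings), 3))
```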
Peer reviewed
DeMars, Christine E. – Educational and Psychological Measurement, 2019
Previous work showing that revised parallel analysis can be effective with dichotomous items has used a two-parameter model and normally distributed abilities. In this study, both two- and three-parameter models were used with normally distributed and skewed ability distributions. Relatively minor skew and kurtosis in the underlying ability…
Descriptors: Item Analysis, Models, Error of Measurement, Item Response Theory
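The study concerns *revised* parallel analysis; as orientation, classical Horn parallel analysis (not the revised variant) can be sketched as follows: retain as many factors as have observed eigenvalues exceeding the mean eigenvalues of random data of the same shape. Data and names below are illustrative.

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 200, seed: int = 0) -> int:
    """Horn's parallel analysis: number of observed correlation-matrix
    eigenvalues exceeding the mean eigenvalues of same-shaped random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.normal(size=(n, p))
        sim[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    keep = obs_eig > sim.mean(axis=0)
    return int(keep.sum()) if keep.all() else int(np.argmax(~keep))

# toy one-factor data: parallel analysis should retain a single factor
rng = np.random.default_rng(1)
f = rng.normal(size=(300, 1))
data = f @ np.ones((1, 6)) * 0.7 + rng.normal(scale=0.71, size=(300, 6))
print(parallel_analysis(data))
```

The revised procedure studied in the article alters how the comparison data are generated at each step; this sketch shows only the baseline logic.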
Peer reviewed
Dueber, David M.; Love, Abigail M. A.; Toland, Michael D.; Turner, Trisha A. – Educational and Psychological Measurement, 2019
One of the most frequently cited methodological issues concerns the response format, which is traditionally a single-response Likert format. Therefore, our study aims to elucidate and illustrate an alternative response format and analytic technique, Thurstonian item response theory (IRT), for analyzing data from surveys using an alternate response…
Descriptors: Item Response Theory, Surveys, Measurement Techniques, Psychometrics
Peer reviewed
Xia, Yan; Green, Samuel B.; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2019
Past research suggests revised parallel analysis (R-PA) tends to yield relatively accurate results in determining the number of factors in exploratory factor analysis. R-PA can be interpreted as a series of hypothesis tests. At each step in the series, a null hypothesis is tested that an additional factor accounts for zero common variance among…
Descriptors: Effect Size, Factor Analysis, Hypothesis Testing, Psychometrics
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Harrison, Michael; Menold, Natalja – Educational and Psychological Measurement, 2019
This note confronts the common use of a single coefficient alpha as an index informing about reliability of a multicomponent measurement instrument in a heterogeneous population. Two or more alpha coefficients could instead be meaningfully associated with a given instrument in finite mixture settings, and this may be increasingly more likely the…
Descriptors: Statistical Analysis, Test Reliability, Measures (Individuals), Computation
Peer reviewed
Lin, Chuan-Ju; Chang, Hua-Hua – Educational and Psychological Measurement, 2019
For item selection in cognitive diagnostic computerized adaptive testing (CD-CAT), ideally, a single item selection index should be created to simultaneously regulate precision, exposure status, and attribute balancing. For this purpose, in this study, we first proposed an attribute-balanced item selection criterion, namely, the standardized…
Descriptors: Test Items, Selection Criteria, Computer Assisted Testing, Adaptive Testing
Peer reviewed
Jaki, Thomas; Kim, Minjung; Lamont, Andrea; George, Melissa; Chang, Chi; Feaster, Daniel; Van Horn, M. Lee – Educational and Psychological Measurement, 2019
Regression mixture models are a statistical approach used for estimating heterogeneity in effects. This study investigates the impact of sample size on regression mixture's ability to produce "stable" results. Monte Carlo simulations and analysis of resamples from an application data set were used to illustrate the types of problems that…
Descriptors: Sample Size, Computation, Regression (Statistics), Reliability
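A regression mixture posits that the sample mixes latent classes with different regression coefficients, typically estimated by EM. The minimal two-class sketch below (illustrative names and toy data, not the article's simulation design) shows the mechanics; instability at small samples arises because the E-step class assignments, and hence the class-specific coefficients, are poorly determined.

```python
import numpy as np

def reg_mixture_em(X, y, init_betas, n_iter=50):
    """EM for a two-class regression mixture: each class k has its own
    coefficients beta_k and residual sd sigma_k. Illustrative sketch only."""
    betas = np.array(init_betas, dtype=float)
    sigmas = np.ones(2)
    weights = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: posterior class probabilities per observation
        dens = np.column_stack([
            weights[k] / sigmas[k]
            * np.exp(-0.5 * ((y - X @ betas[k]) / sigmas[k]) ** 2)
            for k in range(2)
        ])
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares and residual sd per class
        for k in range(2):
            w = resp[:, k]
            Xw = X * w[:, None]
            betas[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
            r = y - X @ betas[k]
            sigmas[k] = np.sqrt((w * r ** 2).sum() / w.sum())
            weights[k] = w.mean()
    return betas, weights

# toy data: half the sample has slope 1, half slope 3
rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, size=400)
z = rng.integers(0, 2, size=400)
y = np.where(z == 0, 1.0 * x, 3.0 * x) + rng.normal(scale=0.3, size=400)
X = x[:, None]
betas, weights = reg_mixture_em(X, y, init_betas=[[0.0], [4.0]])
print(sorted(round(float(b[0]), 1) for b in betas))
```

With 400 observations and well-separated classes the slopes are recovered; the article's point is that much larger samples are typically needed for stable results in realistic conditions.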
Peer reviewed
Ferrando, Pere J.; Lorenzo-Seva, Urbano – Educational and Psychological Measurement, 2019
Measures initially designed to be single-trait often yield data that are compatible with both an essentially unidimensional factor-analysis (FA) solution and a correlated-factors solution. For these cases, this article proposes an approach aimed at providing information for deciding which of the two solutions is the most appropriate and useful.…
Descriptors: Factor Analysis, Computation, Reliability, Goodness of Fit
Peer reviewed
Bolin, Jocelyn H.; Finch, W. Holmes; Stenger, Rachel – Educational and Psychological Measurement, 2019
Multilevel data are a reality for many disciplines. Currently, although multiple options exist for the treatment of multilevel data, most disciplines strictly adhere to one method for multilevel data regardless of the specific research design circumstances. The purpose of this Monte Carlo simulation study is to compare several methods for the…
Descriptors: Hierarchical Linear Modeling, Computation, Statistical Analysis, Maximum Likelihood Statistics
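A common first diagnostic in any treatment of multilevel data is the intraclass correlation, which quantifies how much of the outcome variance lies between clusters. The one-way ANOVA estimator for balanced groups is sketched below (the abstract does not specify the methods compared, so this is background, not the study's procedure; data are illustrative):

```python
import numpy as np

def anova_icc(groups) -> float:
    """One-way ANOVA estimate of the intraclass correlation, ICC(1),
    for balanced groups: (MSB - MSW) / (MSB + (n - 1) * MSW)."""
    k = len(groups)
    n = len(groups[0])                      # assumes equal group sizes
    grand = np.concatenate(groups).mean()
    msb = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# toy clustered data: between-group sd 1, within-group sd 1, so ICC near 0.5
rng = np.random.default_rng(3)
groups = [rng.normal(loc=rng.normal(scale=1.0), scale=1.0, size=30)
          for _ in range(50)]
print(round(anova_icc(groups), 2))
```

A nontrivial ICC signals that single-level regression understates standard errors, which is why the choice among multilevel methods matters.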