Showing 151 to 165 of 3,822 results
Peer reviewed
Yang, Yanyun; Xia, Yan – Educational and Psychological Measurement, 2019
When item scores are ordered categorical, categorical omega can be computed based on the parameter estimates from a factor analysis model using frequentist estimators such as diagonally weighted least squares. When the sample size is relatively small and thresholds are different across items, using diagonally weighted least squares can yield a…
Descriptors: Scores, Sample Size, Bayesian Statistics, Item Analysis
Peer reviewed
DeMars, Christine E. – Educational and Psychological Measurement, 2019
Previous work showing that revised parallel analysis can be effective with dichotomous items has used a two-parameter model and normally distributed abilities. In this study, both two- and three-parameter models were used with normally distributed and skewed ability distributions. Relatively minor skew and kurtosis in the underlying ability…
Descriptors: Item Analysis, Models, Error of Measurement, Item Response Theory
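The core mechanic behind parallel analysis can be sketched briefly. Note the sketch below is Horn's classical procedure for continuous data (compare observed eigenvalues against eigenvalues of random data of the same dimensions), not the revised variant with dichotomous items that the study evaluates; the data-generating numbers are purely illustrative.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: retain the leading factors whose
    observed eigenvalues exceed the mean eigenvalues of random data
    with the same number of rows and columns."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eigs = np.zeros((n_sims, p))
    for s in range(n_sims):
        sim = rng.standard_normal((n, p))
        sim_eigs[s] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    ref = sim_eigs.mean(axis=0)
    # count leading eigenvalues that beat the random reference
    k = 0
    for o, r in zip(obs_eigs, ref):
        if o > r:
            k += 1
        else:
            break
    return k, obs_eigs, ref

# illustrative data: one common factor driving 6 indicators
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 1))
X = 0.7 * f + 0.5 * rng.standard_normal((500, 6))
k, obs, ref = parallel_analysis(X)
```

With a single strong factor, only the first observed eigenvalue exceeds the random-data reference, so the procedure retains one factor.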
Peer reviewed
Dueber, David M.; Love, Abigail M. A.; Toland, Michael D.; Turner, Trisha A. – Educational and Psychological Measurement, 2019
One of the most frequently cited methodological issues concerns the response format, which is traditionally a single-response Likert scale. Therefore, our study aims to elucidate and illustrate an alternative response format and analytic technique, Thurstonian item response theory (IRT), for analyzing data from surveys using an alternate response…
Descriptors: Item Response Theory, Surveys, Measurement Techniques, Psychometrics
Peer reviewed
Xia, Yan; Green, Samuel B.; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2019
Past research suggests revised parallel analysis (R-PA) tends to yield relatively accurate results in determining the number of factors in exploratory factor analysis. R-PA can be interpreted as a series of hypothesis tests. At each step in the series, a null hypothesis is tested that an additional factor accounts for zero common variance among…
Descriptors: Effect Size, Factor Analysis, Hypothesis Testing, Psychometrics
Peer reviewed
Shi, Dexin; Lee, Taehun; Maydeu-Olivares, Alberto – Educational and Psychological Measurement, 2019
This study investigated the effect the number of observed variables (p) has on three structural equation modeling indices: the comparative fit index (CFI), the Tucker--Lewis index (TLI), and the root mean square error of approximation (RMSEA). The behaviors of the population fit indices and their sample estimates were compared under various…
Descriptors: Structural Equation Models, Goodness of Fit, Sample Size
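The three indices examined in this entry are simple functions of the model and baseline chi-square statistics. A minimal sketch of the standard ML-based formulas (the input values below are made up for illustration):

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """RMSEA, CFI, and TLI from the fitted model's chi-square
    (chi2_m, df_m) and the baseline model's (chi2_b, df_b)."""
    # RMSEA: per-df misfit, floored at zero
    rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
    # CFI: relative reduction in noncentrality vs. the baseline
    d_m = max(chi2_m - df_m, 0)
    d_b = max(chi2_b - df_b, 0)
    cfi = 1 - d_m / max(d_b, d_m, 1e-12)
    # TLI: penalizes per-df misfit relative to the baseline
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)
    return rmsea, cfi, tli

rmsea, cfi, tli = fit_indices(chi2_m=45.0, df_m=30, chi2_b=600.0, df_b=45, n=300)
```

Because RMSEA divides by df while CFI and TLI compare against a baseline that grows with the number of variables, the indices can react differently as p increases, which is the behavior the study investigates.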
Peer reviewed
Olvera Astivia, Oscar L.; Kroc, Edward – Educational and Psychological Measurement, 2019
Within the context of moderated multiple regression, mean centering is recommended both to simplify the interpretation of the coefficients and to reduce the problem of multicollinearity. For almost 30 years, theoreticians and applied researchers have advocated for centering as an effective way to reduce the correlation between variables and thus…
Descriptors: Multiple Regression Analysis, Computation, Correlation, Statistical Distributions
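The centering issue the abstract raises can be demonstrated in a few lines: mean centering reparameterizes the lower-order coefficients but leaves the interaction coefficient (and the model's fit) unchanged, which is why it cannot genuinely "fix" multicollinearity. The simulated coefficients below are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(5, 2, n)
z = rng.normal(3, 1, n)
y = 1 + 0.5 * x + 0.8 * z + 0.3 * x * z + rng.normal(0, 1, n)

def ols_interaction(x, z, y):
    """OLS fit of y on x, z, and their product."""
    X = np.column_stack([np.ones_like(x), x, z, x * z])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_raw = ols_interaction(x, z, y)
b_cen = ols_interaction(x - x.mean(), z - z.mean(), y)
# the interaction slope is identical under raw and centered predictors;
# only the intercept and main-effect terms are reparameterized
```

The lowered correlation between x and x*z after centering is a change of basis, not a change in the information the data carry about the interaction.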
Peer reviewed
Marcoulides, Katerina M.; Raykov, Tenko – Educational and Psychological Measurement, 2019
A procedure that can be used to evaluate the variance inflation factors and tolerance indices in linear regression models is discussed. The method permits both point and interval estimation of these factors and indices associated with explanatory variables considered for inclusion in a regression model. The approach makes use of popular latent…
Descriptors: Regression (Statistics), Statistical Analysis, Computation, Computer Software
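The quantities this entry concerns have simple definitions: the variance inflation factor for predictor j is 1 / (1 - R²_j), where R²_j comes from regressing predictor j on the remaining predictors, and tolerance is its reciprocal. A minimal sketch (this is the textbook computation, not the latent-variable-software approach the article proposes):

```python
import numpy as np

def vif_and_tolerance(X):
    """VIF_j = 1 / (1 - R_j^2); tolerance_j = 1 / VIF_j."""
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # intercept + other predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        vifs.append(1 / (1 - r2))
    vifs = np.array(vifs)
    return vifs, 1 / vifs

# illustration: x1 and x2 correlated ~0.9, x3 independent
rng = np.random.default_rng(0)
n = 2000
x1 = rng.standard_normal(n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.81) * rng.standard_normal(n)
x3 = rng.standard_normal(n)
vifs, tol = vif_and_tolerance(np.column_stack([x1, x2, x3]))
```

With a 0.9 correlation, x1 and x2 each carry a VIF near 1 / (1 - 0.81) ≈ 5.3, while the independent x3 stays near 1.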
Peer reviewed
Park, Minjeong; Wu, Amery D. – Educational and Psychological Measurement, 2019
Item response tree (IRTree) models were recently introduced as an approach to modeling response data from Likert-type rating scales. IRTree models are particularly useful for capturing a variety of individuals' behaviors involved in item responding. This study employed IRTree models to investigate response styles, which are individuals' tendencies to…
Descriptors: Item Response Theory, Models, Likert Scales, Response Style (Tests)
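The data side of an IRTree analysis is a recoding step: each Likert response is decomposed into binary pseudo-items along a hypothesized decision tree. The sketch below uses one commonly cited decomposition for a 5-point scale (midpoint selection, then direction, then extremity); the specific tree in the article may differ.

```python
def irtree_recode(response):
    """Decompose a 5-point Likert response (1..5) into three binary
    pseudo-items: midpoint, direction (given not midpoint), and
    extremity (given a directional response). None = branch not reached."""
    midpoint = 1 if response == 3 else 0
    direction = None if midpoint else (1 if response > 3 else 0)
    extreme = None if midpoint else (1 if response in (1, 5) else 0)
    return midpoint, direction, extreme

# a "strongly agree" (5) takes the agree branch and the extreme node;
# a neutral (3) answers only the midpoint pseudo-item
codes = [irtree_recode(r) for r in (1, 2, 3, 4, 5)]
```

Fitting standard dichotomous IRT models to these pseudo-items then separates content-driven responding from response-style tendencies such as midpoint or extreme responding.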
Peer reviewed
Harrison, Allyson G.; Butt, Kaitlyn; Armstrong, Irene – Educational and Psychological Measurement, 2019
There has been a marked increase in accommodation requests from students with disabilities at both the postsecondary education level and on high-stakes examinations. As such, accurate identification and quantification of normative impairment is essential for equitable provision of accommodations. Considerable diversity currently exists in methods…
Descriptors: Achievement Tests, Test Norms, Age, Instructional Program Divisions
Peer reviewed
Wind, Stefanie A.; Guo, Wenjing – Educational and Psychological Measurement, 2019
Rater effects, or raters' tendencies to assign ratings to performances that are different from the ratings that the performances warranted, are well documented in rater-mediated assessments across a variety of disciplines. In many real-data studies of rater effects, researchers have reported that raters exhibit more than one effect, such as a…
Descriptors: Evaluators, Bias, Scoring, Data Collection
Konstantopoulos, Spyros; Li, Wei; Miller, Shazia; van der Ploeg, Arie – Educational and Psychological Measurement, 2019
This study discusses quantile regression methodology and its usefulness in education and social science research. First, quantile regression is defined and its advantages vis-à-vis ordinary least squares regression are illustrated. Second, specific comparisons are made between ordinary least squares and quantile regression methods. Third, the…
Descriptors: Regression (Statistics), Statistical Analysis, Educational Research, Social Science Research
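The key difference from ordinary least squares is the loss function: quantile regression minimizes the asymmetric "check" (pinball) loss rather than squared error. A minimal sketch showing that, for a constant-only model, the tau-th sample quantile minimizes the check loss (the exponential data below are only for illustration):

```python
import numpy as np

def check_loss(residuals, tau):
    """Pinball (check) loss: tau * r for r >= 0, (tau - 1) * r for r < 0.
    Minimizing it over a constant recovers the tau-th quantile."""
    r = np.asarray(residuals, dtype=float)
    return np.where(r >= 0, tau * r, (tau - 1) * r).sum()

rng = np.random.default_rng(0)
y = rng.exponential(1.0, 2000)          # skewed outcome, as in many test-score settings
q90 = np.quantile(y, 0.9)
grid = np.linspace(y.min(), y.max(), 400)
losses = [check_loss(y - c, 0.9) for c in grid]
best = grid[int(np.argmin(losses))]     # grid minimizer lands at the 0.9 quantile
```

Replacing the constant with a linear predictor and minimizing the same loss gives the conditional-quantile fits the article discusses, which is what makes the method informative for skewed outcomes where OLS describes only the conditional mean.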
Peer reviewed
Bürkner, Paul-Christian; Schulte, Niklas; Holling, Heinz – Educational and Psychological Measurement, 2019
Forced-choice questionnaires have been proposed to avoid common response biases typically associated with rating scale questionnaires. To overcome ipsativity issues of trait scores obtained from classical scoring approaches of forced-choice items, advanced methods from item response theory (IRT) such as the Thurstonian IRT model have been…
Descriptors: Item Response Theory, Measurement Techniques, Questionnaires, Rating Scales
Peer reviewed
Zopluoglu, Cengiz – Educational and Psychological Measurement, 2019
Machine-learning methods are frequently used across many fields, yet relatively few studies have applied them to identifying potential testing fraud. In this study, a technical review of a recently developed state-of-the-art algorithm, Extreme Gradient Boosting (XGBoost), is…
Descriptors: Identification, Test Items, Deception, Cheating
Peer reviewed
Han, Kyung T.; Dimitrov, Dimiter M.; Al-Mashary, Faisal – Educational and Psychological Measurement, 2019
The "D"-scoring method for scoring and equating tests with binary items proposed by Dimitrov offers some of the advantages of item response theory, such as item-level difficulty information and score computation that reflects the item difficulties, while retaining the merits of classical test theory such as the simplicity of number…
Descriptors: Test Construction, Scoring, Test Items, Adaptive Testing
Peer reviewed
Kalinowski, Steven T. – Educational and Psychological Measurement, 2019
Item response theory (IRT) is a statistical paradigm for developing educational tests and assessing students. IRT, however, currently lacks an established graphical method for examining model fit for the three-parameter logistic model, the most flexible and popular IRT model in educational testing. A method is presented here to do this. The graph,…
Descriptors: Item Response Theory, Educational Assessment, Goodness of Fit, Probability
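The model at the center of this entry has a closed-form item characteristic curve. A minimal sketch of the three-parameter logistic (3PL) function with illustrative parameter values (the article's graphical fit method itself is not reproduced here):

```python
import math

def p_3pl(theta, a, b, c):
    """3PL item characteristic curve:
    P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b))),
    where a = discrimination, b = difficulty, c = guessing floor."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# the curve floors at c for low ability and approaches 1 for high ability
low = p_3pl(-4.0, a=1.2, b=0.0, c=0.2)   # near the guessing parameter
mid = p_3pl(0.0, a=1.2, b=0.0, c=0.2)    # (1 + c) / 2 at theta = b
high = p_3pl(4.0, a=1.2, b=0.0, c=0.2)
```

Graphical fit assessment typically overlays this curve on empirical proportions correct within ability groups; systematic departures from the curve indicate misfit.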