Showing 46 to 60 of 3,768 results
Peer reviewed
Kim, Stella Y.; Lee, Won-Chan; Kolen, Michael J. – Educational and Psychological Measurement, 2020
A theoretical and conceptual framework for true-score equating using a simple-structure multidimensional item response theory (SS-MIRT) model is developed. A true-score equating method, referred to as the SS-MIRT true-score equating (SMT) procedure, is also developed. SS-MIRT has several advantages over other complex multidimensional item response…
Descriptors: Item Response Theory, Equated Scores, True Scores, Accuracy
Peer reviewed
Hayes, Timothy; Usami, Satoshi – Educational and Psychological Measurement, 2020
Recently, quantitative researchers have shown increased interest in two-step factor score regression (FSR) approaches to structural model estimation. A particularly promising approach proposed by Croon involves first extracting factor scores for each latent factor in a larger model, then correcting the variance-covariance matrix of the factor…
Descriptors: Regression (Statistics), Structural Equation Models, Statistical Bias, Correlation
Peer reviewed
Shi, Dexin; Lee, Taehun; Fairchild, Amanda J.; Maydeu-Olivares, Alberto – Educational and Psychological Measurement, 2020
This study compares two missing data procedures in the context of ordinal factor analysis models: pairwise deletion (PD; the default setting in Mplus) and multiple imputation (MI). We examine which procedure demonstrates parameter estimates and model fit indices closer to those of complete data. The performance of PD and MI is compared under a…
Descriptors: Factor Analysis, Statistical Analysis, Computation, Goodness of Fit
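The contrast the abstract draws can be illustrated with a toy simulation: under MCAR missingness, pairwise deletion recovers a correlation close to the complete-data value. This is a minimal continuous-data sketch, not the ordinal Mplus setup the study examines, and all data here are simulated for illustration.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation of two paired lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

random.seed(1)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.6 * xi + random.gauss(0, 0.8) for xi in x]  # true r ~ 0.6

r_complete = pearson(x, y)

# MCAR: each case is dropped with probability 0.3; pairwise deletion
# then uses every case where both values are observed.
observed = [(xi, yi) for xi, yi in zip(x, y) if random.random() > 0.3]
r_pd = pearson([p[0] for p in observed], [p[1] for p in observed])
```

Under MCAR both estimates land near 0.6; the study's interest lies in ordinal indicators and harder missingness conditions, where PD and MI can diverge.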
Peer reviewed
Dowling, N. Maritza; Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2020
Equating of psychometric scales and tests is frequently required and conducted in educational, behavioral, and clinical research. Construct comparability or equivalence between measuring instruments is a necessary condition for making decisions about linking and equating resulting scores. This article is concerned with a widely applicable method…
Descriptors: Evaluation Methods, Psychometrics, Screening Tests, Dementia
Peer reviewed
Marland, Joshua; Harrick, Matthew; Sireci, Stephen G. – Educational and Psychological Measurement, 2020
Student assessment nonparticipation (or opt out) has increased substantially in K-12 schools in states across the country. This increase in opt out has the potential to impact achievement and growth (or value-added) measures used for educator and institutional accountability. In this simulation study, we investigated the extent to which…
Descriptors: Value Added Models, Teacher Effectiveness, Teacher Evaluation, Elementary Secondary Education
Peer reviewed
LaVoie, Noelle; Parker, James; Legree, Peter J.; Ardison, Sharon; Kilcullen, Robert N. – Educational and Psychological Measurement, 2020
Automated scoring based on Latent Semantic Analysis (LSA) has been successfully used to score essays and constrained short answer responses. Scoring tests that capture open-ended, short answer responses poses some challenges for machine learning approaches. We used LSA techniques to score short answer responses to the Consequences Test, a measure…
Descriptors: Semantics, Evaluators, Essays, Scoring
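Full LSA involves a singular value decomposition of a term-by-document matrix; a much-reduced stand-in for the scoring step is cosine similarity between bag-of-words vectors of a response and a set of reference answers. The prompt, references, and response below are invented for illustration, not items from the Consequences Test.

```python
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Bag-of-words vector: term -> count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical reference answers for a prompt such as
# "What would happen if people no longer needed sleep?"
references = [
    "people would have more hours to work each day",
    "the economy would grow because people work longer",
]
response = "everyone could work many more hours every day"

# Score the response by its best match against any reference answer.
score = max(cosine(bow(response), bow(ref)) for ref in references)
```

A dimensionality-reduced LSA space would additionally reward responses that share meaning without sharing surface vocabulary, which is exactly what makes it attractive for open-ended short answers.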
Peer reviewed
Cohn, Sophie; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2020
The purpose of this study is to evaluate whether a recently developed semiordered model can be used to explore the functioning of neutral response options in rating scale data. Huggins-Manley, Algina, and Zhou developed a class of unidimensional models for semiordered data within scale items (i.e., items with both ordered response categories and…
Descriptors: Models, Responses, Evaluation Methods, Preservice Teachers
Peer reviewed
Finch, W. Holmes – Educational and Psychological Measurement, 2020
Exploratory factor analysis (EFA) is widely used by researchers in the social sciences to characterize the latent structure underlying a set of observed indicator variables. One of the primary issues that must be resolved when conducting an EFA is determination of the number of factors to retain. There exist a large number of statistical tools…
Descriptors: Factor Analysis, Goodness of Fit, Social Sciences, Comparative Analysis
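Among the retention tools such comparisons typically include, the simplest is the Kaiser criterion: retain factors whose eigenvalues exceed 1. For two indicators the eigenvalues of the correlation matrix have a closed form, which allows a compact sketch; this is a toy illustration, not the study's simulation design.

```python
def eigenvalues_2x2(r: float) -> list:
    """Eigenvalues of the 2x2 correlation matrix [[1, r], [r, 1]],
    which are 1 + |r| and 1 - |r| in closed form."""
    return [1 + abs(r), 1 - abs(r)]

def kaiser_retain(eigenvalues) -> int:
    """Kaiser criterion: count eigenvalues strictly greater than 1."""
    return sum(e > 1 for e in eigenvalues)

# Two indicators correlating .6 -> eigenvalues 1.6 and 0.4 -> one factor.
n_factors = kaiser_retain(eigenvalues_2x2(0.6))
```

More defensible tools in this literature, such as parallel analysis, compare observed eigenvalues against eigenvalues of random data rather than against the fixed threshold of 1.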
Peer reviewed
Raykov, Tenko; Al-Qataee, Abdullah A.; Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2020
A procedure for evaluation of validity related coefficients and their differences is discussed, which is applicable when one or more frequently used assumptions in empirical educational, behavioral and social research are violated. The method is developed within the framework of the latent variable modeling methodology and accomplishes point and…
Descriptors: Validity, Evaluation Methods, Social Science Research, Correlation
Peer reviewed
Zopluoglu, Cengiz – Educational and Psychological Measurement, 2020
A mixture extension of Samejima's continuous response model for continuous measurement outcomes and its estimation through a heuristic approach based on limited-information factor analysis is introduced. Using an empirical data set, it is shown that two groups of respondents that differ both qualitatively and quantitatively in their response…
Descriptors: Item Response Theory, Measurement, Models, Heuristics
Peer reviewed
Hong, Maxwell; Steedle, Jeffrey T.; Cheng, Ying – Educational and Psychological Measurement, 2020
Insufficient effort responding (IER) affects many forms of assessment in both educational and psychological contexts. Much research has examined different types of IER, IER's impact on the psychometric properties of test scores, and preprocessing procedures used to detect IER. However, there is a gap in the literature in terms of practical advice…
Descriptors: Responses, Psychometrics, Test Validity, Test Reliability
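One widely used preprocessing screen for IER is the longstring index: the longest run of identical consecutive responses, with respondents flagged when it exceeds a cutoff. The cutoff and response vectors below are illustrative, not recommendations from the article.

```python
def longstring(responses) -> int:
    """Longest run of identical consecutive responses."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

careless = [3, 3, 3, 3, 3, 3, 3, 3]   # straight-lining pattern
attentive = [2, 4, 1, 5, 3, 2, 4, 1]  # varied responding

CUTOFF = 6  # illustrative threshold, not a published standard
flagged = [r for r in (careless, attentive) if longstring(r) >= CUTOFF]
```

Longstring is only one of several screens (response-time thresholds and psychometric-synonym correlations are others), which is why practical guidance on combining them matters.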
Peer reviewed
Himelfarb, Igor; Marcoulides, Katerina M.; Fang, Guoliang; Shotts, Bruce L. – Educational and Psychological Measurement, 2020
The chiropractic clinical competency examination uses groups of items that are integrated by a common case vignette. The nature of the vignette items violates the assumption of local independence for items nested within a vignette. This study examines via simulation a new algorithmic approach for addressing the local independence violation problem…
Descriptors: Allied Health Occupations Education, Allied Health Personnel, Competence, Tests
Peer reviewed
Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020
Based on the framework of testlet models, the current study suggests the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats where an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…
Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format
Peer reviewed
Murrah, William M. – Educational and Psychological Measurement, 2020
Multiple regression is often used to compare the importance of two or more predictors. When the predictors being compared are measured with error, the estimated coefficients can be biased and Type I error rates can be inflated. This study explores the impact of measurement error on comparing predictors when one is measured with error, followed by…
Descriptors: Error of Measurement, Statistical Bias, Multiple Regression Analysis, Predictor Variables
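The attenuation the abstract refers to is easy to reproduce: adding noise to a predictor shrinks its estimated slope by the predictor's reliability. This is a minimal one-predictor simulation with invented parameters; the study itself concerns the harder case of comparing multiple predictors.

```python
import random

def ols_slope(xs, ys) -> float:
    """Slope of the simple regression of y on x."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

random.seed(7)
n = 5000
x_true = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 * xt + random.gauss(0, 1) for xt in x_true]

# Observed predictor with error variance 1 -> reliability 0.5,
# so the slope attenuates toward 1.0 * 0.5 = 0.5.
x_obs = [xt + random.gauss(0, 1) for xt in x_true]

b_true = ols_slope(x_true, y)
b_obs = ols_slope(x_obs, y)
```

When two predictors are compared and only one is measured with error, this differential attenuation can reverse their apparent ordering of importance, which motivates the abstract's concern with Type I error inflation.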
Peer reviewed
Ulitzsch, Esther; von Davier, Matthias; Pohl, Steffi – Educational and Psychological Measurement, 2020
So far, modeling approaches for not-reached items have considered one single underlying process. However, missing values at the end of a test can occur for a variety of reasons. On the one hand, examinees may not reach the end of a test due to time limits and lack of working speed. On the other hand, examinees may not attempt all items and quit…
Descriptors: Item Response Theory, Test Items, Response Style (Tests), Computer Assisted Testing