Publication Date

In 2022: 0
Since 2021: 1
Since 2018 (last 5 years): 13
Since 2013 (last 10 years): 28
Since 2003 (last 20 years): 33

Source

Educational and Psychological…: 42

Author

Marcoulides, George A.: 42
Raykov, Tenko: 29
Dimitrov, Dimiter M.: 5
Goldstein, Zvi: 4
Harrison, Michael: 3
Li, Tatyana: 3
Li, Tenglong: 3
Menold, Natalja: 3
Chang, Chi: 2
Dowling, N. Maritza: 2
Lee, Chun-Lung: 2

Publication Type

Journal Articles: 42
Reports - Evaluative: 20
Reports - Research: 12
Reports - Descriptive: 9
Opinion Papers: 1

Education Level

Higher Education: 2
High Schools: 1
Postsecondary Education: 1

Location

Germany: 2
Saudi Arabia: 1

Assessments and Surveys

Computer Anxiety Scale: 1
Law School Admission Test: 1
Raven Advanced Progressive…: 1

Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2021

The population discrepancy between unstandardized and standardized reliability of homogeneous multicomponent measuring instruments is examined. Within a latent variable modeling framework, it is shown that the standardized reliability coefficient for unidimensional scales can be markedly higher than the corresponding unstandardized reliability…

Descriptors: Test Reliability, Computation, Measures (Individuals), Research Problems
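The discrepancy the abstract describes can be illustrated numerically. A minimal sketch, not the authors' latent variable procedure: coefficient alpha is computed once from a hypothetical item covariance matrix (unstandardized) and once from the corresponding correlation matrix (standardized), for congeneric items with equal loadings but unequal, assumed error variances.

```python
import numpy as np

def alpha_from_cov(S):
    """Coefficient alpha computed from a k x k item covariance matrix S."""
    k = S.shape[0]
    total_var = S.sum()       # variance of the sum score
    item_var = np.trace(S)    # sum of the item variances
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical congeneric items: equal unit loadings but very unequal
# error variances, so the items differ in total variance.
loadings = np.array([1.0, 1.0, 1.0, 1.0])
errors = np.array([0.2, 0.5, 1.0, 4.0])
S = np.outer(loadings, loadings) + np.diag(errors)

alpha_unstd = alpha_from_cov(S)
R = S / np.sqrt(np.outer(np.diag(S), np.diag(S)))  # correlation matrix
alpha_std = alpha_from_cov(R)

print(alpha_unstd, alpha_std)
```

With these assumed values the standardized coefficient comes out noticeably higher than the unstandardized one, consistent with the direction of the discrepancy discussed in the article.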

Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2020

This note raises caution that a finding of a marked pseudo-guessing parameter for an item within a three-parameter item response model could be spurious in a population with substantial unobserved heterogeneity. A numerical example is presented wherein, in each of two classes, the two-parameter logistic model is used to generate the data on a…

Descriptors: Guessing (Tests), Item Response Theory, Test Items, Models
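How unobserved heterogeneity can mimic guessing is easy to see in a small simulation. This is a hedged illustration with assumed item parameters, not the article's example: in each of two latent classes the target item follows a 2PL (no guessing), but with very different difficulty across classes, so low-scoring examinees as a group still answer it correctly at a rate well above zero, resembling a lower asymptote.

```python
import numpy as np

rng = np.random.default_rng(42)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

n = 5000  # examinees per latent class
theta_a = rng.normal(0, 1, n)  # class A abilities
theta_b = rng.normal(0, 1, n)  # class B abilities

# Target item is 2PL in each class, with no guessing anywhere, but it is
# easy in class A (b = -2) and hard in class B (b = +2).
p_a = logistic(1.5 * (theta_a + 2.0))
p_b = logistic(1.5 * (theta_b - 2.0))
target = np.concatenate([rng.random(n) < p_a, rng.random(n) < p_b])

# Ten background items with common parameters (a = 1, b = 0) in both classes.
theta = np.concatenate([theta_a, theta_b])
background = rng.random((2 * n, 10)) < logistic(theta[:, None])
total = background.sum(axis=1)

# Proportion correct on the target item among the weakest examinees:
# nonzero, because class membership is unobserved.
floor = target[total <= 2].mean()
print(floor)
```

The apparent "floor" here reflects the mixture, not true guessing, which is the kind of spurious pseudo-guessing finding the note warns about.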

Dowling, N. Maritza; Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2020

Equating of psychometric scales and tests is frequently required and conducted in educational, behavioral, and clinical research. Construct comparability or equivalence between measuring instruments is a necessary condition for making decisions about linking and equating resulting scores. This article is concerned with a widely applicable method…

Descriptors: Evaluation Methods, Psychometrics, Screening Tests, Dementia

Raykov, Tenko; Marcoulides, George A.; Harrison, Michael; Menold, Natalja – Educational and Psychological Measurement, 2019

This note confronts the common use of a single coefficient alpha as an index of the reliability of a multicomponent measurement instrument in a heterogeneous population. Two or more alpha coefficients could instead be meaningfully associated with a given instrument in finite mixture settings, and this may be increasingly more likely the…

Descriptors: Statistical Analysis, Test Reliability, Measures (Individuals), Computation
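The point that one alpha per instrument can be misleading in a mixture can be shown with population covariance matrices. A sketch under assumed values, not the article's derivation: two latent classes with the same item error variances but different loadings yield two quite different alphas, and the alpha implied by the pooled (50/50, equal-means) covariance matrix matches neither.

```python
import numpy as np

def alpha_from_cov(S):
    """Coefficient alpha from a k x k item covariance matrix."""
    k = S.shape[0]
    return (k / (k - 1)) * (1 - np.trace(S) / S.sum())

# Hypothetical 5-item scale in two latent classes differing in how
# strongly the items load on the common factor (unit error variances).
load1 = np.full(5, 0.9)  # class 1: strong loadings
load2 = np.full(5, 0.3)  # class 2: weak loadings
S1 = np.outer(load1, load1) + np.eye(5)
S2 = np.outer(load2, load2) + np.eye(5)

# Equal class means and a 50/50 mix, so the pooled covariance matrix
# is the average of the class covariance matrices.
S_mix = 0.5 * (S1 + S2)

a1 = alpha_from_cov(S1)
a2 = alpha_from_cov(S2)
a_mix = alpha_from_cov(S_mix)
print(a1, a2, a_mix)
```

Under these assumptions the pooled coefficient lies between the two class-specific coefficients and describes neither subpopulation well.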

Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2019

This note discusses the merits of coefficient alpha, and the conditions under which they hold, in light of recent critical publications that overlook significant research findings from the past several decades. That earlier research has demonstrated the empirical relevance and utility of coefficient alpha under certain empirical circumstances. The article highlights…

Descriptors: Test Validity, Test Reliability, Test Items, Correlation

Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Harrison, Michael – Educational and Psychological Measurement, 2019

This note highlights and illustrates the links between item response theory and classical test theory in the context of polytomous items. An item response modeling procedure is discussed that can be used for point and interval estimation of the individual true score on any item in a measuring instrument or item set following the popular and widely…

Descriptors: Correlation, Item Response Theory, Test Items, Scores

Dowling, N. Maritza; Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2019

Longitudinal studies have steadily grown in popularity across the educational and behavioral sciences, particularly with the increased availability of technological devices that allow the easy collection of repeated measures on multiple dimensions of substantive relevance. This article discusses a procedure that can be used to evaluate population…

Descriptors: Longitudinal Studies, Older Adults, Cognitive Processes, Dementia

Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Harrison, Michael – Educational and Psychological Measurement, 2019

Building on prior research on the relationships between key concepts in item response theory and classical test theory, this note contributes to highlighting their important and useful links. A readily and widely applicable latent variable modeling procedure is discussed that can be used for point and interval estimation of the individual person…

Descriptors: True Scores, Item Response Theory, Test Items, Test Theory
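The IRT-CTT link exploited in the two preceding entries can be sketched for binary items: under a 2PL model, the classical true score implied for a person is the sum of the item response probabilities at that person's ability, i.e. the test characteristic curve evaluated at theta. The item parameters below are hypothetical, and this point estimate is only the deterministic core of the authors' point-and-interval estimation procedure.

```python
import math

def p_2pl(theta, a, b):
    """2PL item response probability."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def true_score(theta, items):
    """CTT true score implied by the IRT model: the test characteristic
    curve, i.e. the sum of item probabilities at the given ability."""
    return sum(p_2pl(theta, a, b) for a, b in items)

# Hypothetical (a, b) parameters for a 4-item binary test.
items = [(1.0, -1.0), (1.2, 0.0), (0.8, 0.5), (1.5, 1.0)]

print(true_score(0.0, items))  # expected number-correct score at theta = 0
```

An interval estimate would follow by propagating the uncertainty in the estimated theta (or item parameters) through this curve, which is where the latent variable modeling machinery comes in.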

Raykov, Tenko; Marcoulides, George A.; Dimitrov, Dimiter M.; Li, Tatyana – Educational and Psychological Measurement, 2018

This article extends the procedure outlined in the article by Raykov, Marcoulides, and Tong for testing congruence of latent constructs to the setting of binary items and clustering effects. In this widely used setting in contemporary educational and psychological research, the method can be used to examine if two or more homogeneous…

Descriptors: Tests, Psychometrics, Test Items, Construct Validity

Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Li, Tatyana; Menold, Natalja – Educational and Psychological Measurement, 2018

A latent variable modeling method for studying measurement invariance when evaluating latent constructs with multiple binary or binary scored items with no guessing is outlined. The approach extends the continuous indicator procedure described by Raykov and colleagues, utilizes similarly the false discovery rate approach to multiple testing, and…

Descriptors: Models, Statistical Analysis, Error of Measurement, Test Bias
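The false discovery rate approach to multiple testing mentioned in the abstract is commonly operationalized with the Benjamini-Hochberg step-up procedure. A minimal sketch with hypothetical p-values standing in for item-by-item invariance tests (the function and values are illustrative, not taken from the article):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the (sorted) indices
    of the hypotheses rejected at false discovery rate q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:  # compare p_(rank) to its BH threshold
            cutoff = rank             # keep the largest rank that passes
    return sorted(order[:cutoff])

# Hypothetical p-values from per-item invariance tests.
pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(pvals, q=0.05))
```

Only the items whose ordered p-values fall below the step-up thresholds are flagged as non-invariant, which controls the expected proportion of false flags at q rather than the familywise error rate.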

Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2018

This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…

Descriptors: Measurement Techniques, Factor Analysis, Item Response Theory, Likert Scales

Raykov, Tenko; Goldammer, Philippe; Marcoulides, George A.; Li, Tatyana; Menold, Natalja – Educational and Psychological Measurement, 2018

A readily applicable procedure is discussed that allows evaluation of the discrepancy between the popular coefficient alpha and the reliability coefficient of a scale with second-order factorial structure that is frequently of relevance in empirical educational and psychological research. The approach is developed within the framework of the…

Descriptors: Test Reliability, Factor Structure, Statistical Analysis, Computation

Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2018

This note extends the results in the 2016 article by Raykov, Marcoulides, and Li to the case of correlated errors in a set of observed measures subjected to principal component analysis. It is shown that when at least two measures are fallible, the probability is zero for any principal component, and in particular for the first principal…

Descriptors: Factor Analysis, Error of Measurement, Correlation, Reliability

Raykov, Tenko; Marcoulides, George A.; Akaeze, Hope O. – Educational and Psychological Measurement, 2017

This note is concerned with examining the relationship between within-group and between-group variances in two-level nested designs. A latent variable modeling approach is outlined that permits point and interval estimation of their ratio and allows their comparison in a multilevel study. The procedure can also be used to test various hypotheses…

Descriptors: Comparative Analysis, Models, Statistical Analysis, Hierarchical Linear Modeling
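The within-to-between variance ratio in a two-level design can be illustrated with simulated nested data and the classical one-way random-effects (method-of-moments) estimators. This is a sketch under assumed variance components, not the latent variable modeling procedure the article develops:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated two-level data: students nested in groups, with assumed
# between-group variance 0.5 and within-group variance 2.0 (ratio 4.0).
n_groups, n_per = 200, 30
group_means = rng.normal(0, np.sqrt(0.5), n_groups)
y = group_means[:, None] + rng.normal(0, np.sqrt(2.0), (n_groups, n_per))

# Method-of-moments variance components for a balanced design.
group_avg = y.mean(axis=1)
within = y.var(axis=1, ddof=1).mean()             # pooled within-group variance
between = group_avg.var(ddof=1) - within / n_per  # between-group variance
ratio = within / between
print(within, between, ratio)
```

The latent variable modeling approach adds interval estimation for this ratio and allows hypotheses about it to be tested directly, which the moment estimators alone do not provide.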

Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2017

The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…

Descriptors: Error of Measurement, Factor Analysis, Research Methodology, Psychometrics
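The claim that a principal component of fallible measures is itself fallible can be verified directly: if X = T + E with diagonal error covariance Psi, then the first component w'X has error variance w'Psi w > 0 whenever any measure contains error. A numerical sketch with hypothetical loadings and error variances (not the article's derivation):

```python
import numpy as np

# Hypothetical congeneric measures: true-score covariance plus error variances.
loadings = np.array([0.9, 0.8, 0.7, 0.6])
err_var = np.array([0.4, 0.5, 0.6, 0.7])
sigma_true = np.outer(loadings, loadings)  # error-free covariance
sigma_x = sigma_true + np.diag(err_var)    # observed covariance

# Weight vector of the first principal component of the observed measures.
evals, evecs = np.linalg.eigh(sigma_x)
w = evecs[:, -1]                           # eigenvector of the largest eigenvalue

# Since the component is w'X = w'T + w'E, its variance splits into a true
# part and an error part; the error part is w' Psi w, which is positive.
pc_var = w @ sigma_x @ w
pc_err = w @ np.diag(err_var) @ w
reliability = 1 - pc_err / pc_var
print(pc_err, reliability)
```

With these assumed values the first component's reliability falls visibly short of 1, in line with the result that no principal component of fallible measures is error-free.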