Showing 1 to 15 of 579 results
Peer reviewed
Raykov, Tenko; Al-Qataee, Abdullah A.; Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2020
A procedure for evaluation of validity-related coefficients and their differences is discussed, which is applicable when one or more frequently used assumptions in empirical educational, behavioral, and social research are violated. The method is developed within the framework of the latent variable modeling methodology and accomplishes point and…
Descriptors: Validity, Evaluation Methods, Social Science Research, Correlation
Peer reviewed
Dowling, N. Maritza; Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2020
Equating of psychometric scales and tests is frequently required and conducted in educational, behavioral, and clinical research. Construct comparability or equivalence between measuring instruments is a necessary condition for making decisions about linking and equating resulting scores. This article is concerned with a widely applicable method…
Descriptors: Evaluation Methods, Psychometrics, Screening Tests, Dementia
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2020
This note raises caution that a finding of a marked pseudo-guessing parameter for an item within a three-parameter item response model could be spurious in a population with substantial unobserved heterogeneity. A numerical example is presented wherein, within each of two classes, the two-parameter logistic model is used to generate the data on a…
Descriptors: Guessing (Tests), Item Response Theory, Test Items, Models
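As a hedged illustration of the scenario this entry describes, the sketch below (illustrative item parameters, not the authors' numerical example) generates responses to a single item from a two-parameter logistic (2PL) model within each of two latent classes. In the pooled data, even very low-ability examinees answer correctly fairly often, which a three-parameter fit would tend to absorb as a spurious pseudo-guessing parameter.

# Minimal sketch; the two classes and their 2PL parameters are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_per_class = 5000

# Class-specific 2PL parameters for one item: (discrimination a, difficulty b).
class_params = {"class 1": (1.2, 0.0), "class 2": (0.8, -2.0)}  # the item is very easy in class 2

def p_2pl(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

responses, abilities = [], []
for a, b in class_params.values():
    theta = rng.normal(size=n_per_class)                 # abilities within the class
    responses.append(rng.random(n_per_class) < p_2pl(theta, a, b))
    abilities.append(theta)

responses = np.concatenate(responses)
abilities = np.concatenate(abilities)

# No guessing process generated these data, yet pooled low-ability examinees show an
# elevated proportion correct -- the signature a 3PL fit reads as "guessing".
low = abilities < -2
print("P(correct | theta < -2) in pooled data:", round(float(responses[low].mean()), 3))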
Peer reviewed
Fujimoto, Ken A.; Neugebauer, Sabina R. – Educational and Psychological Measurement, 2020
Although item response theory (IRT) models such as the bifactor, two-tier, and between-item-dimensionality IRT models have been devised to confirm complex dimensional structures in educational and psychological data, they can be challenging to use in practice. The reason is that these models are multidimensional IRT (MIRT) models and thus are…
Descriptors: Bayesian Statistics, Item Response Theory, Sample Size, Factor Structure
Peer reviewed
Feuerstahler, Leah M.; Waller, Niels; MacDonald, Angus, III – Educational and Psychological Measurement, 2020
Although item response models have grown in popularity in many areas of educational and psychological assessment, there are relatively few applications of these models in experimental psychopathology. In this article, we explore the use of item response models in the context of a computerized cognitive task designed to assess visual working memory…
Descriptors: Item Response Theory, Psychopathology, Intelligence Tests, Psychological Evaluation
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2019
This note discusses the merits of coefficient alpha and the conditions under which they hold, in light of recent critical publications that overlook significant research findings from the past several decades. That earlier research has demonstrated the empirical relevance and utility of coefficient alpha under certain empirical circumstances. The article highlights…
Descriptors: Test Validity, Test Reliability, Test Items, Correlation
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Harrison, Michael; Menold, Natalja – Educational and Psychological Measurement, 2019
This note confronts the common use of a single coefficient alpha as an index of the reliability of a multicomponent measurement instrument in a heterogeneous population. Two or more alpha coefficients could instead be meaningfully associated with a given instrument in finite mixture settings, and this may be increasingly more likely the…
Descriptors: Statistical Analysis, Test Reliability, Measures (Individuals), Computation
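As background for this point, the sketch below computes coefficient alpha separately in two simulated subgroups and in the pooled sample. The data, loadings, and the use of observed groups (rather than the latent classes of a finite-mixture model) are assumptions for illustration, not the authors' procedure.

# Sketch: subgroup-specific versus pooled coefficient alpha on simulated congeneric items.
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an n_persons x k_items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)

def simulate_group(n, loadings, error_sd):
    eta = rng.normal(size=(n, 1))                        # common factor
    return eta * loadings + rng.normal(scale=error_sd, size=(n, len(loadings)))

g1 = simulate_group(1000, np.array([0.8, 0.8, 0.8, 0.8]), error_sd=0.6)   # higher reliability
g2 = simulate_group(1000, np.array([0.4, 0.4, 0.4, 0.4]), error_sd=1.0)   # lower reliability

print("alpha, group 1:", round(cronbach_alpha(g1), 3))
print("alpha, group 2:", round(cronbach_alpha(g2), 3))
print("alpha, pooled :", round(cronbach_alpha(np.vstack([g1, g2])), 3))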
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2017
The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contain error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…
Descriptors: Error of Measurement, Factor Analysis, Research Methodology, Psychometrics
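A minimal numerical sketch of this claim, using simulated data rather than the article's analytic treatment: the first principal component of error-contaminated indicators correlates less than perfectly with the underlying error-free score, so part of its variance is measurement error.

# Sketch (simulated data; illustrative error standard deviation).
import numpy as np

rng = np.random.default_rng(42)
n, p = 10000, 6

true_score = rng.normal(size=n)                          # error-free common score
errors = rng.normal(scale=0.7, size=(n, p))              # measurement error in each indicator
observed = true_score[:, None] + errors                  # fallible measures

# First principal component of the observed (fallible) variables.
centered = observed - observed.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]

# The component tracks the true score closely but not perfectly; the shortfall is error variance.
r = abs(np.corrcoef(pc1, true_score)[0, 1])
print("corr(PC1, true score):", round(r, 3), " error variance share:", round(1 - r**2, 3))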
Peer reviewed
Li, Wei; Konstantopoulos, Spyros – Educational and Psychological Measurement, 2017
Field experiments in education frequently assign entire groups, such as schools, to treatment or control conditions. These experiments sometimes incorporate a longitudinal component where, for example, students are followed over time to assess differences in the average rate of linear change or the rate of acceleration. In this study, we provide methods…
Descriptors: Educational Experiments, Field Studies, Models, Randomized Controlled Trials
Peer reviewed
Wilcox, Rand R.; Serang, Sarfaraz – Educational and Psychological Measurement, 2017
The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…
Descriptors: Hypothesis Testing, Bayesian Statistics, Computation, Effect Size
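As one concrete example of the robust techniques this literature emphasizes, the sketch below forms a percentile bootstrap confidence interval for the difference between 20% trimmed means. The data, trimming proportion, and bootstrap settings are illustrative assumptions, not taken from the article.

# Sketch: percentile bootstrap CI for a difference of 20% trimmed means.
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 40)
y = np.concatenate([rng.normal(0.5, 1.0, 36), rng.normal(0.5, 8.0, 4)])   # group with heavy tails

def boot_trimmed_diff(x, y, n_boot=5000, trim=0.2):
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)
        yb = rng.choice(y, size=y.size, replace=True)
        diffs[i] = trim_mean(xb, trim) - trim_mean(yb, trim)
    return np.percentile(diffs, [2.5, 97.5])

lo, hi = boot_trimmed_diff(x, y)
print(f"95% percentile bootstrap CI for the trimmed-mean difference: [{lo:.2f}, {hi:.2f}]")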
Peer reviewed
Miller, Jeff – Educational and Psychological Measurement, 2017
Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis…
Descriptors: Hypothesis Testing, Testing Problems, Test Validity, Relevance (Education)
Peer reviewed
García-Pérez, Miguel A. – Educational and Psychological Measurement, 2017
Null hypothesis significance testing (NHST) has been the subject of debate for decades and alternative approaches to data analysis have been proposed. This article addresses this debate from the perspective of scientific inquiry and inference. Inference is an inverse problem and application of statistical methods cannot reveal whether effects…
Descriptors: Hypothesis Testing, Statistical Inference, Effect Size, Bayesian Statistics
Peer reviewed
Cousineau, Denis; Laurencelle, Louis – Educational and Psychological Measurement, 2017
Assessing global interrater agreement is difficult as most published indices are affected by the presence of mixtures of agreements and disagreements. A previously proposed method was shown to be specifically sensitive to global agreement, excluding mixtures, but also negatively biased. Here, we propose two alternatives in an attempt to find what…
Descriptors: Interrater Reliability, Evaluation Methods, Statistical Bias, Accuracy
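The two alternatives the authors propose are not reproduced here; as background on the kind of published indices the abstract refers to, the sketch below computes Cohen's kappa, a standard chance-corrected agreement index, from two raters' categorical ratings (illustrative data).

# Background sketch: Cohen's kappa for two raters on a categorical scale.
import numpy as np

def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical ratings."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    categories = np.union1d(r1, r2)
    p_obs = np.mean(r1 == r2)                                            # observed agreement
    p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_obs - p_chance) / (1 - p_chance)

rater_a = [1, 1, 2, 2, 3, 3, 1, 2, 3, 1]
rater_b = [1, 1, 2, 3, 3, 3, 1, 2, 2, 1]
print("kappa:", round(cohen_kappa(rater_a, rater_b), 3))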
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2016
The frequently neglected and often misunderstood relationship between classical test theory and item response theory is discussed for the unidimensional case with binary measures and no guessing. It is pointed out that popular item response models can be directly obtained from classical test theory-based models by accounting for the discrete…
Descriptors: Test Theory, Item Response Theory, Models, Correlation
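One well-known piece of the link between classical test theory and item response theory, sketched with illustrative parameter values rather than the article's derivation: for a binary item under the normal-ogive model, the IRT discrimination and difficulty follow from a standardized factor loading and threshold, and the logistic (2PL) curve with scaling constant 1.702 closely approximates the resulting ogive.

# Sketch of the CTT/factor-analytic to IRT mapping for one binary item (illustrative values).
import numpy as np
from scipy.stats import norm

lam, tau = 0.7, 0.3                      # standardized loading and threshold (assumed values)
a = lam / np.sqrt(1 - lam**2)            # normal-ogive discrimination
b = tau / lam                            # normal-ogive difficulty

theta = np.linspace(-3, 3, 7)
p_ogive = norm.cdf(a * (theta - b))                         # normal-ogive item response function
p_2pl = 1 / (1 + np.exp(-1.702 * a * (theta - b)))          # 2PL approximation (D = 1.702)

print(np.round(p_ogive, 3))
print(np.round(p_2pl, 3))                # nearly identical response probabilities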
Peer reviewed
Devlieger, Ines; Mayer, Axel; Rosseel, Yves – Educational and Psychological Measurement, 2016
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias-avoiding method of Skrondal and Laake, and the bias-correcting method of Croon. The bias-correcting method is extended to include a reliable standard error. The four methods are compared with each other and…
Descriptors: Regression (Statistics), Comparative Analysis, Structural Equation Models, Monte Carlo Methods
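A sketch of the simplest of the four methods, regression FSR, on simulated data. The use of scikit-learn's FactorAnalysis as a stand-in factor score estimator and the chosen loadings are assumptions for illustration, not the authors' setup; the uncorrected slope comes out attenuated, which is the kind of bias Croon's correcting method addresses.

# Sketch: naive regression factor score regression (FSR) on simulated two-factor data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n = 2000

# Structural model: eta = 0.5 * xi + disturbance; each factor has three fallible indicators.
xi = rng.normal(size=n)
eta = 0.5 * xi + rng.normal(scale=np.sqrt(0.75), size=n)
x_ind = xi[:, None] * np.array([0.8, 0.7, 0.6]) + rng.normal(scale=0.5, size=(n, 3))
y_ind = eta[:, None] * np.array([0.8, 0.7, 0.6]) + rng.normal(scale=0.5, size=(n, 3))

# Step 1: regression-method (posterior mean) factor scores for each construct.
fs_xi = FactorAnalysis(n_components=1).fit_transform(x_ind).ravel()
fs_eta = FactorAnalysis(n_components=1).fit_transform(y_ind).ravel()

# Align the arbitrary sign of each score with its first indicator.
fs_xi *= np.sign(np.corrcoef(fs_xi, x_ind[:, 0])[0, 1])
fs_eta *= np.sign(np.corrcoef(fs_eta, y_ind[:, 0])[0, 1])

# Step 2: regress the scores on each other. The slope underestimates the true structural
# coefficient (0.5); Croon's bias-correcting method removes exactly this kind of bias.
slope = np.polyfit(fs_xi, fs_eta, 1)[0]
print("naive regression-FSR slope:", round(slope, 3))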