Showing 1 to 15 of 206 results
Peer reviewed
Sim, Mikyung; Kim, Su-Young; Suh, Youngsuk – Educational and Psychological Measurement, 2022
Mediation models have been widely used in many disciplines to better understand the underlying processes between independent and dependent variables. Despite their popularity and importance, the appropriate sample sizes for estimating those models are not well known. Although several approaches (such as Monte Carlo methods) exist, applied…
Descriptors: Sample Size, Statistical Analysis, Predictor Variables, Path Analysis
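For readers unfamiliar with the Monte Carlo approach the abstract alludes to, the sketch below estimates power for a simple X -> M -> Y mediation model at a candidate sample size. The effect sizes, alpha level, and the Sobel test are illustrative assumptions on our part, not the procedure evaluated in the article.

```python
# Minimal sketch of Monte Carlo power estimation for a simple mediation model
# (X -> M -> Y). Effect sizes, alpha, and the Sobel test are illustrative
# assumptions, not the authors' design.
import numpy as np

def ols(X, y):
    """Coefficients and standard errors for y ~ X (X already includes a constant)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

def mediation_power(n, a=0.3, b=0.3, c_prime=0.1, reps=2000, seed=1):
    rng = np.random.default_rng(seed)
    ones, hits = np.ones(n), 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + c_prime * x + rng.normal(size=n)
        (_, a_hat), (_, se_a) = ols(np.column_stack([ones, x]), m)
        (_, b_hat, _), (_, se_b, _) = ols(np.column_stack([ones, m, x]), y)
        z = (a_hat * b_hat) / np.sqrt(b_hat**2 * se_a**2 + a_hat**2 * se_b**2)  # Sobel test
        hits += abs(z) > 1.96
    return hits / reps

print(mediation_power(n=100))  # raise n until the estimate reaches the target power
```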
Peer reviewed
Ames, Allison J.; Myers, Aaron J. – Educational and Psychological Measurement, 2021
Contamination of responses due to extreme and midpoint response style can confound the interpretation of scores, threatening the validity of inferences made from survey responses. This study incorporated person-level covariates in the multidimensional item response tree model to explain heterogeneity in response style. We include an empirical…
Descriptors: Response Style (Tests), Item Response Theory, Longitudinal Studies, Adolescents
Peer reviewed
Kim, Eunsook; von der Embse, Nathaniel – Educational and Psychological Measurement, 2021
Although collecting data from multiple informants is highly recommended, methods to model the congruence and incongruence between informants are limited. Bauer and colleagues suggested the trifactor model that decomposes the variances into common factor, informant perspective factors, and item-specific factors. This study extends their work to the…
Descriptors: Probability, Models, Statistical Analysis, Congruence (Psychology)
Peer reviewed
Murrah, William M. – Educational and Psychological Measurement, 2020
Multiple regression is often used to compare the importance of two or more predictors. When the predictors being compared are measured with error, the estimated coefficients can be biased and Type I error rates can be inflated. This study explores the impact of measurement error on comparing predictors when one is measured with error, followed by…
Descriptors: Error of Measurement, Statistical Bias, Multiple Regression Analysis, Predictor Variables
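A minimal sketch of the phenomenon the abstract describes: two predictors of equal true importance are compared, but one is observed with measurement error. The reliability of .70 is an arbitrary assumption for illustration.

```python
# Sketch: two predictors of equal true importance, one measured with error
# (reliability of .70 is an arbitrary assumption).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.normal(size=n)                    # measured without error
x2_true = rng.normal(size=n)
y = 0.5 * x1 + 0.5 * x2_true + rng.normal(size=n)

rel = 0.70                                 # reliability of the observed x2
x2_obs = np.sqrt(rel) * x2_true + np.sqrt(1 - rel) * rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2_obs])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta[1], beta[2])  # the coefficient on the error-laden predictor is attenuated
```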
Peer reviewed
Sorjonen, Kimmo; Melin, Bo; Ingre, Michael – Educational and Psychological Measurement, 2019
The present simulation study indicates that a method where the regression effect of a predictor (X) on an outcome at follow-up (Y1) is calculated while adjusting for the outcome at baseline (Y0) can give spurious findings, especially when there is a strong correlation between X and Y0 and when the test-retest correlation between Y0 and Y1 is…
Descriptors: Predictor Variables, Regression (Statistics), Correlation, Error of Measurement
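A hedged illustration of the scenario described: X is related to a stable trait but has no effect on change from baseline to follow-up, yet adjusting for an error-laden baseline still yields a nonzero coefficient for X. All parameter values below are ours.

```python
# Sketch of the adjustment-for-baseline problem: X has no true effect on change,
# but because Y0 is measured with error, regressing Y1 on X and Y0 still returns
# a nonzero coefficient for X. Variances and correlations are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
trait = rng.normal(size=n)                         # stable trait behind Y0 and Y1
x = 0.7 * trait + rng.normal(scale=0.7, size=n)    # X correlated with the trait
y0 = trait + rng.normal(scale=0.8, size=n)         # baseline, with error
y1 = trait + rng.normal(scale=0.8, size=n)         # follow-up, no direct X effect

X = np.column_stack([np.ones(n), x, y0])
beta = np.linalg.lstsq(X, y1, rcond=None)[0]
print(beta[1])  # spurious "effect" of X on the follow-up outcome
```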
Peer reviewed
Trafimow, David – Educational and Psychological Measurement, 2018
Because error variance can alternatively be considered the sum of systematic variance associated with unknown variables and random variance, a tripartite assumption is proposed: total variance in the dependent variable can be partitioned into three variance components. These are variance in the dependent variable that is explained by the…
Descriptors: Statistical Analysis, Correlation, Experiments, Effect Size
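In symbols, the tripartite partition described here amounts to the following (our notation, not the article's):

$$\sigma^2_{Y} = \sigma^2_{\text{explained}} + \sigma^2_{\text{unknown systematic}} + \sigma^2_{\text{random}}$$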
Peer reviewed
No, Unkyung; Hong, Sehee – Educational and Psychological Measurement, 2018
The purpose of the present study is to compare performances of mixture modeling approaches (i.e., one-step approach, three-step maximum-likelihood approach, three-step BCH approach, and LTB approach) based on diverse sample size conditions. To carry out this research, two simulation studies were conducted with two different models, a latent class…
Descriptors: Sample Size, Classification, Comparative Analysis, Statistical Analysis
Peer reviewed
Cain, Meghan K.; Zhang, Zhiyong; Bergeman, C. S. – Educational and Psychological Measurement, 2018
This article serves as a practical guide to mediation design and analysis by evaluating the ability of mediation models to detect a significant mediation effect using limited data. The cross-sectional mediation model, which has been shown to be biased when the mediation is happening over time, is compared with longitudinal mediation models:…
Descriptors: Mediation Theory, Case Studies, Longitudinal Studies, Measurement Techniques
Peer reviewed
Hamby, Tyler; Taylor, Wyn – Educational and Psychological Measurement, 2016
This study examined the predictors and psychometric outcomes of survey satisficing, wherein respondents provide quick, "good enough" answers (satisficing) rather than carefully considered answers (optimizing). We administered surveys to university students and respondents--half of whom held college degrees--from a for-pay survey website,…
Descriptors: Surveys, Test Reliability, Test Validity, Comparative Analysis
Peer reviewed
Aydin, Burak; Leite, Walter L.; Algina, James – Educational and Psychological Measurement, 2016
We investigated methods of including covariates in two-level models for cluster randomized trials to increase power to detect the treatment effect. We compared multilevel models that included either an observed cluster mean or a latent cluster mean as a covariate, as well as the effect of including Level 1 deviation scores in the model. A Monte…
Descriptors: Error of Measurement, Predictor Variables, Randomized Controlled Trials, Experimental Groups
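The observed-cluster-mean specification the abstract refers to can be sketched as follows. The data-generating values and variable names are illustrative assumptions, and the latent-cluster-mean alternative (typically estimated in a multilevel SEM framework) is not shown.

```python
# Sketch of the "observed cluster mean + Level-1 deviation" covariate specification
# for a two-level cluster randomized trial. Variable names and effect sizes are
# illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_clusters, per_cluster = 40, 25
cluster = np.repeat(np.arange(n_clusters), per_cluster)
treat = np.repeat(rng.integers(0, 2, size=n_clusters), per_cluster)  # cluster-level treatment
u = np.repeat(rng.normal(scale=0.5, size=n_clusters), per_cluster)   # random cluster effects
x = rng.normal(size=n_clusters * per_cluster) + u                    # Level-1 covariate
y = 0.4 * treat + 0.5 * x + u + rng.normal(size=n_clusters * per_cluster)

df = pd.DataFrame({"y": y, "x": x, "treat": treat, "cluster": cluster})
df["x_mean"] = df.groupby("cluster")["x"].transform("mean")  # observed cluster mean
df["x_dev"] = df["x"] - df["x_mean"]                         # Level-1 deviation score

fit = smf.mixedlm("y ~ treat + x_mean + x_dev", df, groups=df["cluster"]).fit()
print(fit.summary())
```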
Peer reviewed
Nugent, William Robert; Moore, Matthew; Story, Erin – Educational and Psychological Measurement, 2015
The standardized mean difference (SMD) is perhaps the most important meta-analytic effect size. It is typically used to represent the difference between treatment and control population means in treatment efficacy research. It is also used to represent differences between populations with different characteristics, such as persons who are…
Descriptors: Error of Measurement, Error Correction, Predictor Variables, Monte Carlo Methods
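For reference, the SMD in its simplest form (Cohen's d with a pooled standard deviation) can be computed as below; the measurement-error corrections at issue in the article are not applied here.

```python
# Minimal sketch of the standardized mean difference (Cohen's d with a pooled SD).
import numpy as np

def smd(treatment, control):
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    nt, nc = len(t), len(c)
    pooled_sd = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1)) / (nt + nc - 2))
    return (t.mean() - c.mean()) / pooled_sd

print(smd([5, 6, 7, 8], [4, 5, 5, 6]))
```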
Peer reviewed
Paulhus, Delroy L.; Dubois, Patrick J. – Educational and Psychological Measurement, 2014
The overclaiming technique is a novel assessment procedure that uses signal detection analysis to generate indices of knowledge accuracy (OC-accuracy) and self-enhancement (OC-bias). The technique has previously shown robustness over varied knowledge domains as well as low reactivity across administration contexts. Here we compared the OC-accuracy…
Descriptors: Educational Assessment, Knowledge Level, Accuracy, Cognitive Ability
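The signal-detection logic behind the overclaiming indices can be sketched as follows. The d-prime and criterion formulas below are standard stand-ins and may not match the article's exact OC-accuracy and OC-bias computations.

```python
# Sketch of signal-detection indices for the overclaiming paradigm: "hits" are
# claims of familiarity with real items, "false alarms" are claims about foils.
from scipy.stats import norm

def overclaiming_indices(hit_rate, false_alarm_rate):
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(false_alarm_rate)
    accuracy = z_hit - z_fa        # d-prime: discrimination of real items from foils
    bias = -0.5 * (z_hit + z_fa)   # criterion c: lower values = more overclaiming
    return accuracy, bias

print(overclaiming_indices(0.80, 0.15))
```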
Peer reviewed
Shaw, Emily J.; Marini, Jessica P.; Mattern, Krista D. – Educational and Psychological Measurement, 2013
The current study evaluated the relationship of various operationalizations of the Advanced Placement[R] (AP) exam and course information with first-year grade point average (FYGPA) in college to better understand the role of AP in college admission decisions. In particular, the incremental validity of the different AP variables, above…
Descriptors: Advanced Placement Programs, Grade Point Average, College Freshmen, College Admission
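An incremental-validity comparison of the kind described can be sketched as the change in R-squared between nested regression models; the column names below (fygpa, hs_gpa, sat_total, ap_exam_score) are hypothetical.

```python
# Sketch of an incremental-validity check: the gain in R-squared when an AP
# variable is added to a baseline admissions model. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def incremental_r2(df: pd.DataFrame) -> float:
    base = smf.ols("fygpa ~ hs_gpa + sat_total", data=df).fit()
    full = smf.ols("fygpa ~ hs_gpa + sat_total + ap_exam_score", data=df).fit()
    return full.rsquared - base.rsquared  # incremental variance explained by AP
```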
Peer reviewed
Raykov, Tenko; Lee, Chun-Lung; Marcoulides, George A.; Chang, Chi – Educational and Psychological Measurement, 2013
The relationship between saturated path-analysis models and their fit to data is revisited. It is demonstrated that a saturated model need not fit a given data set perfectly, or even well, when fit to the raw data is examined, a criterion frequently overlooked by researchers using path analysis modeling techniques. The potential of…
Descriptors: Structural Equation Models, Goodness of Fit, Path Analysis, Correlation
Peer reviewed
Shear, Benjamin R.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2013
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Descriptors: Error of Measurement, Multiple Regression Analysis, Data Analysis, Computer Simulation
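A small simulation, under parameter values of our choosing, reproduces the kind of inflation described: a predictor with no true effect is tested while the actual cause is controlled only through an error-laden measure.

```python
# Sketch of Type I error inflation: Z has no true effect on Y once W is controlled,
# but W is observed with error, so the test on Z rejects far more often than the
# nominal 5%. All parameter values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, rejections = 200, 2000, 0
for _ in range(reps):
    w = rng.normal(size=n)                       # true cause of Y
    z = 0.6 * w + rng.normal(scale=0.8, size=n)  # correlated with W, no effect on Y
    y = 0.5 * w + rng.normal(size=n)
    w_obs = w + rng.normal(scale=0.7, size=n)    # W measured with error
    X = np.column_stack([np.ones(n), w_obs, z])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    se = np.sqrt(np.diag(resid @ resid / (n - 3) * np.linalg.inv(X.T @ X)))
    t = beta[2] / se[2]
    rejections += abs(t) > stats.t.ppf(0.975, n - 3)
print(rejections / reps)   # well above 0.05
```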