Showing 16 to 30 of 212 results
Peer reviewed
Aydin, Burak; Leite, Walter L.; Algina, James – Educational and Psychological Measurement, 2016
We investigated methods of including covariates in two-level models for cluster randomized trials to increase power to detect the treatment effect. We compared multilevel models that included either an observed cluster mean or a latent cluster mean as a covariate, as well as the effect of including Level 1 deviation scores in the model. A Monte…
Descriptors: Error of Measurement, Predictor Variables, Randomized Controlled Trials, Experimental Groups
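For orientation, the kind of two-level model the abstract describes can be sketched as below, with the treatment indicator and the observed cluster mean of a covariate entered at Level 2; the notation is illustrative and not taken from the article (a latent cluster mean would replace the observed mean with an estimated group-level factor).

\text{Level 1: } Y_{ij} = \beta_{0j} + \beta_{1}\,(X_{ij} - \bar{X}_{\cdot j}) + r_{ij}
\text{Level 2: } \beta_{0j} = \gamma_{00} + \gamma_{01}\,T_j + \gamma_{02}\,\bar{X}_{\cdot j} + u_{0j}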
Peer reviewed
Nugent, William Robert; Moore, Matthew; Story, Erin – Educational and Psychological Measurement, 2015
The standardized mean difference (SMD) is perhaps the most important meta-analytic effect size. It is typically used to represent the difference between treatment and control population means in treatment efficacy research. It is also used to represent differences between populations with different characteristics, such as persons who are…
Descriptors: Error of Measurement, Error Correction, Predictor Variables, Monte Carlo Methods
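For reference, the SMD in its most common form is Cohen's d with a pooled standard deviation; the specific estimators examined in the article may differ.

d = \frac{\bar{X}_T - \bar{X}_C}{s_p}, \qquad s_p = \sqrt{\frac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}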
Peer reviewed
Paulhus, Delroy L.; Dubois, Patrick J. – Educational and Psychological Measurement, 2014
The overclaiming technique is a novel assessment procedure that uses signal detection analysis to generate indices of knowledge accuracy (OC-accuracy) and self-enhancement (OC-bias). The technique has previously shown robustness over varied knowledge domains as well as low reactivity across administration contexts. Here we compared the OC-accuracy…
Descriptors: Educational Assessment, Knowledge Level, Accuracy, Cognitive Ability
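The OC indices rest on standard signal detection quantities; in the usual overclaiming setup (assumed here, not quoted from the article), claiming familiarity with real items counts as a hit and claiming familiarity with nonexistent foils counts as a false alarm.

d' = z(\text{hit rate}) - z(\text{false-alarm rate}) \quad (\text{accuracy}), \qquad c = -\tfrac{1}{2}\,[\,z(\text{hit rate}) + z(\text{false-alarm rate})\,] \quad (\text{bias})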
Peer reviewed
Raykov, Tenko; Lee, Chun-Lung; Marcoulides, George A.; Chang, Chi – Educational and Psychological Measurement, 2013
The relationship between saturated path-analysis models and their fit to data is revisited. It is demonstrated that a saturated model need not fit perfectly or even well a given data set when fit to the raw data is examined, a criterion currently frequently overlooked by researchers utilizing path analysis modeling techniques. The potential of…
Descriptors: Structural Equation Models, Goodness of Fit, Path Analysis, Correlation
Peer reviewed
Shear, Benjamin R.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2013
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Descriptors: Error of Measurement, Multiple Regression Analysis, Data Analysis, Computer Simulation
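A brief, self-contained simulation of the phenomenon the abstract describes: when a predictor with a real effect is measured with error, the test of a second, truly null predictor correlated with it can reject far more often than the nominal 5%. All variable names and parameter values below are illustrative assumptions, not the article's design.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, alpha = 200, 2000, 0.05
rejections = 0

for _ in range(reps):
    # True model: y depends only on x1; x2 is correlated with x1 but has no effect.
    x1 = rng.normal(size=n)
    x2 = 0.7 * x1 + np.sqrt(1 - 0.7**2) * rng.normal(size=n)
    y = 0.5 * x1 + rng.normal(size=n)

    # x1 is observed with measurement error (reliability of about 0.6).
    x1_obs = x1 + rng.normal(scale=np.sqrt(1 / 0.6 - 1), size=n)

    # OLS of y on [1, x1_obs, x2]; test H0: coefficient of x2 = 0.
    X = np.column_stack([np.ones(n), x1_obs, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])
    t = beta[2] / se
    p = 2 * stats.t.sf(abs(t), df=n - X.shape[1])
    rejections += p < alpha

print(f"Empirical Type I error rate for x2: {rejections / reps:.3f}")  # well above 0.05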
Peer reviewed
Shaw, Emily J.; Marini, Jessica P.; Mattern, Krista D. – Educational and Psychological Measurement, 2013
The current study evaluated the relationship between various operationalizations of the Advanced Placement® (AP) exam and course information with first-year grade point average (FYGPA) in college to better understand the role of AP in college admission decisions. In particular, the incremental validity of the different AP variables, above…
Descriptors: Advanced Placement Programs, Grade Point Average, College Freshmen, College Admission
Peer reviewed
Kobrin, Jennifer L.; Kim, YoungKoung; Sackett, Paul R. – Educational and Psychological Measurement, 2012
There is much debate on the merits and pitfalls of standardized tests for college admission, with questions regarding the format (multiple-choice vs. constructed response), cognitive complexity, and content of these assessments (achievement vs. aptitude) at the forefront of the discussion. This study addressed these questions by investigating the…
Descriptors: Grade Point Average, Standardized Tests, Predictive Validity, Predictor Variables
Peer reviewed
Le, Huy; Marcus, Justin – Educational and Psychological Measurement, 2012
This study used Monte Carlo simulation to examine the properties of the overall odds ratio (OOR), which was recently introduced as an index for overall effect size in multiple logistic regression. It was found that the OOR was relatively independent of study base rate and performed better than most commonly used R-square analogs in indexing model…
Descriptors: Monte Carlo Methods, Probability, Mathematical Concepts, Effect Size
Peer reviewed
Chan, Wai – Educational and Psychological Measurement, 2009
A typical question in multiple regression analysis is to determine if a set of predictors gives the same degree of predictor power in two different populations. Olkin and Finn (1995) proposed two asymptotic-based methods for testing the equality of two population squared multiple correlations, ρ²₁ and…
Descriptors: Multiple Regression Analysis, Intervals, Correlation, Computation
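For orientation, a large-sample comparison of two independent squared multiple correlations can be sketched as below; this is a simplified delta-method version, not necessarily the exact Olkin and Finn (1995) procedure the abstract refers to.

z = \frac{R_1^2 - R_2^2}{\sqrt{\widehat{\mathrm{Var}}(R_1^2) + \widehat{\mathrm{Var}}(R_2^2)}}, \qquad \widehat{\mathrm{Var}}(R^2) \approx \frac{4\,R^2\,(1 - R^2)^2}{n}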
Peer reviewed
Algina, James; Keselman, Harvey J.; Penfield, Randall J. – Educational and Psychological Measurement, 2008
A squared semipartial correlation coefficient (ΔR²) is the increase in the squared multiple correlation coefficient that occurs when a predictor is added to a multiple regression model. Prior research has shown that coverage probability for a confidence interval constructed by using a modified percentile bootstrap method with…
Descriptors: Intervals, Correlation, Probability, Multiple Regression Analysis
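For readers skimming the listing, the quantity in question is simply the gain in the squared multiple correlation when one predictor joins an existing model; the notation below is generic, not the article's.

\Delta R^2 = R^2_{\,y \cdot x_1, \ldots, x_k,\, x_{k+1}} - R^2_{\,y \cdot x_1, \ldots, x_k}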
Peer reviewed
Hirschfeld, Robert R.; Thomas, Christopher H.; McNatt, D. Brian – Educational and Psychological Measurement, 2008
The authors explored implications of individuals' self-deception (a trait) for their self-reported intrinsic and extrinsic motivational dispositions and their actual learning performance. In doing so, a higher order structural model was developed and tested in which intrinsic and extrinsic motivational dispositions were underlying factors that…
Descriptors: Deception, Predictor Variables, Motivation, Incentives
Peer reviewed
Algina, James; Keselman, H. J. – Educational and Psychological Measurement, 2008
Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)
Descriptors: Intervals, Sample Size, Validity, Hypothesis Testing
Peer reviewed
Knofczynski, Gregory T.; Mundfrom, Daniel – Educational and Psychological Measurement, 2008
When using multiple regression for prediction purposes, the issue of minimum required sample size often needs to be addressed. Using a Monte Carlo simulation, models with varying numbers of independent variables were examined and minimum sample sizes were determined for multiple scenarios at each number of independent variables. The scenarios…
Descriptors: Sample Size, Monte Carlo Methods, Predictor Variables, Prediction
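A condensed illustration of the kind of Monte Carlo study the abstract describes: for a fixed number of predictors and population R², increase n until the sample prediction equation's expected cross-validated R² comes within a tolerance of the population value. The tolerance, grid of sample sizes, and data-generating model here are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(1)
k, rho2, reps, tol = 5, 0.25, 500, 0.05  # predictors, population R^2, replications, tolerance

def cross_validated_r2(n):
    """Average squared correlation between new outcomes and predictions from a model fit on a sample of size n."""
    beta = np.full(k, np.sqrt(rho2 / k))  # equal weights giving population R^2 = rho2
    vals = []
    for _ in range(reps):
        X = rng.normal(size=(n, k))
        y = X @ beta + rng.normal(scale=np.sqrt(1 - rho2), size=n)
        b, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
        # Evaluate the fitted equation on a large independent validation sample.
        Xv = rng.normal(size=(5000, k))
        yv = Xv @ beta + rng.normal(scale=np.sqrt(1 - rho2), size=5000)
        pred = np.column_stack([np.ones(5000), Xv]) @ b
        vals.append(np.corrcoef(yv, pred)[0, 1] ** 2)
    return float(np.mean(vals))

for n in (30, 50, 100, 200, 400, 800):
    cv = cross_validated_r2(n)
    if rho2 - cv < tol:
        print(f"Smallest n tried that meets the criterion: {n} (cross-validated R^2 ≈ {cv:.3f})")
        break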
Peer reviewed
Li, Andrew; Bagger, Jessica – Educational and Psychological Measurement, 2007
The Balanced Inventory of Desirable Responding (BIDR) is one of the most widely used social desirability scales. The authors conducted a reliability generalization study to examine the typical reliability coefficients of BIDR scores and explored factors that explained the variability of reliability estimates across studies. The results indicated…
Descriptors: Reliability, Generalization, Social Desirability, Scores
Peer reviewed
Thomas, Lisa L.; Kuncel, Nathan R.; Crede, Marcus – Educational and Psychological Measurement, 2007
The Non-Cognitive Questionnaire (NCQ) is a 23-item measure assessing eight noncognitive variables that are thought to predict the performance and retention of students in college. The NCQ is widely used in research and practice. This study is a meta-analytic review of the validity of scores on the NCQ across 47 independent samples for predicting…
Descriptors: School Holding Power, Measures (Individuals), Persistence, Grade Point Average