Showing 1 to 15 of 21 results
Peer reviewed
Olvera Astivia, Oscar Lorenzo; Kroc, Edward; Zumbo, Bruno D. – Educational and Psychological Measurement, 2020
Simulations concerning the distributional assumptions of coefficient alpha are contradictory. To provide a more principled theoretical framework, this article relies on the Fréchet-Hoeffding bounds to show that the distribution of the items plays a role in the estimation of correlations and covariances. More specifically, these bounds…
Descriptors: Test Items, Test Reliability, Computation, Correlation
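For readers unfamiliar with the bounds invoked here: for any joint distribution function H(x, y) with fixed marginals F and G, the Fréchet-Hoeffding bounds constrain the joint CDF, and through Hoeffding's covariance identity they constrain the attainable correlations. A sketch of the two standard results (notation is ours, not the article's):

    \max\{F(x) + G(y) - 1,\, 0\} \le H(x, y) \le \min\{F(x),\, G(y)\}

    \operatorname{Cov}(X, Y) = \iint \left( H(x, y) - F(x)\,G(y) \right) dx\, dy

Because the integrand is capped by the bounds, the attainable correlation lies in an interval [rho_min, rho_max] determined by the marginals; rho_max = 1 is reachable only when F and G have the same shape up to location and scale, which is why the item distributions matter for estimated correlations.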
Peer reviewed
Chen, Michelle Y.; Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2020
This study introduces a novel differential item functioning (DIF) method based on propensity score matching that tackles two challenges in analyzing performance assessment data: continuous task scores and the lack of a reliable internal variable as a proxy for ability or aptitude. The proposed DIF method consists of two main stages. First,…
Descriptors: Probability, Scores, Evaluation Methods, Test Items
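A minimal two-stage sketch of this kind of propensity-score-matched DIF analysis, assuming logistic-regression propensity estimation, 1:1 nearest-neighbor matching, and a paired comparison of task scores; variable names and the specific matching and testing choices are illustrative, not necessarily the article's:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors
    from scipy import stats

    def psm_dif(task_score, group, covariates):
        """group: 0/1 membership; covariates: (n, p) matching variables."""
        # Stage 1: propensity scores and 1:1 nearest-neighbor matching.
        ps = LogisticRegression(max_iter=1000).fit(covariates, group)
        ps = ps.predict_proba(covariates)[:, 1]
        focal = np.where(group == 1)[0]
        reference = np.where(group == 0)[0]
        nn = NearestNeighbors(n_neighbors=1).fit(ps[reference].reshape(-1, 1))
        _, idx = nn.kneighbors(ps[focal].reshape(-1, 1))
        matched_ref = reference[idx.ravel()]
        # Stage 2: compare continuous task scores between matched groups.
        t, p = stats.ttest_rel(task_score[focal], task_score[matched_ref])
        return t, p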
Peer reviewed
Zumbo, Bruno D.; Kroc, Edward – Educational and Psychological Measurement, 2019
Chalmers recently published a critique of the use of ordinal alpha, proposed in Zumbo et al., as a measure of test reliability in certain research settings. In this response, we take up the task of refuting Chalmers' critique. We identify three broad misconceptions that characterize Chalmers' criticisms: (1) confusing assumptions with…
Descriptors: Test Reliability, Statistical Analysis, Misconceptions, Mathematical Models
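For context, ordinal alpha is coefficient alpha computed from a polychoric rather than a Pearson correlation matrix. Assuming a polychoric matrix R has already been estimated with an external routine, the alpha computation itself is a one-liner; a sketch:

    import numpy as np

    def alpha_from_corr(R):
        """Alpha from a k x k correlation matrix (trace of R equals k)."""
        k = R.shape[0]
        return (k / (k - 1)) * (1 - np.trace(R) / R.sum())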
Peer reviewed
Olvera Astivia, Oscar L.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2015
To further understand the properties of data-generation algorithms for multivariate, nonnormal data, two Monte Carlo simulation studies comparing the Vale and Maurelli method and the Headrick fifth-order polynomial method were implemented. Combinations of skewness and kurtosis found in four published articles were run and attention was…
Descriptors: Data, Simulation, Monte Carlo Methods, Comparative Analysis
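The univariate core of the Vale and Maurelli method is Fleishman's cubic transform of a standard normal variate, with coefficients solved to hit target skewness and excess kurtosis; the multivariate step (an "intermediate" correlation matrix) is omitted in this sketch, and the moment targets shown are illustrative:

    import numpy as np
    from scipy.optimize import fsolve

    def fleishman_coefficients(skew, exkurt):
        # Solve Fleishman's system for Y = a + bZ + cZ^2 + dZ^3, a = -c.
        def equations(p):
            b, c, d = p
            return (
                b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,                      # unit variance
                2*c*(b**2 + 24*b*d + 105*d**2 + 2) - skew,                # skewness
                24*(b*d + c**2*(1 + b**2 + 28*b*d)
                    + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - exkurt, # excess kurtosis
            )
        b, c, d = fsolve(equations, (1.0, 0.0, 0.0))
        return -c, b, c, d  # a = -c centers the distribution at zero

    a, b, c, d = fleishman_coefficients(skew=1.0, exkurt=1.5)
    z = np.random.default_rng(1).standard_normal(100_000)
    y = a + b*z + c*z**2 + d*z**3  # nonnormal sample with the target moments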
Peer reviewed
Hidalgo, Mª Dolores; Gómez-Benito, Juana; Zumbo, Bruno D. – Educational and Psychological Measurement, 2014
The authors analyze the effectiveness of the R² and delta log odds ratio effect size measures when using logistic regression analysis to detect differential item functioning (DIF) in dichotomous items. A simulation study was carried out, and the Type I error rate and power estimates under conditions in which only statistical testing…
Descriptors: Regression (Statistics), Test Bias, Effect Size, Test Items
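A sketch of the nested-model workflow the entry describes, pairing the significance test with the two effect size measures; McFadden's pseudo R² is used here for concreteness, though the article's R² variant may differ, and the delta log odds is taken as the group coefficient from the uniform-DIF model:

    import numpy as np
    import statsmodels.api as sm

    def lr_dif(item, total, group):
        """item: 0/1 responses; total: matching score; group: 0/1."""
        X1 = sm.add_constant(np.column_stack([total]))
        X2 = sm.add_constant(np.column_stack([total, group]))
        m1 = sm.Logit(item, X1).fit(disp=0)
        m2 = sm.Logit(item, X2).fit(disp=0)
        delta_r2 = m2.prsquared - m1.prsquared   # effect size for uniform DIF
        delta_log_odds = m2.params[2]            # group coefficient
        lr_stat = 2 * (m2.llf - m1.llf)          # 1-df likelihood ratio test
        return lr_stat, delta_r2, delta_log_odds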
Peer reviewed
Shear, Benjamin R.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2013
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Descriptors: Error of Measurement, Multiple Regression Analysis, Data Analysis, Computer Simulation
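The mechanism is easy to reproduce: when the true predictor X1 is observed with error, adjusting for the observed version leaves residual confounding, so the test of a correlated-but-null predictor X2 rejects above the nominal 5% level. A minimal Monte Carlo sketch, with all parameter values illustrative:

    import numpy as np

    rng = np.random.default_rng(42)
    n, reps, rejections = 200, 2000, 0
    for _ in range(reps):
        x1 = rng.standard_normal(n)
        x2 = 0.6 * x1 + np.sqrt(1 - 0.36) * rng.standard_normal(n)  # null predictor
        y = 0.5 * x1 + rng.standard_normal(n)                       # depends on X1 only
        x1_obs = x1 + rng.standard_normal(n)                        # reliability ~ .5
        X = np.column_stack([np.ones(n), x1_obs, x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - 3)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
        rejections += abs(beta[2] / se) > 1.97                      # approx t crit, df = 197
    print(f"empirical Type I error for X2: {rejections / reps:.3f}")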
Peer reviewed
Gómez-Benito, Juana; Hidalgo, Maria Dolores; Zumbo, Bruno D. – Educational and Psychological Measurement, 2013
The objective of this article was to find an optimal decision rule for identifying polytomous items with large or moderate amounts of differential functioning. The effectiveness of combining statistical tests with effect size measures was assessed using logistic discriminant function analysis and two effect size measures: R² and…
Descriptors: Item Analysis, Test Items, Effect Size, Statistical Analysis
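A decision rule of the kind described pairs a significance test with effect size cutoffs. The sketch below uses the commonly cited Jodoin-Gierl R² guidelines (negligible below .035, moderate below .07); these cutoffs are an assumption on our part, not necessarily the values the article settles on:

    def classify_dif(p_value, delta_r2, alpha=0.05):
        # Flag DIF only when the test is significant AND the effect is nontrivial.
        if p_value >= alpha or delta_r2 < 0.035:   # assumed Jodoin-Gierl cutoff
            return "negligible"
        return "moderate" if delta_r2 < 0.07 else "large"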
Peer reviewed
Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2012
There is a lack of research on the effects of outliers on the decisions about the number of factors to retain in an exploratory factor analysis, especially for outliers arising from unintended and unknowingly included subpopulations. The purpose of the present research was to investigate how outliers from an unintended and unknowingly included…
Descriptors: Factor Analysis, Factor Structure, Evaluation Research, Evaluation Methods
Peer reviewed
Liu, Yan; Zumbo, Bruno D.; Wu, Amery D. – Educational and Psychological Measurement, 2012
Previous studies have rarely examined the impact of outliers on the decisions about the number of factors to extract in an exploratory factor analysis. The few studies that have investigated this issue have arrived at contradictory conclusions regarding whether outliers inflated or deflated the number of factors extracted. By systematically…
Descriptors: Factor Analysis, Data, Simulation, Monte Carlo Methods
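Both of the preceding entries study the same basic design, which is easy to sketch: generate clean factor-structured data, contaminate a small fraction of cases with a shifted subpopulation, and watch how the sample eigenvalues (and hence retention rules such as the Kaiser criterion or parallel analysis) respond. All values below are illustrative:

    import numpy as np

    rng = np.random.default_rng(7)
    n, p, n_out = 500, 8, 25
    loadings = np.full(p, 0.7)
    factor = rng.standard_normal(n)
    clean = factor[:, None] * loadings + 0.5 * rng.standard_normal((n, p))
    # Unintended subpopulation: a mean shift on half the items.
    shift = np.concatenate([np.full(p // 2, 4.0), np.zeros(p - p // 2)])
    contaminated = clean.copy()
    contaminated[:n_out] += shift

    for name, X in [("clean", clean), ("contaminated", contaminated)]:
        eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
        print(name, "eigenvalues > 1:", int((eig > 1).sum()), eig[:3].round(2))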
Peer reviewed
Thomas, D. Roland; Zumbo, Bruno D. – Educational and Psychological Measurement, 2012
There is such doubt in research practice about the reliability of difference scores that granting agencies, journal editors, reviewers, and graduate thesis committees have been known to deplore their use. This much-maligned index can be used in studies of change, growth, or perhaps discrepancy between two measures taken on the same…
Descriptors: Statistical Analysis, Reliability, Scores, Change
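The warnings the entry mentions trace back to the classical reliability formula for the difference D = X - Y; in standard notation (ours, not necessarily the article's):

    \rho_{DD'} = \frac{\sigma_X^2 \rho_{XX'} + \sigma_Y^2 \rho_{YY'} - 2\rho_{XY}\sigma_X\sigma_Y}{\sigma_X^2 + \sigma_Y^2 - 2\rho_{XY}\sigma_X\sigma_Y}

When the correlation between the two measures approaches their reliabilities, the numerator shrinks toward zero faster than the denominator, so the reliability of the difference can fall well below that of either component score.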
Peer reviewed
Liu, Yan; Wu, Amery D.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2010
In a recent Monte Carlo simulation study, Liu and Zumbo showed that outliers can severely inflate the estimates of Cronbach's coefficient alpha for continuous item response data (visual analogue response format). Little, however, is known about the effect of outliers for ordinal item response data, also commonly referred to as Likert, Likert-type,…
Descriptors: Reliability, Computation, Monte Carlo Methods, Rating Scales
Peer reviewed
Richardson, Chris G.; Ratner, Pamela A.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2007
The purpose of this investigation was to test the age-related measurement invariance and temporal stability of the 13-item version of Antonovsky's Sense of Coherence Scale (SOC). Multigroup structural equation modeling of longitudinal data from the Canadian National Population Health Survey was used to examine the measurement invariance across 3…
Descriptors: Foreign Countries, Measures (Individuals), Rhetoric, Structural Equation Models
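For readers new to the technique: multigroup measurement invariance is usually tested as a sequence of nested constraints on the factor model for item j, person i, and group g. A standard formulation (not necessarily the exact constraint sequence used in this article):

    x_{ij}^{(g)} = \tau_j^{(g)} + \lambda_j^{(g)} \xi_i + \varepsilon_{ij}

    \text{configural: same loading pattern across } g; \quad \text{metric: } \lambda_j^{(g)} = \lambda_j; \quad \text{scalar: also } \tau_j^{(g)} = \tau_j

Comparing latent means across groups or occasions is typically licensed only once scalar invariance holds.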
Peer reviewed
Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2007
The impact of outliers on Cronbach's coefficient alpha has not been documented in the psychometric or statistical literature. This is an important gap because coefficient alpha is the most widely used measurement statistic in all of the social, educational, and health sciences. The impact of outliers on coefficient alpha is investigated for…
Descriptors: Psychometrics, Computation, Reliability, Monte Carlo Methods
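This entry and the 2010 entry above share a simulation design that is easy to reproduce in miniature: compute alpha on clean Likert-type data, then again after replacing a few respondents with consistent extreme responders. A minimal sketch, with all parameters illustrative:

    import numpy as np

    def cronbach_alpha(X):
        k = X.shape[1]
        return (k / (k - 1)) * (1 - X.var(axis=0, ddof=1).sum()
                                / X.sum(axis=1).var(ddof=1))

    rng = np.random.default_rng(3)
    latent = rng.standard_normal((300, 1))
    likert = np.clip(np.round(latent + 0.8 * rng.standard_normal((300, 6)) + 3), 1, 5)
    print("clean alpha:", round(cronbach_alpha(likert), 3))
    contaminated = likert.copy()
    contaminated[:15] = rng.choice([1.0, 5.0], size=(15, 1))  # consistent extreme responders
    print("with outliers:", round(cronbach_alpha(contaminated), 3))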
Peer reviewed
Rupp, Andre A.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2006
One theoretical feature that makes item response theory (IRT) models those of choice for many psychometric data analysts is parameter invariance, the equality of item and examinee parameters from different examinee populations or measurement conditions. In this article, using the well-known fact that item and examinee parameters are identical only…
Descriptors: Psychometrics, Probability, Simulation, Item Response Theory
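The "well-known fact" alluded to is the scale indeterminacy of IRT: in the two-parameter logistic model, linearly rescaling the latent trait leaves every response probability unchanged, so parameters from separate calibrations agree only up to that transformation:

    P(X = 1 \mid \theta) = \frac{1}{1 + \exp\{-a(\theta - b)\}}, \qquad \theta^* = k\theta + c,\; a^* = a/k,\; b^* = kb + c

    \Rightarrow\; P(X = 1 \mid \theta^*; a^*, b^*) = P(X = 1 \mid \theta; a, b)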
Peer reviewed
Zimmerman, Donald W.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2005
Educational and psychological testing textbooks typically warn of the inappropriateness of performing arithmetic operations and statistical analysis on percentiles instead of raw scores. This seems inconsistent with the well-established finding that transforming scores to ranks and using nonparametric methods often improves the validity and power…
Descriptors: Statistical Analysis, Psychological Testing, Raw Scores, Evaluation Methods
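The apparent inconsistency dissolves once one notes that percentile ranks are a monotone transform of ranks, so rank-based statistics are unchanged by the transformation. A quick illustrative check:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(100), rng.standard_normal(100)
    pct = lambda v: 100.0 * (stats.rankdata(v) - 0.5) / len(v)  # percentile ranks
    # Spearman correlation is identical on raw scores and percentile ranks.
    print(stats.spearmanr(x, y)[0], stats.spearmanr(pct(x), pct(y))[0])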