Publication Date
| In 2015 | 0 |
| Since 2014 | 1 |
| Since 2011 (last 5 years) | 6 |
| Since 2006 (last 10 years) | 10 |
| Since 1996 (last 20 years) | 16 |
Descriptor
| Correlation | 7 |
| Statistical Analysis | 7 |
| Error of Measurement | 6 |
| Test Items | 6 |
| Evaluation Methods | 5 |
| Psychometrics | 5 |
| Reliability | 5 |
| Data Analysis | 4 |
| Factor Analysis | 4 |
| Sample Size | 4 |
Source
| Educational and Psychological… | 17 |
Author
| Zumbo, Bruno D. | 17 |
| Liu, Yan | 4 |
| Gómez-Benito, Juana | 2 |
| Rupp, Andre A. | 2 |
| Wu, Amery D. | 2 |
| Zimmerman, Donald W. | 2 |
| Aylesworth, Richard | 1 |
| Gelin, Michaela N. | 1 |
| Hay, Jana L. | 1 |
| Hidalgo, Maria Dolores | 1 |
Publication Type
| Journal Articles | 17 |
| Reports - Research | 9 |
| Reports - Evaluative | 5 |
| Reports - Descriptive | 2 |
Education Level
| Adult Education | 1 |
Showing 1 to 15 of 17 results
Hidalgo, Mª Dolores; Gómez-Benito, Juana; Zumbo, Bruno D. – Educational and Psychological Measurement, 2014
The authors analyze the effectiveness of the R² and delta log odds ratio effect size measures when using logistic regression analysis to detect differential item functioning (DIF) in dichotomous items. A simulation study was carried out, and the Type I error rate and power estimates under conditions in which only statistical testing…
Descriptors: Regression (Statistics), Test Bias, Effect Size, Test Items
Gómez-Benito, Juana; Hidalgo, Maria Dolores; Zumbo, Bruno D. – Educational and Psychological Measurement, 2013
The objective of this article was to find an optimal decision rule for identifying polytomous items with large or moderate amounts of differential functioning. The effectiveness of combining statistical tests with effect size measures was assessed using logistic discriminant function analysis and two effect size measures: R² and…
Descriptors: Item Analysis, Test Items, Effect Size, Statistical Analysis
Shear, Benjamin R.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2013
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Descriptors: Error of Measurement, Multiple Regression Analysis, Data Analysis, Computer Simulation
Liu, Yan; Zumbo, Bruno D.; Wu, Amery D. – Educational and Psychological Measurement, 2012
Previous studies have rarely examined the impact of outliers on the decisions about the number of factors to extract in an exploratory factor analysis. The few studies that have investigated this issue have arrived at contradictory conclusions regarding whether outliers inflated or deflated the number of factors extracted. By systematically…
Descriptors: Factor Analysis, Data, Simulation, Monte Carlo Methods
Thomas, D. Roland; Zumbo, Bruno D. – Educational and Psychological Measurement, 2012
There is such doubt in research practice about the reliability of difference scores that granting agencies, journal editors, reviewers, and committees of graduate students' theses have been known to deplore their use. This most maligned index can be used in studies of change, growth, or perhaps discrepancy between two measures taken on the same…
Descriptors: Statistical Analysis, Reliability, Scores, Change
Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2012
There is a lack of research on the effects of outliers on the decisions about the number of factors to retain in an exploratory factor analysis, especially for outliers arising from unintended and unknowingly included subpopulations. The purpose of the present research was to investigate how outliers from an unintended and unknowingly included…
Descriptors: Factor Analysis, Factor Structure, Evaluation Research, Evaluation Methods
Liu, Yan; Wu, Amery D.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2010
In a recent Monte Carlo simulation study, Liu and Zumbo showed that outliers can severely inflate the estimates of Cronbach's coefficient alpha for continuous item response data--visual analogue response format. Little, however, is known about the effect of outliers for ordinal item response data--also commonly referred to as Likert, Likert-type,…
Descriptors: Reliability, Computation, Monte Carlo Methods, Rating Scales
Richardson, Chris G.; Ratner, Pamela A.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2007
The purpose of this investigation was to test the age-related measurement invariance and temporal stability of the 13-item version of Antonovsky's Sense of Coherence Scale (SOC). Multigroup structural equation modeling of longitudinal data from the Canadian National Population Health Survey was used to examine the measurement invariance across 3…
Descriptors: Foreign Countries, Measures (Individuals), Rhetoric, Structural Equation Models
Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2007
The impact of outliers on Cronbach's coefficient alpha has not been documented in the psychometric or statistical literature. This is an important gap because coefficient alpha is the most widely used measurement statistic in all of the social, educational, and health sciences. The impact of outliers on coefficient alpha is investigated for…
Descriptors: Psychometrics, Computation, Reliability, Monte Carlo Methods
Rupp, Andre A.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2006
One theoretical feature that makes item response theory (IRT) models those of choice for many psychometric data analysts is parameter invariance, the equality of item and examinee parameters from different examinee populations or measurement conditions. In this article, using the well-known fact that item and examinee parameters are identical only…
Descriptors: Psychometrics, Probability, Simulation, Item Response Theory
Kristjansson, Elizabeth; Aylesworth, Richard; Mcdowell, Ian; Zumbo, Bruno D. – Educational and Psychological Measurement, 2005
Item bias is a major threat to measurement validity. Methods for detecting differential item functioning (DIF) are now commonly used to identify potentially biased items. DIF detection methods for dichotomous items are well developed, but those for ordinal items are less well developed. In this article, the authors compare four methods for…
Descriptors: Discriminant Analysis, Test Bias, Multivariate Analysis, Regression (Statistics)
Zimmerman, Donald W.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2005
Educational and psychological testing textbooks typically warn of the inappropriateness of performing arithmetic operations and statistical analysis on percentiles instead of raw scores. This seems inconsistent with the well-established finding that transforming scores to ranks and using nonparametric methods often improves the validity and power…
Descriptors: Statistical Analysis, Psychological Testing, Raw Scores, Evaluation Methods
Rupp, Andre A.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2004
Based on seminal work by Lord and Hambleton, Swaminathan, and Rogers, this article is an analytical, graphical, and conceptual reminder that item response theory (IRT) parameter invariance only holds for perfect model fit in multiple populations or across multiple conditions and is thus an ideal state. In practice, one attempts to quantify the…
Descriptors: Correlation, Item Response Theory, Statistical Analysis, Evaluation Methods
Gelin, Michaela N.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2003
The authors investigated potentially biased scale items on the Center for Epidemiological Studies Depression scale (CES-D; Radloff, 1977) in a sample of 600 adults. Overall, results indicate that the scoring method has an effect on differential item functioning (DIF), and that DIF is a property of the item, scoring method, and purpose of the assessment. (SLD)
Descriptors: Depression (Psychology), Item Bias, Scoring, Test Items
Higgins, N. C.; Zumbo, Bruno D.; Hay, Jana L. – Educational and Psychological Measurement, 1999
A confirmatory factor analysis of data from 1,346 respondents to the Attributional Style Questionnaire (ASQ) (C. Peterson and others, 1982) reveals that adequate fit is provided by a three-factor attributional style model that includes context-dependent item sets. Results suggest that there is no such thing as a nonsituational attributional style.…
Descriptors: Adults, Attribution Theory, Construct Validity, Context Effect