Publication Date
| Publication Date | Count |
| --- | --- |
| In 2015 | 0 |
| Since 2014 | 8 |
| Since 2011 (last 5 years) | 29 |
| Since 2006 (last 10 years) | 124 |
| Since 1996 (last 20 years) | 199 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Factor Analysis | 531 |
| Factor Structure | 209 |
| Test Validity | 120 |
| Correlation | 95 |
| Measures (Individuals) | 75 |
| Psychometrics | 67 |
| Scores | 64 |
| Higher Education | 61 |
| Test Reliability | 60 |
| College Students | 57 |
Author
| Author | Count |
| --- | --- |
| Michael, William B. | 12 |
| Kaiser, Henry F. | 8 |
| Finney, Sara J. | 6 |
| Gorsuch, Richard L. | 6 |
| Klein, Alice E. | 6 |
| Hakstian, A. Ralph | 5 |
| Thompson, Bruce | 5 |
| Johnson, Annabel M. | 4 |
| Johnson, William L. | 4 |
| Morris, John D. | 4 |
Education Level
| Education Level | Count |
| --- | --- |
| Higher Education | 31 |
| High Schools | 19 |
| Postsecondary Education | 9 |
| Elementary Education | 6 |
| Secondary Education | 6 |
| Middle Schools | 4 |
| Elementary Secondary Education | 3 |
| Early Childhood Education | 2 |
| Grade 3 | 2 |
| Grade 6 | 2 |
Showing 1 to 15 of 531 results
Attali, Yigal – Educational and Psychological Measurement, 2014
This article presents a comparative judgment approach for holistically scored constructed-response tasks. In this approach, the grader rank-orders (rather than rates) the quality of a small set of responses. A prior automated evaluation of responses guides both set formation and scaling of rankings. Sets are formed to have similar prior scores and…
Descriptors: Responses, Item Response Theory, Scores, Rating Scales
Plieninger, Hansjörg; Meiser, Thorsten – Educational and Psychological Measurement, 2014
Response styles, the tendency to respond to Likert-type items irrespective of content, are a widely known threat to the reliability and validity of self-report measures. However, it is still debated how to measure and control for response styles such as extreme responding. Recently, multiprocess item response theory models have been proposed that…
Descriptors: Validity, Item Response Theory, Rating Scales, Models
Wiley, Edward W.; Shavelson, Richard J.; Kurpius, Amy A. – Educational and Psychological Measurement, 2014
The name "SAT" has become synonymous with college admissions testing; it has been dubbed "the gold standard." Numerous studies on its reliability and predictive validity show that the SAT predicts college performance beyond high school grade point average. Surprisingly, studies of the factorial structure of the current version…
Descriptors: College Readiness, College Admission, College Entrance Examinations, Factor Analysis
Rutkowski, Leslie; Svetina, Dubravka – Educational and Psychological Measurement, 2014
In the field of international educational surveys, equivalence of achievement scale scores across countries has received substantial attention in the academic literature; however, only a relatively recent emphasis on scale score equivalence in nonachievement education surveys has emerged. Given the current state of research in multiple-group…
Descriptors: International Programs, Educational Assessment, Surveys, Measurement
Mylonas, Kostas; Furnham, Adrian; Divale, William; Leblebici, Cigdem; Gondim, Sonia; Moniz, Angela; Grad, Hector; Alvaro, Jose Luis; Cretu, Romeo Zeno; Filus, Ania; Boski, Pawel – Educational and Psychological Measurement, 2014
Several sources of bias can plague research data and individual assessment. When cultural groups are considered, across or even within countries, it is essential that the constructs assessed and evaluated are as free as possible from any source of bias, and specifically from bias due to culturally specific characteristics. Employing the…
Descriptors: Test Bias, Measures (Individuals), Unemployment, Adults
McArdle, John J.; Hamagami, Fumiaki; Bautista, Randy; Onoye, Jane; Hishinuma, Earl S.; Prescott, Carol A.; Takeshita, Junji; Zonderman, Alan B.; Johnson, Ronald C. – Educational and Psychological Measurement, 2014
In this study, we reanalyzed the classic Hawai'i Family Study of Cognition (HFSC) data using contemporary multilevel modeling techniques. We used the HFSC baseline data ("N" = 6,579) and reexamined the factorial structure of 16 cognitive variables using confirmatory (restricted) measurement models in an explicit sequence. These…
Descriptors: Factor Analysis, Hierarchical Linear Modeling, Data Analysis, Structural Equation Models
von Eye, Alexander; Wiedermann, Wolfgang – Educational and Psychological Measurement, 2014
Approaches to determining direction of dependence in nonexperimental data are based on the relation between moments higher than second order on one side and correlation and regression models on the other. These approaches have experienced rapid development and are being applied in contexts such as research on partner violence, attention deficit…
Descriptors: Statistical Analysis, Factor Analysis, Structural Equation Models, Correlation
Hayduk, Leslie – Educational and Psychological Measurement, 2014
Researchers using factor analysis tend to dismiss the significant ill fit of factor models by presuming that if their factor model is close-to-fitting, it is probably close to being properly causally specified. Close fit may indeed result from a model being close to properly causally specified, but close-fitting factor models can also be seriously…
Descriptors: Factor Analysis, Goodness of Fit, Factor Structure, Structural Equation Models
Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon; Penfield, Randall D. – Educational and Psychological Measurement, 2013
The Rasch model, a member of a larger group of models within item response theory, is widely used in empirical studies. Detection of uniform differential item functioning (DIF) within the Rasch model typically employs null hypothesis testing with a concomitant consideration of effect size (e.g., signed area [SA]). Parametric equivalence between…
Descriptors: Test Bias, Effect Size, Item Response Theory, Comparative Analysis
Raykov, Tenko; Marcoulides, George A.; Millsap, Roger E. – Educational and Psychological Measurement, 2013
A multiple testing method for examining factorial invariance for latent constructs evaluated by multiple indicators in distinct populations is outlined. The procedure is based on the false discovery rate concept and multiple individual restriction tests and resolves general limitations of a popular factorial invariance testing approach. The…
Descriptors: Testing, Statistical Analysis, Factor Analysis, Statistical Significance
Harrell-Williams, Leigh M.; Wolfe, Edward W. – Educational and Psychological Measurement, 2013
Most research on confirmatory factor analysis using information-based fit indices (Akaike information criterion [AIC], Bayesian information criteria [BIC], bias-corrected AIC [AICc], and consistent AIC [CAIC]) has used a structural equation modeling framework. Minimal research has been done concerning application of these indices to item response…
Descriptors: Correlation, Goodness of Fit, Test Length, Item Response Theory
Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W. – Educational and Psychological Measurement, 2013
Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb.…
Descriptors: Sample Size, Structural Equation Models, Statistical Analysis, Statistical Bias
Liu, Yan; Zumbo, Bruno D.; Wu, Amery D. – Educational and Psychological Measurement, 2012
Previous studies have rarely examined the impact of outliers on the decisions about the number of factors to extract in an exploratory factor analysis. The few studies that have investigated this issue have arrived at contradictory conclusions regarding whether outliers inflated or deflated the number of factors extracted. By systematically…
Descriptors: Factor Analysis, Data, Simulation, Monte Carlo Methods
Dunn, Karee E.; Lo, Wen-Juo; Mulvenon, Sean W.; Sutcliffe, Rachel – Educational and Psychological Measurement, 2012
The Motivated Strategies for Learning Questionnaire (MSLQ) has dominated self-regulated learning research since the early 1990s. In this study, the two MSLQ subscales specifically designed to assess self-regulation--Metacognitive Self-Regulation subscale and Effort Regulation subscale--were examined. Results indicated that the structure of the two…
Descriptors: Questionnaires, Self Control, Learning Strategies, Metacognition
Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng – Educational and Psychological Measurement, 2012
Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum score-based omega, among many others. With increasing popularity of item response theory, a parallel reliability measure pi…
Descriptors: Reliability, Factor Analysis, Psychometrics, Item Response Theory

