Showing all 13 results
Peer reviewed
Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Harrison, Michael – Educational and Psychological Measurement, 2019
Building on prior research on the relationships between key concepts in item response theory and classical test theory, this note contributes to highlighting their important and useful links. A readily and widely applicable latent variable modeling procedure is discussed that can be used for point and interval estimation of the individual person…
Descriptors: True Scores, Item Response Theory, Test Items, Test Theory
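The note concerns point and interval estimation of individual true scores. As a loose classical-test-theory illustration of the general idea (not the latent variable modeling procedure the authors describe), Kelley's regressed true-score estimate shrinks an observed score toward the group mean in proportion to the test's reliability; the function name below is hypothetical.

```python
# Kelley's classical regressed estimate of an individual's true score
# (a CTT-side illustration only; not the latent variable modeling
# procedure described by Raykov et al.)
def kelley_true_score(observed: float, group_mean: float, reliability: float) -> float:
    """Shrink an observed score toward the group mean by the test's reliability."""
    return reliability * observed + (1.0 - reliability) * group_mean

# Example: observed score 70, group mean 50, reliability .80 -> estimate 66.0
print(kelley_true_score(70.0, 50.0, 0.80))
```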
Peer reviewed
France, Stephen L.; Batchelder, William H. – Educational and Psychological Measurement, 2015
Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…
Descriptors: Maximum Likelihood Statistics, Test Items, Difficulty Level, Test Theory
Peer reviewed
DeMars, Christine E. – Educational and Psychological Measurement, 2008
The graded response (GR) and generalized partial credit (GPC) models do not imply that examinees ordered by raw observed score will necessarily be ordered on the expected value of the latent trait (OEL). Factors were manipulated to assess whether increased violations of OEL also produced increased Type I error rates in differential item…
Descriptors: Test Items, Raw Scores, Test Theory, Error of Measurement
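For reference, the generalized partial credit model named in the abstract assigns category probabilities from a slope parameter a and step parameters b_1..b_m. The sketch below is a minimal standalone implementation of that standard formula, not code from the study.

```python
import math

def gpc_probs(theta: float, a: float, b: list[float]) -> list[float]:
    """Category probabilities for one item under the generalized partial credit model.
    b holds the step parameters b_1..b_m; categories run 0..m."""
    # Cumulative sums of a*(theta - b_v), with 0 for category 0 by convention
    z = [0.0]
    for b_v in b:
        z.append(z[-1] + a * (theta - b_v))
    expz = [math.exp(v) for v in z]
    total = sum(expz)
    return [v / total for v in expz]

# Example: a three-category item (two steps) evaluated at theta = 0.5
print(gpc_probs(0.5, a=1.2, b=[-0.5, 0.8]))
```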
Peer reviewed
Graham, James M. – Educational and Psychological Measurement, 2006
Coefficient alpha, the most commonly used estimate of internal consistency, is often considered a lower bound estimate of reliability, though the extent of its underestimation is not typically known. Many researchers are unaware that coefficient alpha is based on the essentially tau-equivalent measurement model. It is the violation of the…
Descriptors: Models, Test Theory, Reliability, Structural Equation Models
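Coefficient alpha itself is straightforward to compute from a persons-by-items score matrix: k/(k-1) times one minus the ratio of summed item variances to total-score variance. A minimal sketch, not tied to the article's measurement models:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha from a persons-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy data: 5 persons x 3 items
x = np.array([[2, 3, 3], [1, 1, 2], [3, 3, 3], [0, 1, 1], [2, 2, 3]], dtype=float)
print(cronbach_alpha(x))
```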
Peer reviewed
Fan, Xitao – Educational and Psychological Measurement, 1998
This study empirically examined the behavior of item and person statistics derived from item response theory and classical test theory, using data from a large-scale statewide assessment. Findings show that the person and item statistics from the two measurement frameworks are quite comparable. (SLD)
Descriptors: Item Response Theory, State Programs, Statistical Analysis, Test Items
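The classical-test-theory item statistics typically compared in such studies are item difficulty (proportion correct) and item discrimination (an item-total correlation). A minimal sketch of those two statistics for a 0/1 response matrix (the abstract does not list the specific statistics Fan used):

```python
import numpy as np

def ctt_item_stats(responses: np.ndarray):
    """Classical item difficulty (proportion correct) and corrected item-total
    (point-biserial) discrimination from a persons-by-items 0/1 matrix."""
    p = responses.mean(axis=0)                      # item difficulty
    total = responses.sum(axis=1)
    disc = []
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]              # total score excluding this item
        disc.append(np.corrcoef(responses[:, j], rest)[0, 1])
    return p, np.array(disc)

# Toy data: 6 examinees x 4 dichotomous items
x = np.array([[1, 1, 0, 1],
              [0, 1, 0, 0],
              [1, 1, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 1, 1],
              [1, 1, 1, 0]])
p, r = ctt_item_stats(x)
print(p, r)
```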
Peer reviewed
Zumbo, Bruno D.; Pope, Gregory A.; Watson, Jackie E.; Hubley, Anita M. – Educational and Psychological Measurement, 1997
E. Roskam's (1985) conjecture that steeper item characteristic curve (ICC) "a" parameters (slopes) (and higher item total correlations in classical test theory) would be found with more concretely worded test items was tested with results from 925 young adults on the Eysenck Personality Questionnaire (H. Eysenck and S. Eysenck, 1975).…
Descriptors: Correlation, Personality Assessment, Personality Measures, Test Interpretation
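The "a" parameter in question is the slope of the item characteristic curve: under the common two-parameter logistic form, a larger a makes the curve rise more steeply around the item location b. A minimal sketch of that curve, offered only to illustrate the parameter (the abstract does not state which IRT model was fitted):

```python
import math

def icc_2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic item characteristic curve: a is the slope, b the location."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A steeper slope (larger a) makes the curve rise more sharply around b
print(icc_2pl(0.0, a=0.5, b=0.0), icc_2pl(0.5, a=0.5, b=0.0))
print(icc_2pl(0.0, a=2.0, b=0.0), icc_2pl(0.5, a=2.0, b=0.0))
```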
Peer reviewed
Bruno, James E.; Dirkzwager, A. – Educational and Psychological Measurement, 1995
Determining the optimal number of choices on a multiple-choice test is explored analytically from an information theory perspective. The analysis revealed that, in general, three choices seem optimal. This finding is in agreement with previous statistical and psychometric research. (SLD)
Descriptors: Distractors (Tests), Information Theory, Multiple Choice Tests, Psychometrics
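One rough information-theoretic way to see why three options can come out ahead, offered only as an illustration and not necessarily the authors' exact analysis: if testing time scales with the number of options k an examinee must read, the information yield per option is log2(k)/k bits, which peaks near k = e and therefore at k = 3 among whole numbers.

```python
import math

# Rough illustration (assumption, not the paper's derivation): information
# per response option, log2(k)/k, is largest at k = 3 among integers.
for k in range(2, 7):
    print(k, round(math.log2(k) / k, 4))
```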
Peer reviewed
Engelhard, George, Jr. – Educational and Psychological Measurement, 1992
A historical perspective is provided of the concept of invariance in measurement theory, describing sample-invariant item calibration and item-invariant measurement of individuals. Invariance as a key measurement concept is illustrated through the measurement theories of E. L. Thorndike, L. L. Thurstone, and G. Rasch. (SLD)
Descriptors: Behavioral Sciences, Educational History, Measurement Techniques, Psychometrics
Peer reviewed
Reuterberg, Sven-Eric; Gustafsson, Jan-Eric – Educational and Psychological Measurement, 1992
The use of confirmatory factor analysis by the LISREL program is demonstrated as an assumption-testing method when computing reliability coefficients under different model assumptions. Results indicate that reliability estimates are robust against departure from the assumption of parallelism of test items. (SLD)
Descriptors: Equations (Mathematics), Estimation (Mathematics), Mathematical Models, Robustness (Statistics)
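A common confirmatory-factor-analysis-based reliability estimate of the kind such models yield is composite reliability for a single-factor (congeneric) model: the squared sum of loadings over that quantity plus the summed error variances. A minimal sketch assuming standardized factor variance (not the LISREL setup of the article):

```python
def composite_reliability(loadings, error_vars):
    """Composite reliability for a single-factor (congeneric) model:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    return s * s / (s * s + sum(error_vars))

# Example: three congeneric items with unequal loadings
print(composite_reliability([0.8, 0.7, 0.6], [0.36, 0.51, 0.64]))
```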
Peer reviewed
Feldt, Leonard S. – Educational and Psychological Measurement, 1984
The binomial error model includes form-to-form difficulty differences as error variance and leads to Kuder-Richardson formula 21 (KR-21) as an estimate of reliability. If the form-to-form component is removed from the estimate of error variance, the binomial model leads to KR-20 as the reliability estimate. (Author/BW)
Descriptors: Achievement Tests, Difficulty Level, Error of Measurement, Mathematical Formulas
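For reference, the two formulas contrasted in the abstract, for k dichotomous items with item proportions correct p_i, total-score mean mu, and total-score variance sigma_X^2:

```latex
\mathrm{KR\text{-}20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i\,(1-p_i)}{\sigma_X^{2}}\right),
\qquad
\mathrm{KR\text{-}21} = \frac{k}{k-1}\left(1 - \frac{\mu\,(k-\mu)}{k\,\sigma_X^{2}}\right)
```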
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1983
This article provides unbiased estimates of the proportion of items in an item domain that an examinee would answer correctly if every item were attempted, when a closed sequential testing procedure is used. (Author)
Descriptors: Estimation (Mathematics), Psychometrics, Scores, Sequential Approach
Peer reviewed
Reynolds, Thomas J. – Educational and Psychological Measurement, 1981
Cliff's Index "c" derived from an item dominance matrix is utilized in a clustering approach, termed Extracting Reliable Guttman Orders (ERGO), to isolate Guttman-type item hierarchies. A comparison of factor analysis to ERGO is made on social distance data involving multiple ethnic groups. (Author/BW)
Descriptors: Cluster Analysis, Difficulty Level, Factor Analysis, Item Analysis
Peer reviewed
Cattell, Raymond B.; Krug, Samuel E. – Educational and Psychological Measurement, 1986
Critics have occasionally asserted that the number of factors in the 16PF tests is too large. This study discusses factor-analytic methodology and reviews more than 50 studies in the field. It concludes that the number of important primaries encapsulated in the series is no fewer than the stated number. (Author/JAZ)
Descriptors: Correlation, Cross Cultural Studies, Factor Analysis, Maximum Likelihood Statistics