
Thomas, Michael L.; Brown, Gregory G.; Patt, Virginie M.; Duffy, John R. – Educational and Psychological Measurement, 2021

The adaptation of experimental cognitive tasks into measures that can be used to quantify neurocognitive outcomes in translational studies and clinical trials has become a key component of the strategy to address psychiatric and neurological disorders. Unfortunately, while most experimental cognitive tests have strong theoretical bases, they can…

Descriptors: Adaptive Testing, Computer Assisted Testing, Cognitive Tests, Psychopathology

Zumbo, Bruno D.; Kroc, Edward – Educational and Psychological Measurement, 2019

Chalmers recently published a critique of the use of ordinal [alpha] proposed in Zumbo et al. as a measure of test reliability in certain research settings. In this response, we take up the task of refuting Chalmers' critique. We identify three broad misconceptions that characterize Chalmers' criticisms: (1) confusing assumptions with…

Descriptors: Test Reliability, Statistical Analysis, Misconceptions, Mathematical Models

Lamprianou, Iasonas – Educational and Psychological Measurement, 2018

It is common practice for assessment programs to organize qualifying sessions during which the raters (often known as "markers" or "judges") demonstrate their consistency before operational rating commences. Because of the high-stakes nature of many rating activities, the research community tends to continuously explore new…

Descriptors: Social Networks, Network Analysis, Comparative Analysis, Innovation

Luo, Yong; Jiao, Hong – Educational and Psychological Measurement, 2018

Stan is a new Bayesian statistical software program that implements the powerful and efficient Hamiltonian Monte Carlo (HMC) algorithm. To date there is not a source that systematically provides Stan code for various item response theory (IRT) models. This article provides Stan code for three representative IRT models, including the…

Descriptors: Bayesian Statistics, Item Response Theory, Probability, Computer Software
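The abstract above refers to Stan code supplied by the article; as a language-neutral illustration only (not the authors' code), the two-parameter logistic (2PL) model it mentions can be sketched in plain Python with made-up parameter values:

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL IRT: probability of a correct response given ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee of average ability (theta = 0) facing an average-difficulty
# item (b = 0) answers correctly with probability 0.5, whatever a is.
print(p_correct_2pl(0.0, a=1.2, b=0.0))  # 0.5
```

In a Bayesian fitting package such as Stan, this same response function would appear inside the model block, with priors placed on theta, a, and b.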

Andrich, David – Educational and Psychological Measurement, 2016

This article reproduces correspondence between Georg Rasch of The University of Copenhagen and Benjamin Wright of The University of Chicago in the period from January 1966 to July 1967. This correspondence reveals their struggle to operationalize a unidimensional measurement model with sufficient statistics for responses in a set of ordered…

Descriptors: Statistics, Item Response Theory, Rating Scales, Mathematical Models

Tran, Ulrich S.; Formann, Anton K. – Educational and Psychological Measurement, 2009

Parallel analysis has been shown to be suitable for dimensionality assessment in factor analysis of continuous variables. There have also been attempts to demonstrate that it may be used to uncover the factorial structure of binary variables conforming to the unidimensional normal ogive model. This article provides both theoretical and empirical…

Descriptors: Simulation, Factor Analysis, Correlation, Evaluation Methods
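Parallel analysis, as assessed in the abstract above, retains a factor only when its observed eigenvalue exceeds the corresponding eigenvalue from random data. A minimal sketch (simulated one-factor data; all names and values are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_analysis(data, n_sims=200):
    """Horn's parallel analysis: count eigenvalues of the observed
    correlation matrix that exceed the mean eigenvalues of random
    normal data of the same shape."""
    n, k = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_sims, k))
    for i in range(n_sims):
        noise = rng.standard_normal((n, k))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    return int(np.sum(obs > sims.mean(axis=0)))

# Six indicators driven by a single latent factor
f = rng.standard_normal((500, 1))
data = 0.7 * f + 0.5 * rng.standard_normal((500, 6))
print(parallel_analysis(data))  # recovers one dimension
```

The article's question is whether this same comparison remains valid when the indicators are binary rather than continuous.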

Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2007

The impact of outliers on Cronbach's coefficient [alpha] has not been documented in the psychometric or statistical literature. This is an important gap because coefficient [alpha] is the most widely used measurement statistic in all of the social, educational, and health sciences. The impact of outliers on coefficient [alpha] is investigated for…

Descriptors: Psychometrics, Computation, Reliability, Monte Carlo Methods
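The sensitivity of coefficient [alpha] to outliers discussed above is easy to demonstrate numerically. A self-contained sketch with invented toy data (not the article's simulation design):

```python
def cronbach_alpha(rows):
    """Cronbach's coefficient alpha from score rows (one list of item
    scores per person), using population variances."""
    k = len(rows[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([r[j] for r in rows]) for j in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

clean = [[3, 3, 4], [4, 4, 4], [2, 2, 3], [5, 4, 5], [3, 4, 3]]
with_outlier = clean + [[5, 1, 5]]  # one inconsistent response pattern

print(round(cronbach_alpha(clean), 3))         # 0.882
print(round(cronbach_alpha(with_outlier), 3))  # 0.439 - alpha drops sharply
```

A single aberrant respondent cuts the reliability estimate roughly in half here, which is the kind of impact the article investigates systematically.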

Wilcox, Rand R. – Educational and Psychological Measurement, 2006

Consider the nonparametric regression model Y = m(X) + [tau](X)[epsilon], where X and [epsilon] are independent random variables, [epsilon] has a median of zero and variance [sigma][squared], [tau] is some unknown function used to model heteroscedasticity, and m(X) is an unknown function reflecting some conditional measure of location associated…

Descriptors: Nonparametric Statistics, Mathematical Models, Regression (Statistics), Probability
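Because the error in the model above has median zero, a local-median smoother recovers m(X) even under heteroscedastic [tau](X). A simple simulation sketch (the estimator and all values are illustrative, not Wilcox's method):

```python
import random
import statistics

random.seed(1)

def running_median(xs, ys, grid, h):
    """Estimate m(x) at each grid point by the median of Y values whose
    X lies within a window of half-width h; the median is robust to the
    heteroscedastic error tau(X)*eps because eps has median zero."""
    fits = []
    for g in grid:
        window = [y for x, y in zip(xs, ys) if abs(x - g) <= h]
        fits.append(statistics.median(window) if window else float("nan"))
    return fits

# Simulate Y = m(X) + tau(X)*eps with m(x) = 2x and tau(x) = 1 + |x|
xs = [random.uniform(-2, 2) for _ in range(2000)]
ys = [2 * x + (1 + abs(x)) * random.gauss(0, 1) for x in xs]
print(running_median(xs, ys, grid=[-1, 0, 1], h=0.3))  # near [-2, 0, 2]
```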

Graham, James M. – Educational and Psychological Measurement, 2006

Coefficient alpha, the most commonly used estimate of internal consistency, is often considered a lower bound estimate of reliability, though the extent of its underestimation is not typically known. Many researchers are unaware that coefficient alpha is based on the essentially tau-equivalent measurement model. It is the violation of the…

Descriptors: Models, Test Theory, Reliability, Structural Equation Models
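The tau-equivalence point in the abstract above can be shown with a one-factor model: when loadings are equal, coefficient alpha equals the composite reliability [omega]; when they are unequal (a congeneric model), alpha underestimates it. A sketch with hypothetical loadings and error variances:

```python
def implied_cov(loadings, error_vars):
    """Covariance matrix implied by a one-factor model:
    Sigma = lambda * lambda' + diag(theta)."""
    k = len(loadings)
    return [[loadings[i] * loadings[j] + (error_vars[i] if i == j else 0.0)
             for j in range(k)] for i in range(k)]

def alpha_from_cov(S):
    """Coefficient alpha computed from a covariance matrix."""
    k = len(S)
    total = sum(sum(row) for row in S)
    diag = sum(S[i][i] for i in range(k))
    return k / (k - 1) * (1 - diag / total)

def omega(loadings, error_vars):
    """True composite reliability: (sum lambda)^2 / ((sum lambda)^2 + sum theta)."""
    s = sum(loadings) ** 2
    return s / (s + sum(error_vars))

# Tau-equivalent items (equal loadings): alpha equals omega.
print(alpha_from_cov(implied_cov([0.7] * 4, [0.5] * 4)), omega([0.7] * 4, [0.5] * 4))

# Congeneric items (unequal loadings): alpha < omega.
lam, th = [0.9, 0.7, 0.5, 0.3], [0.5] * 4
print(alpha_from_cov(implied_cov(lam, th)), omega(lam, th))
```

The gap between the two values in the second case is the extent of underestimation the article quantifies.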

Rupp, Andre A.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2004

Based on seminal work by Lord and Hambleton, Swaminathan, and Rogers, this article is an analytical, graphical, and conceptual reminder that item response theory (IRT) parameter invariance only holds for perfect model fit in multiple populations or across multiple conditions and is thus an ideal state. In practice, one attempts to quantify the…

Descriptors: Correlation, Item Response Theory, Statistical Analysis, Evaluation Methods

Peer reviewed

Turner, Charles F. – Educational and Psychological Measurement, 1975

Methods for the path analysis of complex (i.e., not fully recursive) causal models are briefly discussed. A computer program which simplifies analysis of such models and provides an option for automatically deleting marginal paths is described. (Author)

Descriptors: Computer Programs, Mathematical Models, Path Analysis

Peer reviewed

van der Kamp, Leo J. Th.; Mellenbergh, Gideon J. – Educational and Psychological Measurement, 1976

Jöreskog's model of congeneric tests is used to analyze agreement between raters. Raters are treated as measuring instruments. The model of congeneric tests, of which classical parallelism and tau-equivalence are shown to be special cases, is applied to teachers' ratings of students' responses on open-ended questions. (Author/RC)

Descriptors: Goodness of Fit, Mathematical Models, Rating Scales, Statistical Analysis

Peer reviewed

Werts, C. E.; And Others – Educational and Psychological Measurement, 1976

A procedure is presented for the analysis of rating data with correlated intrajudge and uncorrelated interjudge measurement errors. Correlations between true scores on different rating dimensions, reliabilities for each judge on each dimension and correlations between intrajudge errors can be estimated given a minimum of three raters and two…

Descriptors: Correlation, Data Analysis, Error of Measurement, Error Patterns

Peer reviewed

Whitely, Susan E.; Dawis, Rene V. – Educational and Psychological Measurement, 1976

Systematically investigates the effects of test context on verbal analogy item difficulty, in terms of both simple percentage correct and easiness estimates from a parameter-invariant model (Rasch, 1960). (RC)

Descriptors: Analysis of Variance, High School Students, Item Analysis, Mathematical Models

Peer reviewed

Halperin, Silas – Educational and Psychological Measurement, 1976

Component analysis provides an attractive alternative to factor analysis, since component scores are easily determined while factor scores can only be estimated. The correct method of determining component scores is presented as well as several illustrations of how commonly used incorrect methods distort the meaning of the component solution. (RC)

Descriptors: Factor Analysis, Mathematical Models, Matrices, Scores
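Halperin's point above is that component scores, unlike factor scores, are determined exactly: projecting the centered data onto the eigenvectors of the covariance matrix involves no estimation step. A minimal sketch with random data (illustrative, not the article's method):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((100, 4))
X -= X.mean(axis=0)  # center the variables

# Principal components: eigenvectors of the covariance matrix,
# ordered so the largest-variance component comes first.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
W = eigvecs[:, order]

scores = X @ W  # exact component scores - no estimation needed

# The scores are mutually uncorrelated, with variances equal to the
# eigenvalues of the covariance matrix.
print(np.round(np.cov(scores, rowvar=False), 6))
```

A factor-analytic model, by contrast, leaves the factor scores indeterminate, which is why they can only be estimated.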