Publication Date
| Date Range | Count |
| --- | --- |
| In 2015 | 0 |
| Since 2014 | 1 |
| Since 2011 (last 5 years) | 2 |
| Since 2006 (last 10 years) | 2 |
| Since 1996 (last 20 years) | 4 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Criterion Referenced Tests | 5 |
| Test Construction | 4 |
| Test Reliability | 4 |
| Computer Programs | 3 |
| Item Banks | 3 |
| Item Response Theory | 3 |
| Latent Trait Theory | 3 |
| Models | 3 |
| Test Items | 3 |
| Test Length | 3 |
Author
| Author | Count |
| --- | --- |
| Hambleton, Ronald K. | 12 |
| Allalouf, Avi | 1 |
| Cook, Linda L. | 1 |
| De Gruijter, Dato N. M. | 1 |
| Eignor, Daniel R. | 1 |
| Keller, Lisa A. | 1 |
| Liang, Tie | 1 |
| Meara, Kevin | 1 |
| Novick, Melvin R. | 1 |
| Sheehan, Daniel S. | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Journal Articles | 9 |
| Reports - Research | 6 |
| Reports - Evaluative | 1 |
| Speeches/Meeting Papers | 1 |
Showing all 12 results
Liang, Tie; Wells, Craig S.; Hambleton, Ronald K. – Journal of Educational Measurement, 2014
As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…
Descriptors: Item Response Theory, Measurement Techniques, Nonparametric Statistics, Models
Keller, Lisa A.; Hambleton, Ronald K. – Journal of Educational Measurement, 2013
Because recent research on equating methodologies indicates that some methods may be more susceptible to the accumulation of equating error over multiple administrations, this study investigated the sustainability of several item response theory equating methods over time. In particular, the paper focuses on two equating methodologies: fixed common…
Descriptors: Item Response Theory, Scaling, Test Format, Equated Scores
Peer reviewed: Wainer, Howard; Hambleton, Ronald K.; Meara, Kevin – Journal of Educational Measurement, 1999
Redesigned five displays for communicating results from the National Assessment of Educational Progress (NAEP) 1994 reading assessment using principles developed by H. Wainer (1997). Responses from 10 educators show more accurate and faster answers to questions asked after the redesign. Discusses implications for information communication in…
Descriptors: Administrators, Diffusion (Communication), Elementary Secondary Education, Information Dissemination
Peer reviewed: Allalouf, Avi; Hambleton, Ronald K.; Sireci, Stephen G. – Journal of Educational Measurement, 1999
Focused on whether differential item functioning (DIF) is related to item type in translated test items and on the causes of DIF, using data from an Israeli college entrance test in Hebrew and a Russian translation. Results from 24,304 college applicants indicate that 34% of the items functioned differentially across languages. (SLD)
Descriptors: College Applicants, College Entrance Examinations, Foreign Countries, Hebrew
Peer reviewed: Hambleton, Ronald K. – Journal of Educational Measurement, 1978
The use of cut-off scores with criterion referenced tests is defended in this response to two papers by Gene Glass and Nancy Burton. Suggestions for setting cut-off scores are made. (JKS)
Descriptors: Academic Standards, Criterion Referenced Tests, Cutting Scores, Decision Making
Peer reviewed: Hambleton, Ronald K.; Eignor, Daniel R. – Journal of Educational Measurement, 1978
A set of guidelines for evaluating criterion-referenced tests is presented. Additionally, 11 sets of extant criterion-referenced tests are evaluated using these guidelines. (JKS)
Descriptors: Achievement Tests, Criterion Referenced Tests, Evaluation Criteria, Guidelines
Peer reviewed: Hambleton, Ronald K.; Cook, Linda L. – Journal of Educational Measurement, 1977
This article presents a non-mathematical introduction to latent trait test models and some of their features. Latent trait models are compared to classical test models. Two promising applications of latent trait models and available computer programs are discussed. (Author/JKS)
Descriptors: Computer Programs, Latent Trait Theory, Measurement, Models
Peer reviewed: Hambleton, Ronald K.; De Gruijter, Dato N. M. – Journal of Educational Measurement, 1983
Addressing the shortcomings of classical item statistics for selecting criterion-referenced test items, this paper describes an optimal item selection procedure utilizing item response theory (IRT) and offers examples in which random selection and optimal item selection methods are compared. Theoretical advantages of optimal selection based upon…
Descriptors: Criterion Referenced Tests, Cutting Scores, Item Banks, Latent Trait Theory
Peer reviewed: Hambleton, Ronald K.; And Others – Journal of Educational Measurement, 1983
A new method was developed to assist in the selection of a test length by utilizing computer simulation procedures and item response theory. A demonstration of the method presents results which address the influences of item pool heterogeneity matched to the objectives of interest and the method of item selection. (Author/PN)
Descriptors: Computer Programs, Criterion Referenced Tests, Item Banks, Latent Trait Theory
Peer reviewed: Hambleton, Ronald K.; Novick, Melvin R. – Journal of Educational Measurement, 1973
In this paper, an attempt has been made to synthesize some of the current thinking in the area of criterion-referenced testing as well as to provide the beginning of an integration of theory and method for such testing. (Editor)
Descriptors: Bayesian Statistics, Criterion Referenced Tests, Decision Making, Definitions
Peer reviewed: Hambleton, Ronald K.; And Others – Journal of Educational Measurement, 1993
Item parameter estimation errors in test development are highlighted. The problem is illustrated with several simulated data sets, and a conservative solution is offered for addressing the problem in item response theory test development practice. Steps that reduce the problem of capitalizing on chance in item selections are suggested. (SLD)
Descriptors: Computer Simulation, Error of Measurement, Estimation (Mathematics), Item Banks
Peer reviewed: Sheehan, Daniel S.; Hambleton, Ronald K. – Journal of Educational and Psychological Measurement, 1974
Descriptors: Computer Programs, Scoring, Scoring Formulas, Tests
