Peer reviewed
Mueller, Daniel J.; Wasser, Virginia – Journal of Educational Measurement, 1977
Eighteen studies of the effects of changing initial answers to objective test items are reviewed. While students throughout the total test score range tended to gain more points than they lost, higher-scoring students gained more than lower-scoring students did. Suggestions for further research are made. (Author/JKS)
Descriptors: Guessing (Tests), Literature Reviews, Multiple Choice Tests, Objective Tests
Peer reviewed
Jensema, Carl – Educational and Psychological Measurement, 1976
A simple and economical method for estimating initial parameter values for the normal ogive or logistic latent trait mental test model is outlined. The accuracy of the method in comparison with maximum likelihood estimation is investigated through the use of Monte-Carlo data. (Author)
Descriptors: Guessing (Tests), Item Analysis, Latent Trait Theory, Measurement Techniques
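The logistic latent trait model referenced in the Jensema abstract can be illustrated with a minimal sketch. This shows a generic two-parameter logistic (2PL) item response function and Monte Carlo response generation with illustrative parameter values; it is not Jensema's specific initial-estimation procedure.

```python
import math
import random

def logistic_p(theta, a, b):
    """Probability of a correct response under the 2PL logistic model.
    theta: examinee ability; a: item discrimination; b: item difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def simulate_responses(thetas, a, b, rng):
    """Monte Carlo data: one 0/1 response per examinee for a single item."""
    return [1 if rng.random() < logistic_p(t, a, b) else 0 for t in thetas]

# Illustrative values: abilities drawn from a standard normal population.
rng = random.Random(0)
thetas = [rng.gauss(0.0, 1.0) for _ in range(5000)]
responses = simulate_responses(thetas, a=1.2, b=0.5, rng=rng)
# The observed proportion correct approximates the model-implied average,
# which is the basis for simple moment-style starting values.
```

Simulated data of this kind is what such Monte Carlo accuracy comparisons (initial estimates versus maximum likelihood) are run against.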
Peer reviewed
Fischer, Frederick E. – Journal of Educational Measurement, 1970
The personal-biserial index is a correlation which measures the relationship between the difficulty of the items in a test for the person, as evidenced by his passes and failures, and the difficulty of the items as evidenced by group-determined item difficulties. Reliability and predictive validity are studied. (Author/RF)
Descriptors: Guessing (Tests), Item Analysis, Predictive Measurement, Predictor Variables
Peer reviewed
Duncan, Carl P. – American Journal of Psychology, 1970
Descriptors: Error Patterns, Guessing (Tests), Problem Solving, Responses
Peer reviewed
Cressie, Noel; Holland, Paul W. – Psychometrika, 1983
The problem of characterizing the manifest probabilities of a latent trait model is considered. The approach taken here differs from the standard approach in that a population of examinees is being considered as opposed to a single examinee. Particular attention is given to the Rasch model. (Author/JKS)
Descriptors: Guessing (Tests), Item Analysis, Latent Trait Theory, Mathematical Models
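The population view taken in the Cressie and Holland abstract — manifest probabilities as conditional pattern probabilities averaged over an ability distribution — can be sketched for the Rasch model. The item difficulties, ability points, and weights below are illustrative assumptions, not values from the paper.

```python
import math
from itertools import product

def rasch_p(theta, b):
    """Rasch model: probability of a correct response given ability theta
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def manifest_probabilities(bs, thetas, weights):
    """Manifest probability of each 0/1 response pattern: average the
    conditional (given theta) pattern probabilities over a discrete
    ability distribution -- a population of examinees rather than a
    single examinee."""
    manifest = {}
    for pattern in product((0, 1), repeat=len(bs)):
        p = 0.0
        for theta, w in zip(thetas, weights):
            cond = 1.0
            for x, b in zip(pattern, bs):
                pc = rasch_p(theta, b)
                cond *= pc if x == 1 else (1.0 - pc)
            p += w * cond
        manifest[pattern] = p
    return manifest

# Two items and a three-point ability distribution (illustrative values).
probs = manifest_probabilities(bs=[-0.5, 0.5],
                               thetas=[-1.0, 0.0, 1.0],
                               weights=[0.25, 0.5, 0.25])
# The manifest probabilities sum to 1 over all response patterns.
```

Characterizing which sets of such pattern probabilities are consistent with a latent trait model is the problem the paper addresses.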
Peer reviewed
Hutchinson, T. P. – Contemporary Educational Psychology, 1980
In scoring multiple-choice tests, a score of 1 is given to right answers, 0 to unanswered questions, and some negative score to wrong answers. This paper discusses the relation of this negative score to the assumption made about the partial knowledge which the subjects may have. (Author/GDC)
Descriptors: Guessing (Tests), Knowledge Level, Multiple Choice Tests, Scoring Formulas
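The scoring rule described in the Hutchinson abstract can be sketched concretely. The penalty used here, 1/(k-1) for a k-option item, is the standard correction-for-guessing choice, not necessarily the specific negative score the paper analyzes.

```python
def formula_score(num_right, num_wrong, k):
    """Classical correction-for-guessing (formula) score: R - W/(k-1),
    where k is the number of options per item. Omitted items score 0.
    Under purely random guessing, each guessed item's expected
    contribution is zero."""
    return num_right - num_wrong / (k - 1)

# Illustrative 4-option test: 10 right, 6 wrong, 4 omitted.
score = formula_score(10, 6, k=4)  # 10 - 6/3 = 8.0
```

With this penalty, a blind guesser who answers four 4-option items and gets one right (1 - 3/3 = 0) gains nothing in expectation; a larger penalty assumes less partial knowledge, a smaller one assumes more.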
Hasher, Lynn; And Others – Journal of Verbal Learning and Verbal Behavior, 1977
Subjects rated how certain they were that each of 60 statements was true or false. Embedded in the list was a set of statements that were either repeated across several sessions or were not repeated. Frequency of occurrence is apparently a criterion used to establish referential validity of plausible statements. (CHK)
Descriptors: Guessing (Tests), Memory, Recall (Psychology), Test Validity
Peer reviewed
Leinhardt, Gaea; Schwarz, Baruch B. – Cognition and Instruction, 1997
Examines guessing as a heuristic for problem-solving presented in a taped lesson by George Polya. Analogical models transformed a complex problem to a simpler one and maintained problem identification. Instructional explanations fulfilled two goals simultaneously: (1) teach students how to use guessing as a problem-solving strategy to solve the…
Descriptors: Constructivism (Learning), Guessing (Tests), Mathematics Instruction, Metacognition
Ma, Lili; Lillard, Angeline S. – Child Development, 2006
This study examined 2- to 3-year-olds' ability to make a pretend-real distinction in the absence of content cues. Children watched two actors side by side. One was really eating, and the other was pretending to eat, but in neither case was information about content available. Following the displays, children were asked to retrieve the real food…
Descriptors: Young Children, Cues, Visual Discrimination, Food
Peer reviewed
Albanese, Mark A. – Journal of Educational Measurement, 1988
Estimates of the effects of use of formula scoring on the individual examinee's score are presented. Results for easy, moderate, and hard tests are examined. Using test characteristics from several studies shows that some examinees would increase scores substantially if they were to answer items omitted under formula directions. (SLD)
Descriptors: Difficulty Level, Guessing (Tests), Scores, Scoring Formulas
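The Albanese result — that some examinees would gain by answering items omitted under formula directions — can be illustrated with the expected score change per answered item. This is a hedged sketch assuming the standard 1/(k-1) wrong-answer penalty; the probabilities are illustrative.

```python
def expected_gain_per_item(p_correct, k):
    """Expected formula-score change from answering one previously
    omitted item on a k-option test, given probability p_correct of
    getting it right (wrong answers are penalized 1/(k-1))."""
    return p_correct * 1.0 - (1.0 - p_correct) / (k - 1)

# Pure random guessing on a 4-option item: expected gain is zero.
zero_gain = expected_gain_per_item(1 / 4, k=4)     # 0.25 - 0.75/3 = 0.0
# Partial knowledge (two options eliminated, guess between the rest):
partial_gain = expected_gain_per_item(1 / 2, k=4)  # 0.5 - 0.5/3 = 1/3
```

Any partial knowledge pushes the expected gain above zero, which is why examinees who omit items under formula directions can leave points on the table.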
Peer reviewed
Wilcox, Rand R. – Journal of Educational Measurement, 1982
A new model for measuring misinformation is suggested. A modification of Wilcox's strong true-score model, to be used in certain situations, is indicated, since it solves the problem of correcting for guessing without assuming guessing is random. (Author/GK)
Descriptors: Achievement Tests, Guessing (Tests), Mathematical Models, Scoring Formulas
Peer reviewed
Hsu, Louis M. – Educational and Psychological Measurement, 1979
Though the Paired-Item-Score (Eakin and Long) (EJ 174 780) method of scoring true-false tests has certain advantages over the traditional scoring methods (percentage right and right minus wrong), these advantages are attained at the cost of a larger risk of misranking the examinees. (Author/BW)
Descriptors: Comparative Analysis, Guessing (Tests), Objective Tests, Probability
Peer reviewed
Linn, Robert L. – Educational and Psychological Measurement, 1976
Testing procedures in which testees assign probabilities of correctness to all multiple-choice alternatives are examined. Two basic assumptions in these procedures are reviewed. Empirical examinee response data are examined, and it is suggested that these assumptions should not be taken lightly in empirical studies of personal probability…
Descriptors: Confidence Testing, Guessing (Tests), Measurement Techniques, Multiple Choice Tests
Peer reviewed
Chapman, Michael; McBride, Michelle L. – Developmental Psychology, 1992
Children of 4 to 10 years of age were given 2 class inclusion tasks. Younger children's performance was inflated by guessing. Scores were higher in the marked task than in the unmarked task as a result of differing rates of inclusion logic. Children's verbal justifications closely approximated estimates of their true competence. (GLR)
Descriptors: Children, Competence, Evaluative Thinking, Guessing (Tests)
Peer reviewed
Brown, Jonathan R. – Psychology in the Schools, 1992
Notes that, by guessing, children may score within normal range on tests by chance alone. Describes one process, random guessing, for estimating "true blind guessing score" (range of scores) that, if known, would result in missing fewer at-risk children. Sensitizes test administrators to tests that do not address or have suspicious corrections for…
Descriptors: At Risk Persons, Guessing (Tests), Identification, Test Use
