| Publication Date | Results |
| --- | --- |
| In 2015 | 1 |
| Since 2014 | 2 |
| Since 2011 (last 5 years) | 3 |
| Since 2006 (last 10 years) | 6 |
| Since 1996 (last 20 years) | 11 |
| Descriptor | Results |
| --- | --- |
| Achievement Tests | 91 |
| Test Validity | 22 |
| Test Construction | 18 |
| Scores | 17 |
| Standardized Tests | 17 |
| Test Reliability | 16 |
| Test Items | 15 |
| Testing Problems | 15 |
| Elementary Education | 13 |
| Latent Trait Theory | 13 |
| Author | Results |
| --- | --- |
| Linn, Robert L. | 4 |
| Yen, Wendy M. | 4 |
| Forsyth, Robert A. | 3 |
| Phillips, S. E. | 3 |
| Birenbaum, Menucha | 2 |
| Choi, Seung W. | 2 |
| Hoover, H. D. | 2 |
| Kim, Dong-In | 2 |
| Loyd, Brenda H. | 2 |
| Mehrens, William A. | 2 |
| Publication Type | Results |
| --- | --- |
| Journal Articles | 69 |
| Reports - Research | 47 |
| Reports - Evaluative | 14 |
| Information Analyses | 5 |
| Opinion Papers | 4 |
| Book/Product Reviews | 2 |
| Speeches/Meeting Papers | 2 |
| Reports - General | 1 |
| Education Level | Results |
| --- | --- |
| Grade 8 | 1 |
| High Schools | 1 |
| Secondary Education | 1 |
| Audience | Results |
| --- | --- |
| Researchers | 4 |
| Practitioners | 2 |
Showing 1 to 15 of 91 results
Sinharay, Sandip; Wan, Ping; Choi, Seung W.; Kim, Dong-In – Journal of Educational Measurement, 2015
With an increase in the number of online tests, the number of interruptions during testing due to unexpected technical issues seems to be on the rise. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. Researchers…
Descriptors: Computer Assisted Testing, Testing Problems, Scores, Statistical Analysis
Sinharay, Sandip; Wan, Ping; Whitaker, Mike; Kim, Dong-In; Zhang, Litong; Choi, Seung W. – Journal of Educational Measurement, 2014
With an increase in the number of online tests, interruptions during testing due to unexpected technical issues seem unavoidable. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. There is a lack of research on this…
Descriptors: Computer Assisted Testing, Testing Problems, Scores, Regression (Statistics)
Debeer, Dries; Janssen, Rianne – Journal of Educational Measurement, 2013
Changing the order of items between alternate test forms to prevent copying and to enhance test security is a common practice in achievement testing. However, these changes in item order may affect item and test characteristics. Several procedures have been proposed for studying these item-order effects. The present study explores the use of…
Descriptors: Item Response Theory, Test Items, Test Format, Models
Zwick, Rebecca; Greif Green, Jennifer – Journal of Educational Measurement, 2007
In studies of the SAT, correlations of SAT scores, high school grades, and socioeconomic status (SES) are usually obtained using a university as the unit of analysis. This approach obscures an important structural aspect of the data: The high school grades received by a given institution come from a large number of high schools, all of which have…
Descriptors: Organizations (Groups), High School Students, Grades (Scholastic), Grading
Monahan, Patrick O.; Lee, Won-Chan; Ankenmann, Robert D. – Journal of Educational Measurement, 2007
A Monte Carlo simulation technique for generating dichotomous item scores is presented that implements (a) a psychometric model with different explicit assumptions than traditional parametric item response theory (IRT) models, and (b) item characteristic curves without restrictive assumptions concerning mathematical form. The four-parameter beta…
Descriptors: True Scores, Psychometrics, Monte Carlo Methods, Correlation
Wise, Steven L.; DeMars, Christine E. – Journal of Educational Measurement, 2006
The validity of inferences based on achievement test scores is dependent on the amount of effort that examinees put forth while taking the test. With low-stakes tests, for which this problem is particularly prevalent, there is a consequent need for psychometric models that can take into account differing levels of examinee effort. This article…
Descriptors: Guessing (Tests), Psychometrics, Inferences, Reaction Time
Peer reviewed
Sykes, Robert C.; Yen, Wendy M. – Journal of Educational Measurement, 2000
Investigated how well the generalized and Rasch models described item and test performance across a broad range of mixed-item-format test configurations (six tests from two state proficiency testing programs). Evaluating the impact of model assumptions on the predictions of item and test information permitted a delineation of the implications of…
Descriptors: Achievement Tests, Elementary Secondary Education, Prediction, Scaling
Peer reviewed
Williams, Valerie S. L.; Pommerich, Mary; Thissen, David – Journal of Educational Measurement, 1998
Created a developmental scale for the North Carolina End-of-Grade Mathematics Tests using a subset of identical test forms administered to adjacent grade levels with Thurstone scaling and Item Response Theory methods. Discusses differences in patterns produced. (Author/SLD)
Descriptors: Achievement Tests, Child Development, Comparative Analysis, Elementary Secondary Education
Peer reviewed
Williams, Valerie S. L.; Rosa, Kathleen Rees; McLeod, Lori D.; Thissen, David; Sanford, Eleanor E. – Journal of Educational Measurement, 1998
Uses data from the North Carolina End-of-Grade test of eighth-grade mathematics to estimate the achievement results on the scale of the National Assessment of Educational Progress (NAEP) Trial State Assessment. Uses linear regression models to develop projection equations to predict state NAEP results. (SLD)
Descriptors: Achievement Tests, Grade 8, Junior High Schools, Middle Schools
Peer reviewed
Waltman, Kristie K. – Journal of Educational Measurement, 1997
A socially moderated link was established between statewide achievement results and the National Assessment of Educational Progress (NAEP) by using the same achievement level descriptions in an Iowa Tests of Basic Skills standard-setting study and an NAEP standard-setting study. A statistically moderated link was established through an equipercentile…
Descriptors: Academic Achievement, Achievement Tests, Equated Scores, National Surveys
Peer reviewed
Lane, Suzanne; And Others – Journal of Educational Measurement, 1996
Evidence from test results of 3,604 sixth and seventh graders is provided for the generalizability and validity of the Quantitative Understanding: Amplifying Student Achievement and Reasoning (QUASAR) Cognitive Assessment Instrument, which is designed to measure program outcomes and growth in mathematics. (SLD)
Descriptors: Achievement Tests, Cognitive Processes, Elementary Education, Elementary School Students
Peer reviewed
Feldt, Leonard S.; Forsyth, Robert A. – Journal of Educational Measurement, 1974
The net effect of the conditions under which tests are taken was empirically investigated using the scores obtained by high school students on an English and a mathematics test. (Author/BB)
Descriptors: Achievement Tests, Context Effect, English, Item Sampling
Peer reviewed
Beck, Michael D. – Journal of Educational Measurement, 1974
An assessment of the differential effect of two pupil response procedures is presented. The two groups, half responding in test booklets, half using answer sheets, were matched by grade and general scholastic aptitude. The score reliabilities did not differ significantly for either pupil response mode. (Author/BB)
Descriptors: Achievement Tests, Answer Sheets, Elementary School Students, Responses
Peer reviewed
Pohlmann, John T.; Beggs, Donald L. – Journal of Educational Measurement, 1974
Descriptors: Academic Achievement, Achievement Tests, Attitude Measures, Graduate Students
Peer reviewed
Washington, William N.; Godfrey, R. Richard – Journal of Educational Measurement, 1974
Item statistics for illustrated and written items drawn from the same content areas were compared using F ratios. The results indicated that illustrated items performed slightly better than matched written items and that the best-performing category of illustrated items was tables. (Author/BB)
Descriptors: Achievement Tests, Illustrations, Test Construction, Test Items

