50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 1 to 15 of 91 results
Peer reviewed
Direct link
Sinharay, Sandip; Wan, Ping; Choi, Seung W.; Kim, Dong-In – Journal of Educational Measurement, 2015
With an increase in the number of online tests, the number of interruptions during testing due to unexpected technical issues seems to be on the rise. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. Researchers…
Descriptors: Computer Assisted Testing, Testing Problems, Scores, Statistical Analysis
Peer reviewed
Direct link
Sinharay, Sandip; Wan, Ping; Whitaker, Mike; Kim, Dong-In; Zhang, Litong; Choi, Seung W. – Journal of Educational Measurement, 2014
With an increase in the number of online tests, interruptions during testing due to unexpected technical issues seem unavoidable. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. There is a lack of research on this…
Descriptors: Computer Assisted Testing, Testing Problems, Scores, Regression (Statistics)
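Both Sinharay et al. entries above concern assessing the impact of test interruptions, with regression among the tools listed in the descriptors. As an illustration only, not the authors' procedure, here is a minimal sketch of a regression-based check: predict post-interruption performance from pre-interruption performance using uninterrupted examinees, then inspect the interrupted examinees' residuals. All data and numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic section scores before/after the point where interruptions occurred.
pre = rng.normal(50, 10, size=1000)
post = 0.8 * pre + rng.normal(0, 5, size=1000)
interrupted = rng.random(1000) < 0.05            # flags interrupted examinees

# Fit the prediction line on uninterrupted examinees only.
slope, intercept = np.polyfit(pre[~interrupted], post[~interrupted], 1)
resid = post - (intercept + slope * pre)
sd = resid[~interrupted].std(ddof=1)

# Strongly negative standardized residuals for the interrupted group would
# suggest the interruption depressed scores.
z = resid[interrupted] / sd
print(f"Mean standardized residual, interrupted group: {z.mean():.2f}")
```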
Peer reviewed
Direct link
Debeer, Dries; Janssen, Rianne – Journal of Educational Measurement, 2013
Changing the order of items between alternate test forms to prevent copying and to enhance test security is a common practice in achievement testing. However, these changes in item order may affect item and test characteristics. Several procedures have been proposed for studying these item-order effects. The present study explores the use of…
Descriptors: Item Response Theory, Test Items, Test Format, Models
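As a hedged illustration of the item-order effects the Debeer and Janssen abstract refers to (not the models the paper itself explores), the sketch below compares an item's proportion correct when alternate forms place it early versus late; all proportions are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented true proportions correct for the same item placed early vs. late.
early = rng.random(400) < 0.70   # form A: item appears near the start
late = rng.random(400) < 0.62    # form B: item appears near the end

# A positive difference beyond sampling error is an item-position effect.
diff = early.mean() - late.mean()
se = np.sqrt(early.mean() * (1 - early.mean()) / 400
             + late.mean() * (1 - late.mean()) / 400)
print(f"Position effect: {diff:.3f} (SE {se:.3f})")
```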
Peer reviewed
Direct link
Zwick, Rebecca; Greif Green, Jennifer – Journal of Educational Measurement, 2007
In studies of the SAT, correlations of SAT scores, high school grades, and socioeconomic status (SES) are usually obtained using a university as the unit of analysis. This approach obscures an important structural aspect of the data: The high school grades received by a given institution come from a large number of high schools, all of which have…
Descriptors: Organizations (Groups), High School Students, Grades (Scholastic), Grading
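The structural point in the Zwick and Green abstract, that pooling students across high schools can change a correlation, can be shown with synthetic clustered data; this sketch is illustrative only and is not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
n_schools, per = 50, 40

# Synthetic data in which SES and grades are related only between schools.
shift = rng.normal(0, 1, n_schools)
ses = np.repeat(shift, per) + rng.normal(0, 0.5, n_schools * per)
gpa = 0.8 * np.repeat(shift, per) + rng.normal(0, 1, n_schools * per)

# Total correlation mixes between- and within-school variation.
r_total = np.corrcoef(ses, gpa)[0, 1]

# Pooled within-school correlation: center each variable within its school.
ses_w = ses - np.repeat(ses.reshape(n_schools, per).mean(1), per)
gpa_w = gpa - np.repeat(gpa.reshape(n_schools, per).mean(1), per)
r_within = np.corrcoef(ses_w, gpa_w)[0, 1]

print(f"total r = {r_total:.2f}, pooled within-school r = {r_within:.2f}")
```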
Peer reviewed
Direct link
Monahan, Patrick O.; Lee, Won-Chan; Ankenmann, Robert D. – Journal of Educational Measurement, 2007
A Monte Carlo simulation technique for generating dichotomous item scores is presented that implements (a) a psychometric model with different explicit assumptions than traditional parametric item response theory (IRT) models, and (b) item characteristic curves without restrictive assumptions concerning mathematical form. The four-parameter beta…
Descriptors: True Scores, Psychometrics, Monte Carlo Methods, Correlation
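For contrast with the four-parameter-beta approach the Monahan, Lee, and Ankenmann abstract describes, here is a minimal Monte Carlo generator of dichotomous item scores under a standard parametric (2PL) IRT model; the paper's technique replaces exactly this kind of restrictive item characteristic curve. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n_examinees, n_items = 500, 20

theta = rng.normal(0, 1, n_examinees)    # abilities
a = rng.uniform(0.5, 2.0, n_items)       # discriminations
b = rng.normal(0, 1, n_items)            # difficulties

# P(correct) from the 2PL item characteristic curve, then Bernoulli draws.
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
scores = (rng.random((n_examinees, n_items)) < p).astype(int)

print(scores.shape, scores.mean())       # overall proportion correct
```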
Peer reviewed
Direct link
Wise, Steven L.; DeMars, Christine E. – Journal of Educational Measurement, 2006
The validity of inferences based on achievement test scores is dependent on the amount of effort that examinees put forth while taking the test. With low-stakes tests, for which this problem is particularly prevalent, there is a consequent need for psychometric models that can take into account differing levels of examinee effort. This article…
Descriptors: Guessing (Tests), Psychometrics, Inferences, Reaction Time
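A rough sketch of the general idea behind an effort-moderated model of the kind the Wise and DeMars abstract points to: responses faster than a time threshold are treated as rapid guesses with chance-level success, while deliberate responses follow the IRT curve. The threshold and parameters below are invented, and this is not the article's exact specification.

```python
import numpy as np

def effort_moderated_p(theta, a, b, c, rt, threshold, k=4):
    """P(correct) given ability, 3PL item parameters, and response time."""
    solution = rt >= threshold                   # solution-behavior indicator
    p_3pl = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
    return np.where(solution, p_3pl, 1.0 / k)   # rapid guess -> chance (1/k)

# A rapid (0.8 s) vs. a deliberate (12 s) response to the same item.
print(effort_moderated_p(0.5, 1.2, 0.0, 0.2, np.array([0.8, 12.0]), 3.0))
```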
Peer reviewed
Sykes, Robert C.; Yen, Wendy M. – Journal of Educational Measurement, 2000
Investigated how well the generalized and Rasch models described item and test performance across a broad range of mixed-item-format test configurations (six tests from two state proficiency testing programs). Evaluating the impact of model assumptions on the predictions of item and test information permitted a delineation of the implications of…
Descriptors: Achievement Tests, Elementary Secondary Education, Prediction, Scaling
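One way to see how model assumptions drive predicted item and test information, the comparison the Sykes and Yen abstract describes: under a Rasch-type model every item has the same discrimination, so information curves differ only in location, whereas freeing the discrimination changes how sharply information concentrates. The values below are invented.

```python
import numpy as np

def info(theta, a, b):
    """Item information a^2 * P * (1 - P) for a 2PL-type item."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

theta = np.linspace(-3, 3, 7)
print("Rasch-type (a = 1.0):", np.round(info(theta, 1.0, 0.0), 3))
print("2PL-type   (a = 1.8):", np.round(info(theta, 1.8, 0.0), 3))
```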
Peer reviewed
Williams, Valerie S. L.; Pommerich, Mary; Thissen, David – Journal of Educational Measurement, 1998
Created a developmental scale for the North Carolina End-of-Grade Mathematics Tests by administering a subset of identical test forms to adjacent grade levels and applying Thurstone scaling and Item Response Theory methods. Discusses differences in the patterns the two methods produced. (Author/SLD)
Descriptors: Achievement Tests, Child Development, Comparative Analysis, Elementary Secondary Education
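A much-simplified sketch of Thurstone scaling across adjacent grades, the first of the two methods named in the Williams, Pommerich, and Thissen abstract: transform each common item's proportion correct to an inverse-normal value within each grade, then use the linear relation between those values to place the grades on one developmental scale. The proportions are invented, and the operational procedure involves more steps.

```python
from statistics import NormalDist, mean, stdev

# Invented proportions correct on the same items in grades 3 and 4.
p_g3 = [0.40, 0.55, 0.62, 0.70, 0.48]
p_g4 = [0.52, 0.66, 0.71, 0.80, 0.60]

z3 = [NormalDist().inv_cdf(p) for p in p_g3]
z4 = [NormalDist().inv_cdf(p) for p in p_g4]

# The linear relation z4 = A*z3 + B links the two grades' scales.
A = stdev(z4) / stdev(z3)
B = mean(z4) - A * mean(z3)
print(f"grade-4 scale = {A:.2f} * grade-3 scale + {B:.2f}")
```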
Peer reviewed
Williams, Valerie S. L.; Rosa, Kathleen Rees; McLeod, Lori D.; Thissen, David; Sanford, Eleanor E. – Journal of Educational Measurement, 1998
Uses data from the North Carolina End-of-Grade test of eighth-grade mathematics to estimate the achievement results on the scale of the National Assessment of Educational Progress (NAEP) Trial State Assessment. Uses linear regression models to develop projection equations to predict state NAEP results. (SLD)
Descriptors: Achievement Tests, Grade 8, Junior High Schools, Middle Schools
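A hedged sketch of a projection link of the kind this abstract describes: fit a linear regression of NAEP-scale scores on state-test scores for a sample measured on both, then apply the fitted equation to project state results. All data below are synthetic, not North Carolina's.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic paired scores for examinees with both measures.
state = rng.normal(150, 10, 300)                   # state-test scale scores
naep = 1.5 * state + 40 + rng.normal(0, 8, 300)    # NAEP-scale scores

slope, intercept = np.polyfit(state, naep, 1)      # projection equation

def project(state_score):
    """Project a state-test score onto the NAEP scale."""
    return intercept + slope * state_score

print(f"state 160 -> projected NAEP {project(160):.1f}")
```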
Peer reviewed
Waltman, Kristie K. – Journal of Educational Measurement, 1997
A socially moderated link was established between statewide achievement results and the National Assessment of Educational Progress (NAEP) by using the same achievement level descriptions in an Iowa Tests of Basic Skills standard-setting study and an NAEP standard-setting study. A statistically moderated link was established through an equipercentile…
Descriptors: Academic Achievement, Achievement Tests, Equated Scores, National Surveys
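A minimal sketch of the equipercentile idea behind the statistically moderated link in the Waltman abstract: a score on test X is mapped to the score on test Y with the same percentile rank. The distributions below are synthetic, and operational linking would smooth them first.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(250, 30, 5000)   # synthetic scores on test X (e.g., a state test)
y = rng.normal(280, 25, 5000)   # synthetic scores on test Y (e.g., NAEP scale)

def equipercentile(score, x, y):
    """Map a test-X score to the test-Y score at the same percentile rank."""
    pr = (x < score).mean()
    return np.quantile(y, pr)

print(f"X = 260 links to Y = {equipercentile(260.0, x, y):.1f}")
```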
Peer reviewed
Lane, Suzanne; And Others – Journal of Educational Measurement, 1996
Evidence from test results of 3,604 sixth and seventh graders is provided for the generalizability and validity of the Quantitative Understanding: Amplifying Student Achievement and Reasoning (QUASAR) Cognitive Assessment Instrument, which is designed to measure program outcomes and growth in mathematics. (SLD)
Descriptors: Achievement Tests, Cognitive Processes, Elementary Education, Elementary School Students
Peer reviewed
Feldt, Leonard S.; Forsyth, Robert A. – Journal of Educational Measurement, 1974
The net effect of the conditions under which tests are taken was empirically investigated using the scores obtained by high school students on an English and a mathematics test. (Author/BB)
Descriptors: Achievement Tests, Context Effect, English, Item Sampling
Peer reviewed
Beck, Michael D. – Journal of Educational Measurement, 1974
An assessment of the differential effect of two pupil response procedures is presented. Two groups, one responding in test booklets and one using answer sheets, were matched by grade and general scholastic aptitude. The score reliabilities did not differ significantly between the two response modes. (Author/BB)
Descriptors: Achievement Tests, Answer Sheets, Elementary School Students, Responses
Peer reviewed
Pohlmann, John T.; Beggs, Donald L. – Journal of Educational Measurement, 1974
Descriptors: Academic Achievement, Achievement Tests, Attitude Measures, Graduate Students
Peer reviewed
Washington, William N.; Godfrey, R. Richard – Journal of Educational Measurement, 1974
Item statistics for illustrated and written items drawn from the same content areas were compared using F ratios. The results indicated that illustrated items performed slightly better than matched written items and that the best-performing category of illustrated items was tables. (Author/BB)
Descriptors: Achievement Tests, Illustrations, Test Construction, Test Items
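As a hedged illustration of the F-ratio comparison the Washington and Godfrey abstract mentions (a one-way ANOVA stands in here; the numbers are invented, not the study's data):

```python
from scipy import stats

# Invented item difficulties (proportion correct) by item format.
illustrated = [0.62, 0.70, 0.66, 0.73, 0.68]
written = [0.58, 0.61, 0.65, 0.60, 0.63]

f, p = stats.f_oneway(illustrated, written)
print(f"F = {f:.2f}, p = {p:.3f}")
```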
Previous Page | Next Page »
Pages: 1  |  2  |  3  |  4  |  5  |  6  |  7