Publication Date
| In 2015 | 0 |
| Since 2014 | 0 |
| Since 2011 (last 5 years) | 3 |
| Since 2006 (last 10 years) | 6 |
| Since 1996 (last 20 years) | 7 |
Descriptor
| College Entrance Examinations | 4 |
| Item Response Theory | 4 |
| Test Items | 4 |
| Scores | 3 |
| African American Students | 2 |
| College Students | 2 |
| Difficulty Level | 2 |
| Ethnicity | 2 |
| Grades (Scholastic) | 2 |
| Models | 2 |
Source
| Journal of Educational… | 7 |
Author
| Albano, Anthony D. | 1 |
| Bridgeman, Brent | 1 |
| Cline, Frederick | 1 |
| Culpepper, Steven A. | 1 |
| Davenport, Ernest C. | 1 |
| Himelfarb, Igor | 1 |
| Jin, Kuan-Yu | 1 |
| Meyer, J. Patrick | 1 |
| Qiu, Xue-Lan | 1 |
| Setzer, J. Carl | 1 |
Publication Type
| Journal Articles | 7 |
| Reports - Research | 4 |
| Reports - Evaluative | 2 |
| Reports - Descriptive | 1 |
Education Level
| Higher Education | 7 |
| Postsecondary Education | 4 |
| Elementary Secondary Education | 2 |
| High Schools | 1 |
| Middle Schools | 1 |
| Secondary Education | 1 |
Showing all 7 results
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
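The position-invariance assumption this abstract describes can be illustrated with a standard two-parameter logistic (2PL) item response function. The `position_shift` term below is a hypothetical extra-difficulty parameter added purely for illustration; it is not the model used in the study.

```python
import math

def irf_2pl(theta, a, b, position_shift=0.0):
    """Two-parameter logistic item response function.

    theta: examinee ability; a: item discrimination; b: item difficulty.
    position_shift is an illustrative stand-in for an item-position
    effect (an assumption here, not the authors' actual model): a
    positive shift makes the item effectively harder when it appears
    later in the test.
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - (b + position_shift))))

# Same item, same examinee: a positive position shift lowers the
# predicted probability of a correct response, which is exactly the
# kind of context effect that would bias parameter estimates if the
# position-invariance assumption were violated.
p_early = irf_2pl(theta=0.0, a=1.2, b=0.0)
p_late = irf_2pl(theta=0.0, a=1.2, b=0.0, position_shift=0.4)
```

If responses generated under the shifted curve are calibrated with a model that ignores position, the bias lands in the item and person parameter estimates.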
Wang, Wen-Chung; Jin, Kuan-Yu; Qiu, Xue-Lan; Wang, Lei – Journal of Educational Measurement, 2012
In some tests, examinees are required to choose a fixed number of items from a set of given items to answer. This practice creates a challenge to standard item response models, because more capable examinees may have an advantage by making wiser choices. In this study, we developed a new class of item response models to account for the choice…
Descriptors: Item Response Theory, Test Items, Selection, Models
Zwick, Rebecca; Himelfarb, Igor – Journal of Educational Measurement, 2011
Research has often found that, when high school grades and SAT scores are used to predict first-year college grade-point average (FGPA) via regression analysis, African-American and Latino students are, on average, predicted to earn higher FGPAs than they actually do. Under various plausible models, this phenomenon can be explained in terms of…
Descriptors: Socioeconomic Status, Grades (Scholastic), Error of Measurement, White Students
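The over-prediction pattern this abstract describes can be sketched with ordinary least squares on synthetic data (all numbers below are invented for illustration, not the study's data or model): fit a common regression of FGPA on high-school GPA and SAT while omitting group membership, then compare mean residuals by group. A group whose true FGPA runs below what the common equation predicts shows a negative mean residual.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Invented predictors and a 0/1 group indicator, independent of each other.
hs_gpa = rng.uniform(2.0, 4.0, n)
sat = rng.uniform(800, 1600, n)
group = rng.integers(0, 2, n)

# Generate FGPA with a group effect that the prediction model will omit.
fgpa = 0.5 + 0.6 * hs_gpa + 0.001 * sat - 0.2 * group + rng.normal(0.0, 0.2, n)

# Common OLS fit: intercept, HS GPA, and SAT only -- group deliberately omitted.
X = np.column_stack([np.ones(n), hs_gpa, sat])
coef, *_ = np.linalg.lstsq(X, fgpa, rcond=None)
residuals = fgpa - X @ coef

# A negative mean residual for group 1 means the common equation
# over-predicts that group's FGPA on average.
mean_resid_g1 = residuals[group == 1].mean()
mean_resid_g0 = residuals[group == 0].mean()
```

This is only a mechanism sketch; the abstract's point is that several plausible models (measurement error among them, per the descriptors) can produce this residual pattern.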
Meyer, J. Patrick; Setzer, J. Carl – Journal of Educational Measurement, 2009
Recent changes to federal guidelines for the collection of data on race and ethnicity allow respondents to select multiple race categories. Redefining race subgroups in this manner poses problems for research spanning both sets of definitions. NAEP long-term trends have used the single-race subgroup definitions for over thirty years. Little is…
Descriptors: Elementary Secondary Education, Federal Legislation, Simulation, Maximum Likelihood Statistics
Culpepper, Steven A.; Davenport, Ernest C. – Journal of Educational Measurement, 2009
Previous research notes the importance of understanding racial/ethnic differential prediction of college grades across multiple institutions. Institutional variation in selection indices is especially important given some states' laws governing public institutions' admissions decisions. This paper employed multilevel moderated multiple regression…
Descriptors: Prediction, College Students, Grades (Scholastic), Race
Stout, William – Journal of Educational Measurement, 2007
This article summarizes the continuous latent trait IRT approach to skills diagnosis as particularized by a representative variety of continuous latent trait models using item response functions (IRFs). First, several basic IRT-based continuous latent trait approaches are presented in some detail. Then a brief summary of estimation, model…
Descriptors: Identification, Item Response Theory, Scoring, Middle Schools
Bridgeman, Brent; Cline, Frederick – Journal of Educational Measurement, 2004
Time limits on some computer-adaptive tests (CATs) are such that many examinees have difficulty finishing, and some examinees may be administered tests with more time-consuming items than others. Results from over 100,000 examinees suggested that about half of the examinees must guess on the final six questions of the analytical section of the…
Descriptors: Guessing (Tests), Timed Tests, Adaptive Testing, Computer Assisted Testing