Publication Date
| Date range | Records |
| --- | --- |
| In 2025 | 0 |
| Since 2024 | 31 |
| Since 2021 (last 5 years) | 111 |
| Since 2016 (last 10 years) | 245 |
| Since 2006 (last 20 years) | 566 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 38 |
| Practitioners | 25 |
| Teachers | 8 |
| Administrators | 6 |
| Counselors | 3 |
| Parents | 1 |
| Policymakers | 1 |
| Students | 1 |
Location
| Location | Records |
| --- | --- |
| Taiwan | 12 |
| United Kingdom | 10 |
| Netherlands | 9 |
| California | 8 |
| Turkey | 8 |
| Australia | 7 |
| Germany | 7 |
| New York | 7 |
| Canada | 6 |
| Florida | 6 |
| Japan | 6 |
Peer reviewed: Kingsbury, G. Gage; Zara, Anthony R. – Applied Measurement in Education, 1991
This simulation investigated two procedures that reduce differences between paper-and-pencil testing and computerized adaptive testing (CAT) by making CAT content sensitive. Results indicate that the cost, in additional test items, of using constrained CAT for content balancing is much smaller than the cost of using testlets. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Computer Simulation
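The constrained-CAT idea summarized in the Kingsbury and Zara abstract lends itself to a short illustration. The sketch below is not their exact procedure; it assumes a Rasch item bank with hypothetical content-area tags and target proportions, picks the content area whose administered share lags its target the most, and then administers the most informative unused item in that area.

```python
import numpy as np

def rasch_info(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p * (1.0 - p)

def select_constrained(theta, bank_b, bank_area, used, counts, targets):
    """Content-balanced selection (sketch): pick the content area whose
    administered share lags its target most, then the most informative
    unused item within that area."""
    administered = max(1, sum(counts.values()))
    deficits = {a: targets[a] - counts[a] / administered for a in targets}
    area = max(deficits, key=deficits.get)
    candidates = [i for i in range(len(bank_b))
                  if bank_area[i] == area and i not in used]
    if not candidates:                 # area exhausted: fall back to the whole bank
        candidates = [i for i in range(len(bank_b)) if i not in used]
    return max(candidates, key=lambda i: rasch_info(theta, bank_b[i]))

# hypothetical 30-item bank spread over three content areas
rng = np.random.default_rng(0)
bank_b = rng.uniform(-2, 2, 30)
bank_area = rng.choice(["algebra", "geometry", "data"], 30)
targets = {"algebra": 0.5, "geometry": 0.3, "data": 0.2}
counts = {a: 0 for a in targets}
used, theta = set(), 0.0
for _ in range(10):
    item = select_constrained(theta, bank_b, bank_area, used, counts, targets)
    used.add(item)
    counts[bank_area[item]] += 1
print(counts)   # administered counts roughly track the target proportions
```

Running the loop shows the administered counts tracking the targets, at the occasional cost of passing over a slightly more informative item in another area, which is the kind of price the abstract refers to.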
Peer reviewed: Birenbaum, Menucha – Studies in Educational Evaluation, 1994
A scheme is introduced for classifying assessment methods by using a mapping sentence, and examples of three tasks from research methodology are provided along with their profiles (structures) based on the mapping sentence. An instrument to determine student assessment preferences is presented and explored. (SLD)
Descriptors: Adaptive Testing, Classification, Educational Assessment, Measures (Individuals)
Peer reviewed: Wang, Tianyou; Vispoel, Walter P. – Journal of Educational Measurement, 1998
Used simulations of computerized adaptive tests to evaluate results yielded by four commonly used ability estimation methods: maximum likelihood estimation (MLE) and three Bayesian approaches. Results show clear distinctions between MLE and Bayesian methods. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
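For a concrete picture of the comparison in the Wang and Vispoel abstract, here is a minimal sketch of maximum likelihood estimation next to one common Bayesian alternative (expected a posteriori, EAP) under a 2PL model. The abstract does not name the three Bayesian methods studied, and all item parameters below are invented.

```python
import numpy as np

def p2pl(theta, a, b):
    """2PL response probability."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mle_theta(responses, a, b, grid=np.linspace(-4, 4, 401)):
    """Maximum-likelihood ability estimate via a simple grid search."""
    p = p2pl(grid[:, None], a, b)                                  # grid x items
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

def eap_theta(responses, a, b, grid=np.linspace(-4, 4, 401)):
    """Expected a posteriori estimate with a standard-normal prior."""
    p = p2pl(grid[:, None], a, b)
    lik = np.prod(p ** responses * (1 - p) ** (1 - responses), axis=1)
    prior = np.exp(-0.5 * grid ** 2)
    post = lik * prior
    return np.sum(grid * post) / np.sum(post)

# hypothetical 10-item test, true ability 1.0
rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, 10)
b = rng.uniform(-2, 2, 10)
responses = (rng.random(10) < p2pl(1.0, a, b)).astype(int)
print(mle_theta(responses, a, b), eap_theta(responses, a, b))
# EAP is pulled toward the prior mean; MLE is not, and it is undefined for
# all-correct or all-incorrect patterns (the grid maximum then sits at a bound)
```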
Peer reviewed: van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 1999
Proposes an algorithm that minimizes the asymptotic variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The criterion results in a closed-form expression that is easy to evaluate. Also shows how the algorithm can be modified if the interest is in a test with a "simple ability structure."…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
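The criterion in the van der Linden abstract can be approximated with a brute-force sketch: under a hypothetical two-dimensional compensatory model, select the unused item that most reduces the asymptotic variance of the weighted ability estimate, i.e. lambda' I(theta)^{-1} lambda once the item's information is added. This is a simplification, not the article's closed-form expression, and the intercept term is omitted.

```python
import numpy as np

def item_info_matrix(theta, a):
    """Fisher information matrix of one multidimensional 2PL item at ability
    vector theta (item intercept omitted for brevity)."""
    p = 1.0 / (1.0 + np.exp(-np.dot(a, theta)))
    return p * (1.0 - p) * np.outer(a, a)

def select_min_variance(theta, bank_a, used, lam, test_info):
    """Pick the unused item that minimizes lam' I^{-1} lam once added."""
    best, best_var = None, np.inf
    for i, a in enumerate(bank_a):
        if i in used:
            continue
        info = test_info + item_info_matrix(theta, a)
        var = lam @ np.linalg.inv(info) @ lam
        if var < best_var:
            best, best_var = i, var
    return best

# hypothetical two-dimensional bank of 40 items
rng = np.random.default_rng(2)
bank_a = rng.uniform(0.5, 1.5, (40, 2))      # discrimination vectors
lam = np.array([0.7, 0.3])                   # weights on the two abilities
theta = np.zeros(2)
test_info = np.eye(2) * 1e-3                 # small ridge so the matrix is invertible
used = set()
for _ in range(15):
    i = select_min_variance(theta, bank_a, used, lam, test_info)
    used.add(i)
    test_info = test_info + item_info_matrix(theta, bank_a[i])
print(lam @ np.linalg.inv(test_info) @ lam)  # asymptotic variance of lam' theta-hat
```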
Peer reviewed: O'Neill, Thomas; Lunz, Mary E.; Thiede, Keith – Journal of Applied Measurement, 2000
Studied item exposure in a computerized adaptive test when the item selection algorithm presents examinees with questions they were asked in a previous test administration. Results with 178 repeat examinees on a medical technologists' test indicate that the combined use of an adaptive algorithm to select items and latent trait theory to estimate…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Response Theory
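The exposure and overlap quantities at issue in the O'Neill, Lunz, and Thiede study reduce to simple proportions once the administered item sets are recorded. The sketch below only illustrates those metrics on hypothetical, randomly drawn administrations; it is not their adaptive item-selection algorithm.

```python
import numpy as np

def exposure_rates(administrations, bank_size):
    """Proportion of examinees who saw each item."""
    counts = np.zeros(bank_size)
    for items in administrations:
        counts[list(items)] += 1
    return counts / len(administrations)

def repeat_overlap(first, second):
    """Share of second-administration items already seen the first time."""
    return len(set(first) & set(second)) / len(second)

# hypothetical data: 200 repeat examinees, a 300-item bank, 25-item tests
rng = np.random.default_rng(3)
first = [set(rng.choice(300, 25, replace=False)) for _ in range(200)]
second = [set(rng.choice(300, 25, replace=False)) for _ in range(200)]
rates = exposure_rates(first, 300)
overlaps = [repeat_overlap(f, s) for f, s in zip(first, second)]
print(rates.max(), np.mean(overlaps))
```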
Peer reviewed: Mosenthal, Peter B. – American Educational Research Journal, 1998
The extent to which variables from a previous study (P. Mosenthal, 1996) on document processing influenced difficulty on 165 tasks from the prose scales of five national adult literacy surveys was studied. Three process variables accounted for 78% of the variance when prose task difficulty was defined using level scores. Implications for computer…
Descriptors: Adaptive Testing, Adults, Computer Assisted Testing, Definitions
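The "accounted for 78% of the variance" figure in the Mosenthal abstract is an R-squared from regressing task difficulty on the process variables. A minimal sketch with hypothetical stand-in data:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with an intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# hypothetical stand-in for 165 tasks and three process variables
rng = np.random.default_rng(4)
X = rng.normal(size=(165, 3))
y = X @ np.array([0.8, 0.5, 0.3]) + rng.normal(scale=0.5, size=165)
print(r_squared(X, y))
```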
Peer reviewed: Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis – Applied Psychological Measurement, 2000
Describes three open-ended response types that could broaden the conception of mathematical problem solving used in computerized admissions tests: (1) mathematical expression (ME); (2) generating examples (GE); and (3) graphical modeling (GM). Illustrates how combining ME, GE, and GM can form extended constructed response problems. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Constructed Response, Mathematics Tests
Peer reviewed: Embretson, Susan E. – Multivariate Behavioral Research, 2000
Discusses computerized dynamic testing with cues and items presented according to objective algorithms, elaborating on appropriate designs and psychometric models. Presents two studies, involving 311 and 584 military recruits respectively, that support the psychometric properties of a test measuring the susceptibility of reasoning to stressors. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Military Personnel, Psychometrics
Peer reviewed: Zwick, Rebecca; And Others – Journal of Educational Measurement, 1995
In a simulation study of ability estimation and differential item functioning (DIF) in computerized adaptive tests, Rasch-based DIF statistics were highly correlated with the generating DIF values, but the DIF statistics tended to be slightly smaller than in the three-parameter logistic model analyses. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Computer Simulation
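The Zwick et al. abstract does not name the DIF statistics used; the Mantel-Haenszel index is one widely used choice in this line of work, so the sketch below computes it for a single studied item, matching examinees on a discretized ability estimate. All data here are simulated and hypothetical.

```python
import numpy as np

def mantel_haenszel_dif(correct, group, strata):
    """Mantel-Haenszel common odds ratio for one item, expressed on the ETS
    delta scale (negative values indicate the item favors the reference group).
    correct: 0/1 responses to the studied item
    group:   0 = reference, 1 = focal
    strata:  matching variable (e.g., discretized ability estimate)."""
    num, den = 0.0, 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((correct == 1) & (group == 0) & m)   # reference correct
        b = np.sum((correct == 0) & (group == 0) & m)   # reference incorrect
        c = np.sum((correct == 1) & (group == 1) & m)   # focal correct
        d = np.sum((correct == 0) & (group == 1) & m)   # focal incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    alpha = num / den
    return -2.35 * np.log(alpha)   # ETS delta-scale transformation

# hypothetical data: 2000 examinees, ability split into 5 matching strata
rng = np.random.default_rng(5)
group = rng.integers(0, 2, 2000)
theta = rng.normal(size=2000)
strata = np.digitize(theta, [-1.5, -0.5, 0.5, 1.5])
p = 1 / (1 + np.exp(-(theta - 0.2 * group)))   # mild DIF against the focal group
correct = (rng.random(2000) < p).astype(int)
print(mantel_haenszel_dif(correct, group, strata))
```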
Peer reviewed: van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob – Applied Psychological Measurement, 1999
Theoretical null distributions of several fit statistics have been derived for paper-and-pencil tests. Examined through simulation whether these distributions also hold for computerized adaptive tests. Rates for the two statistics studied were found to be similar in most cases. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Response Theory
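The fit statistics in the van Krimpen-Stoop and Meijer abstract are not named there; the standardized log-likelihood person-fit statistic l_z is a standard example whose theoretical null distribution is treated as approximately standard normal for fixed, paper-and-pencil style tests. A minimal sketch of l_z and of checking its null distribution by simulation, with hypothetical item parameters:

```python
import numpy as np

def lz_statistic(responses, theta, a, b):
    """Standardized log-likelihood person-fit statistic l_z under a 2PL model."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))          # expected value
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)           # variance
    return (l0 - e) / np.sqrt(v)

# hypothetical null check for a fixed 30-item test with known abilities
rng = np.random.default_rng(6)
a = rng.uniform(0.8, 2.0, 30)
b = rng.uniform(-2, 2, 30)
lz = []
for _ in range(2000):
    theta = rng.normal()
    u = (rng.random(30) < 1 / (1 + np.exp(-a * (theta - b)))).astype(int)
    lz.append(lz_statistic(u, theta, a, b))
print(np.mean(lz), np.std(lz))
# mean and SD are close to 0 and 1 when items are fixed in advance and the true
# ability is used; the article asks whether this still holds under adaptive selection
```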
Peer reviewed: van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L. – Applied Psychological Measurement, 1999
Proposes an item-selection algorithm for neutralizing the differential effects of time limits on computerized adaptive test scores. Uses a statistical model, updated each time an item is administered, for the distributions of examinees' response times on the items in the bank. Demonstrates the method using an item bank from the Armed Services…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Banks
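A rough sketch of time-aware item selection in the spirit of the van der Linden, Scrams, and Schnipke abstract: assume a lognormal response-time model per item and skip candidate items whose predicted time would exceed the remaining limit. This simplifies the proposed algorithm, and the item and timing parameters below are invented.

```python
import numpy as np

def rasch_info(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p * (1.0 - p)

def select_time_aware(theta, bank_b, bank_mu, used, time_left, sigma=0.4):
    """Most informative unused item whose predicted response time fits the
    remaining budget, under a lognormal response-time model."""
    feasible = []
    for i in range(len(bank_b)):
        if i in used:
            continue
        expected_t = np.exp(bank_mu[i] + sigma ** 2 / 2)   # lognormal mean
        if expected_t <= time_left:
            feasible.append((i, expected_t))
    if not feasible:
        return None
    return max(feasible, key=lambda it: rasch_info(theta, bank_b[it[0]]))[0]

# hypothetical 50-item bank: difficulties and log-mean response times
rng = np.random.default_rng(7)
bank_b = rng.uniform(-2, 2, 50)
bank_mu = rng.uniform(3.0, 4.5, 50)     # log-seconds, roughly 20-90 s per item
used, time_left, theta = set(), 600.0, 0.0
while time_left > 0:
    pick = select_time_aware(theta, bank_b, bank_mu, used, time_left)
    if pick is None:
        break
    used.add(pick)
    time_left -= np.exp(bank_mu[pick] + 0.4 ** 2 / 2)   # spend the predicted time
print(len(used), round(time_left, 1))
```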
Peer reviewed: Nering, Michael L. – Applied Psychological Measurement, 1997
Evaluated the distribution of person fit within the computerized-adaptive testing (CAT) environment through simulation. Found that, within the CAT environment, these indexes tend not to follow a standard normal distribution. Person-fit indexes had means and standard deviations that were quite different from those expected. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Error of Measurement, Item Response Theory
Peer reviewed: Stocking, Martha L. – Applied Psychological Measurement, 1997
Investigated three models that permit restricted examinee control over revising previous answers in the context of adaptive testing, using simulation. Two models permitting item revisions worked well in preserving test fairness and accuracy, and one model may preserve some cognitive processing styles developed by examinees for a linear testing…
Descriptors: Adaptive Testing, Cognitive Processes, Comparative Analysis, Computer Assisted Testing
Peer reviewed: Olson, Allan – Educational Leadership, 2005
Most educators agree that the primary criterion of school success is the ongoing growth and achievement of every student, even in the midst of constant debate about the state of US education and conflicting opinions regarding the value of No Child Left Behind (NCLB). Standardized tests have their place, but computerized adaptive testing aimed…
Descriptors: Federal Legislation, Standardized Tests, Adaptive Testing, Educational Improvement
Al-A'ali, Mansoor – Educational Technology & Society, 2007
Computer adaptive testing is the study of scoring tests and questions based on assumptions concerning the mathematical relationship between examinees' ability and their responses. Adaptive student tests, which are based on item response theory (IRT), have many advantages over conventional tests. We use the least squares method, a…
Descriptors: Educational Testing, Higher Education, Elementary Secondary Education, Student Evaluation
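The Al-A'ali abstract is truncated before the least-squares method is described, so the sketch below shows only one generic way a least-squares criterion can be applied to IRT ability estimation: choose the theta that minimizes the squared differences between observed 0/1 responses and 3PL-predicted probabilities. All parameters are hypothetical.

```python
import numpy as np

def p3pl(theta, a, b, c):
    """Three-parameter logistic response probability."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def least_squares_theta(responses, a, b, c, grid=np.linspace(-4, 4, 801)):
    """Ability estimate minimizing the sum of squared differences between
    observed 0/1 responses and model-predicted probabilities."""
    p = p3pl(grid[:, None], a, b, c)                 # grid x items
    sse = ((responses - p) ** 2).sum(axis=1)
    return grid[np.argmin(sse)]

# hypothetical 20-item test with a true ability of 0.5
rng = np.random.default_rng(8)
a = rng.uniform(0.8, 2.0, 20)
b = rng.uniform(-2, 2, 20)
c = np.full(20, 0.2)
responses = (rng.random(20) < p3pl(0.5, a, b, c)).astype(int)
print(least_squares_theta(responses, a, b, c))
```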
