Publication Date
| Date range | Results |
| --- | --- |
| In 2015 | 2 |
| Since 2014 | 9 |
| Since 2011 (last 5 years) | 16 |
| Since 2006 (last 10 years) | 18 |
| Since 1996 (last 20 years) | 18 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Foreign Countries | 12 |
| College Students | 9 |
| Correlation | 5 |
| Measures (Individuals) | 5 |
| Scores | 5 |
| Statistical Analysis | 5 |
| Test Items | 5 |
| Test Validity | 5 |
| Factor Analysis | 4 |
| Factor Structure | 4 |
Source
| Source | Results |
| --- | --- |
| International Journal of… | 18 |
Publication Type
| Publication type | Results |
| --- | --- |
| Journal Articles | 18 |
| Reports - Research | 15 |
| Reports - Descriptive | 3 |
| Tests/Questionnaires | 1 |
Education Level
| Education level | Results |
| --- | --- |
| Postsecondary Education | 18 |
| Higher Education | 17 |
| Elementary Secondary Education | 2 |
| Secondary Education | 2 |
| Elementary Education | 1 |
| Grade 4 | 1 |
| High Schools | 1 |
| Intermediate Grades | 1 |
Showing 1 to 15 of 18 results
Rindermann, Heiner; Baumeister, Antonia E. E. – International Journal of Testing, 2015
Scholastic tests regard cognitive abilities as domain-specific competences. However, high correlations between competences indicate either high task similarity or a dependence on common factors. The present rating study examined the validity of 12 Programme for International Student Assessment (PISA) and Third or Trends in International…
Descriptors: Test Validity, Test Interpretation, Competence, Reading Tests
Baghaei, Purya; Aryadoust, Vahid – International Journal of Testing, 2015
Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that common test method can exert the same effect as common stimuli and violate the conditional independence assumption of item response theory models because, in general, subsets of items which have a…
Descriptors: Test Format, Item Response Theory, Models, Test Items
Almond, Russell G. – International Journal of Testing, 2014
Assessments consisting of only a few extended constructed-response items (essays) are not typically equated using anchor test designs, because each form contains too few essay prompts to allow for meaningful equating. This article explores the idea that output from an automated scoring program designed to measure writing fluency (a common…
Descriptors: Automation, Equated Scores, Writing Tests, Essay Tests
Jurich, Daniel P.; Bradshaw, Laine P. – International Journal of Testing, 2014
The assessment of higher-education student learning outcomes is an important component in understanding the strengths and weaknesses of academic and general education programs. This study illustrates the application of diagnostic classification models, a burgeoning set of statistical models, in assessing student learning outcomes. To facilitate…
Descriptors: College Outcomes Assessment, Classification, Statistical Analysis, Models
Fischer, Sebastian; Freund, Philipp Alexander – International Journal of Testing, 2014
The Adaption-Innovation Inventory (AII), originally developed by Kirton (1976), is a widely used self-report instrument for measuring problem-solving styles at work. The present study investigates how scores on the AII are affected by different response styles. Data are collected from a combined sample (N = 738) of students, employees, and…
Descriptors: Measures (Individuals), Scores, Item Response Theory, Response Style (Tests)
Byrne, Barbara M.; van de Vijver, Fons J. R. – International Journal of Testing, 2014
In cross-cultural research, there is a tendency for researchers to draw inferences at the country level based on individual-level data. Such action implicitly and often mistakenly assumes that both the measuring instrument and its underlying construct(s) are operating equivalently across both levels. Based on responses from 5,482 college students…
Descriptors: Factor Structure, Measures (Individuals), Cross Cultural Studies, Structural Equation Models
Ferrett, Helen L.; Carey, Paul D.; Baufeldt, Angela L.; Cuzen, Natalie L.; Conradie, Simone; Dowling, Tessa; Stein, Dan J.; Thomas, Kevin G. F. – International Journal of Testing, 2014
Because of their global clinical utility, phonemic fluency tests are frequently incorporated into neuropsychological assessment batteries. However, in heterogeneous societies their use is complicated by the lack of careful attention to using letters of equivalent difficulty across languages, and the paucity of norms stratified by relevant…
Descriptors: Foreign Countries, Phonemes, Language Fluency, Alphabets
Zilberberg, Anna; Finney, Sara J.; Marsh, Kimberly R.; Anderson, Robin D. – International Journal of Testing, 2014
Given worldwide prevalence of low-stakes testing for monitoring educational quality and students' progress through school (e.g., Trends in International Mathematics and Science Study, Program for International Student Assessment), interpretability of resulting test scores is of global concern. The nonconsequential nature of low-stakes tests…
Descriptors: Student Attitudes, Student Motivation, Test Validity, Accountability
Engelhard, George, Jr.; Kobrin, Jennifer L.; Wind, Stefanie A. – International Journal of Testing, 2014
The purpose of this study is to explore patterns in model-data fit related to subgroups of test takers from a large-scale writing assessment. Using data from the SAT, a calibration group was randomly selected to represent test takers who reported that English was their best language from the total population of test takers (N = 322,011). A…
Descriptors: College Entrance Examinations, Writing Tests, Goodness of Fit, English
DeMars, Christine E. – International Journal of Testing, 2013
This tutorial addresses possible sources of confusion in interpreting trait scores from the bifactor model. The bifactor model may be used when subscores are desired, either for formative feedback on an achievement test or for theoretically different constructs on a psychological test. The bifactor model is often chosen because it requires fewer…
Descriptors: Test Interpretation, Scores, Models, Correlation
Pacico, Juliana Cerentini; Zanon, Cristian; Bastianello, Micheline Roat; Reppold, Caroline Tozzi; Hutz, Claudio Simon – International Journal of Testing, 2013
The objective of this study was to adapt and gather validity evidence for a Brazilian sample version of the Hope Index and to verify if cultural differences would produce different results than those found in the United States. In this study, we present a set of analyses that together comprise a comprehensive validity argument for the use of a…
Descriptors: Foreign Countries, Cognitive Tests, Content Validity, Test Validity
Ling, Guangming; Bridgeman, Brent – International Journal of Testing, 2013
To explore the potential effect of computer type on the Test of English as a Foreign Language-Internet-Based Test (TOEFL iBT) Writing Test, a sample of 444 international students was used. The students were randomly assigned to either a laptop or a desktop computer to write two TOEFL iBT practice essays in a simulated testing environment, followed…
Descriptors: Laptop Computers, Writing Tests, Essays, Computer Assisted Instruction
Pitoniak, Mary J.; Yeld, Nan – International Journal of Testing, 2013
Criterion-referenced assessments have become more common around the world, with performance standards being set to differentiate different levels of student performance. However, use of standard setting methods developed in the United States may be complicated by factors related to the political and educational contexts within another country. In…
Descriptors: Standard Setting (Scoring), Criterion Referenced Tests, Benchmarking, Student Evaluation
Gierl, Mark J.; Lai, Hollis – International Journal of Testing, 2012
Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…
Descriptors: Foreign Countries, Psychometrics, Test Construction, Test Items
Xu, Lihua; Barnes, Laura L. B. – International Journal of Testing, 2011
Measurement invariance of the 8-factor Inventory of School Motivation (McInerney & Sinclair, 1991) between American and Chinese college students was tested using single-group and multi-group confirmatory factor analysis. A Mandarin Chinese version of the ISM was developed for this study. Comparisons of latent means were conducted when warranted by…
Descriptors: College Students, Factor Analysis, Positive Reinforcement, Mandarin Chinese