Showing 1 to 15 of 28 results
Peer reviewed
Rindermann, Heiner; Baumeister, Antonia E. E. – International Journal of Testing, 2015
Scholastic tests regard cognitive abilities to be domain-specific competences. However, high correlations between competences indicate either high task similarity or a dependence on common factors. The present rating study examined the validity of 12 Programme for International Student Assessment (PISA) and Third or Trends in International…
Descriptors: Test Validity, Test Interpretation, Competence, Reading Tests
Peer reviewed
Sinharay, Sandip; Haberman, Shelby J. – International Journal of Testing, 2014
Recently there has been an increasing level of interest in subtest scores, or subscores, for their potential diagnostic value. Haberman (2008) suggested a method to determine if a subscore has added value over the total score. Researchers have often been interested in the performance of subgroups--for example, those based on gender or…
Descriptors: Scores, Achievement Tests, Language Tests, English (Second Language)
Peer reviewed
Jurich, Daniel P.; Bradshaw, Laine P. – International Journal of Testing, 2014
The assessment of higher-education student learning outcomes is an important component in understanding the strengths and weaknesses of academic and general education programs. This study illustrates the application of diagnostic classification models, a burgeoning set of statistical models, in assessing student learning outcomes. To facilitate…
Descriptors: College Outcomes Assessment, Classification, Statistical Analysis, Models
Peer reviewed
Oliveri, Maria Elena; von Davier, Matthias – International Journal of Testing, 2014
In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…
Descriptors: Test Bias, Scores, International Programs, Educational Assessment
Peer reviewed
Byrne, Barbara M.; van de Vijver, Fons J. R. – International Journal of Testing, 2014
In cross-cultural research, there is a tendency for researchers to draw inferences at the country level based on individual-level data. Such action implicitly and often mistakenly assumes that both the measuring instrument and its underlying construct(s) are operating equivalently across both levels. Based on responses from 5,482 college students…
Descriptors: Factor Structure, Measures (Individuals), Cross Cultural Studies, Structural Equation Models
Peer reviewed
Engelhard, George, Jr.; Kobrin, Jennifer L.; Wind, Stefanie A. – International Journal of Testing, 2014
The purpose of this study is to explore patterns in model-data fit related to subgroups of test takers from a large-scale writing assessment. Using data from the SAT, a calibration group was randomly selected to represent test takers who reported that English was their best language from the total population of test takers (N = 322,011). A…
Descriptors: College Entrance Examinations, Writing Tests, Goodness of Fit, English
Peer reviewed
Kim, Sooyeon; Moses, Tim – International Journal of Testing, 2013
The major purpose of this study is to assess the conditions under which single scoring for constructed-response (CR) items is as effective as double scoring in the licensure testing context. We used both empirical datasets of five mixed-format licensure tests collected in actual operational settings and simulated datasets that allowed for the…
Descriptors: Scoring, Test Format, Licensing Examinations (Professions), Test Items
Peer reviewed
DeMars, Christine E. – International Journal of Testing, 2013
This tutorial addresses possible sources of confusion in interpreting trait scores from the bifactor model. The bifactor model may be used when subscores are desired, either for formative feedback on an achievement test or for theoretically different constructs on a psychological test. The bifactor model is often chosen because it requires fewer…
Descriptors: Test Interpretation, Scores, Models, Correlation
Peer reviewed
Makransky, Guido; Glas, Cees A. W. – International Journal of Testing, 2013
Cognitive ability tests are widely used in organizations around the world because they have high predictive validity in selection contexts. Although these tests typically measure several subdomains, testing is usually carried out for a single subdomain at a time. This can be ineffective when the subdomains assessed are highly correlated. This…
Descriptors: Foreign Countries, Cognitive Ability, Adaptive Testing, Feedback (Response)
Peer reviewed
King, Ronnel B.; Watkins, David A. – International Journal of Testing, 2013
The aim of this study is to assess the cross-cultural applicability of the Chinese version of the Inventory of School Motivation (ISM; McInerney & Sinclair, 1991) in the Hong Kong context using both within-network and between-network approaches to construct validation. The ISM measures four types of achievement goals: mastery, performance, social,…
Descriptors: Factor Analysis, Reliability, Learning Motivation, Foreign Countries
Peer reviewed
Clauser, Brian E.; Mee, Janet; Margolis, Melissa J. – International Journal of Testing, 2013
This study investigated the extent to which the performance data format impacted data use in Angoff standard setting exercises. Judges from two standard settings (a total of five panels) were randomly assigned to one of two groups. The full-data group received two types of data: (1) the proportion of examinees selecting each option and (2) plots…
Descriptors: Standard Setting (Scoring), Cutting Scores, Validity, Reliability
Peer reviewed
Viglione, Donald J.; Perry, William; Giromini, Luciano; Meyer, Gregory J. – International Journal of Testing, 2011
We used multiple regression to calculate a new Ego Impairment Index (EII-3). The aim was to incorporate changes in the component variables and distribution of the number of responses as found in the new Rorschach Performance Assessment System, while sustaining the validity and reliability of previous EIIs. The EII-3 formula was derived from a…
Descriptors: Test Items, Self Concept, Validity, Evaluation
Peer reviewed
Hjemdal, Odin; Friborg, Oddgeir; Braun, Stephanie; Kempenaers, Chantal; Linkowski, Paul; Fossion, Pierre – International Journal of Testing, 2011
The Resilience Scale for Adults (RSA) was developed and has been extensively validated in Norwegian samples. The purpose of this study was to explore the construct validity of the Resilience Scale for Adults in a French-speaking Belgian sample and test measurement invariance between the Belgian and a Norwegian sample. A Belgian student sample (N =…
Descriptors: Measurement Techniques, Construct Validity, French, Adults
Peer reviewed
Moura, Octavio; dos Santos, Rute Andrade; Rocha, Magda; Matos, Paula Mena – International Journal of Testing, 2010
The Children's Perception of Interparental Conflict Scale (CPIC) is based on the cognitive-contextual framework for understanding interparental conflict. This study investigates the factor validity and the invariance of two factor models of CPIC within a sample of Portuguese adolescents and emerging adults (14 to 25 years old; N = 677). At the…
Descriptors: Conflict, Factor Structure, Adolescents, Measures (Individuals)
Peer reviewed
Martin, Andrew J.; Hau, Kit-Tai – International Journal of Testing, 2010
The present study explored motivation and engagement among Chinese and Australian school students. Based on a sample of 528 Hong Kong Chinese 12-13 year olds and an archive sample of 6,366 Australian 12-13 year olds, achievement motivation was assessed using the Motivation and Engagement Scale-High School (MES-HS). Confirmatory factor analysis and…
Descriptors: Foreign Countries, Achievement Need, Student Motivation, Learner Engagement
Pages: 1 | 2