50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of innovation and enhancement.

Learn more about the history of ERIC here.

Showing all 11 results
Peer reviewed
Cui, Ying; Mousavi, Amin – International Journal of Testing, 2015
The current study applied the person-fit statistic, l_z, to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l_z, were removed. The…
Descriptors: Measurement, Achievement Tests, Comparative Analysis, Test Items
Peer reviewed
Lee, HyeSun; Geisinger, Kurt F. – International Journal of Testing, 2014
Differential item functioning (DIF) analysis is important in terms of test fairness. While DIF analyses have mainly been conducted with manifest grouping variables, such as gender or race/ethnicity, it has been recently claimed that not only the grouping variables but also contextual variables pertaining to examinees should be considered in DIF…
Descriptors: Test Bias, Gender Differences, Regression (Statistics), Statistical Analysis
Peer reviewed
Engelhard, George, Jr.; Kobrin, Jennifer L.; Wind, Stefanie A. – International Journal of Testing, 2014
The purpose of this study is to explore patterns in model-data fit related to subgroups of test takers from a large-scale writing assessment. Using data from the SAT, a calibration group was randomly selected to represent test takers who reported that English was their best language from the total population of test takers (N = 322,011). A…
Descriptors: College Entrance Examinations, Writing Tests, Goodness of Fit, English
Peer reviewed
King, Ronnel B.; Watkins, David A. – International Journal of Testing, 2013
The aim of this study is to assess the cross-cultural applicability of the Chinese version of the Inventory of School Motivation (ISM; McInerney & Sinclair, 1991) in the Hong Kong context using both within-network and between-network approaches to construct validation. The ISM measures four types of achievement goals: mastery, performance, social,…
Descriptors: Factor Analysis, Reliability, Learning Motivation, Foreign Countries
Peer reviewed
Mucherah, Winnie; Finch, W. Holmes; Keaikitse, Setlhomo – International Journal of Testing, 2012
Understanding adolescent self-concept is of great concern for educators, mental health professionals, and parents, as research consistently demonstrates that low self-concept is related to a number of problem behaviors and poor outcomes. Thus, accurate measurements of self-concept are key, and the validity of such measurements, including the…
Descriptors: Test Bias, Mental Health Workers, Validity, Self Concept Measures
Peer reviewed
D'Agostino, Jerome; Karpinski, Aryn; Welsh, Megan – International Journal of Testing, 2011
After a test is developed, most content validation analyses shift from ascertaining domain definition to studying domain representation and relevance because the domain is assumed to be set once a test exists. We present an approach that allows for the examination of alternative domain structures based on extant test items. In our example based on…
Descriptors: Expertise, Test Items, Mathematics Tests, Factor Analysis
Peer reviewed
Svetina, Dubravka; Gorin, Joanna S.; Tatsuoka, Kikumi K. – International Journal of Testing, 2011
As a construct definition, the current study develops a cognitive model describing the knowledge, skills, and abilities measured by critical reading test items on a high-stakes assessment used for selection decisions in the United States. Additionally, in order to establish generalizability of the construct meaning to other similarly structured…
Descriptors: Reading Tests, Reading Comprehension, Critical Reading, Test Items
Peer reviewed
Mucherah, Winnie; Finch, Holmes – International Journal of Testing, 2010
This study investigated the structural equivalence of the Self Description Questionnaire (SDQ) in relation to Kenyan high school students. A total of 1,990 students from two same-sex boarding schools participated. Confirmatory factor analysis revealed the overall model fit the data well. However, an examination of the individual factors revealed…
Descriptors: African Culture, Boarding Schools, Construct Validity, Questionnaires
Peer reviewed
Govaerts, Sophie; Grégoire, Jacques – International Journal of Testing, 2008
This article describes the development and two studies on the construct validity of the Academic Emotions Scale (AES). The AES is a French self-report questionnaire assessing six emotions in the context of school learning: enjoyment, hope, pride, anxiety, shame and frustration. Its construct validity was studied through exploratory and…
Descriptors: Construct Validity, Test Validity, Factor Structure, Measures (Individuals)
Peer reviewed
Marsh, Herbert W.; Hau, Kit-Tai; Artelt, Cordula; Baumert, Jürgen; Peschar, Jules L. – International Journal of Testing, 2006
Through a rigorous process of selecting educational psychology's most useful affective constructs, the Organisation for Economic Co-operation and Development (OECD) constructed the Students' Approaches to Learning (SAL) instrument, which requires only 10 min to measure 14 factors that assess self-regulated learning strategies, self-beliefs,…
Descriptors: Measurement Techniques, Educational Psychology, Psychometrics, Cross Cultural Studies
Peer reviewed
Ross, Steven J.; Okabe, Junko – International Journal of Testing, 2006
Test validity is predicated on there being a lack of bias in tasks, items, or test content. It is well-known that factors such as test candidates' mother tongue, life experiences, and socialization practices of the wider community may serve to inject subtle interactions between individuals' background and the test content. When the gender of the…
Descriptors: Gender Bias, Language Tests, Test Validity, Reading Comprehension