50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC (PDF).

Showing 76 to 90 of 278 results
Peer reviewed
Direct link
Svetina, Dubravka; Gorin, Joanna S.; Tatsuoka, Kikumi K. – International Journal of Testing, 2011
As a construct definition, the current study develops a cognitive model describing the knowledge, skills, and abilities measured by critical reading test items on a high-stakes assessment used for selection decisions in the United States. Additionally, in order to establish generalizability of the construct meaning to other similarly structured…
Descriptors: Reading Tests, Reading Comprehension, Critical Reading, Test Items
Peer reviewed
Direct link
Fukuda, Eriko; Saklofske, Donald H.; Tamaoka, Katsuo; Fung, Tak Shing; Miyaoka, Yayoi; Kiyama, Sachiko – International Journal of Testing, 2011
This article reports the psychometric properties of two emotional intelligence measures translated into Japanese. Confirmatory factor analysis (CFA) was conducted to examine the factor structure of a Japanese version of the Wong and Law Emotional Intelligence Scale (WLEIS) completed by 310 Japanese university students. A second study employed CFA…
Descriptors: Emotional Intelligence, Japanese, Factor Structure, Measures (Individuals)
Peer reviewed
Direct link
Chulu, Bob Wajizigha; Sireci, Stephen G. – International Journal of Testing, 2011
Many examination agencies, policy makers, media houses, and the public at large make high-stakes decisions based on test scores. Unfortunately, in some cases educational tests are not statistically equated to account for test differences over time, which leads to inappropriate interpretations of students' performance. In this study we illustrate…
Descriptors: Classification, Foreign Countries, Item Response Theory, High Stakes Tests
Peer reviewed
Direct link
Hjemdal, Odin; Friborg, Oddgeir; Braun, Stephanie; Kempenaers, Chantal; Linkowski, Paul; Fossion, Pierre – International Journal of Testing, 2011
The Resilience Scale for Adults (RSA) was developed and has been extensively validated in Norwegian samples. The purpose of this study was to explore the construct validity of the Resilience Scale for Adults in a French-speaking Belgian sample and test measurement invariance between the Belgian and a Norwegian sample. A Belgian student sample (N =…
Descriptors: Measurement Techniques, Construct Validity, French, Adults
Peer reviewed
Direct link
Crocetti, Elisabetta; Shokri, Omid – International Journal of Testing, 2010
The purpose of this study was to validate the Iranian version of the Identity Style Inventory (ISI). Participants were 376 (42% males) university students. Confirmatory factor analyses revealed a clear three-factor structure of identity style and a mono-factor structure of commitment in the overall sample as well as in gender subgroups. Convergent…
Descriptors: Validity, Self Concept Measures, College Students, Adjustment (to Environment)
Peer reviewed
Direct link
Lee, John Chi-kin; Yin, Hongbiao; Zhang, Zhonghua – International Journal of Testing, 2010
This article reports the adaptation and analysis of Pintrich's Motivated Strategies for Learning Questionnaire (MSLQ) in Hong Kong. First, this study examined the psychometric qualities of the existing Chinese version of MSLQ (MSLQ-CV). Based on this examination, this study developed a revised Chinese version of MSLQ (MSLQ-RCV) for junior…
Descriptors: Foreign Countries, Questionnaires, Psychometrics, Secondary School Students
Peer reviewed
Direct link
Byrne, Barbara M.; van de Vijver, Fons J. R. – International Journal of Testing, 2010
A critical assumption in cross-cultural comparative research is that the instrument measures the same construct(s) in exactly the same way across all groups (i.e., the instrument is measurement and structurally equivalent). Structural equation modeling (SEM) procedures are commonly used in testing these assumptions of multigroup equivalence.…
Descriptors: Measures (Individuals), Cross Cultural Studies, Measurement, Comparative Analysis
Peer reviewed
Direct link
Fonseca-Pedrero, Eduardo; Wells, Craig; Paino, Mercedes; Lemos-Giraldez, Serafin; Villazon-Garcia, Ursula; Sierra, Susana; Garcia-Portilla Gonzalez, Ma Paz; Bobes, Julio; Muniz, Jose – International Journal of Testing, 2010
The main objective of the present study was to examine measurement invariance of the Reynolds Depression Adolescent Scale (RADS) (Reynolds, 1987) across gender and age in a representative sample of nonclinical adolescents. The sample was composed of 1,659 participants, 801 males (48.3%), with a mean age of 15.9 years (SD = 1.2). Confirmatory…
Descriptors: Measurement Techniques, Measures (Individuals), Factor Analysis, Depression (Psychology)
Peer reviewed
Direct link
Mucherah, Winnie; Finch, Holmes – International Journal of Testing, 2010
This study investigated the structural equivalence of the Self Description Questionnaire (SDQ) in relation to Kenyan high school students. A total of 1,990 students from two same-sex boarding schools participated. Confirmatory factor analysis revealed the overall model fit the data well. However, an examination of the individual factors revealed…
Descriptors: African Culture, Boarding Schools, Construct Validity, Questionnaires
Peer reviewed
Direct link
Moura, Octavio; dos Santos, Rute Andrade; Rocha, Magda; Matos, Paula Mena – International Journal of Testing, 2010
The Children's Perception of Interparental Conflict Scale (CPIC) is based on the cognitive-contextual framework for understanding interparental conflict. This study investigates the factor validity and the invariance of two factor models of CPIC within a sample of Portuguese adolescents and emerging adults (14 to 25 years old; N = 677). At the…
Descriptors: Conflict, Factor Structure, Adolescents, Measures (Individuals)
Peer reviewed
Direct link
Barry, Carol L.; Horst, S. Jeanne; Finney, Sara J.; Brown, Allison R.; Kopp, Jason P. – International Journal of Testing, 2010
Given the prevalence of low-stakes testing internationally (e.g., NAEP, TIMSS, PIRLS), it is crucial to try to better understand examinee motivation in these contexts. In the current study, mixture modeling results supported three different profiles of test-taking effort over the course of five tests. Classes 1 and 2 had varying levels of effort…
Descriptors: Testing, Comparative Analysis, Accountability, College Students
Peer reviewed
Direct link
Gierl, Mark J.; Alves, Cecilia; Majeau, Renate Taylor – International Journal of Testing, 2010
The purpose of this study is to apply the attribute hierarchy method in an operational diagnostic mathematics program at Grades 3 and 6 to promote cognitive inferences about students' problem-solving skills. The attribute hierarchy method is a psychometric procedure for classifying examinees' test item responses into a set of structured attribute…
Descriptors: Test Items, Student Reaction, Diagnostic Tests, Psychometrics
Peer reviewed
Direct link
Evers, Arne; Sijtsma, Klaas; Lucassen, Wouter; Meijer, Rob R. – International Journal of Testing, 2010
This article describes the 2009 revision of the Dutch Rating System for Test Quality and presents the results of test ratings from almost 30 years. The rating system evaluates the quality of a test on seven criteria: theoretical basis, quality of the testing materials, comprehensiveness of the manual, norms, reliability, construct validity, and…
Descriptors: Rating Scales, Documentation, Educational Quality, Educational Testing
Peer reviewed
Direct link
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M. – International Journal of Testing, 2010
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Descriptors: Monte Carlo Methods, Simulation, Computer Assisted Testing, Adaptive Testing
Peer reviewed
Direct link
DeMars, Christine E.; Wise, Steven L. – International Journal of Testing, 2010
This investigation examined whether different rates of rapid guessing between groups could lead to detectable levels of differential item functioning (DIF) in situations where the item parameters were the same for both groups. Two simulation studies were designed to explore this possibility. The groups in Study 1 were simulated to reflect…
Descriptors: Guessing (Tests), Test Bias, Motivation, Gender Differences