Publication Date
| Date Range | Records |
| --- | --- |
| In 2015 | 2 |
| Since 2014 | 2 |
| Since 2011 (last 5 years) | 6 |
| Since 2006 (last 10 years) | 13 |
| Since 1996 (last 20 years) | 19 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Item Response Theory | 19 |
| English (Second Language) | 11 |
| Language Tests | 10 |
| Scores | 7 |
| Test Items | 7 |
| Foreign Countries | 5 |
| Psychometrics | 5 |
| Statistical Analysis | 5 |
| Evaluation | 4 |
| Models | 4 |
Source
| Source | Records |
| --- | --- |
| Language Assessment Quarterly | 19 |
Author
| Author | Records |
| --- | --- |
| Sawaki, Yasuyo | 2 |
| Zumbo, Bruno D. | 2 |
| Ark, Tavinder K. | 1 |
| Barkaoui, Khaled | 1 |
| Barkhuizen, Gary | 1 |
| Brown, James Dean | 1 |
| Coniam, David | 1 |
| Davidson, Fred | 1 |
| Eckes, Thomas | 1 |
| Elder, Cathie | 1 |
Publication Type
| Publication Type | Records |
| --- | --- |
| Journal Articles | 19 |
| Reports - Research | 11 |
| Opinion Papers | 3 |
| Reports - Evaluative | 3 |
| Reports - Descriptive | 2 |
| Tests/Questionnaires | 2 |
Education Level
| Education Level | Records |
| --- | --- |
| Grade 8 | 2 |
| Secondary Education | 2 |
| Adult Basic Education | 1 |
| Adult Education | 1 |
| Elementary Education | 1 |
| Grade 10 | 1 |
| Grade 11 | 1 |
| Grade 6 | 1 |
| Grade 9 | 1 |
| Higher Education | 1 |
Showing 1 to 15 of 19 results
Tan, May; Turner, Carolyn E. – Language Assessment Quarterly, 2015
In Quebec the high-stakes Secondary Five ESL exit writing exam developed by the Education Ministry (MELS) is administered and corrected by classroom teachers. In this distinctive situation, the MELS works toward aligning classroom-based assessment (CBA) and the writing exam by making ongoing teacher involvement part of its development and…
Descriptors: Foreign Countries, High Stakes Tests, English (Second Language), Exit Examinations
Zumbo, Bruno D.; Liu, Yan; Wu, Amery D.; Shear, Benjamin R.; Olvera Astivia, Oscar L.; Ark, Tavinder K. – Language Assessment Quarterly, 2015
Methods for detecting differential item functioning (DIF) and item bias are typically used in the process of item analysis when developing new measures; adapting existing measures for different populations, languages, or cultures; or more generally validating test score inferences. In 2007 in "Language Assessment Quarterly," Zumbo…
Descriptors: Test Bias, Test Items, Holistic Approach, Models
Barkaoui, Khaled – Language Assessment Quarterly, 2013
This article critiques traditional single-level statistical approaches (e.g., multiple regression analysis) to examining relationships between language test scores and variables in the assessment setting. It highlights the conceptual, methodological, and statistical problems associated with these techniques in dealing with multilevel or nested…
Descriptors: Hierarchical Linear Modeling, Statistical Analysis, Multiple Regression Analysis, Generalizability Theory
Hsieh, Mingchuan – Language Assessment Quarterly, 2013
The Yes/No Angoff and Bookmark method for setting standards on educational assessment are currently two of the most popular standard-setting methods. However, there is no research into the comparability of these two methods in the context of language assessment. This study compared results from the Yes/No Angoff and Bookmark methods as applied to…
Descriptors: Standard Setting (Scoring), Comparative Analysis, Language Tests, Multiple Choice Tests
Harsch, Claudia; Rupp, Andre Alexander – Language Assessment Quarterly, 2011
The "Common European Framework of Reference" (CEFR; Council of Europe, 2001) provides a competency model that is increasingly used as a point of reference to compare language examinations. Nevertheless, aligning examinations to the CEFR proficiency levels remains a challenge. In this article, we propose a new, level-centered approach to designing…
Descriptors: Language Tests, Writing Tests, Test Construction, Test Items
Winke, Paula – Language Assessment Quarterly, 2011
In this study, I investigated the reliability of the U.S. Naturalization Test's civics component by asking 414 individuals to take a mock U.S. citizenship test comprising civics test questions. Using an incomplete block design of six forms with 16 nonoverlapping items and four anchor items on each form (the anchors connected the six subsets of…
Descriptors: Test Items, Citizenship, Civics, Test Validity
Sawaki, Yasuyo; Kim, Hae-Jin; Gentile, Claudia – Language Assessment Quarterly, 2009
In cognitive diagnosis a Q-matrix (Tatsuoka, 1983, 1990), which is an incidence matrix that defines the relationships between test items and constructs of interest, has great impact on the nature of performance feedback that can be provided to score users. The purpose of the present study was to identify meaningful skill coding categories that…
Descriptors: Feedback (Response), Test Items, Test Content, Identification
Lee, Yong-Won; Sawaki, Yasuyo – Language Assessment Quarterly, 2009
The present study investigated the functioning of three psychometric models for cognitive diagnosis--the general diagnostic model, the fusion model, and latent class analysis--when applied to large-scale English as a second language listening and reading comprehension assessments. Data used in this study were scored item responses and incidence…
Descriptors: Reading Comprehension, Field Tests, Identification, Classification
Kobayashi, Miyoko; Negishi, Masashi – Language Assessment Quarterly, 2008
This article presents an interview with Professor Kenji Ohtomo who retired in March 2006 from the post of Dean, College of Applied International Studies, Tokiwa University, Mito, in Japan. Professor Ohtomo is currently a Professor Emeritus at the University of Tsukuba and Honorary President of the Japan Language Testing Association, of which he…
Descriptors: Language Tests, Foreign Countries, English (Second Language), Interviews
Brown, James Dean – Language Assessment Quarterly, 2008
In keeping with the theme of the International Language Testing Association/Language Testing Research Colloquium Conference in 2008, "Focusing on the Core: Justifying the Use of Language Assessments to Stakeholders," I define "stakeholder-friendly tests," "defensible testing," and "testing-context analysis." I then go on to discuss the rational…
Descriptors: Language Usage, Curriculum Development, Testing, Language Tests
Ockey, Gary J. – Language Assessment Quarterly, 2007
When testing English language learners (ELLs) in subject matter areas, construct irrelevant variance could result from English, the language in which the test is presented. Differential item functioning (DIF) techniques have been used to determine if items are operating differently for population subgroups and might therefore be appropriate for…
Descriptors: Test Bias, Validity, Word Problems (Mathematics), Mathematics Tests
Zumbo, Bruno D. – Language Assessment Quarterly, 2007
The purpose of this article is to reflect on the state of the theorizing and praxis of DIF in general: where it has been; where it is now; and where I think it is, and should, be going. Along the way the major trends in the differential item functioning (DIF) literature are summarized and integrated providing some organizing principles that allow…
Descriptors: Test Bias, Evaluation Research, Research Methodology, Regression (Statistics)
Spaan, Mary – Language Assessment Quarterly, 2007
This article follows the development of test items (see "Language Assessment Quarterly", Volume 3 Issue 1, pp. 71-79 for the article "Test and Item Specifications Development"), beginning with a review of test and item specifications, then proceeding to writing and editing of items, pretesting and analysis, and finally selection of an item for a…
Descriptors: Test Items, Test Construction, Responses, Test Content
Farhady, Hossein – Language Assessment Quarterly, 2005
Within the last few decades, there have been multidimensional advancements in language assessment. Some of the advancements have been in the direction of developing theoretical models of the construct of language ability, others in the line of measuring that construct, and still some others toward materializing the outcomes of measuring the…
Descriptors: Models, Applied Linguistics, Language Tests, Psychometrics
Elder, Cathie; Knoch, Ute; Barkhuizen, Gary; von Randow, Janet – Language Assessment Quarterly, 2005
Research on the utility of feedback to raters in the form of performance reports has produced mixed findings (Lunt, Morton, & Wigglesworth, 1994; Wigglesworth, 1993) and has thus far been trialled only in oral assessment contexts. This article reports on a study investigating raters' attitudes and responsiveness to feedback on their ratings of an…
Descriptors: Feedback (Response), Scripts, Test Bias, Writing Evaluation
