ERIC Number: ED243321
Record Type: Non-Journal
Publication Date: 1984-Mar
Pages: 17
Abstractor: N/A
Reference Count: N/A
The Construct Validation of Language Tests Using Bias Techniques.
Friedman, Charles B.
A method for asessing second language test validity by modifying and applying statistical test bias techniques to the performance of language learners from different native language groups was developed out of concern for internal test bias, both overall and item-related. Valid tests should contain some items testing differential performance between different native language groups, but global test measurement should also, theoretically, be similar, providing similar measurement of the same underlying constructs with the same degree of accuracy. That is, micro-level sensitivity and macro-level similarity should exist at the same time. In a pilot study of the Test of English as a Foreign Language (TOEFL), results of testing 481 native Chinese speakers and 484 native Arabic speakers were analyzed by the following methods. To investigate macro-level similarities, the Kuder-Richardson-20 statistical method and a formula suggested by Feldt were used, and a separate factor analysis was performed for each language group for each test section using the SSPS program factor PA2. At the micro level, three analyses were used: two item difficulty parameters (transformed item difficulty and Rasch difficulty) and a chi-square analysis of distribution of correct responses across difficulty levels. Results showed that the TOEFL sections do measure similar constructs with the same degree of accuracy, and that individual items are sensitive to native language difference, giving evidence of the construct validity of the test. (MSE)
Publication Type: Speeches/Meeting Papers; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers - Assessments and Surveys: Test of English as a Foreign Language