Peer reviewed
ERIC Number: EJ1115788
Record Type: Journal
Publication Date: 2016-Nov
Pages: 21
Abstractor: As Provided
ISBN: N/A
ISSN: 0361-0365
EISSN: N/A
A Second Dystopia in Education: Validity Issues in Authentic Assessment Practices
Hathcoat, John D.; Penn, Jeremy D.; Barnes, Laura L.; Comer, Johnathan C.
Research in Higher Education, v57 n7 p892-912 Nov 2016
Authentic assessments used in response to accountability demands in higher education face at least two threats to validity. First, a lack of interchangeability between assessment tasks introduces bias when using aggregate-based scores at an institutional level. Second, reliance on written products to capture constructs such as critical thinking (CT) may introduce construct-irrelevant variance if score variance reflects written communication (WC) skill as well as variation in the construct of interest. Two studies investigated these threats to validity. Student written responses to faculty in-class assignments were sampled from general education courses within an institution. Faculty raters trained to use a common rubric then rated the students' written papers. The first study used hierarchical linear modeling to estimate the magnitude of between-assignment variance in CT scores among 343 student-written papers nested within 18 assignments. About 18% of the total CT variance was attributed to differences in average CT scores across assignments, indicating that assignments were not interchangeable. Approximately 47% of this between-assignment variance was predicted by the extent to which assignments asked students to demonstrate their own perspective. Thus, aggregating CT scores across students and assignments could bias the scores up or down depending on the characteristics of the assignments, particularly perspective-taking. The second study used exploratory factor analysis and squared partial correlations to estimate the magnitude of construct-irrelevant variance in CT scores. Student papers were rated for CT by one group of faculty and for WC by a different group of faculty. Nearly 25% of the variance in CT scores was attributed to differences in WC scores. Score-based interpretations of CT may therefore need to be delimited if observations are obtained solely through written products.
Both studies imply a need to gather additional validity evidence in authentic assessment practices before this strategy is widely adopted among institutions of higher education. The authors also address misconceptions about standardization in authentic assessment practices.
Springer. 233 Spring Street, New York, NY 10013. Tel: 800-777-4643; Tel: 212-460-1500; Fax: 212-348-4505; e-mail: service-ny@springer.com; Web site: http://www.springerlink.com
Publication Type: Journal Articles; Reports - Research
Education Level: Higher Education; Postsecondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A