Peer reviewed
ERIC Number: EJ1122148
Record Type: Journal
Publication Date: 2016
Pages: 21
Abstractor: As Provided
ISSN: 1049-4820
A Multi-Peer Assessment Platform for Programming Language Learning: Considering Group Non-Consensus and Personal Radicalness
Wang, Yanqing; Liang, Yaowen; Liu, Luning; Liu, Ying
Interactive Learning Environments, v24 n8 p2011-2031 2016
Multi-peer assessment has often been used by teachers to reduce personal bias and make assessment more reliable. This study presents the design and development of a multi-peer assessment system that detects and addresses two common issues in such systems: non-consensus among group members and personal radicalness in some assessments. A multi-peer assessment model is proposed to address these issues; it captures the roles, activities, and data structures of a typical multi-peer assessment setting and can be generalized to other scenarios. We implemented the model in a multi-peer code review system and conducted several empirical experiments in programming language classes. The studies showed that the model improves student learning outcomes significantly more than single-peer assessment does. We also used statistical measures to detect the non-consensus and radicalness issues that often occur in the model. The results reveal many insights and provide valuable guidance for teachers implementing a multi-peer assessment system.
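The record does not spell out which statistical measures the authors used, but the two issues the abstract names can be illustrated with a minimal sketch: flagging non-consensus when a submission's peer scores spread widely, and flagging a radical reviewer whose score sits far from the rest. The cutoffs `SD_LIMIT`, `Z_LIMIT`, and `MIN_GAP` below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of detecting non-consensus and radicalness among
# several numeric peer scores for one submission. All thresholds
# are assumed for illustration; the paper's actual measures are
# not given in this record.
from statistics import mean, stdev

SD_LIMIT = 10.0  # assumed: scores spread wider than this => non-consensus
Z_LIMIT = 1.5    # assumed: this many SDs from the other reviewers => radical
MIN_GAP = 15.0   # assumed: minimum absolute gap (points) to count as radical

def non_consensus(scores):
    """Flag group non-consensus when peer scores disagree widely."""
    return len(scores) > 1 and stdev(scores) > SD_LIMIT

def radical_reviewers(scores):
    """Return indices of reviewers whose score is far from the others'."""
    flagged = []
    for i, s in enumerate(scores):
        others = scores[:i] + scores[i + 1:]
        if len(others) > 1 and stdev(others) > 0:
            gap = abs(s - mean(others))
            if gap > MIN_GAP and gap / stdev(others) > Z_LIMIT:
                flagged.append(i)
    return flagged
```

For example, the scores `[90, 88, 40, 85]` would be flagged for non-consensus, with the third reviewer identified as radical, while `[85, 87, 88, 86]` triggers neither check.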
Routledge. Available from: Taylor & Francis, Ltd. 325 Chestnut Street Suite 800, Philadelphia, PA 19106. Tel: 800-354-1420; Fax: 215-625-2940; Web site:
Publication Type: Journal Articles; Tests/Questionnaires; Reports - Research
Education Level: Higher Education; Postsecondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A