Peer reviewed
ERIC Number: EJ1419708
Record Type: Journal
Publication Date: 2024
Pages: 31
Abstractor: As Provided
EISSN: 1946-6226
A Model of How Students Engineer Test Cases with Feedback
Austin M. Shin; Ayaan M. Kazerouni
ACM Transactions on Computing Education, v24 n1 Article 1 2024
Background and Context: Students' programming projects are often assessed on the basis of their tests as well as their implementations, most commonly using test adequacy criteria like branch coverage or, in some cases, mutation analysis. As a result, students are implicitly encouraged to use these tools during their development process (i.e., so they have awareness of the strength of their own test suites).

Objectives: Little is known about how students choose test cases for their software while being guided by these feedback mechanisms. We aim to explore the interaction between students and commonly used testing feedback mechanisms (in this case, branch coverage and mutation-based feedback).

Method: We use grounded theory to explore this interaction. We conducted 12 think-aloud interviews with students as they completed a series of software testing tasks, each of which involved a different feedback mechanism. Interviews were recorded, transcripts were analyzed, and we present the overarching themes that emerged from our analysis.

Findings: Our findings are organized into a process model describing how students completed software testing tasks while being guided by a test adequacy criterion. Program comprehension strategies were commonly employed to reason about feedback and devise test cases. Mutation-based feedback tended to be cognitively overwhelming for students, and they resorted to weaker heuristics to address this feedback.

Implications: In the presence of testing feedback, students appeared to treat program coverage, rather than problem coverage, as the goal of testing. While test adequacy criteria can be useful for assessing software tests, we must consider whether they represent good goals for testing, and whether our current methods of practice and assessment encourage poor testing habits.
Association for Computing Machinery. 2 Penn Plaza Suite 701, New York, NY 10121. Tel: 800-342-6626; Tel: 212-626-0500; Fax: 212-944-1318.
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A