Automated Critique of Early Programming Antipatterns

Document Type

Conference Proceeding

Publication Date

February 2019

Department

Department of Computer Science

Abstract

The introductory programming lab, with its small cycles of teaching, coding, testing, and instructor critique, is an extraordinarily productive learning experience for novice programmers. We wish to extend the availability of such critique through automation, capturing the essence of the interaction between student and instructor as closely as possible. Integrated Development Environments and Automated Grading Systems provide constant feedback through static analysis and unit testing, but we also wish to tailor automated feedback to acknowledge issues that commonly recur among novice programmers, in keeping with the practice of a human instructor. We argue that the kinds of mistakes novice programmers make, and the way those mistakes are reported to them, deserve special care. In this paper we provide examples of early programming antipatterns that have arisen from our teaching experience, and describe different ways of identifying and dealing with them automatically through our tool WebTA. Novice students may produce code that is close to a correct solution but contains syntactic errors; WebTA attempts to salvage the promising portions of the student’s submission and to suggest repairs that are more meaningful than typical compiler error messages. Alternatively, a student misunderstanding may result in well-formed code that passes unit tests yet contains clear design flaws; through additional analysis, WebTA can identify and flag these flaws. Finally, certain types of antipattern can be anticipated and flagged by the instructor, based on the context of the course and the programming exercise; WebTA allows for customizable critique triggers and messages.
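
To make the notion of an early programming antipattern concrete, the sketch below shows one pattern that commonly appears in introductory courses, a redundant boolean return, alongside the equivalent idiomatic form. This example is illustrative only; it is not taken from the paper, and the class and method names are hypothetical.

```java
// Illustrative sketch of a common novice antipattern (not drawn from the paper).
// A critique tool in the spirit of WebTA could flag the redundant if/else and
// suggest returning the boolean expression directly.
public class AntipatternExample {

    // Antipattern: well-formed code that passes unit tests,
    // but the if/else around the boolean expression is redundant.
    public static boolean isEvenVerbose(int n) {
        if (n % 2 == 0) {
            return true;
        } else {
            return false;
        }
    }

    // Idiomatic repair: return the boolean expression itself.
    public static boolean isEven(int n) {
        return n % 2 == 0;
    }

    public static void main(String[] args) {
        // Both versions behave identically, so unit tests alone cannot
        // distinguish them; design-level critique is needed to flag the first.
        System.out.println(isEvenVerbose(4) == isEven(4)); // true
        System.out.println(isEvenVerbose(7) == isEven(7)); // true
    }
}
```

Because the two methods are behaviorally equivalent, this is exactly the kind of flaw that unit testing misses and that an instructor-authored critique rule would need to catch.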

Publication Title

SIGCSE '19: Proceedings of the 50th ACM Technical Symposium on Computer Science Education
