Introducing an Innovative Tool to Enhance Autograders and Improve Student Learning

ingi
29 August 2023, modified on 18 December 2024

Autograders are frequently used in programming courses with large classes to provide students with automated feedback on their programming assignments. For teachers, however, programming the feedback scripts can be a difficult and time-consuming task. Furthermore, rather than reflecting actual student flaws, feedback scripts are often based on expected outputs or on the teacher's presumptions about the mistakes students will make. We have created a tool that addresses these issues by analyzing the code written by prior students and linking the recurring code patterns it finds to the scores those solutions obtained. Such patterns can thus reflect typically good solutions, or coding flaws found in bad ones. From these patterns, the tool can then automatically construct unit tests that concentrate on frequent mistakes, errors, or poor coding practices. These unit tests can easily be integrated into the feedback script for that assignment, so that future students get feedback on the kinds of errors prior students often made, for example to help them train for their next exam.
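To give a concrete feel for the result, here is a minimal sketch of what one such generated unit test could look like. The mined pattern it checks for (a mutable default argument, a classic Python pitfall) and the `solution.py` file name are hypothetical illustrations, not actual output of our tool.

```python
import ast
import unittest

# Hypothetical path to the student's submission on the autograder.
STUDENT_FILE = "solution.py"


class RecurringFlawTest(unittest.TestCase):
    """Flags a code pattern that was frequent in past low-scoring solutions."""

    def test_no_mutable_default_argument(self):
        # Example mined pattern: defining a function with a mutable
        # default argument, e.g. `def f(x, acc=[])`.
        with open(STUDENT_FILE) as f:
            tree = ast.parse(f.read())
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                for default in node.args.defaults:
                    self.assertNotIsInstance(
                        default, (ast.List, ast.Dict, ast.Set),
                        msg=f"Function '{node.name}' uses a mutable default "
                            "argument, a flaw often seen in past submissions.")


if __name__ == "__main__":
    unittest.main()
```

Because such a test inspects the structure of the submission rather than its runtime behavior, it can point out a risky construct even when the solution happens to pass the functional tests.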

Figure 1: Students taking a Python programming exam using the INGInious autograder.

More technically, here is a brief description of the step-by-step approach our tool follows to improve feedback for our first-year computer science course in Python. First, we collect code samples of our students' exercise solutions from the autograder. Next, we extract the structure of each solution as an abstract syntax tree. We then use pattern mining to identify patterns that recur in the code structure of multiple students. These patterns are analyzed manually, and we select the ones we consider most relevant for providing improved feedback to students. The last step automatically translates the selected patterns into Python unit tests. These tests are then integrated into the INGInious platform, in addition to the already available feedback, to provide more customized feedback to future students. Overall, this approach helps us enhance feedback quality and address the common mistakes, coding flaws, and poor practices observed in students' code.
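To make the first steps more concrete, below is a deliberately simplified sketch, not our actual implementation: it mines single AST node types that recur across submissions, whereas our tool discovers richer structural patterns, and the `min_support` threshold and `mine_frequent_patterns` helper are illustrative names.

```python
import ast
from collections import Counter


def node_types(source: str) -> set[str]:
    """Reduce a submission to the set of AST node types it contains."""
    return {type(node).__name__ for node in ast.walk(ast.parse(source))}


def mine_frequent_patterns(submissions: list[str],
                           min_support: float = 0.8) -> set[str]:
    """Toy pattern mining: keep the node types that occur in at least
    `min_support` of the submissions."""
    counts = Counter()
    for src in submissions:
        counts.update(node_types(src))
    threshold = min_support * len(submissions)
    return {pattern for pattern, n in counts.items() if n >= threshold}


# Two past solutions to the same exercise, as collected from the autograder.
past_submissions = [
    "def mean(xs):\n"
    "    total = 0\n"
    "    for x in xs:\n"
    "        total += x\n"
    "    return total / len(xs)",

    "def mean(xs):\n"
    "    i = total = 0\n"
    "    while i < len(xs):\n"
    "        total += xs[i]\n"
    "        i += 1\n"
    "    return total / len(xs)",
]

# Prints the node types common to both solutions,
# e.g. {'FunctionDef', 'AugAssign', 'Return', ...}
print(mine_frequent_patterns(past_submissions))
```

Patterns selected from this mining step would then be turned into unit tests like the sketch shown earlier and plugged into the assignment's INGInious feedback script.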

Figure 2: Overview of the steps involved in enhancing automated feedback to students in an introductory programming course. We start by compiling exercise data, then generate tests based on the recurrent patterns we discovered, and finally make these tests available to future students on INGInious.

Lienard, Julien; Mens, Kim; Nijssen, Siegfried; et al. Extracting Unit Tests from Patterns Mined in Student Code to Provide Improved Feedback in Autograders. Seminar Series on Advanced Techniques & Tools for Software Evolution (SATToSE), Salerno, Italy, 12-15 June 2023.

Link to the full paper here