Date of Award

2026

Document Type

Campus Access Dissertation

Degree Name

Doctor of Philosophy in Environmental Engineering (PhD)

Administrative Home Department

Department of Civil, Environmental, and Geospatial Engineering

Advisor 1

Michelle Jarvie-Eggart

Advisor 2

Leo Ureel II

Committee Member 1

Kedmon Hungwe

Committee Member 2

Brian Barkdoll

Abstract

Automated feedback systems are now a common part of introductory programming courses, particularly in large-enrollment settings where timely instructor feedback is difficult to provide. While these systems have been associated with improvements in performance, much of the existing work has focused on outcomes such as correctness and efficiency, with less attention to how feedback functions during learning. There is limited understanding of how students interpret and use automated feedback, and of how these processes relate to the development of programming self-efficacy.

This dissertation examines how automated code critiquers influence programming self-efficacy among first-year engineering students, with attention to both patterns of change and the processes through which those changes occur. Drawing on social cognitive theory and formative feedback research, the study approaches feedback not as something that directly produces learning, but as information that must be interpreted and acted upon within ongoing problem-solving.

A mixed-methods approach was used across three studies. The first examines how prior programming experience and baseline confidence relate to changes in programming self-efficacy. The second considers whether the use of an automated code critiquer is associated with differences in self-efficacy across gender. The third focuses on how students engage with and make sense of feedback during programming tasks.

The findings show that exposure to automated feedback is associated with changes in programming self-efficacy, but these changes vary across students. Quantitative results indicate that prior experience and starting confidence shape how students respond to feedback over time, while no independent gender-by-intervention effect is observed when baseline differences are considered. Qualitative analysis provides further insight, showing that students engage with feedback through an ongoing process of interpretation. Students differ not in whether they use feedback, but in how they understand it, how they respond to uncertainty, and how they regulate confidence during problem-solving.

These findings indicate that the effects of automated feedback are not determined by access alone, but by how students make sense of and use feedback during learning. This work shows that understanding feedback requires attention to how it is interpreted in context, and that differences in self-efficacy are shaped through these interactions. By focusing on these processes, this dissertation offers a more grounded explanation of how programming self-efficacy develops and points to the need for feedback systems that better support how students engage with, interpret, and persist through feedback during learning.

Available for download on Saturday, October 31, 2026
