Date of Award
2026
Document Type
Open Access Dissertation
Degree Name
Doctor of Philosophy in Applied Cognitive Science and Human Factors (PhD)
Administrative Home Department
Department of Psychology and Human Factors
Advisor 1
Elizabeth S. Veinott
Committee Member 1
Shane T. Mueller
Committee Member 2
Briana C. Bettin
Committee Member 3
Giridhar Reddy Bhojja
Committee Member 4
Pasi Lautala
Abstract
Artificial intelligence is increasingly embedded in safety-critical systems, including transportation, where people must make fast decisions under uncertainty and high risk. As AI takes on a larger role in supporting human judgment, researchers and designers must evaluate more than just its technical performance; they must ask whether people can understand the system and use it effectively in high-risk settings.
This dissertation presents two distinct evaluation methodologies. The first, a Human-Centered Evaluation, examines how intelligent in-vehicle alert systems influence driver decision behavior across two experiments. In Experiment 1 (n=24) and Experiment 2 (n=26), participants completed a driving simulator study in which our staged intelligent alerts significantly reduced driver speeds at both Active and Passive Highway-Rail Grade Crossings (HRGCs). The second methodology, Risk Evaluation, examines how to improve people's plan evaluations. In Experiment 3 (n=138 individuals; n=30 in groups), we found that the Premortem was more effective than other reasoning techniques for evaluating plan confidence. Experiment 4 (n=41) extended this work by testing a Structured Premortem method designed specifically for plans that integrate emerging AI technologies.
Together, these methodologies offer a more complete framework for evaluating AI-driven systems: the human-centered experiments test whether technology meaningfully changes behavior under realistic conditions, while the risk-evaluation experiments test whether people can anticipate the broader consequences of deploying that technology. This dissertation contributes empirical evidence on how intelligent alerts shape driver decision-making and provides practical tools for proactive risk identification in emerging technologies. Ultimately, it argues that rigorous AI evaluation in safety-critical environments demands both behavioral validation and structured foresight.
Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
Recommended Citation
Kannan, Anusha, "DESIGNING WITH FORESIGHT: PREMORTEM AND HUMAN-CENTERED EVALUATION OF EMERGING TECHNOLOGIES", Open Access Dissertation, Michigan Technological University, 2026.