Document Type

Article

Publication Date

Fall 2023

Department

Department of Cognitive and Learning Sciences

Abstract

The development of AI systems represents a significant investment of funds and time. Assessment is necessary to determine whether that investment has paid off. Empirical evaluation of systems in which humans and AI systems act interdependently to accomplish tasks must provide convincing evidence that the work system is learnable and that the technology is usable and useful. We argue that the assessment of human–AI (HAI) systems must be not only effective but also efficient. Bench testing of a prototype of an HAI system cannot require an extensive series of large-scale experiments with complex designs. Some of the constraints imposed in traditional laboratory research are simply not appropriate for the empirical evaluation of HAI systems. We present requirements for avoiding "unnecessary rigor," covering study design, research methods, statistical analyses, and online experimentation. These should be applicable to all research intended to evaluate the effectiveness of HAI systems.

Publisher's Statement

© 2023 The Authors. AI Magazine published by Wiley Periodicals LLC on behalf of the Association for the Advancement of Artificial Intelligence. Publisher's version of record: https://doi.org/10.1002/aaai.12108

Publication Title

AI Magazine

Version

Publisher's PDF
