In-vehicle air gesture design: impacts of display modality and control orientation
Document Type
Article
Publication Date
9-14-2023
Department
Department of Cognitive and Learning Sciences
Abstract
The number of crashes caused by visual distraction highlights a need for non-visual displays in in-vehicle information systems (IVIS). Audio-supported air gesture controls can address this problem. Twenty-four young drivers participated in our experiment using a driving simulator with six gesture prototypes—3 display modalities (visual-only, visual/auditory, and auditory-only) × 2 control orientations (horizontal and vertical). Data collected included lane departures, eye glance behavior, secondary task performance, and driver workload. Results showed that the auditory-only displays yielded significantly fewer lane departures and lower perceived workload. A tradeoff between eyes-on-road time and secondary task completion time was also observed for the auditory-only display: it was the safest, but slowest, of the prototypes. Vertical controls (direct manipulation) showed significantly lower workload than horizontal controls (mouse metaphor) but did not differ on performance measures. Experimental results are discussed in the context of multiple resource theory, along with design guidelines for future implementation.
Publication Title
Journal on Multimodal User Interfaces
Recommended Citation
Sterkenburg, J., Landry, S., FakhrHosseini, S., & Jeon, M. (2023). In-vehicle air gesture design: impacts of display modality and control orientation. Journal on Multimodal User Interfaces. http://doi.org/10.1007/s12193-023-00415-8
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p2/126