How is nonverbal auditory information processed? Revisiting existing models and proposing a preliminary model
The use of multimodal displays is becoming increasingly prevalent in Human Factors and Human-Computer Interaction. Existing information processing models and theories predict the benefits of multimodality in user interfaces. While these models have been refined with respect to vision, they still lack granularity with respect to audition. The existing models mainly account for verbal processing in terms of representation, encoding, and retrieval, but they do not sufficiently explain nonverbal processing. In the present paper, I identify research gaps in how the representative models treat nonverbal information processing at the levels of working memory and attention. I then propose a preliminary conceptual model supported by neural- and behavioral-level evidence, and provide an evaluation of the model along with directions for future work.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Retrieved from: https://digitalcommons.mtu.edu/cls-fp/29
Copyright © 2016, © SAGE Publications. Publisher's version of record: https://doi.org/10.1177/1541931213601351