Fast and precise touch-based text entry for head-mounted augmented reality with variable occlusion
Department of Computer Science
We present the VISAR keyboard: an augmented reality (AR) head-mounted display (HMD) system that supports text entry via a virtualised input surface. Users select keys on the virtual keyboard by imitating the process of single-hand typing on a physical touchscreen display. Our system uses a statistical decoder to infer users’ intended text and to provide error-tolerant predictions. There is also a high-precision fall-back mechanism that lets users indicate which keys should be exempt from the auto-correction process. A unique advantage of leveraging the well-established touch input paradigm is that our system enables text entry with minimal visual clutter on the see-through display, thus preserving the user’s field of view. We iteratively designed and evaluated our system and show that the final iteration supports a mean entry rate of 17.75 wpm with a mean character error rate of less than 1%. This performance represents a 19.6% improvement relative to the state-of-the-art baseline investigated: a gaze-then-gesture text entry technique derived from the system keyboard on the Microsoft HoloLens. Finally, we validate that the system is effective in supporting text entry in a fully mobile usage scenario likely to be encountered in industrial applications of AR HMDs.
ACM Transactions on Computer-Human Interaction (TOCHI)
Dudley, J. J., & Kristensson, P. O. Fast and precise touch-based text entry for head-mounted augmented reality with variable occlusion. ACM Transactions on Computer-Human Interaction (TOCHI). Retrieved from: https://digitalcommons.mtu.edu/michigantech-p/884