Research on human-centered computing has rapidly expanded from user interface design and evaluation to the creation of entire user experiences and even novel lifestyles and human values. The design, construction, and evaluation of computational technologies should therefore account for people’s capabilities, limitations, and environments, and should reflect how these technologies affect society. Part of the Institute of Computing and Cybersystems, the Center for Human-Centered Computing (HCC) leverages broad expertise in people and technology to help lead this timely effort. Specifically, we integrate art, people, design, technology, and experiences, and conduct novel experiments and research across multiple areas of human-centered computing. HCC prepares Michigan Tech students to become future creators with balanced viewpoints by educating them about computing, about people, and about the interactions between the two.
Peruse a selection of publications from the Center for Human-Centered Computing below. Visit the ICC homepage for more information about the Institute and its Centers.
Submissions from 2018
Robotic motion learning framework to promote social engagement, Rachel Burns, Myounghoon Jeon, and Chung Hyuk Park
Situation awareness performance in healthy young adults is associated with a serotonin transporter gene polymorphism, Yeimy González-Giraldo, Rodrigo E. González-Reyes, Shane T. Mueller, Brian J. Piper, Ana Adan, and Diego A. Forero
Robot-assisted socio-emotional intervention framework for children with Autism Spectrum Disorder, Hifza Javed, Myounghoon "Philart" Jeon, Ayanna Howard, and Chung Hyuk Park
“Musical Exercise” for people with visual impairments: A preliminary study with the blindfolded, Ridwan Ahmed Khan, Myounghoon Jeon, and Tejin Yoon
The assessment of driver compliance at highway-railroad grade crossings based on naturalistic driving study data, Pasi T. Lautala, Modeste Muhire, Alawudin Salim, Myounghoon Jeon, David Nelson, and Aaron Dean
The effects of peripheral vision and light stimulation on distance judgments through HMDs, Bochao Li, James Walker, and Scott A. Kuhl
Using naturalistic driving study data to investigate, Alawudin Salim, Myounghoon Jeon, Pasi T. Lautala, and David Nelson
Examining the learnability of auditory displays: Music, earcons, spearcons, and lyricons, Kay Tislar, Zackery Duford, Brittany Nelson, Madeline Peabody, and Myounghoon Jeon
The impact of word, multiple word, and sentence input on virtual keyboard decoding performance, Keith Vertanen, Crystal Fletcher, Dylan Gaines, Jacob Gould, and Per Ola Kristensson
Submissions from 2017
The influence of robot design on acceptance of social robots, Jaclyn Barnes, Maryam FakhrHosseini, Myounghoon Jeon, Chung Hyuk Park, and Ayanna Howard
Child-Robot theater: STEAM education in an afterschool program, Jaclyn Barnes, Maryam FakhrHosseini, Eric Vasey, Zackery Duford, Joseph Ryan, and Myounghoon Jeon
Explaining explanation, part 2: Empirical foundations, Robert R. Hoffman, Shane Mueller, and Gary Klein
Emotions and affect in human factors and human-computer interaction, Myounghoon "Philart" Jeon
Robotic arts: Current practices, potentials, and implications, Myounghoon "Philart" Jeon
Cognitive perspectives on opinion dynamics: The role of knowledge in consensus formation, opinion divergence, and group polarization, Shane T. Mueller and Yin Yin Tan
Visual–inertial displacement sensing using data fusion of vision‐based displacement with acceleration, Jong-Woong Park, Do-Soo Moon, Hyungchul Yoon, Fernando Gomez, Billie F. Spencer Jr., and Jong R. Kim
Evaluating path planning in human-robot teams: Quantifying path agreement and mental model congruency, Brandon S. Perelman, Shane Mueller, and Kristin E. Schaefer
Roller coaster park manager by day, problem solver by night: Effect of video game play on problem solving, Kaitlyn Marie Roose and Elizabeth Veinott
Promoting industrial robotics education by curriculum, robotic simulation software, and advanced robotic workcell development and implementation, Aleksandr Sergeyev, Nasser Alaraje, Siddharth Y. Parmar, Scott A. Kuhl, Vincent T. Druschke, and J. Hooker
Linking actions and objects: Context-specific learning of novel weight priors, Kevin Trewartha and J. Randall Flanagan
Towards improving predictive AAC using crowdsourced dialogues and partner context, Keith Vertanen
Efficient typing on a visually occluded keyboard, James Walker, Bochao Li, Keith Vertanen, and Scott A. Kuhl
Submissions from 2016
Multisensory robotic therapy to promote natural emotional interaction for children with ASD, Rachel Bevill, Paul Azzi, Matthew Spadafora, Chung Hyuk Park, Hyung Jung Kim, JongWon Lee, Kazi Raihan, Myounghoon Jeon, and Ayanna Howard
Interactive robotic framework for multi-sensory therapy for children with Autism spectrum disorder, Rachel Bevill, Chung Hyuk Park, Hyung Jung Kim, JongWon Lee, Ariena Rennie, Myounghoon Jeon, and Ayanna Howard
A survey on hardware and software solutions for multimodal wearable assistive devices targeting the visually impaired, Ádám Csapó, György Wersényi, and Myounghoon Jeon
What to teach in HCI?: How to educate HCI students to envision the future of human beings, not the future of technology?, Myounghoon Jeon
Getting active with passive crossings: Investigating the use of in-vehicle auditory alerts for highway-rail grade crossings, Steven Landry, Myounghoon Jeon, Pasi T. Lautala, and David Nelson
The effects of artificially reduced field of view and peripheral frame stimulation on distance judgments in HMDs, Bochao Li, Anthony Nordman, James Walker, and Scott A. Kuhl
Human-Car confluence: “Socially-Inspired driving mechanisms”, Andreas Riener, Myounghoon Jeon, and Alois Ferscha
Distinct contributions of explicit and implicit memory processes to weight prediction when lifting objects and judging their weights: An aging study, Kevin Trewartha and J. Randall Flanagan
Inviscid Text Entry and Beyond, Keith Vertanen, Mark Dunlop, James Clawson, Per Ola Kristensson, and Ahmed Sabbir Arif
Submissions from 2015
Estimation of drivers' emotional states based on neuroergonomic equipment: An exploratory study using fNIRS, Maryam FakhrHosseini, Myounghoon Jeon, and Rahul Bose
An Investigation on Driver Behaviors and Eye-Movement Patterns at Grade Crossings Using a Driving Simulator, Maryam FakhrHosseini, Myounghoon Jeon, Pasi T. Lautala, and David Nelson
Regulating drivers’ aggressiveness by sonifying emotional data, Maryam FakhrHosseini, Paul Kirby, and Myounghoon Jeon
An exploration of semiotics of new auditory displays: A comparative analysis with visual displays, Myounghoon Jeon
Development and evaluation of emotional robots for children with Autism spectrum disorders, Myounghoon Jeon
Embarrassment as a divergent process for creative arts in the immersive virtual environment, Myounghoon Jeon
Sorry, I’m late; I’m not in the mood: Negative emotions lengthen driving time, Myounghoon Jeon and Jayde Croschere
Menu navigation with in-vehicle technologies: Auditory menu cues improve dual task performance, preference, and workload, Myounghoon Jeon, Thomas M. Gable, Benjamin K. Davison, Michael A. Nees, Jeff Wilson, and Bruce N. Walker
Report on the in-vehicle auditory interactions workshop: Taxonomy, challenges, and approaches, Myounghoon Jeon, T. Hermann, P. Bazilinskyy, J. Hammerschmidt, K. A. E. Wolf, I. Alvarez, et al.
Cultural differences in preference of auditory emoticons: USA and South Korea, Myounghoon Jeon, Ju-Hwan Lee, Jason Sterkenburg, and Christopher Plummer
Technologies expand aesthetic dimensions: Visualization and sonification of embodied Penwald drawings, Myounghoon Jeon, Steven Landry, Joseph Ryan, and James Walker
The effects of social interactions with in-vehicle agents on a driver's anger level, driving performance, situation awareness, and perceived workload, Myounghoon Jeon, Bruce N. Walker, and Thomas M. Gable
Subjective assessment of in-vehicle auditory warnings for rail grade crossings, Steven Landry, Jayde Croschere, and Myounghoon Jeon
Robotic framework with multi-modal perception for physio-musical interactive therapy for children with autism, Chung Hyuk Park, Myounghoon Jeon, and Ayanna Howard
Robotic framework for music-based emotional and social engagement with children with Autism, Chung Hyuk Park, Neetha Pai, Jayashan Bakthavatchalam, Yaojie Li, Myounghoon Jeon, and Ayanna Howard
Lyricon (Lyrics + Earcons) improves identification of auditory cues, Yuanjing Sun and Myounghoon Jeon
Interactive Sonification Markup Language (ISML) for efficient motion-sound mappings, James Walker, Michael T. Smith, and Myounghoon Jeon
Robotic sonification for promoting emotional and social interactions of children with ASD, Ruimin Zhang, Myounghoon Jeon, Chung Hyuk Park, and Ayanna Howard
Submissions from 2014
Training change detection leads to substantial task-specific improvement, Martin Buschkuehl, Susanne M. Jaeggi, Shane Mueller, and Priti Shah