Audio-based emotion estimation for interactive robotic therapy for children with autism spectrum disorder
Recently, efforts to develop speech recognition systems and robots have come to fruition, with an abundance of applications in our daily lives. However, we are still far from achieving natural human-robot interaction, because robots do not take the emotional state of the speaker into account. The objective of this research is to create an automatic emotion classifier integrated with a robot, so that the robot can understand the emotional state of a human user by analyzing the user's speech signals. This is particularly relevant to assistive robotics that tailors therapeutic techniques toward assisting children with autism spectrum disorder (ASD). Over the past two decades, the number of children diagnosed with ASD has been rapidly increasing, yet clinical and societal support has not kept pace with the need. Finding alternative, affordable, and accessible means of therapy and assistance has therefore become a growing concern. Improving audio-based emotion prediction for children with ASD will allow the robotic system to properly assess the child's engagement level and modify its responses to maximize the quality of robot-child interaction and sustain an interactive learning environment.
2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI)
Kim, Jonathan C.; Azzi, Paul; Jeon, Myounghoon; Howard, Ayanna; and Park, Chung Hyuk, "Audio-based emotion estimation for interactive robotic therapy for children with autism spectrum disorder" (2017). Department of Cognitive and Learning Sciences Publications. 16.