Audio-based emotion estimation for interactive robotic therapy for children with autism spectrum disorder

Document Type

Conference Proceeding

Publication Date

7-2017

Abstract

Recently, efforts in the development of speech recognition systems and robots have come to fruition, with a proliferation of applications in our daily lives. However, natural interaction between humans and robots remains out of reach, largely because robots do not take the emotional state of the speaker into account. The objective of this research is to create an automatic emotion classifier integrated with a robot, so that the robot can understand the emotional state of a human user by analyzing the user's speech signals. This is particularly relevant to assistive robotics that tailor therapeutic techniques to children with autism spectrum disorder (ASD). Over the past two decades, the number of children diagnosed with ASD has risen rapidly, yet clinical and societal support has not kept pace with their needs. Finding alternative, affordable, and accessible means of therapy and assistance has therefore become increasingly important. Improving audio-based emotion prediction for children with ASD will allow the robotic system to properly assess the child's engagement level and adapt its responses, maximizing the quality of the robot-child interaction and sustaining an interactive learning environment.
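
To illustrate how an audio-based emotion classifier of this kind is commonly assembled, the sketch below extracts acoustic features (MFCCs) from each utterance and trains a conventional classifier on labeled emotion categories. This is a minimal, hedged example, not the specific pipeline from the paper: the feature set, the SVM baseline, the emotion labels, and the file names are all assumptions for illustration.

# Illustrative sketch only: MFCC features and an SVM baseline are assumptions
# chosen for demonstration, not the model described in the paper.
import numpy as np
import librosa
from sklearn.svm import SVC

def extract_features(wav_path, sr=16000, n_mfcc=13):
    """Summarize one speech clip as the mean and std of its MFCCs."""
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_emotion_classifier(labeled_clips):
    """labeled_clips: iterable of (wav_path, emotion_label) pairs."""
    X = np.array([extract_features(path) for path, _ in labeled_clips])
    y = np.array([label for _, label in labeled_clips])
    clf = SVC(kernel="rbf", probability=True)  # simple baseline classifier
    clf.fit(X, y)
    return clf

def predict_emotion(clf, wav_path):
    """Return the predicted emotion label for a new utterance."""
    return clf.predict(extract_features(wav_path).reshape(1, -1))[0]

# Hypothetical usage with a labeled corpus of child speech
# (file names and labels below are placeholders):
# clf = train_emotion_classifier([("clip_001.wav", "engaged"),
#                                 ("clip_002.wav", "neutral")])
# print(predict_emotion(clf, "new_clip.wav"))

In a robot-integrated setting, the predicted label would feed the robot's behavior controller, which can then adjust prompts or pacing when the child appears disengaged; that coupling is outside the scope of this sketch.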

Publisher's Statement

Copyright © 2017, IEEE. Publisher's version of record: https://doi.org/10.1109/URAI.2017.7992881

Publication Title

2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI)
