Learning Navigation Tasks from Demonstration for Semi-Autonomous Remote Operation of Mobile Robots

Document Type

Conference Proceeding

Publication Date

9-19-2018

Abstract

© 2018 IEEE. Mobile robots are valuable tools for search and rescue missions, especially in hazardous or inaccessible areas. These systems have the potential to address a wide variety of tasks that can arise during search and rescue missions. Effective remote operation using these vehicles requires sufficient situational awareness. In practice, communication quality often does not permit transferring large amounts of information to the operator in a timely manner to provide that awareness. Sharing autonomy between the vehicle and the human can address this limitation by offloading decision making on low-level actions from the operator to the vehicle's on-board computers. Sending occasional high-level commands to the vehicle is then sufficient for uninterrupted operation. Explicitly designing and tuning a control or decision-making algorithm for a specific task and environment may not always be feasible in the short preparation time available for search and rescue in that environment. In this paper, we propose a deep learning framework for quickly training mobile robots to perform navigational tasks and facilitate remote operation. Two deep network models were trained on a hallway navigation task demonstrated by a human expert: one learns action values for observation-action pairs, and the other classifies observations into action classes. Our evaluations and tests on the hallway navigation task demonstrated that learning action values results in policies that generalize better than the classification method. The video at https://youtu.be/wwGHnjRzXTQ demonstrates the implementation of this method.
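The contrast the abstract draws between the two models can be sketched as follows. This is a minimal, hypothetical NumPy illustration (the names, shapes, and value target are assumptions, not details from the paper): both heads score each candidate action from an observation, but the classification model is trained with cross-entropy against the expert's chosen action, while the action-value model regresses the demonstrated action's score toward a return target.

```python
import numpy as np

# Hypothetical sketch of the two learning objectives; shapes, the three
# action classes, and the value target of 1.0 are illustrative assumptions.

rng = np.random.default_rng(0)
N_ACTIONS = 3                         # e.g. forward, turn-left, turn-right
obs = rng.normal(size=(4, 8))         # batch of 4 observation feature vectors
W = rng.normal(size=(8, N_ACTIONS))   # a linear scoring head for brevity
scores = obs @ W                      # (4, 3): one score per action

# (a) Classification: softmax over actions, cross-entropy against the
# action the human expert demonstrated for each observation.
def softmax(x):
    z = x - x.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

probs = softmax(scores)
expert_actions = np.array([0, 2, 1, 0])
cls_loss = -np.log(probs[np.arange(4), expert_actions]).mean()

# (b) Action values: treat the same scores as Q(observation, action) and
# regress the demonstrated action's value toward a target return.
q_demo = scores[np.arange(4), expert_actions]
value_loss = ((q_demo - 1.0) ** 2).mean()

# In either case the deployed policy acts greedily over the head's outputs.
policy = scores.argmax(axis=1)
```

The value-based objective attaches a scalar quality to every observation-action pair rather than only a label, which is one plausible reason such policies could generalize better off the expert's exact trajectories.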

Publication Title

2018 IEEE International Symposium on Safety, Security, and Rescue Robotics, SSRR 2018
