Reliability-based reinforcement learning under uncertainty
Document Type
Conference Proceeding
Publication Date
11-3-2020
Department
Department of Mechanical Engineering-Engineering Mechanics
Abstract
Despite numerous advances, reinforcement learning remains far from widespread acceptance for autonomous controller design compared to classical methods, owing to its limited ability to effectively tackle uncertainty. The reliance on an absolute or deterministic reward as the metric for the optimization process renders reinforcement learning highly susceptible to changes in problem dynamics. We introduce a novel framework that effectively quantifies the uncertainty in the design space and induces robustness in controllers by switching to a reliability-based optimization routine. A model-based approach is used to improve the data efficiency of the method while predicting the system dynamics. We demonstrate the stability of the learned neurocontrollers in both static and dynamic environments on classical reinforcement learning tasks such as Cart Pole balancing and the Inverted Pendulum.
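The full paper is not reproduced in this record, but the core idea of replacing a deterministic reward with a reliability-based objective can be sketched. The snippet below is a minimal illustration, not the authors' method: it assumes a mean-minus-beta-sigma reliability criterion and a toy one-parameter return function standing in for real Cart Pole rollouts under sampled dynamics uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(policy_param, dynamics_noise):
    """Toy surrogate for an episode return: a deterministic score
    perturbed by uncertain dynamics (stand-in for Cart Pole rollouts)."""
    return -(policy_param - 1.0) ** 2 + dynamics_noise * policy_param

def reliability_objective(policy_param, n_samples=500, beta=2.0):
    """Assumed reliability-style score: mean return minus beta standard
    deviations, penalizing parameters that are good on average but
    fragile under dynamics uncertainty."""
    noise = rng.normal(0.0, 0.5, size=n_samples)  # sampled dynamics uncertainty
    returns = rollout_return(policy_param, noise)
    return returns.mean() - beta * returns.std()

# Compare the deterministic optimum with the reliability-based one.
candidates = np.linspace(-2.0, 3.0, 201)
det_best = candidates[np.argmax([rollout_return(p, 0.0) for p in candidates])]
rel_best = candidates[np.argmax([reliability_objective(p) for p in candidates])]
print(f"deterministic optimum: {det_best:.2f}, reliability-based: {rel_best:.2f}")
```

Under this toy model, the reliability-based optimizer prefers a smaller policy parameter whose return varies less with the uncertain dynamics, which mirrors the robustness argument made in the abstract.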
Publication Title
Proceedings of the ASME Design Engineering Technical Conference
ISBN
9780791884003
Recommended Citation
Wang, Z., & Patwardhan, N. (2020). Reliability-based reinforcement learning under uncertainty. Proceedings of the ASME Design Engineering Technical Conference, 11A-2020. https://doi.org/10.1115/DETC2020-22019
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p/14450
Publisher's Statement
© 2020 American Society of Mechanical Engineers (ASME). All rights reserved. Publisher’s version of record: https://doi.org/10.1115/DETC2020-22019