Decentralized Q-Learning Supervisory Control for Coordinated Multi-Loop Tuning in Pump Stations
Document Type
Article
Publication Date
3-1-2026
Abstract
This paper introduces a reinforcement learning-based supervisory control architecture that oversees multiple Recursive Least Squares (RLS)-based self-tuning pump controllers and determines when each loop is permitted to adapt its gains. The supervisor learns adaptation policies that minimize interaction between loops while preserving responsiveness to changing hydraulic conditions. A two-loop pump station simulation is used to evaluate performance under product changes and transient flow disturbances. The results show that the supervisory layer reduces the number of simultaneous adaptation events by over 70%, leading to a 32% lower pressure-tracking error and 45% fewer gain-induced oscillations compared to conventional independent adaptive control. The reinforcement learning policy converges within 15 training episodes, yielding stable adaptation scheduling and seamless transitions. The key novelty of this work lies in introducing decentralized reinforcement-learning-based coordination for adaptive pump control, enabling supervisory decision-making that actively prevents interference between controllers during transients. This approach provides a scalable and lightweight solution for coordinating multi-loop pump stations, enhancing robustness and operational performance in real-world pipeline systems.
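The core mechanism in the abstract, a Q-learning supervisor that decides which loop may adapt at any given step, can be illustrated with a minimal tabular sketch. Everything below is an assumption for illustration only (the state encoding, the reward, and all names are hypothetical), not the paper's implementation: here the state is a tuple of per-loop "in transient" flags, and the action set is "freeze all" or "permit exactly one loop to adapt", which by construction rules out simultaneous adaptation events.

```python
import random

random.seed(0)

class AdaptationSupervisor:
    """Tabular Q-learning gate over adaptation permissions (illustrative sketch).

    Actions: 0 = freeze all loops; a = i+1 = permit only loop i to adapt.
    """

    def __init__(self, n_loops=2, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n_actions = n_loops + 1
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}  # (state, action) -> estimated value

    def choose(self, state):
        # Epsilon-greedy action selection over the current Q-table.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        vals = [self.q.get((state, a), 0.0) for a in range((self.n_actions))]
        return vals.index(max(vals))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q.get((next_state, a), 0.0)
                        for a in range(self.n_actions))
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# Toy training: state = (loop0_in_transient, loop1_in_transient).
# Hypothetical reward: +1 for permitting exactly the loop whose hydraulics
# changed, -1 otherwise (including freezing when adaptation is needed).
sup = AdaptationSupervisor(n_loops=2)
states = [(True, False), (False, True)]
for _ in range(200):
    for s in states:
        for a in range(sup.n_actions):
            needed = 1 if s[0] else 2
            r = 1.0 if a == needed else -1.0
            sup.update(s, a, r, s)

sup.epsilon = 0.0  # act greedily after training
# Greedy policy: permit loop 0 when it is in transient, loop 1 otherwise.
```

In this toy setting the supervisor learns to grant adaptation only to the loop currently experiencing a transient, which is one plausible way the abstract's "adaptation scheduling" could reduce interference between controllers.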
Publication Title
Machines
Recommended Citation
Brattley, D., & Weaver, W. (2026). Decentralized Q-Learning Supervisory Control for Coordinated Multi-Loop Tuning in Pump Stations. Machines, 14(3). http://doi.org/10.3390/machines14030299
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p2/2518