Reinforcement learning-based adaptive transmission in time-varying underwater acoustic channels

Department of Electrical and Computer Engineering, Center for Cyber-Physical Systems


This paper studies adaptive transmission in an underwater acoustic (UWA) point-to-point communication system that operates on an epoch-by-epoch basis over a long time horizon. A fixed amount of information bits arrives periodically at the transmitter data queue and waits to be transmitted in a number of packets within each epoch. To trade off energy consumption against transmission latency, the transmitter decides the transmission action at the beginning of each epoch, namely whether to transmit, the transmission power, and the modulation-and-coding parameters, based on the data queue status and the predicted channel conditions in the current and future epochs. To capture both the fast fading and the large-scale shadowing of UWA channels, the channel within each epoch is characterized by a compound Nakagami-lognormal distribution, and the evolution of the distribution parameters is modeled as an unknown Markov process. Given that the channel can only be observed during active transmissions, we formulate the adaptive transmission problem as a partially observable Markov decision process and develop an online algorithm within a model-based reinforcement learning framework. The algorithm recursively estimates the channel model parameters, tracks the channel dynamics, and computes the optimal transmission action that minimizes a long-term system cost. Emulation results based on channel measurements from two field experiments demonstrate that the proposed algorithm performs close to a benchmark method that assumes perfect and non-causal channel knowledge.
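As a concrete illustration of the compound Nakagami-lognormal channel model mentioned in the abstract, the sketch below draws per-epoch channel gains whose mean power follows lognormal large-scale shadowing while the small-scale fading is Nakagami-m (equivalently, Gamma-distributed instantaneous power). This is a minimal illustrative sketch: the parameter values (the Nakagami shape `m` and the shadowing mean and spread in dB) are hypothetical placeholders, not the values estimated from the paper's field experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_channel_gain(m=1.5, mu_db=0.0, sigma_db=4.0, n=1):
    """Sample amplitude gains from a compound Nakagami-lognormal channel.

    Hypothetical parameters:
      m        -- Nakagami shape (fast-fading severity)
      mu_db    -- mean of the lognormal shadowing term, in dB
      sigma_db -- standard deviation of the shadowing term, in dB
    """
    # Large-scale shadowing: the mean power Omega of each epoch is lognormal.
    omega = 10.0 ** (rng.normal(mu_db, sigma_db, size=n) / 10.0)
    # Small-scale fading: Nakagami-m amplitude is equivalent to
    # Gamma-distributed power with shape m and scale Omega/m.
    power = rng.gamma(shape=m, scale=omega / m)
    return np.sqrt(power)  # amplitude gain
```

In a simulation of the epoch-by-epoch setting, the shadowing parameters would themselves evolve according to the (unknown) Markov process the algorithm tracks; here they are held fixed for simplicity.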

Publisher's Statement

Copyright 2017 IEEE. Publisher's version of record: https://doi.org/10.1109/ACCESS.2017.2784239

Publication Title

IEEE Access