Optimizing locality-aware memory management of key-value caches
Document Type
Article
Publication Date
10-20-2016
Department
Department of Computer Science; Center for Scalable Architectures and Systems
Abstract
The in-memory cache system is a performance-critical layer in today's web server architectures, and Memcached is one of the most effective, representative, and prevalent such systems. An important problem is its memory allocation. The default design does not make the best use of memory and is unable to adapt when demand changes, a problem known as slab calcification. This paper introduces locality-aware memory allocation (LAMA), which addresses the problem by first analyzing the locality of Memcached's requests and then reassigning slabs to minimize the miss ratio or the average response time. Evaluating LAMA with various industry and academic workloads, the paper shows that it outperforms existing techniques in steady-state performance, speed of convergence, and the ability to adapt to request-pattern changes and overcome slab calcification. The new solution is close to optimal, achieving over 98 percent of the theoretical potential. Furthermore, LAMA can also be adopted in resource partitioning to guarantee quality-of-service (QoS).
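To make the slab-reassignment idea concrete, the sketch below illustrates one way to allocate slabs across Memcached size classes when a per-class miss-ratio curve is available. This is not the paper's implementation: LAMA derives its miss-ratio curves from locality analysis of the request trace and computes the allocation with an exact optimization, whereas the hypothetical allocate_slabs function here uses a simple greedy marginal-gain loop and toy input data purely for illustration.

```python
# Minimal sketch (not LAMA itself): given an estimated miss-ratio curve
# per slab class, greedily give the next slab to the class whose miss
# count drops the most. LAMA instead builds its curves from locality
# analysis and solves the allocation optimally; this loop only conveys
# the general idea of locality-aware slab reassignment.

def allocate_slabs(mrc, requests, total_slabs):
    """mrc[c][k]: estimated miss ratio of class c when holding k slabs.
    requests[c]: request count observed for class c.
    Returns a dict mapping class name -> number of slabs assigned."""
    alloc = {c: 0 for c in mrc}

    def misses(c, k):
        # Expected misses for class c if it holds k slabs.
        return mrc[c][k] * requests[c]

    for _ in range(total_slabs):
        # Pick the class whose next slab removes the most misses.
        best = max(
            mrc,
            key=lambda c: misses(c, alloc[c]) - misses(c, alloc[c] + 1),
        )
        alloc[best] += 1
    return alloc


if __name__ == "__main__":
    # Toy miss-ratio curves for two slab classes (index = slabs held).
    mrc = {
        "64B": [1.0, 0.6, 0.4, 0.3, 0.25, 0.22],
        "1KB": [1.0, 0.5, 0.2, 0.1, 0.08, 0.07],
    }
    requests = {"64B": 10_000, "1KB": 4_000}
    print(allocate_slabs(mrc, requests, total_slabs=5))
```

The same skeleton could weight each class by response time instead of miss count, mirroring the paper's option of minimizing either the miss ratio or the average response time.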
Publication Title
IEEE Transactions on Computers
Recommended Citation
Hu, X., Wang, X., Zhou, L., Luo, Y., Ding, C., Jiang, S., & Wang, Z. (2016). Optimizing locality-aware memory management of key-value caches. IEEE Transactions on Computers, 66(5), 862-875. http://doi.org/10.1109/TC.2016.2618920
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p/1137