Optimizing locality-aware memory management of key-value caches
Department of Computer Science, Center for Scalable Architectures and Systems
The in-memory cache system is a performance-critical layer in today's web server architectures, and Memcached is one of the most effective, representative, and prevalent such systems. An important problem is its memory allocation: the default design does not make the best use of memory and cannot adapt when demand changes, a problem known as slab calcification. This paper introduces locality-aware memory allocation (LAMA), which addresses the problem by first analyzing the locality of Memcached requests and then reassigning slabs to minimize the miss ratio or the average response time. Evaluating LAMA on a range of industry and academic workloads, the paper shows that it outperforms existing techniques in steady-state performance, speed of convergence, and the ability to adapt to changing request patterns and overcome slab calcification. The new solution is close to optimal, achieving over 98 percent of the theoretical potential. Furthermore, LAMA can also be adopted in resource partitioning to guarantee quality of service (QoS).
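To illustrate the kind of slab reassignment the abstract describes, the sketch below allocates slabs across size classes using per-class miss-ratio curves, giving each slab to the class where it avoids the most misses. This is a hedged illustration, not the paper's actual algorithm; the function name, the greedy marginal-gain policy, and the example curves and request counts are all assumptions introduced here.

```python
# Illustrative sketch only (not LAMA itself): greedily assign slabs to
# size classes so each slab goes where it yields the largest reduction
# in expected misses, given per-class miss-ratio curves.

def allocate_slabs(mrc, requests, total_slabs):
    """mrc[c][k]: miss ratio of class c when granted k slabs (non-increasing).
    requests[c]: request count for class c.
    Returns the number of slabs assigned to each class."""
    n = len(mrc)
    alloc = [0] * n
    for _ in range(total_slabs):
        best, best_gain = None, 0.0
        for c in range(n):
            if alloc[c] + 1 < len(mrc[c]):
                # expected misses avoided by giving class c one more slab
                gain = (mrc[c][alloc[c]] - mrc[c][alloc[c] + 1]) * requests[c]
                if gain > best_gain:
                    best, best_gain = c, gain
        if best is None:
            break  # no class benefits from another slab
        alloc[best] += 1
    return alloc

# Hypothetical example: two size classes, three slabs to distribute.
mrc = [[1.0, 0.5, 0.3, 0.25], [1.0, 0.4, 0.35, 0.34]]
requests = [1000, 500]
print(allocate_slabs(mrc, requests, 3))  # → [2, 1]
```

Greedy marginal gain is optimal only when the miss-ratio curves are convex; for general curves, an exact optimizer (e.g. dynamic programming over slab budgets) would be needed, which is closer in spirit to minimizing the overall miss ratio as the paper describes.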
IEEE Transactions on Computers
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p/1137