pRedis: Penalty and Locality Aware Memory Allocation in Redis

Document Type

Article

Publication Date

November 2019

Department

Department of Computer Science

Abstract

Due to the large data volumes and low latency requirements of modern web services, an in-memory key-value (KV) cache (e.g., Redis or Memcached) is often an inevitable choice. The in-memory cache holds hot data, reduces request latency, and alleviates the load on backend databases. Inheriting from traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., LRU or its approximations. However, the diversity of miss penalties distinguishes a KV cache from a hardware cache: the cost of regenerating a missed object varies widely across keys, and ignoring this penalty can substantially compromise space utilization and request service time. KV accesses also exhibit locality, which needs to be coordinated with miss penalty to guide cache management.
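To make the contrast concrete, here is a minimal sketch (in Python, with hypothetical keys and penalty values that are not from the paper) of why pure recency-based eviction can misfire in a KV cache: LRU evicts the least recently used object even when it is by far the most expensive one to regenerate.

```python
import time

# Hypothetical miss penalties: the backend cost, in ms, of regenerating
# each object on a cache miss. Keys and values are illustrative only.
MISS_PENALTY_MS = {"session:42": 2.0, "report:7": 450.0}

class LRUCache:
    """Plain LRU: eviction considers only recency, never miss penalty."""
    def __init__(self):
        self.last_access = {}  # key -> timestamp of most recent access

    def touch(self, key):
        self.last_access[key] = time.monotonic()

    def evict(self):
        # The least recently used key is evicted even if regenerating it
        # costs two orders of magnitude more than the alternatives.
        victim = min(self.last_access, key=self.last_access.get)
        del self.last_access[victim]
        return victim

cache = LRUCache()
cache.touch("report:7")    # expensive to regenerate, touched first
cache.touch("session:42")  # cheap to regenerate, touched second
victim = cache.evict()
print(f"evicted {victim} (miss penalty {MISS_PENALTY_MS[victim]} ms)")
```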

We propose pRedis, a penalty- and locality-aware memory allocation scheme that quantitatively synthesizes data locality and miss penalty to guide memory allocation and replacement in Redis. We also explore the diurnal behavior of a KV store and exploit long-term reuse, replacing the original passive eviction mechanism with an automatic dump/load mechanism that smooths the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average latency reduction of 14% to 52% over Hyperbolic Caching, a state-of-the-art penalty-aware cache management scheme, and offers more quantitatively predictable performance.
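As an illustration of the kind of scoring such a scheme implies, the following sketch combines a Hyperbolic-Caching-style locality estimate (hits per unit time in cache) with a per-key miss penalty weight. The formula, key names, and numbers are assumptions for illustration, not the exact policy implemented by pRedis.

```python
def eviction_priority(hits, age_s, penalty_ms):
    """Penalty-weighted priority in the spirit of Hyperbolic Caching:
    hits/age estimates locality (reuse rate), scaled by the per-key miss
    penalty. A sketch of the general idea, not pRedis's exact formula."""
    return (hits / max(age_s, 1e-9)) * penalty_ms

# Candidate victims: (key, hit count, seconds in cache, miss penalty in ms).
# All numbers are made up for illustration.
candidates = [
    ("session:42", 30, 60.0, 2.0),    # hot, but trivially cheap to rebuild
    ("report:7",    3, 60.0, 450.0),  # colder, but very costly to rebuild
]

# Evict the key whose retention pays off least: lowest reuse rate x penalty.
victim = min(candidates, key=lambda c: eviction_priority(c[1], c[2], c[3]))
print("evict:", victim[0])  # -> session:42, despite its higher hit rate
```

Under such a score, a frequently hit but cheap-to-regenerate key can rank below a colder key whose miss would trigger an expensive backend recomputation, which is precisely the behavior a recency-only policy cannot express.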

Publication Title

SoCC '19: Proceedings of the ACM Symposium on Cloud Computing
