BACM: Barrier-aware cache management for irregular memory-intensive GPGPU workloads

Document Type

Conference Proceeding

Publication Date

11-23-2017

Department

Department of Computer Science; Center for Scalable Architectures and Systems

Abstract

General-purpose workloads running on modern graphics processing units (GPUs) rely on hardware-based barriers to synchronize warps within a thread block (TB). However, if a GPGPU workload contains irregular memory accesses, warps may progress unevenly toward a barrier, i.e., some warps may be critical (the last to reach the barrier) while others may not be. Ideally, cache space should be reserved for the critical warps. Unfortunately, current cache management policies are unaware of the existence of barriers and critical warps, which significantly limits the performance of irregular memory-intensive GPGPU workloads. In this paper, we propose Barrier-Aware Cache Management (BACM), which is built on top of two underlying policies: a greedy policy and a friendly policy. The greedy policy does not allow non-critical warps to allocate cache lines in the L1 data cache; only critical warps can. The friendly policy allows non-critical warps to allocate cache lines, but only over invalid or lower-priority cache lines. BACM dynamically chooses between the greedy and friendly policies based on the L1 data cache hit rate of the non-critical warps. By doing so, BACM reserves more cache space to accelerate critical warps, thereby improving overall performance. Experimental results show that BACM achieves an average performance improvement of 24% and 20% over the GTO and BAWS policies, respectively. BACM's hardware cost is limited to 96 bytes per streaming multiprocessor.
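To make the two policies and the dynamic selection concrete, the following C++ sketch models the allocation decision the abstract describes. This is not the authors' implementation: the structure names, the sampling-interval reset, and the 0.5 hit-rate threshold are illustrative assumptions; only the policy semantics (greedy denies non-critical allocation, friendly permits it over invalid or lower-priority lines, and the controller switches based on the non-critical L1D hit rate) come from the abstract.

#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch of BACM-style L1D line allocation (not the paper's code).
enum class Policy { Greedy, Friendly };

struct CacheLine {
    bool          valid    = false;
    bool          critical = false;  // line was allocated by a critical warp
    std::uint64_t tag      = 0;
};

struct BacmController {
    Policy        policy           = Policy::Friendly;
    std::uint32_t noncrit_hits     = 0;
    std::uint32_t noncrit_accesses = 0;

    // Hypothetical threshold: the paper's actual switching rule may differ.
    static constexpr double kHitRateThreshold = 0.5;

    // Re-evaluate the policy from the L1D hit rate observed for
    // non-critical warps, then start a new sampling interval.
    void update_policy() {
        if (noncrit_accesses == 0) return;
        double hit_rate = double(noncrit_hits) / double(noncrit_accesses);
        policy = (hit_rate >= kHitRateThreshold) ? Policy::Friendly
                                                 : Policy::Greedy;
        noncrit_hits = noncrit_accesses = 0;
    }

    // On a miss, decide whether the warp may allocate (fill) a line in the
    // set. Returns the victim way, or -1 to deny allocation (bypass L1D).
    int choose_victim(const std::vector<CacheLine>& set, bool warp_is_critical) {
        if (warp_is_critical) {
            // Critical warps always allocate: prefer invalid lines, then
            // lines held by non-critical warps.
            for (std::size_t i = 0; i < set.size(); ++i)
                if (!set[i].valid) return int(i);
            for (std::size_t i = 0; i < set.size(); ++i)
                if (!set[i].critical) return int(i);
            return 0;  // all lines critical: fall back to plain replacement
        }
        if (policy == Policy::Greedy)
            return -1;  // greedy: non-critical warps may not allocate at all
        // Friendly: non-critical warps may only take invalid or
        // lower-priority (non-critical) lines.
        for (std::size_t i = 0; i < set.size(); ++i)
            if (!set[i].valid) return int(i);
        for (std::size_t i = 0; i < set.size(); ++i)
            if (!set[i].critical) return int(i);
        return -1;  // every line belongs to a critical warp: bypass
    }
};

Returning -1 here corresponds to servicing the miss without filling the L1 data cache, which is how both policies protect lines reserved for critical warps.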

Publication Title

2017 IEEE International Conference on Computer Design (ICCD)
