Fast thermal analysis on GPU for 3D ICs with integrated microchannel cooling
Effective thermal management for 3D integrated circuits (3D ICs) is becoming increasingly challenging due to ever-increasing power density and chip design complexity; traditional heat sinks are expected to quickly reach their limits in meeting the cooling needs of 3D ICs. The integrated liquid-cooled microchannel heat sink has emerged as one of the most effective alternatives. In this paper, we present fast multigrid and block-tridiagonally preconditioned graphics processing unit (GPU) based thermal simulation algorithms for 3D ICs. Unlike CPU-based solver development, in which existing sophisticated numerical simulation tools (matrix solvers) can be readily adopted, GPU-based thermal simulation demands more effort in the algorithm and data structure design phase, and requires careful consideration of the GPU's thread/memory organization, data access/communication patterns, arithmetic intensity, and hardware occupancy. As shown by various experimental results, our GPU-based 3D thermal simulation solvers achieve more than 360× speedups over the best available direct solvers and more than 35× speedups over CPU-based iterative solvers, without loss of accuracy. © 1993-2012 IEEE.
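The block-tridiagonal structure the abstract refers to typically arises from finite-difference discretization of the heat equation on a regular 3D grid, where each vertical line of nodes couples only to its immediate neighbors. A common building block for such preconditioners is a line solve via the Thomas algorithm; the sketch below shows a scalar version in NumPy under that assumption (the function name `thomas_solve` and the test system are illustrative, not from the paper):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d with the Thomas algorithm.

    a: sub-diagonal  (a[0] is unused)
    b: main diagonal
    c: super-diagonal (c[-1] is unused)
    d: right-hand side
    """
    n = len(d)
    cp = np.zeros(n)  # modified super-diagonal
    dp = np.zeros(n)  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination sweep.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution sweep.
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

On a GPU, one such line solve would be launched per grid column, so the many independent tridiagonal systems map naturally onto parallel thread blocks; the paper's actual solver design details are not reproduced here.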
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p/11154