Membership Inference Attacks and Defenses in Neural Network Pruning
Document Type
Conference Proceeding
Publication Date
2022
Department
Department of Electrical and Computer Engineering
Abstract
Neural network pruning has become an essential technique for reducing the computation and memory requirements of deploying deep neural networks on resource-constrained devices. Most existing research focuses primarily on balancing the sparsity and accuracy of a pruned neural network by strategically removing insignificant parameters and retraining the pruned model. Such reuse of training samples poses serious privacy risks due to increased memorization, which, however, has not yet been investigated. In this paper, we conduct the first analysis of privacy risks in neural network pruning. Specifically, we investigate the impact of neural network pruning on training data privacy, i.e., membership inference attacks. We first explore the impact of neural network pruning on prediction divergence: the pruning process disproportionately affects the pruned model's behavior for members and non-members, and the extent of this divergence even varies among classes in a fine-grained manner. Motivated by this divergence, we propose a self-attention membership inference attack against pruned neural networks. Extensive experiments are conducted to rigorously evaluate the privacy impacts of different pruning approaches, sparsity levels, and levels of adversary knowledge. The proposed attack achieves higher attack performance on pruned models than eight existing membership inference attacks. In addition, we propose a new defense mechanism that protects the pruning process by mitigating the prediction divergence based on KL divergence, and we experimentally demonstrate that it effectively mitigates the privacy risks while maintaining the sparsity and accuracy of the pruned models.
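The defense described above penalizes the divergence between prediction distributions using KL divergence. As a rough illustration only (not the authors' implementation), the KL divergence between two softmax prediction vectors can be computed as follows; the function name and the example vectors are hypothetical:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Compute KL(p || q) between two discrete probability distributions.

    eps avoids log(0) for zero-probability entries.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical prediction vectors for one sample: the original model vs.
# the pruned model. A defense of this kind would add such a term to the
# retraining loss so the pruned model's predictions stay close to a target
# distribution, reducing the member/non-member prediction divergence.
original = [0.7, 0.2, 0.1]
pruned = [0.5, 0.3, 0.2]
divergence = kl_divergence(original, pruned)
```

Identical distributions yield a divergence of zero, and the penalty grows as the pruned model's predictions drift from the target distribution.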
Publication Title
Proceedings of the 31st USENIX Security Symposium, Security 2022
ISBN
9781939133311
Recommended Citation
Yuan, X., & Zhang, L. (2022). Membership Inference Attacks and Defenses in Neural Network Pruning. Proceedings of the 31st USENIX Security Symposium, Security 2022, 4561-4578.
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p/16621