"A New Data-Free Backdoor Removal Method via Adversarial Self-Knowledge" by Xuexiang Li, Yafei Gao et al.
 

A New Data-Free Backdoor Removal Method via Adversarial Self-Knowledge Distillation

Document Type

Article

Publication Date

12-19-2024

Department

Department of Computer Science

Abstract

On IoT edge devices, pretrained models are often sourced directly from cloud computing platforms because the original training data is unavailable. This lack of access to training data makes such models susceptible to backdoor attacks. To address this challenge, we introduce a novel data-free backdoor removal method that operates effectively even when only the poisoned model is accessible. Our approach employs two end-to-end generators with identical architectures to synthesize clean and poisoned samples, which are used to transfer knowledge from the teacher model (the fixed poisoned model) to the student model, initialized from the poisoned model. During distillation, a channel shuffling technique disrupts and eliminates the backdoor knowledge embedded in the teacher model. The procedure alternates between updating the generators and distilling the student model, leading to efficient backdoor removal. We conducted extensive experiments on five sophisticated backdoor attacks across two benchmark datasets. The results demonstrate that our method significantly strengthens the model's resistance to backdoor attacks while maintaining high recognition accuracy on clean samples, outperforming existing methods.
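
As a rough illustration of the distillation step the abstract describes, the following PyTorch sketch shows how channel shuffling might be applied to the teacher's intermediate features before its softened outputs supervise the student. This is a minimal sketch under stated assumptions, not the paper's implementation: the `shuffle_channels` and `distillation_step` helpers, the `teacher.features`/`teacher.classifier` split, the use of a single generator (the paper uses two, for clean and poisoned samples), and the hyperparameters (`noise_dim`, temperature `T`) are all illustrative.

```python
import torch
import torch.nn.functional as F


def shuffle_channels(features: torch.Tensor) -> torch.Tensor:
    """Randomly permute the channel dimension of a feature map (N, C, H, W).

    Permuting the teacher's intermediate channels is one way to disrupt
    channel-specific backdoor knowledge before it is distilled into the
    student, in the spirit of the method described above.
    """
    perm = torch.randperm(features.size(1), device=features.device)
    return features[:, perm]


def distillation_step(teacher, student, generator, optimizer,
                      noise_dim: int = 100, batch: int = 64, T: float = 4.0):
    """One hypothetical data-free distillation step.

    A generator synthesizes samples (no real training data needed), and the
    student is trained to match the teacher's softened outputs computed on
    channel-shuffled features. The teacher.features/teacher.classifier split
    is an assumed two-stage architecture; real models will differ.
    """
    z = torch.randn(batch, noise_dim)
    x = generator(z)                      # synthetic "data-free" samples
    with torch.no_grad():
        f_t = teacher.features(x)         # teacher intermediate features
        f_t = shuffle_channels(f_t)       # break backdoor channel alignment
        logits_t = teacher.classifier(f_t)
    logits_s = student(x)
    # Standard temperature-scaled distillation loss.
    loss = F.kl_div(F.log_softmax(logits_s / T, dim=1),
                    F.softmax(logits_t / T, dim=1),
                    reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the full method, such a step would alternate with generator updates (the adversarial part of the training loop); the sketch shows only the student's side of that alternation.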

Publication Title

IEEE Internet of Things Journal
