Privacy-Preserving and Resource-Efficient Learning Frameworks for Collaborative Deep Neural Networks
Date of Award
2026
Document Type
Campus Access Dissertation
Degree Name
Doctor of Philosophy in Computer Science (PhD)
Administrative Home Department
Department of Computer Science
Advisor 1
Xiaoyong Yuan
Advisor 2
Zhenlin Wang
Committee Member 1
Xinyu Lei
Committee Member 2
Kaichen Yang
Abstract
Deep Neural Networks (DNNs) are rapidly advancing in capability and scale. Growing reliance on them to analyze sensitive data heightens privacy risks during training and inference, especially in untrusted federated learning (FL) and collaborative inference settings, where adversaries may exploit gradients or intermediate features. Meanwhile, modern networks often exceed a single device’s capacity, and fine-tuning large foundation models remains costly for resource-limited users, so balancing training efficiency with fine-tuning performance is essential for scalable distributed learning. My research therefore focuses on three aspects: (1) reducing privacy risks in collaborative inference, (2) protecting users’ data in federated learning, and (3) efficiently fine-tuning large foundation models with minimal performance loss.

Collaborative inference enables resource-limited edge devices to use large DNNs by computing early layers locally and sending intermediate features to the cloud. However, recent work shows that these features allow model inversion attacks (MIAs) to reconstruct sensitive inputs, and existing perturbation-based or cryptographic defenses struggle to balance MIA robustness, accuracy, and theoretical guarantees. My first work proposes PATROL, a privacy-oriented pruning framework that strengthens task-relevant feature extraction on the client while suppressing privacy-sensitive signals. Using Lipschitz regularization and adversarial reconstruction training, PATROL achieves superior MIA resistance on a real vehicle re-identification task. My second work proposes CUPR, a theoretical framework that links reconstruction performance to noise-perturbed utility, clarifying the privacy–utility trade-off observed in prior defenses. The resulting theoretical lower bound is validated on MNIST, CIFAR-10, and FaceScrub under four reconstruction attacks, showing favorable trade-offs for both client-side and server-side CUPR: the former excels on low-dimensional data, and both perform similarly on high-dimensional inputs.

During training, FL remains vulnerable because model updates leak private information. Although Trusted Execution Environments (TEEs) offer hardware-isolated protection, their limited memory prevents deployment of large networks. My third work introduces Marshaled Learning, which partitions models into subnets across TEE-enabled clients and incorporates a dynamic knowledge propagation mechanism to improve performance under heterogeneous data. Marshaled Learning improves accuracy by 2–5%, converges faster than prior FL methods, and incurs only 1–3× overhead on Azure confidential VMs.

For large foundation models, memory constraints also limit fine-tuning efficiency. My fourth work, D2FT, addresses this by dynamically selecting attention modules during the forward and backward passes, reducing unnecessary computation. Using multi-knapsack optimization, D2FT reduces training computation by 40% and communication by 50% with minimal accuracy loss on three datasets. When integrated with LoRA, D2FT achieves a 40% computation or 50% communication reduction with only a 4–6% accuracy drop on Stanford Cars.
Recommended Citation
Ding, Shiwei, "Privacy-Preserving and Resource-Efficient Learning Frameworks for Collaborative Deep Neural Networks", Campus Access Dissertation, Michigan Technological University, 2026.
https://digitalcommons.mtu.edu/etdr/2028