Optimal online control strategy for differentially private federated learning
Author Identifier (ORCID)
Abstract
While differential privacy (DP) contributes to preserving data privacy during federated learning (FL), DP-FL suffers from either premature convergence or underutilized privacy budgets and, consequently, degraded accuracy. Some recent studies heuristically adjusted the variance of the DP noise but offered no optimality guarantee, little insight, and limited scalability. This paper presents a new control framework for (ε, δ)-DP FL to address these prevalent issues of DP-FL, i.e., premature convergence or underutilized privacy budgets. The key idea is to interpret the DP perturbation of DP-FL as a control process, in which the DP noise variance and the number of communication rounds are interdependent and jointly and adaptively determined. An optimal control framework is proposed to adjust the communication rounds and DP noise variance, adapting to the training accuracy of DP-FL. The optimality gap of (ε, δ)-DP FL is derived under the optimal control framework, and the importance of jointly orchestrating the DP noise and communication rounds is delineated. Experiments on MLP, CNN, and ResNet-9 models show that, given a privacy level, our control framework allows DP-FL to converge much faster and with better accuracy than existing techniques, including those with persistent or heuristically reconfigurable DP noise variances.
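To make the abstract's idea concrete, the following is a minimal sketch of one DP-FL aggregation round in which the Gaussian noise standard deviation is adapted to the current training accuracy. All names and the specific schedule (`adaptive_noise_std`, the linear decay, the clipping rule) are illustrative assumptions, not the paper's actual control law, which jointly optimizes the noise variance and the number of communication rounds.

```python
import numpy as np

def adaptive_noise_std(base_std, train_acc, min_std=0.1):
    # Hypothetical schedule (not the paper's optimal controller):
    # shrink the Gaussian noise std as training accuracy improves,
    # but never below a floor that protects the remaining budget.
    return max(min_std, base_std * (1.0 - train_acc))

def dp_fl_round(client_updates, clip_norm, noise_std, rng):
    # Clip each client update to bound per-client sensitivity,
    # then average and perturb with the Gaussian mechanism.
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / norm))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_std * clip_norm / len(client_updates),
                       size=avg.shape)
    return avg + noise

# Example: one round at 80% training accuracy.
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(5)]
std = adaptive_noise_std(base_std=1.0, train_acc=0.8)
model_delta = dp_fl_round(updates, clip_norm=1.0, noise_std=std, rng=rng)
```

The point of the sketch is the coupling the abstract describes: a fixed (persistent) `noise_std` either wastes budget early or injects too much noise late, whereas a schedule tied to training progress lets the server also decide how many rounds remain affordable under the (ε, δ) budget.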
Document Type
Journal Article
Date of Publication
1-1-2025
Publication Title
IEEE Transactions on Dependable and Secure Computing
Publisher
IEEE
School
School of Engineering
Copyright
subscription content
Comments
Yuan, X., Savkin, A. V., Ni, W., Xue, M., & Liu, R. P. (2025). Optimal online control strategy for differentially private federated learning. IEEE Transactions on Dependable and Secure Computing. Advance online publication. https://doi.org/10.1109/TDSC.2025.3643906