Scheduling and securing asynchronous federated learning through cooperative jamming
Abstract
Asynchronous federated learning (AFL) mitigates the straggler effect of traditional synchronous federated learning (SFL). Yet AFL may face limited communication, computation, and energy resources, as well as security threats, especially in wireless settings. This paper presents a novel two-stage deep reinforcement learning (DRL) algorithm built on a Transformer Encoder-based Proximal Policy Optimization (TEPPO) framework, named TS-TEPPO, which jointly optimizes the learning latency, energy consumption, and model accuracy of AFL systems while securing model transmissions through cooperative jamming. In the first stage, TEPPO learns the CPU configuration for local training and the transmit power for model uploading. In the second stage, a linear programming (LP)-based device scheduling and cooperative jamming strategy optimizes the remaining decisions and evaluates the immediate reward used to train TEPPO, thereby improving convergence and reliability. Experiments with a Convolutional Neural Network (CNN) model on the MNIST dataset demonstrate that TS-TEPPO reduces a defined cost of training latency and energy consumption by 81.5% compared to its benchmarks when the required test accuracy of AFL is 0.9.
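The two-stage structure described in the abstract can be sketched in miniature: a stage-one policy fixes per-device CPU frequency and transmit power, and a stage-two LP relaxation schedules devices and returns the negated cost as the reward that would train the policy. This is a minimal illustration under assumed toy models (latency as data size over frequency, energy as power times latency); the actual TEPPO actor, the jamming decisions, and the paper's system model are not reproduced here, and all numbers and bounds are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative per-device state: [CPU frequency (GHz), tx power (W),
# local data size (Mbit)]. All values are made up for this sketch.
state = np.array([
    [1.2, 0.6, 8.0],
    [2.0, 0.3, 5.0],
    [0.8, 0.9, 6.0],
    [1.5, 0.4, 7.0],
])

def stage1_policy(state):
    """Stand-in for the stage-one actor: picks each device's CPU
    frequency and transmit power. The paper uses a Transformer
    Encoder-based PPO policy; here we simply clip the state's own
    values into an assumed feasible range."""
    cpu_freq = np.clip(state[:, 0], 0.5, 2.0)   # GHz
    tx_power = np.clip(state[:, 1], 0.1, 1.0)   # W
    return cpu_freq, tx_power

def stage2_lp(latency, energy, k, w=0.5):
    """LP relaxation of device scheduling: choose a fractional
    schedule x in [0,1]^N with sum(x) = k that minimizes a weighted
    latency/energy cost. The negated optimum serves as the immediate
    reward fed back to the stage-one learner."""
    cost = w * latency + (1.0 - w) * energy
    n = len(cost)
    res = linprog(cost, A_eq=np.ones((1, n)), b_eq=[k],
                  bounds=[(0.0, 1.0)] * n)
    return res.x, -res.fun

cpu_freq, tx_power = stage1_policy(state)
latency = state[:, 2] / cpu_freq   # toy model: data size / CPU frequency
energy = tx_power * latency        # toy model: tx power * latency
schedule, reward = stage2_lp(latency, energy, k=2)
```

In this sketch the LP objective is what closes the loop: because the reward is the negated scheduling cost, the stage-one policy is pushed toward CPU and power choices that make the stage-two LP cheap, which mirrors the coupling between the two stages described in the abstract.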
Document Type
Journal Article
Date of Publication
1-1-2025
Publication Title
IEEE Transactions on Cognitive Communications and Networking
Publisher
IEEE
School
School of Engineering
Funders
National Key Research and Development Program of China (2022YFB2902002, 2022YFB2902303) / Shanghai Rising-Star Program
Copyright
subscription content
Comments
Ni, F., Zhou, Z., Ni, W., Chen, X., Pan, G., Sun, Y., Zhang, S., & Jamalipour, A. (2025). Scheduling and securing asynchronous federated learning through cooperative jamming. IEEE Transactions on Cognitive Communications and Networking, 12, 3209–3222. https://doi.org/10.1109/TCCN.2025.3623377