[1] ZHANG Shufen, LI Tao, ZHANG Zhenbo, et al. A Robust Aggregation Algorithm Defending Against Byzantine Attacks in Federated Learning[J]. Journal of Zhengzhou University (Engineering Science), 2027, 48(XX): 1-9. [doi:10.13705/j.issn.1671-6833.2026.04.012]
Journal of Zhengzhou University (Engineering Science) [ISSN 1671-6833 / CN 41-1339/T]
Volume: 48
Issue: XX (2027)
Pages: 1-9
Publication date: 2027-12-10
- Title: A Robust Aggregation Algorithm Defending Against Byzantine Attacks in Federated Learning
- Author(s): ZHANG Shufen 1,2,3, LI Tao 1,2,3, ZHANG Zhenbo 1,2,3, ZHONG Qi 1,2,3, JING Zhongrui 1,2,3
- Affiliation(s): 1. College of Science, North China University of Science and Technology, Tangshan 063210, China; 2. Hebei Province Key Laboratory of Data Science and Application (North China University of Science and Technology), Tangshan 063210, China; 3. Tangshan Key Laboratory of Data Science (North China University of Science and Technology), Tangshan 063210, China
- Keywords: federated learning; Byzantine attacks; robustness; reputation; weighted aggregation
- CLC: TP309.2; TN92
- DOI: 10.13705/j.issn.1671-6833.2026.04.012
- Abstract: To address the issue that existing defense schemes in federated learning tend to over-prune benign models during filtering, a robust aggregation algorithm defending against Byzantine attacks in federated learning (FLDBA) was proposed. HDBSCAN density-based clustering was employed to group models and identify the benign cluster, from which the model most representative in direction was selected as the trusted reference model. Using the trusted model as a benchmark, cosine similarity was utilized to screen potentially misclassified benign models within clusters, thereby correcting misjudgments. Additionally, a reputation mechanism was established to dynamically evaluate models' historical behavior, mitigating the impact of missed detections. For models with high reputation, adaptive magnitude scaling was applied, and differential aggregation weights were assigned based on update quality to further enhance aggregation performance. Experimental results demonstrated that, when defending against sign-flipping attacks, FLDBA improved accuracy by 0.18 to 5.13 percentage points over FLRAM, FLAME, RFLPA, FLTrust, and Krum, while reducing the attack success rate by 40.52 to 61.39 percentage points, exhibiting superior robustness.
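The later stages of the pipeline described in the abstract (cosine-similarity screening against a trusted reference, reputation tracking, magnitude scaling, and reputation-weighted aggregation) can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the function names, the exponential-moving-average reputation rule, the similarity threshold, and the norm-clipping rule are all assumptions, and the HDBSCAN clustering step that produces the trusted reference is omitted (the reference is taken as given).

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two flattened model updates."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def aggregate(updates, trusted, reputation, sim_threshold=0.0, decay=0.9):
    """Hypothetical sketch of the screening + reputation-weighted aggregation:
    screen each client update by cosine similarity to the trusted reference,
    update per-client reputations, clip accepted updates to the trusted norm,
    and combine accepted updates with reputation-proportional weights."""
    accepted = []
    for i, u in enumerate(updates):
        s = cosine_sim(u, trusted)
        # Assumed reputation rule: exponential moving average of clipped similarity.
        reputation[i] = decay * reputation[i] + (1 - decay) * max(s, 0.0)
        if s > sim_threshold:  # screen out updates pointing away from the trusted direction
            # Assumed magnitude scaling: clip the update's norm to the trusted norm.
            scale = min(1.0, np.linalg.norm(trusted) / (np.linalg.norm(u) + 1e-12))
            accepted.append((i, u * scale))
    if not accepted:
        return trusted, reputation
    w = np.array([reputation[i] for i, _ in accepted])
    w = w / w.sum()  # differential aggregation weights from reputation
    agg = sum(wi * ui for wi, (_, ui) in zip(w, accepted))
    return agg, reputation

# Hypothetical demo: one sign-flipping client among three.
updates = [np.array([1.0, 0.1]), np.array([-1.0, 0.0]), np.array([0.9, -0.1])]
agg, rep = aggregate(updates, trusted=np.array([1.0, 0.0]), reputation=[0.5, 0.5, 0.5])
```

In this sketch the sign-flipped update is rejected by the similarity screen and its client's reputation decays, so the aggregate stays aligned with the trusted direction.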