[1]张淑芬,李涛,张镇博,等.一种抗拜占庭攻击的联邦学习鲁棒聚合算法[J].郑州大学学报(工学版),2027,48(XX):1-9.[doi:10.13705/j.issn.1671-6833.2026.04.012]
 ZHANG Shufen, LI Tao, ZHANG Zhenbo, et al. A Robust Aggregation Algorithm Defending Against Byzantine Attacks in Federated Learning[J]. Journal of Zhengzhou University (Engineering Science), 2027, 48(XX): 1-9. [doi:10.13705/j.issn.1671-6833.2026.04.012]

一种抗拜占庭攻击的联邦学习鲁棒聚合算法

《郑州大学学报(工学版)》[ISSN:1671-6833/CN:41-1339/T]

卷/Volume:
48
期数/Issue:
2027年XX
页码/Pages:
1-9
栏目/Section:
出版日期/Publication Date:
2027-12-10

文章信息/Info

Title:
A Robust Aggregation Algorithm Defending Against Byzantine Attacks in Federated Learning
作者:
张淑芬1,2,3 李涛1,2,3 张镇博1,2,3 钟琪1,2,3 景忠瑞1,2,3
1. 华北理工大学 理学院,河北 唐山 063210;2. 河北省数据科学与应用重点实验室(华北理工大学),河北 唐山 063210;3. 唐山市数据科学重点实验室(华北理工大学),河北 唐山 063210
Author(s):
ZHANG Shufen1,2,3, LI Tao1,2,3, ZHANG Zhenbo1,2,3, ZHONG Qi1,2,3, JING Zhongrui1,2,3
1. College of Science, North China University of Science and Technology, Tangshan 063210, China; 2. Hebei Province Key Laboratory of Data Science and Application (North China University of Science and Technology), Tangshan 063210, China; 3. Tangshan Key Laboratory of Data Science (North China University of Science and Technology), Tangshan 063210, China
关键词:
联邦学习;拜占庭攻击;鲁棒性;信誉;加权聚合
Keywords:
federated learning; Byzantine attacks; robustness; reputation; weighted aggregation
分类号/CLC Number:
TP309.2; TN92
DOI:
10.13705/j.issn.1671-6833.2026.04.012
文献标志码/Document Code:
A
摘要:
针对联邦学习中现有的防御方案在模型过滤时会过度剔除良性模型的问题,提出了一种抗拜占庭攻击的联邦学习鲁棒聚合算法 FLDBA。通过 HDBSCAN 密度聚类算法对模型进行聚类,识别出良性模型集合,并求取良性模型集合中方向最具代表性的模型作为可信模型。以可信模型为基准,利用余弦相似度对聚类结果中可能被误判为异常的良性模型进行筛选,实现对误判的修正。同时设立信誉机制,对模型历史行为进行动态评估,以降低漏判对系统的影响。对于信誉较高的模型,对模型幅值进行自适应缩放,并根据其更新质量赋予不同的聚合权重,提升模型的聚合效果。实验结果表明,在抵御符号翻转攻击时,FLDBA 的准确率比 FLRAM、FLAME、RFLPA、FLTrust 和 Krum 提升了 0.18 个百分点~5.13 个百分点,攻击成功率降低了 40.52 个百分点~61.39 个百分点,具有更好的鲁棒性。
Abstract:
To address the issue that existing defense schemes in federated learning tend to over-prune benign models during filtering, a robust aggregation algorithm defending against Byzantine attacks in federated learning (FLDBA) was proposed. HDBSCAN density-based clustering was first employed to group the submitted models and identify the benign set, and the model whose direction was most representative of that set was taken as the trusted model. Using the trusted model as a benchmark, cosine similarity was then used to re-examine benign models that the clustering step may have misclassified as anomalous, thereby correcting such misjudgments. In addition, a reputation mechanism was established to dynamically evaluate each model's historical behavior and mitigate the impact of missed detections. For models with high reputation, adaptive magnitude scaling was applied and aggregation weights were assigned according to update quality, further improving the aggregation result. Experimental results demonstrated that, when defending against sign-flipping attacks, FLDBA improved accuracy by 0.18 to 5.13 percentage points over FLRAM, FLAME, RFLPA, FLTrust, and Krum, while reducing the attack success rate by 40.52 to 61.39 percentage points, exhibiting superior robustness.
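
To make the pipeline summarized in the abstract concrete, the Python sketch below mirrors its five steps: HDBSCAN clustering, selection of a direction-representative trusted model, cosine-similarity rescue of updates misjudged as anomalous, reputation tracking, and magnitude scaling with reputation-weighted averaging. The function name fldba_like_aggregate, the min_cluster_size choice, the rescue threshold, the exponential reputation update, and the norm-scaling rule are illustrative assumptions rather than the paper's exact formulation, which is given only in the full text.

```python
# Minimal sketch of an FLDBA-style robust aggregation round (assumptions noted above).
import numpy as np
import hdbscan  # pip install hdbscan


def cosine(a, b):
    """Cosine similarity between two flattened model updates."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def fldba_like_aggregate(updates, reputation, sim_threshold=0.0, decay=0.9):
    """updates: (n_clients, dim) flattened updates; reputation: (n_clients,) scores."""
    n = len(updates)

    # 1) Density-based clustering; HDBSCAN marks presumed outliers with label -1.
    labels = hdbscan.HDBSCAN(min_cluster_size=max(2, n // 2)).fit_predict(updates)
    if (labels >= 0).sum() == 0:          # degenerate case: no cluster found
        labels = np.zeros(n, dtype=int)   # fall back to one cluster of all clients

    # 2) Take the largest cluster as the benign set and pick the update whose
    #    direction is most representative (highest mean cosine similarity).
    benign = np.where(labels == np.bincount(labels[labels >= 0]).argmax())[0]
    mean_sim = np.array([np.mean([cosine(updates[i], updates[j]) for j in benign])
                         for i in benign])
    trusted = benign[mean_sim.argmax()]

    # 3) Rescue outliers whose direction still agrees with the trusted model.
    accepted = set(benign.tolist())
    for i in np.where(labels == -1)[0]:
        if cosine(updates[i], updates[trusted]) > sim_threshold:
            accepted.add(int(i))

    # 4) Reputation: exponentially reward accepted clients, penalize the rest.
    for i in range(n):
        reputation[i] = decay * reputation[i] + (1.0 - decay) * (i in accepted)

    # 5) Scale accepted updates to the trusted model's norm and average them
    #    with reputation-proportional weights.
    idx = np.array(sorted(accepted))
    ref_norm = np.linalg.norm(updates[trusted])
    scaled = np.stack([u * ref_norm / (np.linalg.norm(u) + 1e-12) for u in updates[idx]])
    weights = reputation[idx] / reputation[idx].sum()
    return (weights[:, None] * scaled).sum(axis=0), reputation
```

In a training loop the server would carry reputation across rounds (e.g. initialized to np.ones(n_clients) so the first weighting is uniform) and apply the returned aggregate to the global model; the paper additionally weights clients by update quality, which this sketch folds into the single reputation term.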

参考文献/References:

[1] McMahan H B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data[PP/OL]. V4. arXiv(2023-01-26)[2025-11-01]. https://arxiv.org/abs/1602.05629.
[2] Shi Lei, Li Tian, Gao Yufei, et al. A review of machine learning-based methods for database tuning[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(1): 1-11. [石磊, 李天, 高宇飞, 等. 基于机器学习的数据库系统参数优化方法综述[J]. 郑州大学学报(工学版), 2024, 45(1): 1-11.]
[3] Zhang Chen, Xie Yu, Bai Hang, et al. A survey on federated learning[J]. Knowledge-Based Systems, 2021, 216: 106775.
[4] Chen Xuebin, Ren Zhiqiang, Zhang Hongyang. Review on security threats and defense measures in federated learning[J]. Journal of Computer Applications, 2024, 44(6): 1663-1672. [陈学斌, 任志强, 张宏扬. 联邦学习中的安全威胁与防御措施综述[J]. 计算机应用, 2024, 44(6): 1663-1672.]
[5] Chen Xuebin, Qu Changsheng. Overview of backdoor attacks and defense in federated learning[J]. Journal of Computer Applications, 2024, 44(11): 3459-3469. [陈学斌, 屈昌盛. 面向联邦学习的后门攻击与防御综述[J]. 计算机应用, 2024, 44(11): 3459-3469.]
[6] Sikandar H S, Waheed H, Tahir S, et al. A detailed survey on federated learning attacks and defenses[J]. Electronics, 2023, 12(2): 260.
[7] Wang Yongkang, Xia Xianqing, Zhan Yufeng. ELITE: defending federated learning against Byzantine attacks based on information entropy[C]//Proceedings of the 2021 China Automation Congress (CAC). Piscataway: IEEE, 2021: 6049-6054.
[8] Liu Pengrui, Xu Xiangrui, Wang Wei. Threats, attacks and defenses to federated learning: issues, taxonomy and perspectives[J]. Cybersecurity, 2022, 5: 4.
[9] Blanchard P, El Mhamdi E M, Guerraoui R, et al. Machine learning with adversaries: Byzantine tolerant gradient descent[C]//Neural Information Processing Systems. Long Beach: NeurIPS, 2017: 118-128.
[10] Cao Xiaoyu, Fang Minghong, Liu Jia, et al. FLTrust: Byzantine-robust federated learning via trust bootstrapping[C]//Proceedings 2021 Network and Distributed System Security Symposium. Reston: Internet Society, 2021: 1-18.
[11] Mai Peihua, Pang Yan, Yan Ran. RFLPA: A robust federated learning framework against poisoning attacks with secure aggregation[C]//Neural Information Processing Systems. Vancouver: NeurIPS, 2024: 104329-104356.
[12] Nguyen T D, Rieger P, Chen Huili, et al. FLAME: taming backdoors in federated learning[PP/OL]. V5. arXiv(2023-08-05)[2025-11-01]. https://arxiv.org/abs/2101.02281v4.
[13] Chen Haitian, Chen Xuebin, Peng Lulu, et al. FLRAM: Robust aggregation technique for defense against Byzantine poisoning attacks in federated learning[J]. Electronics, 2023, 12(21): 4463.
[14] Lei Cheng, Zhang Lin. Federated learning model based on update quality detection and malicious client identification[J]. Computer Science, 2024, 51(11): 368-378. [雷诚, 张琳. 基于更新质量检测和恶意客户端识别的联邦学习模型[J]. 计算机科学, 2024, 51(11): 368-378.]
[15] Li Shenghui, Ngai E C H, Voigt T. An experimental study of Byzantine-robust aggregation schemes in federated learning[J]. IEEE Transactions on Big Data, 2024, 10(6): 975-988.
[16] Shi Junyu, Wan Wei, Hu Shengshan, et al. Challenges and approaches for mitigating Byzantine attacks in federated learning[PP/OL]. V2. arXiv(2022-10-07)[2025-11-01]. https://arxiv.org/abs/2112.14468.
[17] Xia Peipei, Zhang Li, Li Fanzhang. Learning similarity with cosine similarity ensemble[J]. Information Sciences, 2015, 307: 39-52.
[18] Arthur D, Vassilvitskii S. k-means++: the advantages of careful seeding[C]//ACM-SIAM Symposium on Discrete Algorithms. New York: ACM, 2007: 1027-1035.
[19] Xiao Han, Rasul K, Vollgraf R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms[PP/OL]. V2. arXiv(2017-09-15)[2025-11-01]. https://arxiv.org/abs/1708.07747.
[20] Krizhevsky A. Learning multiple layers of features from tiny images[R/OL]. (2009-04-08)[2025-11-01]. https://www.cs.utoronto.ca/~kriz/learning-features-2009-TR.pdf.
[21] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[22] Xie Cong, Koyejo S, Gupta I. Fall of empires: breaking Byzantine-tolerant SGD by inner product manipulation[PP/OL]. V1. arXiv(2019-03-10)[2025-11-01]. https://arxiv.org/abs/1903.03936.
[23] Li Liping, Xu Wei, Chen Tianyi, et al. RSA: Byzantine-robust stochastic aggregation methods for distributed learning from heterogeneous datasets[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1): 1544-1551.
[24] Rajput S, Wang Hongyi, Charles Z, et al. DETOX: a redundancy-based framework for faster and more robust gradient aggregation[PP/OL]. V2. arXiv(2020-03-08)[2025-11-01]. https://arxiv.org/abs/1907.12205.
[25] Alkhunaizi N, Kamzolov D, Takáč M, et al. Suppressing poisoning attacks on federated learning for medical imaging[C]//Medical Image Computing and Computer Assisted Intervention. Cham: Springer, 2022: 673-683.
[26] Shejwalkar V, Houmansadr A. Manipulating the Byzantine: optimizing model poisoning attacks and defenses for federated learning[C]//Proceedings 2021 Network and Distributed System Security Symposium. Virtual: Internet Society, 2021: 1-18.
[27] Li Shenghui, Ngai E, Voigt T. Byzantine-robust aggregation in federated learning empowered industrial IoT[J]. IEEE Transactions on Industrial Informatics, 2023, 19(2): 1165-1175.

备注/Memo

收稿日期/Received: 2026-01-20; 修订日期/Revised: 2026-03-01
基金项目/Funding: Joint Fund of the National Natural Science Foundation of China (U20A20179)
作者简介/First author: ZHANG Shufen (1972— ), female, from Tangshan, Hebei; professor at North China University of Science and Technology; her research focuses on cloud computing, data security, and privacy protection. E-mail: zhsf@ncst.edu.cn.
更新日期/Last Update: 2026-03-13