参考文献/References:
[1] Mcmahan H B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data[PP/OL]. V4. arXiv(2023-01-26)[2025-11-01]. https://arxiv.org/abs/1602.05629.
[2] Shi Lei, Li Tian, Gao Yufei, et al. A review of machine learning-based methods for database tuning[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(1): 1-11. [石磊, 李天, 高宇飞, 等. 基于机器学习的数据库系统参数优化方法综述[J]. 郑州大学学报(工学版), 2024, 45(1): 1-11.]
[3] Zhang Chen, Xie Yu, Bai Hang, et al. A survey on federated learning[J]. Knowledge-Based Systems, 2021, 216: 106775.
[4] Chen Xuebin, Ren Zhiqiang, Zhang Hongyang. Review on security threats and defense measures in federated learning[J]. Journal of Computer Applications, 2024, 44(6): 1663-1672. [陈学斌, 任志强, 张宏扬. 联邦学习中的安全威胁与防御措施综述[J]. 计算机应用, 2024, 44(6): 1663-1672.]
[5] Chen Xuebin, Qu Changsheng. Overview of backdoor attacks and defense in federated learning[J]. Journal of Computer Applications, 2024, 44(11): 3459-3469. [陈学斌, 屈昌盛. 面向联邦学习的后门攻击与防御综述[J]. 计算机应用, 2024, 44(11): 3459-3469.]
[6] Sikandar H S, Waheed H, Tahir S, et al. A detailed survey on federated learning attacks and defenses[J]. Electronics, 2023, 12(2): 260.
[7] Wang Yongkang, Xia Xianqing, Zhan Yufeng. ELITE: defending federated learning against Byzantine attacks based on information entropy[C]//Proceedings of the 2021 China Automation Congress (CAC). Piscataway: IEEE, 2021: 6049-6054.
[8] Liu Pengrui, Xu Xiangrui, Wang Wei. Threats, attacks and defenses to federated learning: issues, taxonomy and perspectives[J]. Cybersecurity, 2022, 5: 4.
[9] Blanchard P, El Mhamdi E M, Guerraoui R, et al. Machine learning with adversaries: Byzantine tolerant gradient descent[C]//Neural Information Processing Systems. Long Beach: NeurIPS, 2017: 118-128.
[10] Cao Xiaoyu, Fang Minghong, Liu Jia, et al. FLTrust: Byzantine-robust federated learning via trust bootstrapping[C]//Proceedings 2021 Network and Distributed System Security Symposium. Reston: Internet Society, 2021: 1-18.
[11] Mai Peihua, Pang Yan, Yan Ran. RFLPA: A robust federated learning framework against poisoning attacks with secure aggregation[C]//Neural Information Processing Systems. Vancouver: NeurIPS, 2024: 104329-104356.
[12] Nguyen T D, Rieger P, Chen Huili, et al. FLAME: taming backdoors in federated learning[PP/OL]. V5. arXiv(2023-08-05)[2025-11-01]. https://arxiv.org/abs/2101.02281.
[13] Chen Haitian, Chen Xuebin, Peng Lulu, et al. FLRAM: Robust aggregation technique for defense against Byzantine poisoning attacks in federated learning[J]. Electronics, 2023, 12(21): 4463.
[14] Lei Cheng, Zhang Lin. Federated learning model based on update quality detection and malicious client identification[J]. Computer Science, 2024, 51(11): 368-378. [雷诚, 张琳. 基于更新质量检测和恶意客户端识别的联邦学习模型[J]. 计算机科学, 2024, 51(11): 368-378.]
[15] Li Shenghui, Ngai E C H, Voigt T. An experimental study of Byzantine-robust aggregation schemes in federated learning[J]. IEEE Transactions on Big Data, 2024, 10(6): 975-988.
[16] Shi Junyu, Wan Wei, Hu Shengshan, et al. Challenges and approaches for mitigating Byzantine attacks in federated learning[PP/OL]. V2. arXiv(2022-10-07)[2025-11-01]. https://arxiv.org/abs/2112.14468.
[17] Xia Peipei, Zhang Li, Li Fanzhang. Learning similarity with cosine similarity ensemble[J]. Information Sciences, 2015, 307: 39-52.
[18] Arthur D, Vassilvitskii S. k-means++: the advantages of careful seeding[C]//ACM-SIAM Symposium on Discrete Algorithms. New York: ACM, 2007: 1027-1035.
[19] Xiao Han, Rasul K, Vollgraf R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms[PP/OL]. V2. arXiv(2017-09-15)[2025-11-01]. https://arxiv.org/abs/1708.07747.
[20] Krizhevsky A. Learning multiple layers of features from tiny images[R/OL]. (2009-04-08)[2025-11-01]. https://www.cs.utoronto.ca/~kriz/learning-features-2009-TR.pdf.
[21] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[22] Xie Cong, Koyejo S, Gupta I. Fall of empires: breaking Byzantine-tolerant SGD by inner product manipulation[PP/OL]. V1. arXiv(2019-03-10)[2025-11-01]. https://arxiv.org/abs/1903.03936.
[23] Li Liping, Xu Wei, Chen Tianyi, et al. RSA: Byzantine-robust stochastic aggregation methods for distributed learning from heterogeneous datasets[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1): 1544-1551.
[24] Rajput S, Wang Hongyi, Charles Z, et al. DETOX: a redundancy-based framework for faster and more robust gradient aggregation[PP/OL]. V2. arXiv(2020-03-08)[2025-11-01]. https://arxiv.org/abs/1907.12205.
[25] Alkhunaizi N, Kamzolov D, Takáč M, et al. Suppressing poisoning attacks on federated learning for medical imaging[C]//Medical Image Computing and Computer Assisted Intervention. Cham: Springer, 2022: 673-683.
[26] Shejwalkar V, Houmansadr A. Manipulating the Byzantine: optimizing model poisoning attacks and defenses for federated learning[C]//Proceedings 2021 Network and Distributed System Security Symposium. Virtual: Internet Society, 2021: 1-18.
[27] Li Shenghui, Ngai E, Voigt T. Byzantine-robust aggregation in federated learning empowered industrial IoT[J]. IEEE Transactions on Industrial Informatics, 2023, 19(2): 1165-1175.