[1]LUO R H, YUAN H, ZHONG F H, et al. Traffic jam detection based on convolutional neural network[J]. Journal of Zhengzhou University (Engineering Science), 2019, 40(2): 21-25.
[2]LI Y M, JIANG Y, LI Z F, et al. Backdoor learning: a survey[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(1): 5-22.
[3]GU T Y, LIU K, DOLAN-GAVITT B, et al. BadNets: evaluating backdooring attacks on deep neural networks[J]. IEEE Access, 2019, 7: 47230-47244.
[4]NGUYEN A, TRAN A. WaNet: imperceptible warping-based backdoor attack[EB/OL]. (2021-02-20)[2025-08-16]. https://doi.org/10.48550/arXiv.2102.10369.
[5]BARNI M, KALLAS K, TONDI B. A new backdoor attack in CNNs by training set corruption without label poisoning[C]∥2019 IEEE International Conference on Image Processing (ICIP). Piscataway: IEEE, 2019: 101-105.
[6]TRAN B, LI J, MADRY A. Spectral signatures in backdoor attacks[EB/OL]. (2018-11-01)[2025-08-16]. https://doi.org/10.48550/arXiv.1811.00636.
[7]WU D X, WANG Y S. Adversarial neuron pruning purifies backdoored deep models[EB/OL]. (2021-10-27)[2025-08-16]. https://doi.org/10.48550/arXiv.2110.14430.
[8]ZENG Y, CHEN S, PARK W, et al. Adversarial unlearning of backdoors via implicit hypergradient[EB/OL]. (2021-10-07)[2025-08-16]. https://doi.org/10.48550/arXiv.2110.03735.
[9]ZHENG R K, TANG R J, LI J Z, et al. Pre-activation distributions expose backdoor neurons[J]. Advances in Neural Information Processing Systems, 2022, 35: 18667-18680.
[10] ZHANG X Y, ZHOU X Y, LIN M X, et al. ShuffleNet: an extremely efficient convolutional neural network for mobile devices[C]∥2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 6848-6856.
[11] CAI R, ZHANG Z Y, CHEN T L, et al. Randomized channel shuffling: minimal-overhead backdoor attack detection without clean datasets[J]. Advances in Neural Information Processing Systems, 2022, 35: 33876-33889.
[12] CHEN H T, WANG Y H, XU C, et al. Data-free learning of student networks[C]∥2019 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE, 2019: 3514-3522.
[13] FANG G F, SONG J, SHEN C C, et al. Data-free adversarial distillation[EB/OL]. (2019-12-23)[2025-08-16]. https://arxiv.org/abs/1912.11006.
[14] SHI L C, JIAO Y Y, LU B L. Differential entropy feature for EEG-based vigilance estimation[C]∥2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Piscataway: IEEE, 2013: 6627-6630.
[15] CHEN X Y, LIU C, LI B, et al. Targeted backdoor attacks on deep learning systems using data poisoning[EB/OL]. (2017-12-15)[2025-08-16]. https://arxiv.org/abs/1712.05526.
[16] NGUYEN T A, TRAN A. Input-aware dynamic backdoor attack[J]. Advances in Neural Information Processing Systems, 2020, 33: 3454-3464.
[17]WANG Z T, ZHAI J, MA S Q. BppAttack: stealthy and efficient trojan attacks against deep neural networks via image quantization and contrastive adversarial learning[C]∥Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 15074-15084.
[18] LIU K, DOLAN-GAVITT B, GARG S. Fine-pruning: defending against backdooring attacks on deep neural networks[C]∥Research in Attacks, Intrusions, and Defenses. Cham: Springer, 2018: 273-294.
[19]WU B Y, CHEN H R, ZHANG M D, et al. BackdoorBench: a comprehensive benchmark of backdoor learning[J]. Advances in Neural Information Processing Systems, 2022, 35: 10546-10559.
[20] HEUSEL M, RAMSAUER H, UNTERTHINER T, et al. GANs trained by a two time-scale update rule converge to a local Nash equilibrium[EB/OL]. (2017-06-26)[2025-08-16]. https://arxiv.org/abs/1706.08500.
[21] VAN DER MAATEN L, HINTON G. Visualizing data using t-SNE[J]. Journal of Machine Learning Research, 2008, 9(11):2579-2605.