LI Minghui, MA Wenkai, ZHOU Yimin, et al. UAV Life Search Method Based on Multi-sensor Fusion[J]. Journal of Zhengzhou University (Engineering Science), 2023, 44(02): 61-67. [doi:10.13705/j.issn.1671-6833.2023.02.003]


Journal of Zhengzhou University (Engineering Science) [ISSN:1671-6833/CN:41-1339/T]

Volume:
44
Issue:
2023(02)
Pages:
61-67
Publication Date:
2023-02-27

Article Info

Title:
UAV Life Search Method Based on Multi-sensor Fusion
Author(s):
LI Minghui1, MA Wenkai1, ZHOU Yimin2, YE Lingjian2
1. School of Mechanical and Electrical Engineering, Shaanxi University of Science and Technology, Xi'an 710016, Shaanxi, China; 2. Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong, China

Keywords:
data fusion; infrared image features; audio features; discriminant correlation analysis (DCA); UAV life search method
CLC Number:
TP391.4
DOI:
10.13705/j.issn.1671-6833.2023.02.003
Document Code:
A
Abstract:
To cope with the instability of a single life-detection sensor when searching for life in the field and in disaster areas, a UAV life search method based on multi-sensor information fusion was proposed. First, ResNeXt networks with different structures were constructed to extract features from information of different dimensionality: a one-dimensional ResNeXt network extracted deep features from audio Mel-frequency cepstral coefficients, and a two-dimensional ResNeXt network extracted deep features from infrared images. Second, the two high-dimensional feature sets were fused with dimensionality reduction by discriminant correlation analysis (DCA), which accounts for both the correlation and the class information of the different features, so as to obtain richer environmental information and improve life search accuracy. Finally, the fused features were fed into a support vector machine classifier for the life-recognition decision. A correlated bimodal dataset of audio and infrared images was built, and the proposed method was experimentally compared and analyzed on this dataset to evaluate its performance. The experimental results showed that the proposed method outperformed other traditional methods in feature extraction and feature fusion, and that the multi-sensor fusion recognition accuracy reached 98.7%, demonstrating that the method can effectively improve the accuracy of human detection in special scenes and that multi-sensor fusion detection outperforms a single sensor.
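The fusion stage described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the deep audio and infrared feature vectors have already been extracted (random stand-ins are used here), and it implements a simplified form of discriminant correlation analysis, whitening the between-class scatter of each feature set and then diagonalizing their between-set covariance before concatenation.

```python
import numpy as np

def dca_fuse(X, Y, labels):
    """Simplified DCA fusion of two feature sets X (n, d1) and Y (n, d2)
    sharing class labels. Each set is projected to at most c-1 dimensions
    (c = number of classes), the projections are made pairwise uncorrelated
    across sets, and the results are concatenated."""
    classes = np.unique(labels)
    r = len(classes) - 1  # the between-class scatter has rank <= c-1

    def whiten_between_class(F):
        mu = F.mean(axis=0)
        # Rows of Phi span the between-class scatter Sb = Phi.T @ Phi
        Phi = np.stack([np.sqrt(np.sum(labels == k)) * (F[labels == k].mean(axis=0) - mu)
                        for k in classes])
        # Eigendecompose the small (c x c) matrix Phi @ Phi.T instead of Sb
        lam, Q = np.linalg.eigh(Phi @ Phi.T)
        keep = lam > lam.max() * 1e-8
        lam, Q = lam[keep][-r:], Q[:, keep][:, -r:]
        W = Phi.T @ Q / lam  # whitening transform: W.T @ Sb @ W = I
        return (F - mu) @ W

    Xp, Yp = whiten_between_class(X), whiten_between_class(Y)
    # Diagonalize the between-set covariance so paired dimensions correlate
    U, s, Vt = np.linalg.svd(Xp.T @ Yp)
    return np.hstack([Xp @ (U / np.sqrt(s)), Yp @ (Vt.T / np.sqrt(s))])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = np.repeat([0, 1, 2], 30)
    # Hypothetical stand-ins for the two modalities' deep feature vectors
    audio = rng.normal(size=(90, 64)) + labels[:, None]          # 1D-ResNeXt output
    infrared = rng.normal(size=(90, 128)) + 2 * labels[:, None]  # 2D-ResNeXt output
    fused = dca_fuse(audio, infrared, labels)
    print(fused.shape)  # (90, 4): each modality reduced to c-1 = 2 dimensions
```

In the paper's pipeline, the fused vectors would then be passed to a support vector machine classifier for the final life/no-life decision.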

References:

[1] KEYWORTH S, WOLFE S. UAVs for land use applications[C]//IET Seminar on UAVs in the Civilian Airspace. London: IET, 2013: 1-13.
[2] MA Y Y, LIANG F L, WANG P F, et al. Research on identifying different life states based on the changes of vital signs of rabbit under water and food deprivation by UWB radar measurement[C]//2019 Photonics & Electromagnetics Research Symposium-Fall (PIERS-Fall). Piscataway: IEEE, 2019: 397-403.
[3] SEKI M, FUJIWARA H, SUMI K. A robust background subtraction method for changing background[C]//Proceedings Fifth IEEE Workshop on Applications of Computer Vision. Piscataway: IEEE, 2000: 207-213.
[4] LIU X C, FENG X L. Research on weak signal detection for downhole acoustic telemetry system[C]//2010 3rd International Congress on Image and Signal Processing. Piscataway: IEEE, 2010: 4432-4435.
[5] ZACHARIE M, FUJI S, MINORI S. Rapid human body detection in disaster sites using image processing from unmanned aerial vehicle (UAV) cameras[C]//2018 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS). Piscataway: IEEE, 2018: 230-235.
[6] MCCLURE J, SAHIN F. A low-cost search-and-rescue drone for near real-time detection of missing persons[C]//2019 14th Annual Conference System of Systems Engineering (SoSE). Piscataway: IEEE, 2019: 13-18.
[7] VALSAN A, B P, G H V D, et al. Unmanned aerial vehicle for search and rescue mission[C]//2020 4th International Conference on Trends in Electronics and Informatics (ICOEI)(48184). Piscataway: IEEE, 2020: 684-687.
[8] YAMAZAKI Y, PREMACHANDRA C, PEREA C J. Audio-processing-based human detection at disaster sites with unmanned aerial vehicle[J]. IEEE Access, 2020, 8: 101398-101405.
[9] PERDANA M I, RISNUMAWAN A, SULISTIJONO I A. Automatic aerial victim detection on low-cost thermal camera using convolutional neural network[C]//2020 International Symposium on Community-centric Systems (CcS). Piscataway: IEEE, 2020: 1-5.
[10] MARUŠIĆ Ž, ZELENIKA D, MARUŠIĆ T, et al. Visual search on aerial imagery as support for finding lost persons[C]//2019 8th Mediterranean Conference on Embedded Computing (MECO). Piscataway: IEEE, 2019: 1-4.
[11] HAGHIGHAT M, ABDEL-MOTTALEB M, ALHALABI W. Discriminant correlation analysis: real-time feature level fusion for multimodal biometric recognition[J]. IEEE Transactions on Information Forensics and Security, 2016, 11(9): 1984-1996.
[12] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.
[13] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Advances in Neural Information Processing Systems, 2012, 25: 1097-1105.
[14] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. (2014-09-04)[2021-11-12]. https://arxiv.org/abs/1409.1556.
[15] SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions[C]//Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2015: 1-9.
[16] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778.
[17] XIE S N, GIRSHICK R, DOLLÁR P, et al. Aggregated residual transformations for deep neural networks[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 5987-5995.
[18] GONG R K, LIU J. Research on maize disease recognition based on image processing[J]. Modern Electronics Technique, 2021, 44(24): 149-152.
[19] SALAMON J, JACOBY C, BELLO J P. A dataset and taxonomy for urban sound research[C]//Proceedings of the 22nd ACM International Conference on Multimedia. New York: ACM, 2014: 1041-1044.
[20] ZHANG J X, GUO S W, ZHANG G L, et al. Fire detection model based on multi-scale feature fusion[J]. Journal of Zhengzhou University (Engineering Science), 2021, 42(5): 13-18.
[21] MU X M, ZHANG S S, QI L. Human emotion recognition using fused 2D-fractional Fourier transform features based on CCA[J]. Journal of Zhengzhou University (Engineering Science), 2012, 33(1): 109-112.


Last Update: 2023-02-25