LUO P C, WANG J M, WANG S Q, et al. Research on deep learning based scientific dataset retrieval method[J]. Information Studies: Theory & Application, 2022, 45(7): 49-56.
[2] CHEN S H, XU T J. Long text QA matching model based on BiGRU-DAttention-DSSM[J]. Mathematics, 2021, 9(10): 1129.
[3] 冯皓楠, 何智勇, 马良荔. 基于图文注意力融合的主题标签推荐[J]. 郑州大学学报(工学版), 2022, 43(6): 30-35.
FENG H N, HE Z Y, MA L L. Multimodal hashtag recommendation based on image and text attention fusion[J]. Journal of Zhengzhou University (Engineering Science), 2022, 43(6): 30-35.
[4] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]∥Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6000-6010.
[5] LI Y D, ZHANG Y Q, ZHAO Z, et al. CSL: a large-scale Chinese scientific literature dataset[EB/OL]. (2022-09-12)[2023-06-11]. https://arxiv.org/abs/2209.05034.
[6] GAO T Y, YAO X C, CHEN D Q. SimCSE: simple contrastive learning of sentence embeddings[C]∥Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2021: 6894-6910.
[7] LI S H, GONG B. Word embedding and text classification based on deep learning methods[J]. MATEC Web of Conferences, 2021, 336: 06022.
[8] LIU J P, CHU X T, WANG Y F, et al. Deep text retrieval models based on DNN, CNN, RNN and Transformer: a review[C]∥2022 IEEE 8th International Conference on Cloud Computing and Intelligent Systems (CCIS). Piscataway: IEEE, 2022: 391-400.
[9] HUANG P S, HE X D, GAO J F, et al. Learning deep structured semantic models for web search using clickthrough data[C]∥Proceedings of the 22nd ACM International Conference on Information & Knowledge Management. New York: ACM, 2013: 2333-2338.
[10] SHEN Y L, HE X D, GAO J F, et al. A latent semantic model with convolutional-pooling structure for information retrieval[C]∥Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management. New York: ACM, 2014: 101-110.
[11] MOHAN S, FIORINI N, KIM S, et al. A fast deep learning model for textual relevance in biomedical information retrieval[C]∥Proceedings of the 2018 World Wide Web Conference. New York: ACM, 2018: 77-86.
[12] KHURANA D, KOLI A, KHATTER K, et al. Natural language processing: state of the art, current trends and challenges[J]. Multimedia Tools and Applications, 2023, 82(3): 3713-3744.
[13] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[C]∥Proceedings of the 26th International Conference on Neural Information Processing Systems: Volume 2. New York: ACM, 2013: 3111-3119.
[14] 汪烨, 周思源, 翁知远, 等. 一种面向用户反馈的智能分析与服务设计方法[J]. 郑州大学学报(工学版), 2023, 44(3): 56-61.
WANG Y, ZHOU S Y, WENG Z Y, et al. An intelligent analysis and service design method for user feedback[J]. Journal of Zhengzhou University (Engineering Science), 2023, 44(3): 56-61.
[15] LIU P F, YUAN W Z, FU J L, et al. Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing[J]. ACM Computing Surveys, 2023, 55(9): 195.
[16] CHOUDHARY S, GUTTIKONDA H, CHOWDHURY D R, et al. Document retrieval using deep learning[C]∥2020 Systems and Information Engineering Design Symposium (SIEDS). Piscataway: IEEE, 2020: 1-6.
[17] ESTEVA A, KALE A, PAULUS R, et al. COVID-19 information retrieval with deep-learning based semantic search, question answering, and abstractive summarization[J]. NPJ Digital Medicine, 2021, 4: 68.
[18] BELTAGY I, LO K, COHAN A. SciBERT: a pretrained language model for scientific text[C]∥Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg: Association for Computational Linguistics, 2019: 3615-3620.
[19] CHOWDHURY A, ROSENTHAL J, WARING J, et al. Applying self-supervised learning to medicine: review of the state of the art and medical implementations[J]. Informatics, 2021, 8(3): 59.
[20] REIMERS N, GUREVYCH I. Sentence-BERT: sentence embeddings using siamese BERT-networks[C]∥Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg: Association for Computational Linguistics, 2019: 3982-3992.
[21] LI L Y, SONG D M, MA R T, et al. KNN-BERT: fine-tuning pre-trained models with KNN classifier[EB/OL]. (2021-10-06)[2023-06-11]. https://arxiv.org/abs/2110.02523.
[22] PALANIVINAYAGAM A, EL-BAYEH C Z, DAMAŠEVIČIUS R. Twenty years of machine-learning-based text classification: a systematic review[J]. Algorithms, 2023, 16(5): 236.
[23] CHICCO D. Siamese neural networks: an overview[J]. Methods in Molecular Biology, 2021, 2190: 73-94.
[24] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[EB/OL]. (2019-05-24)[2023-06-11]. https://arxiv.org/abs/1810.04805.
[25] CUI Y M, CHE W X, LIU T, et al. Pre-training with whole word masking for Chinese BERT[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 3504-3514.
[26] CUI Y M, YANG Z Q, LIU T. PERT: pre-training BERT with permuted language model[EB/OL]. (2022-03-14)[2023-06-11]. https://arxiv.org/abs/2203.06906.
[27] CUI Y M, CHE W X, WANG S J, et al. LERT: a linguistically-motivated pre-trained language model[EB/OL]. (2022-11-10)[2023-06-11]. https://arxiv.org/abs/2211.05344.