[1]刘 纳,郑国风,徐贞顺,等.基于小样本学习的口语理解方法综述[J].郑州大学学报(工学版),2024,45(01):78-89.[doi:10.13705/j.issn.1671-6833.2024.01.012]
 LIU Na,ZHENG Guofeng,XU Zhenshun,et al.A Survey of Spoken Language Understanding Based on Few-shot Learning[J].Journal of Zhengzhou University (Engineering Science),2024,45(01):78-89.[doi:10.13705/j.issn.1671-6833.2024.01.012]

基于小样本学习的口语理解方法综述

《郑州大学学报(工学版)》[ISSN:1671-6833/CN:41-1339/T]

Volume: 45
Issue: 2024(01)
Pages: 78-89
Publication date: 2024-01-19

文章信息/Info

Title:
A Survey of Spoken Language Understanding Based on Few-shot Learning
作者:
刘纳 郑国风 徐贞顺 林令德 李晨 杨杰
1. 北方民族大学 计算机科学与工程学院,宁夏 银川 750021;2. 北方民族大学 图像图形智能处理国家民委重点实验室,宁夏 银川 750021
Author(s):
LIU Na1,2; ZHENG Guofeng1,2; XU Zhenshun1,2; LIN Lingde1,2; LI Chen1,2; YANG Jie1,2
1. School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China; 
2. The Key Laboratory of Images and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China
关键词:
口语理解;小样本学习;模型微调;数据增强;度量学习
Keywords:
spoken language understanding; few-shot learning; fine-tuning; data augmentation; metric learning
DOI:
10.13705/j.issn.1671-6833.2024.01.012
Document code:
A
摘要:
小样本口语理解是目前对话式人工智能亟待解决的问题之一。结合国内外最新研究现状,系统地梳理了口语理解任务的相关文献。简要介绍了在非小样本场景中口语理解任务建模的经典方法,包括无关联建模、隐式关联建模、显式关联建模以及基于预训练范式的建模方法;重点阐述了在小样本口语理解任务中为解决训练样本受限问题而提出的基于模型微调、基于数据增强和基于度量学习3类方法,介绍了如ULMFiT、原型网络和归纳网络等代表性模型。在此基础上对不同模型的语义理解能力、可解释性、泛化能力等性能进行分析对比。最后对口语理解任务面临的挑战和未来发展方向进行讨论,指出零样本口语理解、中文口语理解、开放域口语理解以及跨语言口语理解等研究内容是该领域的研究难点。
Abstract:
Few-shot spoken language understanding (SLU) is one of the urgent problems in dialogue artificial intelligence (DAI). The relevant literature on the SLU task was systematically reviewed, combining the latest research trends both at home and abroad. The classic methods for SLU task modeling in non-few-shot scenarios were briefly introduced, including independent (non-joint) modeling, implicit joint modeling, explicit joint modeling, and pre-trained paradigms. The latest studies in few-shot SLU were then introduced, covering three kinds of few-shot learning methods based on model fine-tuning, data augmentation, and metric learning; representative models such as ULMFiT, the prototypical network, and the induction network were discussed. On this basis, the semantic understanding ability, interpretability, generalization ability, and other properties of the different methods were analyzed and compared. Finally, the challenges and future development directions of SLU tasks were discussed, and it was pointed out that zero-shot SLU, Chinese SLU, open-domain SLU, and cross-lingual SLU would be the research difficulties in this field.
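As a concrete illustration of the metric-learning family named in the abstract, the following is a minimal sketch of how a prototypical network classifies few-shot examples: class prototypes are the mean embeddings of each class's support examples, and queries are assigned to the nearest prototype. The 2-D "utterance embeddings" below are hand-made toy values for a 2-way 2-shot episode; no real SLU encoder or dataset is assumed.

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # A class prototype is the mean embedding of that class's support examples.
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Assign each query to its nearest prototype by Euclidean distance.
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way 2-shot episode with hand-made 2-D "utterance embeddings".
support = np.array([[0.0, 0.0], [0.2, 0.0],   # class 0 (e.g., intent A)
                    [5.0, 5.0], [5.2, 5.0]])  # class 1 (e.g., intent B)
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, n_classes=2)

queries = np.array([[0.1, 0.1], [4.9, 5.1]])
print(classify(queries, protos))  # → [0 1]
```

In the full model surveyed here, the embeddings come from a trained neural encoder and distances feed a softmax loss during episodic training; the nearest-prototype rule above is only the inference step.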
更新日期/Last Update: 2024-01-24