[1]万红,贾上坤,崔恩泽,等.基于人体姿态估计的站桩数字化表达与评估[J].郑州大学学报(工学版),2022,43(04):8-15.[doi:10.13705/j.issn.1671-6833.2022.04.023]
 WAN Hong,JIA Shangkun,CUI Enze,et al.Digital Expression and Evaluation of Standing Stake Based on Human Pose Estimation[J].Journal of Zhengzhou University (Engineering Science),2022,43(04):8-15.[doi:10.13705/j.issn.1671-6833.2022.04.023]

基于人体姿态估计的站桩数字化表达与评估 / Digital Expression and Evaluation of Standing Stake Based on Human Pose Estimation

《郑州大学学报(工学版)》/ Journal of Zhengzhou University (Engineering Science) [ISSN:1671-6833/CN:41-1339/T]

卷/Volume: 43
期数/Issue: 2022年04期 (No. 04, 2022)
页码/Pages: 8-15
出版日期/Publication Date: 2022-07-03

文章信息/Info

Title:
Digital Expression and Evaluation of Standing Stake Based on Human Pose Estimation
作者:
万红1,2, 贾上坤1,2, 崔恩泽1,2, 张俊明1,2
1.郑州大学电气工程学院; 2.郑州大学河南省脑科学与脑机接口技术重点实验室

Author(s):
WAN Hong1,2, JIA Shangkun1,2, CUI Enze1,2, ZHANG Junming1,2
1. School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China;
2. Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou University, Zhengzhou 450001, China
关键词:
Keywords:
standing stake; OpenPose; characteristic parameters; evaluation index; auxiliary training
分类号/CLC Number:
TP391
DOI:
10.13705/j.issn.1671-6833.2022.04.023
文献标志码/Document Code:
A
摘要:
站桩看似简单实则深奥。为解读站桩过程的内在规律，辅助学员站桩训练，结合实验和数据分析，基于人体姿态估计技术，提取站桩过程的动态特征参数，构建站桩姿态数字化表达与评估体系。首先，利用 OpenPose 人体姿态估计算法从站桩视频中提取人体关键点；其次，根据站桩要领确定数字化表达的关键特征参数；接着采用动态时间规整算法和判别分析法分别计算各特征参数的评估指标；最后，基于长期站桩数据，使用变异系数法赋予各评估指标不同的权重，探讨各特征参数的重要性并对站桩效能进行综合评估。具体实施包括：设计一个从正面和侧面采集站桩视频的实验，6 名太极拳专家和 22 名学员参与这项研究，学员被分为实验组和对照组。通过对专家组站桩参数的分析发现，站桩实则是一个动态过程，实验数据从正面、侧面表达了不同部位的动态特征参数；同时跟踪实验组学员 8 个月的长期站桩数据，通过与专家数据的对比评估发现，躯干、大腿、膝关节和髋关节是站桩过程中更重要的身体部位。此外，经过数字化评估指导后，实验组学员的站桩质量得到明显提升，验证了数字化表达与评估体系对于辅助训练的有效性。
Abstract:
Standing stake seems simple but is actually profound. In order to interpret the internal law of the standing stake process and assist students in standing stake training, combined with experiments and data analysis and based on human pose estimation technology, the dynamic characteristic parameters of the standing stake process were extracted, and a digital expression and evaluation system for standing stake posture was constructed. Firstly, the human pose estimation algorithm OpenPose was used to extract human keypoints from standing stake videos. Secondly, the key characteristic parameters for digital expression were determined according to the essentials of standing stake. Then, the dynamic time warping algorithm and discriminant analysis were used to calculate the evaluation indexes of each characteristic parameter. Finally, based on long-term standing stake data, the coefficient of variation method was used to assign different weights to each evaluation index, to discuss the importance of each characteristic parameter and to comprehensively evaluate standing stake performance. The specific implementation included designing an experiment to collect standing stake videos from the front and the side; six professional Tai Chi experts and twenty-two students participated in this research, and the students were divided into an experimental group and a control group. Through the analysis of the standing stake parameters of the expert group, it was found that standing stake was actually a dynamic process, and the experimental data expressed the dynamic characteristic parameters of different body parts from the front and the side. At the same time, the long-term standing stake data of students in the experimental group were tracked for eight months. Through comparative evaluation against the expert data, it was revealed that the trunk, thigh, knee and hip were the more important body parts in the standing stake process. In addition, after digital evaluation and guidance, the standing stake quality of students in the experimental group was significantly improved, which verified the effectiveness of the digital expression and evaluation system for auxiliary training.
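To make the pipeline above concrete, the following sketch computes two example characteristic-parameter series, the right-knee angle and the trunk inclination, from OpenPose BODY_25 keypoints. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes OpenPose has already been run on each frame of a standing stake video and produced a (T, 25, 2) keypoint array, and the two angles shown are only representative stand-ins for the richer parameter set determined from the essentials of standing stake.

```python
import numpy as np

# OpenPose BODY_25 keypoint indices used below
NECK, MID_HIP, R_HIP, R_KNEE, R_ANKLE = 1, 8, 9, 10, 11

def joint_angle(a, b, c):
    """Angle (in degrees) at vertex b formed by segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def characteristic_params(frames):
    """frames: (T, 25, 2) array of per-frame keypoint coordinates for one video.

    Returns two illustrative characteristic-parameter time series:
    the right-knee angle and the trunk inclination relative to vertical.
    """
    knee, trunk = [], []
    up = np.array([0.0, -1.0])  # "up" in image coordinates (y grows downward)
    for kp in frames:
        knee.append(joint_angle(kp[R_HIP], kp[R_KNEE], kp[R_ANKLE]))
        trunk.append(joint_angle(kp[NECK], kp[MID_HIP], kp[MID_HIP] + up))
    return {"knee_angle": np.array(knee), "trunk_incline": np.array(trunk)}
```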
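The evaluation stage can be sketched in the same spirit: a dynamic time warping distance compares a student's parameter series with an expert template, the per-parameter indexes collected over long-term sessions are weighted by the coefficient of variation, and a weighted sum gives a composite score. The distance-to-index mapping, the session-matrix layout, and all function names below are illustrative assumptions rather than the paper's exact formulation; the discriminant-analysis index is omitted.

```python
import numpy as np

def dtw_distance(x, y):
    """Plain dynamic time warping distance between two 1-D time series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def evaluation_indexes(student, expert):
    """Map each parameter's DTW distance to an index in (0, 1]; larger means
    closer to the expert template. Both arguments are dicts of name -> series.
    (The 1/(1+d) mapping is an assumption, not the paper's formula.)"""
    return {k: 1.0 / (1.0 + dtw_distance(student[k], expert[k])) for k in student}

def cv_weights(index_matrix):
    """index_matrix: (n_sessions, n_params) indexes from long-term training data.
    Coefficient-of-variation weighting: parameters whose index varies more
    across sessions receive a larger weight."""
    mu = index_matrix.mean(axis=0)
    sigma = index_matrix.std(axis=0)
    cv = sigma / (mu + 1e-8)
    return cv / cv.sum()

def overall_score(indexes, weights):
    """Weighted composite score for one session; assumes the dict iteration
    order of `indexes` matches the column order used to build `weights`."""
    return float(np.dot(weights, np.array(list(indexes.values()))))
```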

参考文献/References:

[1] 蔡建平, 淮湛欣. 太极拳运动考评软件的设计与实现[J]. 软件, 2012, 33(3): 60-63.
CAI J P, HUAI Z X. Design & implementation of Taiji motion appraisal system[J]. Software, 2012, 33(3): 60-63.
[2] HASHIMOTO H, NAKAJIMA M, KAWATA S, et al. Skill level evaluation of Taijiquan based on 3D body motion analysis[C]//2014 IEEE International Conference on Industrial Technology. Piscataway: IEEE, 2014: 712-717.
[3] 漆才杰, 戴国斌. 太极(定步)推手动作识别系统的设计与研制[J]. 武汉体育学院学报, 2015, 49(8): 52-56.
QI C J, DAI G B. Design and development of Taichi pushing hands (with fixed-foot stance) movements recognition system[J]. Journal of Wuhan institute of physical education, 2015, 49(8): 52-56.
[4] 薛智宏, 张利英, 程振华, 等. 基于 Kinect 的原地太极拳辅助训练系统[J]. 河北科技大学学报, 2017, 38(2): 183-189.
XUE Z H, ZHANG L Y, CHENG Z H, et al. Research of Tai-Chi-Chuan auxiliary training system based on Kinect[J]. Journal of Hebei university of science and technology, 2017, 38(2): 183-189.
[5] TOMPSON J J, JAIN A, LECUN Y, et al. Joint training of a convolutional network and a graphical model for human pose estimation[EB/OL]. (2014-09-17)[2021-11-08]. https://arxiv.org/abs/1406.2984.
[6] NING G H, ZHANG Z, HE Z Q. Knowledge-guided deep fractal neural networks for human pose estimation[J]. IEEE transactions on multimedia, 2018, 20(5): 1246-1259.
[7] 陈梦婷, 王兴刚, 刘文予. 基于密集深度插值的 3D 人体姿态估计方法[J]. 郑州大学学报(工学版), 2021, 42(3): 26-32.
CHEN M T, WANG X G, LIU W Y. Dense depth interpolation for 3D human pose estimation[J]. Journal of Zhengzhou university (engineering science), 2021, 42(3): 26-32.
[8] CAO Z, HIDALGO G, SIMON T, et al. OpenPose: realtime multi-person 2D pose estimation using part affinity fields[J]. IEEE transactions on pattern analysis and machine intelligence, 2021, 43(1): 172-186.
[9] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. (2015-04-10)[2021-11-08]. https://arxiv.org/abs/1409.1556.
[10] KHAN A, HARIS M, NADEEM S S, et al. Virtual self defense trainer-analyzing and scoring user pose[C]//2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications. Piscataway: IEEE, 2020: 1-5.
[11] YADAV S K, SINGH A, GUPTA A, et al. Real-time Yoga recognition using deep learning[J]. Neural computing and applications, 2019, 31(12): 9349-9361.
[12] THAR M C, WINN K Z N, FUNABIKI N. A proposal of Yoga pose assessment method using pose detection for self-learning[C]//2019 International Conference on Advanced Information Technologies (ICAIT). Piscataway: IEEE, 2019: 137-142.
[13] 唐心宇, 宋爱国. 人体姿态估计及在康复训练情景交互中的应用[J]. 仪器仪表学报, 2018, 39(11): 195-203.
TANG X Y, SONG A G. Human pose estimation and its implementation in scenario interaction system of rehabilitation training[J]. Chinese journal of scientific instrument, 2018, 39(11): 195-203.
[14] TAKEDA I, YAMADA A, ONODERA H. Artificial intelligence-assisted motion capture for medical applications: a comparative study between markerless and passive marker motion capture[J]. Computer methods in biomechanics and biomedical engineering, 2021, 24(8): 864-873.
[15] GIORGINO T. Computing and visualizing dynamic time warping alignments in R: the dtw package[J]. Journal of statistical software, 2009, 31(7): 1-24.
[16] YU X Q, XIONG S P. A dynamic time warping based algorithm to evaluate Kinect-enabled home-based physical rehabilitation exercises for older people[J]. Sensors, 2019, 19(13): 2882.
[17] 张勇, 党兰学. 线性判别分析特征提取稀疏表示人脸识别方法[J]. 郑州大学学报(工学版), 2015, 36(2): 94-98.
ZHANG Y, DANG L X. Sparse representation-based face recognition method by LDA feature extraction[J]. Journal of Zhengzhou university (engineering science), 2015, 36(2): 94-98.

更新日期/Last Update: 2022-07-03