WU Jigang, LI Miaojun, ZHAO Shuping. Low-rank Sparse Representation Based on Elastic Least Squares Regression Learning[J]. Journal of Zhengzhou University (Engineering Science), 2023, 44(06): 25-32. [doi:10.13705/j.issn.1671-6833.2023.03.011]
Low-rank Sparse Representation Based on Elastic Least Squares Regression Learning
Journal of Zhengzhou University (Engineering Science) [ISSN: 1671-6833 / CN: 41-1339/T]
- Volume: 44
- Issue: 2023, No. 06
- Pages: 25-32
- Publication date: 2023-09-25
Article Information
- Title: Low-rank Sparse Representation Based on Elastic Least Squares Regression Learning
- Authors: WU Jigang; LI Miaojun; ZHAO Shuping
- Affiliation: School of Computer Science, Guangdong University of Technology, Guangzhou 510006, China
- Keywords: sparse representation; least squares regression; low-rank representation; flexibility; sparse error term
- DOI: 10.13705/j.issn.1671-6833.2023.03.011
- Document code: A
- Abstract: To overcome the tendency of retargeted least squares regression models to destroy the structure of the regression target, a low-rank sparse representation based elastic least squares regression learning model (LRSR-eLSR) was proposed. The model was built on least squares regression but, instead of using a strict 0-1 label matrix as the target matrix, introduced a margin constraint to learn the regression targets directly from the data, which increased the flexibility of the regression model while preserving the low-rank structure of the regression targets. Moreover, to capture the structural information of the data, a low-rank representation of the data was used to preserve its structure. In the computation, considering the complexity of solving the problem, nuclear norm regularization was used in place of the rank function. In addition, the model introduced a sparse error term with an L2,1-norm to compensate for regression errors, which facilitated learning a more flexible transformation. An additional regularization term was also imposed on the projection matrix to avoid overfitting. Experimental results showed that the recognition accuracy of the proposed model was better than that of other methods on four public datasets; on the COIL-20 dataset, the recognition rate reached 98%.
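The abstract names two regularizers: the nuclear norm as a convex surrogate for the rank function, and an L2,1-norm on the sparse error term. The full LRSR-eLSR objective is not reproduced on this page, but the two norms themselves are standard and can be sketched as follows (a minimal NumPy illustration, not the authors' implementation; the column-wise L2,1 convention is an assumption, as some papers sum row norms instead):

```python
import numpy as np

def nuclear_norm(Z):
    # Nuclear norm ||Z||_*: sum of singular values, the convex
    # surrogate used in place of the nonconvex rank(Z).
    return np.linalg.svd(Z, compute_uv=False).sum()

def l21_norm(E):
    # L2,1 norm ||E||_{2,1}: sum of the L2 norms of the columns of E,
    # which encourages column-wise (sample-wise) sparsity of the error.
    return np.linalg.norm(E, axis=0).sum()

# Sanity check: a rank-1 matrix has a single nonzero singular value,
# so its nuclear norm equals its Frobenius norm.
Z = np.outer([1.0, 2.0], [3.0, 4.0])
print(abs(nuclear_norm(Z) - np.linalg.norm(Z)) < 1e-9)  # True
print(l21_norm(np.eye(3)))  # 3 unit columns -> 3.0
```

Minimizing the nuclear norm drives small singular values to zero (a low-rank solution), while the L2,1 penalty zeroes out entire columns of the error matrix, which matches the paper's description of a sparse error term compensating for regression errors.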
Last Update: 2023-10-22