[3] RODRIGUEZ A, LAIO A. Clustering by fast search and find of density peaks[J]. Science, 2014, 344(6191): 1492-1496.
[4] GUO W J, WANG W H, ZHAO S P, et al. Density peak clustering with connectivity estimation[J]. Knowledge-Based Systems, 2022, 243: 108501.
[5] CHENG M C, MA T F, LIU Y B. A projection-based split-and-merge clustering algorithm[J]. Expert Systems with Applications, 2019, 116: 121-130.
[6] SIERANOJA S, FRÄNTI P. Adapting k-means for graph clustering[J]. Knowledge and Information Systems, 2022, 64(1): 115-142.
[7] HUANG J Z, NG M K, RONG H Q, et al. Automated variable weighting in k-means type clustering[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(5): 657-668.
[8] ZHANG Y Q, CHEUNG Y M. A new distance metric exploiting heterogeneous interattribute relationship for ordinal-and-nominal-attribute data clustering[J]. IEEE Transactions on Cybernetics, 2022, 52(2): 758-771.
[9] ZHOU C L, CHEN Y M, ZHU Y D. Granular K-means clustering algorithm[J]. Computer Engineering and Applications, 2023, 59(13): 317-324.
[10] DENG X Q, ZHENG L P, ZHANG Y Q, et al. Subspace clustering of heterogeneous-attribute data based on a new distance metric[J]. Journal of Zhengzhou University (Engineering Science), 2023, 44(2): 53-60.
[11] AVERBUCH-ELOR H, BAR N, COHEN-OR D. Border-peeling clustering[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(7): 1791-1797.
[12] DU M J, WANG R, JI R, et al. ROBP a robust border-peeling clustering using Cauchy kernel[J]. Information Sciences, 2021, 571: 375-400.
[13] CAPÓ M, PÉREZ A, LOZANO J A. An efficient split-merge re-start for the K-means algorithm[J]. IEEE Transactions on Knowledge and Data Engineering, 2022, 34(4): 1618-1627.
[14] LLOYD S. Least squares quantization in PCM[J]. IEEE Transactions on Information Theory, 1982, 28(2): 129-137.
[15] SHI J B, MALIK J. Normalized cuts and image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8): 888-905.
[16] VON LUXBURG U. A tutorial on spectral clustering[J]. Statistics and Computing, 2007, 17(4): 395-416.
[17] ZHA H Y, HE X F, DING C, et al. Spectral relaxation for K-means clustering[C]∥Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic. Cambridge: MIT, 2001: 1057-1064.
[18] ZHANG X L, WANG W, NØRVÅG K, et al. K-AP: generating specified K clusters by efficient affinity propagation[C]∥Proceedings of the 2010 IEEE International Conference on Data Mining. Piscataway: IEEE, 2010: 1187-1192.
[19] MEILĂ M. Comparing clusterings—an information based distance[J]. Journal of Multivariate Analysis, 2007, 98(5): 873-895.
[20] HUBERT L, ARABIE P. Comparing partitions[J]. Journal of Classification, 1985, 2: 193-218.
[21] VEENMAN C J, REINDERS M J T, BACKER E. A maximum variance cluster algorithm[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(9): 1273-1280.
[22] GIONIS A, MANNILA H, TSAPARAS P. Clustering aggregation[C]∥Proceedings of the 21st International Conference on Data Engineering (ICDE'05). Piscataway: IEEE, 2005: 341-352.
[23] KELLY M, LONGJOHN R, NOTTINGHAM K. The UCI machine learning repository[DB/OL]. [2023-06-29]. https://archive.ics.uci.edu/datasets.
[24] ALCALA-FDEZ J, FERNANDEZ A, LUENGO J, et al. KEEL data-mining software tool: data set repository, integration of algorithms and experimental analysis framework[J]. Journal of Multiple-Valued Logic and Soft Computing, 2011, 17(2/3): 255-287.