
Journal of Electronics & Information Technology, Vol. 39, No. 2, Feb. 2017


CLC number: TP181; Document code: A; Article ID: 1009-5896(2017)02-0278-07; DOI: 10.11999/JEIT160260

Large Scale Spectral Clustering Based on Fast Landmark Sampling

YE Mao*  LIU Wenfen

(PLA Information Engineering University, Zhengzhou 450002, China)

(State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou 450002, China)

Abstract: The applicability of traditional spectral clustering to large-scale data sets is limited by its high complexity. By constructing an affinity matrix between landmark points and data points, the Landmark-based Spectral Clustering (LSC) algorithm significantly reduces the computational complexity of spectral embedding. The strategy used to generate the landmark points is vital to the clustering result. On big data problems, the existing generation strategies have clear deficiencies: random sampling yields unstable results, while the k-means centers method suffers from unknown convergence time and repeated reads of the data. In this paper, a rapid landmark-sampling spectral clustering algorithm based on approximate Singular Value Decomposition (SVD) is designed, in which the sampling probability of each landmark point is determined by the row norm of the approximate singular vector matrix. Compared with LSC based on random sampling, the clustering results of the new algorithm are more stable and accurate; compared with LSC based on k-means centers, the new algorithm reduces the computational complexity. Moreover, the preservation of information in the original data by the landmark-sampling results is analyzed theoretically, and the performance of the new approach is verified by experiments on several public data sets.

Key words: Landmark sampling; Big data; Spectral clustering; Approximate singular value decomposition
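As an illustration of the sampling rule summarized in the abstract, the sketch below shows one plausible way to compute landmark-sampling probabilities from an approximate SVD: a randomized range finder approximates the top left singular vectors of the data matrix, and each point is sampled with probability proportional to the squared norm of its row in that matrix. This is a hedged sketch only, not the authors' reference implementation; the function names (approximate_left_singular_vectors, landmark_sampling_probabilities, sample_landmarks), the choice of squared rather than plain row norms, and the NumPy-only randomized SVD are assumptions made for illustration.

    import numpy as np

    def approximate_left_singular_vectors(X, rank, oversample=10, seed=0):
        # Randomized range finder + small SVD: a cheap approximation of the
        # top-`rank` left singular vectors of the data matrix X (n x d).
        rng = np.random.default_rng(seed)
        omega = rng.standard_normal((X.shape[1], rank + oversample))
        Y = X @ omega                      # n x (rank + oversample) sketch of the range of X
        Q, _ = np.linalg.qr(Y)             # orthonormal basis of that range
        B = Q.T @ X                        # small (rank + oversample) x d matrix
        U_small, _, _ = np.linalg.svd(B, full_matrices=False)
        return (Q @ U_small)[:, :rank]     # approximate n x rank left singular vectors

    def landmark_sampling_probabilities(X, rank):
        # Sampling probability of each point, proportional to the squared norm
        # of its row in the approximate singular vector matrix (an assumption;
        # the paper's exact normalization of the "row length" may differ).
        U = approximate_left_singular_vectors(X, rank)
        row_norms = np.sum(U ** 2, axis=1)
        return row_norms / row_norms.sum()

    def sample_landmarks(X, n_landmarks, rank, seed=0):
        # Draw landmark indices according to the row-norm probabilities.
        rng = np.random.default_rng(seed)
        p = landmark_sampling_probabilities(X, rank)
        return rng.choice(X.shape[0], size=n_landmarks, replace=False, p=p)

The selected indices would then play the role of the landmark points in LSC, from which the affinity matrix between landmarks and all data points is built for the spectral embedding.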

Received: 2016-03-21; Revised: 2016-07-18; Published online: 2016-09-30
*Corresponding author: YE Mao
Foundation Items: The National 973 Program of China (2012CB315905), The National Natural Science Foundation of China (61502527, 61379150)

1 Introduction

Cluster analysis partitions a data set into subsets according to similarity, so that the internal structure of the data can be discovered from the resulting groups; it is one of the principal methods of pattern recognition and data mining [1]. Traditional clustering algorithms (such as k-means) perform poorly on non-convex data sets, which has made spectral clustering [2,3], applicable to non-convex data sets and able to detect linearly non-separable clusters, a research focus in cluster analysis. However, traditional spectral clustering involves constructing an affinity matrix and eigendecomposing the corresponding Laplacian matrix, which requires O(n^2) space and O(n^3) time, a computational burden that is prohibitive for large-scale data sets.
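For reference, the sketch below spells out where the O(n^2) and O(n^3) terms come from in standard spectral clustering, assuming NumPy: the dense Gaussian affinity construction and the full eigendecomposition are exactly the steps that landmark-based methods are designed to avoid. The function name spectral_embedding_dense and the kernel width sigma are illustrative assumptions, not part of the paper.

    import numpy as np

    def spectral_embedding_dense(X, k, sigma=1.0):
        # Textbook normalized spectral embedding of n points.
        n = X.shape[0]
        # Full pairwise Gaussian affinities: the O(n^2) bottleneck in space.
        sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        W = np.exp(-sq_dists / (2.0 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        # Symmetrically normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
        d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
        L = np.eye(n) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
        # Dense eigendecomposition: the O(n^3) bottleneck in time.
        _, eigvecs = np.linalg.eigh(L)
        U = eigvecs[:, :k]                 # eigenvectors of the k smallest eigenvalues
        U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
        return U                           # k-means on the rows of U yields the final clusters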

To improve the scalability of spectral clustering, a natural idea is to design algorithms that reduce the cost of the eigendecomposition. In 2004, Fowlkes et al. [4] improved the Nyström method and applied it to spectral clustering
