• Title/Abstract/Keywords: spectral clustering

Spectral clustering based on the local similarity measure of shared neighbors

  • Cao, Zongqi;Chen, Hongjia;Wang, Xiang
    • ETRI Journal / Vol. 44, No. 5 / pp. 769-779 / 2022
  • Spectral clustering has become a typical and efficient clustering method used in a variety of applications. The critical step of spectral clustering is the similarity measurement, which largely determines the performance of the spectral clustering method. In this paper, we propose a novel spectral clustering algorithm based on the local similarity measure of shared neighbors. This similarity measurement exploits the local density information between data points based on the weight of the shared neighbors in a directed k-nearest neighbor graph with only one parameter k, that is, the number of nearest neighbors. Numerical experiments on synthetic and real-world datasets demonstrate that our proposed algorithm outperforms other existing spectral clustering algorithms in terms of the clustering performance measured via the normalized mutual information, clustering accuracy, and F-measure. As an example, the proposed method can provide an improvement of 15.82% in the clustering performance for the Soybean dataset.
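
The abstract does not spell out the exact weighting formula, but the idea of scoring similarity by shared neighbors in a directed kNN graph can be sketched as follows. This is a minimal illustration, not the authors' method: the function name `shared_neighbor_affinity` and the normalization by k are our own choices.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import SpectralClustering

def shared_neighbor_affinity(X, k=10):
    """Affinity from shared neighbors in a directed kNN graph (illustrative weighting)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)              # idx[i] lists point i itself plus its k nearest neighbors
    neigh = [set(row[1:]) for row in idx]  # drop self; directed neighborhood of each point
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in neigh[i]:
            shared = neigh[i] & neigh[j]
            # more shared neighbors -> higher similarity (normalized to [0, 1])
            W[i, j] = W[j, i] = len(shared) / k
    return W

# usage (with some data matrix X of shape (n_samples, n_features)):
# labels = SpectralClustering(n_clusters=3, affinity="precomputed").fit_predict(shared_neighbor_affinity(X))
```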

Robust Similarity Measure for Spectral Clustering Based on Shared Neighbors

  • Ye, Xiucai;Sakurai, Tetsuya
    • ETRI Journal / Vol. 38, No. 3 / pp. 540-550 / 2016
  • Spectral clustering is a powerful tool for exploratory data analysis. Many existing spectral clustering algorithms typically measure the similarity by using a Gaussian kernel function or an undirected k-nearest neighbor (kNN) graph, which cannot reveal the real clusters when the data are not well separated. In this paper, to improve the spectral clustering, we consider a robust similarity measure based on the shared nearest neighbors in a directed kNN graph. We propose two novel algorithms for spectral clustering: one based on the number of shared nearest neighbors, and one based on their closeness. The proposed algorithms are able to explore the underlying similarity relationships between data points, and are robust to datasets that are not well separated. Moreover, the proposed algorithms have only one parameter, k. We evaluated the proposed algorithms using synthetic and real-world datasets. The experimental results demonstrate that the proposed algorithms not only achieve a good level of performance, they also outperform the traditional spectral clustering algorithms.
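
For context, any of the shared-neighbor affinities above feeds into the same downstream pipeline. The sketch below shows the standard normalized spectral clustering steps (Ng-Jordan-Weiss style), assuming a precomputed affinity matrix `W`; it is a generic reference implementation, not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(W, n_clusters):
    """Standard normalized spectral clustering on a precomputed affinity matrix W."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(W.shape[0]) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    # eigenvectors of the n_clusters smallest eigenvalues span the embedding
    _, U = eigh(L, subset_by_index=[0, n_clusters - 1])
    U = U / np.linalg.norm(U, axis=1, keepdims=True)   # row-normalize the embedding
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)
```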

Spectral Clustering with Sparse Graph Construction Based on Markov Random Walk

  • Cao, Jiangzhong;Chen, Pei;Ling, Bingo Wing-Kuen;Yang, Zhijing;Dai, Qingyun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 7 / pp. 2568-2584 / 2015
  • Spectral clustering has become one of the most popular clustering approaches in recent years. The similarity graph constructed on the data is one of the key factors that influence the performance of spectral clustering. However, the similarity graphs constructed by existing methods usually contain some unreliable edges. To construct a reliable similarity graph for spectral clustering, an efficient method based on Markov random walk (MRW) is proposed in this paper. In the proposed method, the MRW model is defined on the raw k-NN graph, and the neighbors of each sample are determined by the probability of the MRW. Since the high-order transition probabilities carry complex relationships among data, the neighbors in the graph determined by our proposed method are more reliable than those of the existing methods. Experiments are performed on synthetic and real-world datasets for performance evaluation and comparison. The results show that the graph obtained by our proposed method reflects the structure of the data better than those of the state-of-the-art methods and can effectively improve the performance of spectral clustering.
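
A minimal sketch of the MRW idea, selecting each point's neighbors from multi-step transition probabilities on a raw kNN graph. The step count `t`, the cutoff `m`, and the function name are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def mrw_neighbors(W_knn, t=3, m=10):
    """Pick each point's m most reachable neighbors from t-step random-walk probabilities.

    W_knn : raw kNN affinity matrix; t : number of walk steps; m : neighbors kept per point.
    """
    P = W_knn / np.maximum(W_knn.sum(axis=1, keepdims=True), 1e-12)  # one-step transition matrix
    Pt = np.linalg.matrix_power(P, t)              # high-order transition probabilities
    np.fill_diagonal(Pt, 0.0)                      # ignore returning to the start node
    neighbors = np.argsort(-Pt, axis=1)[:, :m]     # most probable destinations per row
    return neighbors, Pt
```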

Semidefinite Spectral Clustering (준정부호 스펙트럼의 군집화)

  • 김재환;최승진
    • KIISE Conference Proceedings (한국정보과학회 학술대회논문집) / Proceedings of the 2005 Korea Computer Congress, Vol. 32, No. 1 (A) / pp. 892-894 / 2005
  • Graph partitioning provides an important tool for data clustering, but it is an NP-hard combinatorial optimization problem. Spectral clustering, in which the clustering is performed via the eigen-decomposition of an affinity matrix [1,2], is a popular way of solving the graph partitioning problem. Semidefinite relaxation, on the other hand, is an alternative way of relaxing the combinatorial optimization into a convex optimization [4]. In this paper we present a semidefinite programming (SDP) approach to graph equi-partitioning for clustering and then use eigen-decomposition to obtain an optimal partition set. The method is therefore referred to as semidefinite spectral clustering (SSC). Numerical experiments with several artificial and real data sets demonstrate the useful behavior of our SSC compared to existing spectral clustering methods.
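
The paper's exact semidefinite program is not given in the abstract; the textbook SDP relaxation of K-way graph equipartitioning below is shown for orientation only and may differ from the authors' formulation.

```latex
% Textbook SDP relaxation of K-way graph equipartitioning (for orientation only).
% W: affinity matrix, n: number of nodes, Y: relaxation of the cluster
% co-membership matrix X X^T.
\begin{align*}
\max_{Y \in \mathbb{S}^{n}} \quad & \operatorname{tr}(W Y) \\
\text{subject to} \quad & \operatorname{diag}(Y) = \mathbf{1}, \\
& Y \mathbf{1} = \tfrac{n}{K}\, \mathbf{1}, \\
& Y \succeq 0, \quad Y \ge 0 .
\end{align*}
% The leading K eigenvectors of the optimal Y are then clustered,
% which is what makes the method "spectral".
```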


An Improved Automated Spectral Clustering Algorithm

  • Xiaodan Lv
    • Journal of Information Processing Systems
    • /
    • 제20권2호
    • /
    • pp.185-199
    • /
    • 2024
  • In this paper, an improved automated spectral clustering (IASC) algorithm is proposed to address the limitations of the traditional spectral clustering (TSC) algorithm, particularly its inability to automatically determine the number of clusters. Firstly, a cluster number evaluation factor based on the optimal clustering principle is proposed: by iterating over different values of k, the value with the largest evaluation factor is selected as the number of clusters. Secondly, the IASC algorithm adopts a density-sensitive distance to measure the similarity between sample points, which yields high similarity for data distributed in the same high-density area. Thirdly, to improve clustering accuracy, the IASC algorithm uses the cosine angle classification method instead of K-means to classify the eigenvectors. Six algorithms (K-means, fuzzy C-means, TSC, EIGENGAP, DBSCAN, and density peak) were compared with the proposed algorithm on six datasets. The results show that the IASC algorithm not only determines the number of clusters automatically but also obtains better clustering accuracy on both synthetic and UCI datasets.
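
The abstract does not define the density-sensitive distance precisely; one common construction from the density-sensitive clustering literature, used here purely as an illustration, stretches Euclidean edge lengths and then takes shortest paths.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def density_sensitive_distance(X, rho=2.0):
    """Density-sensitive distance: stretch Euclidean edges by rho**d - 1, then take shortest paths.

    One common definition from the density-sensitive clustering literature; the paper
    may define the distance differently, so treat this as an illustration only.
    """
    D = cdist(X, X)                      # pairwise Euclidean distances
    stretched = rho ** D - 1.0           # short edges stay cheap, long edges become expensive
    # shortest path through the fully connected, stretched graph (Dijkstra)
    return shortest_path(stretched, method="D", directed=False)
```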

Spectral Clustering: Summary and Recent Research Issues (스펙트럴 클러스터링 - 요약 및 최근 연구동향)

  • 정상훈;배수현;김충락
    • The Korean Journal of Applied Statistics (응용통계연구) / Vol. 33, No. 2 / pp. 115-122 / 2020
  • K-means clustering is very widely used, but because its notion of similarity is defined in terms of spheres or ellipsoids, it gives good results only when each cluster forms a convex set and performs very poorly otherwise. Spectral clustering not only compensates well for this shortcoming of K-means clustering but also works well for data of various shapes and for high-dimensional data, and it has recently been used widely in connection with artificial neural network models. However, it still has many shortcomings to be addressed. This paper gives an accessible introduction to spectral clustering and reviews recent research issues such as estimating the number of clusters, estimating the scale parameter, and dimension reduction for high-dimensional data.
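
For readers new to the topic, the scale parameter and the spectral embedding the review refers to appear in the standard construction below (Gaussian affinity plus the symmetric normalized Laplacian); this is the usual textbook setup rather than anything specific to this paper.

```latex
% Standard spectral clustering construction: Gaussian affinity with scale
% parameter sigma, degree matrix D, and the symmetric normalized Laplacian
% whose leading eigenvectors give the low-dimensional spectral embedding.
W_{ij} = \exp\!\left( -\frac{\lVert x_i - x_j \rVert^2}{2\sigma^2} \right),
\qquad
D = \operatorname{diag}\!\Big( \sum\nolimits_{j} W_{ij} \Big),
\qquad
L_{\mathrm{sym}} = I - D^{-1/2} W D^{-1/2}.
```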

Multiview Data Clustering by Using Adaptive Spectral Co-clustering (적응형 분광 군집 방법을 이용한 다중 특징 데이터 군집화)

  • 손정우;전준기;이상윤;김선중
    • Journal of KIISE (정보과학회 논문지) / Vol. 43, No. 6 / pp. 686-691 / 2016
  • This paper introduces adaptive spectral co-clustering, a spectral clustering method for data with multiple feature views, in particular three or more, and analyzes its performance on simulated data and multilingual data. Adaptive spectral co-clustering improves clustering performance by sharing information across the feature views while clustering the data. To share information between different views efficiently, co-training is adopted: weights are learned so that the views become mutually independent, and information is propagated according to the learned weights. This makes information sharing more efficient than simple feature concatenation or existing co-training-based spectral clustering that assumes all views are independent. Experiments on simulated data and multilingual document data verify the performance, and the meaningful improvement of the adaptive spectral co-clustering method is analyzed by showing how the performance and the propagated information change over the iterations.
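
A minimal sketch of co-training-style information sharing between views, in the spirit of co-trained multiview spectral clustering; the adaptive, learned per-view weights that the paper introduces are deliberately omitted here, so this is only a rough illustration of the iteration structure.

```python
import numpy as np
from scipy.linalg import eigh

def cotrain_spectral_embeddings(affinities, n_clusters, n_iters=5):
    """Co-training-style spectral clustering across two or more views (illustrative only)."""
    def embedding(W):
        d = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
        S = (W * d[:, None]) * d[None, :]              # normalized affinity D^-1/2 W D^-1/2
        _, U = eigh(S)
        return U[:, -n_clusters:]                      # top eigenvectors as the embedding

    W = [np.array(A, dtype=float) for A in affinities]
    for _ in range(n_iters):
        U = [embedding(Wv) for Wv in W]
        for v in range(len(W)):
            # project view v's affinity using the other views' spectral embeddings
            P = sum(U[u] @ U[u].T for u in range(len(W)) if u != v) / (len(W) - 1)
            Wv_new = P @ W[v] @ P
            W[v] = 0.5 * (Wv_new + Wv_new.T)           # symmetrize for numerical safety
    # concatenate or average the returned embeddings and run k-means for final clusters
    return [embedding(Wv) for Wv in W]
```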

A Max-Flow-Based Similarity Measure for Spectral Clustering

  • Cao, Jiangzhong;Chen, Pei;Zheng, Yun;Dai, Qingyun
    • ETRI Journal / Vol. 35, No. 2 / pp. 311-320 / 2013
  • In most spectral clustering approaches, the Gaussian kernel-based similarity measure is used to construct the affinity matrix. However, such a similarity measure does not work well on a dataset with a nonlinear and elongated structure. In this paper, we present a new similarity measure to deal with the nonlinearity issue. The maximum flow between data points is computed as the new similarity, which satisfies the requirements for a similarity in the clustering setting and carries both the global and the local relations between data. We apply it to spectral clustering and compare the proposed similarity measure with other state-of-the-art methods on both synthetic and real-world data. The experimental results show the superiority of the new similarity: 1) the max-flow-based similarity measure can significantly improve the performance of spectral clustering; 2) it is robust and not sensitive to the parameters.
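
A rough sketch of a max-flow similarity on a kNN graph. The Gaussian edge capacities and the all-pairs loop are illustrative assumptions rather than the paper's construction, and the cost is one max-flow computation per pair, so this only scales to small datasets.

```python
import networkx as nx
import numpy as np
from sklearn.neighbors import kneighbors_graph

def max_flow_affinity(X, k=10):
    """Similarity of two points = maximum flow between them on a capacitated kNN graph."""
    A = kneighbors_graph(X, n_neighbors=k, mode="distance").toarray()
    A = np.maximum(A, A.T)                             # symmetrize the kNN graph
    sigma = A[A > 0].mean()                            # crude scale for the edge capacities
    G = nx.DiGraph()
    for i, j in zip(*np.nonzero(A)):
        if i != j:
            G.add_edge(int(i), int(j), capacity=float(np.exp(-A[i, j] ** 2 / (2 * sigma ** 2))))
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):                                 # O(n^2) flow computations: small n only
        for j in range(i + 1, n):
            if nx.has_path(G, i, j):
                W[i, j] = W[j, i] = nx.maximum_flow_value(G, i, j)
    return W
```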

K-Way Graph Partitioning: A Semidefinite Programming Approach (Semidefinite Programming을 통한 그래프의 동시 분할법)

  • Jaehwan, Kim;Seungjin, Choi;Sung-Yang, Bang
    • KIISE Conference Proceedings (한국정보과학회 학술대회논문집) / Proceedings of the 2004 KIISE Fall Conference, Vol. 31, No. 2 (1) / pp. 697-699 / 2004
  • Despite many successful spectral clustering algorithms (based on the spectral decomposition of a Laplacian [1] or a stochastic matrix [2]), several problems remain unsolved. Most spectral clustering formulations, such as the normalized cut algorithm [3], are close to the classical graph partitioning problem, which is NP-hard; to obtain a good solution in polynomial time, the problem has to be cast in a convex form through relaxation. In this paper, we apply a novel optimization technique, semidefinite programming (SDP), to the unsupervised clustering problem and present a new multiple partitioning method. Experimental results confirm that the proposed method improves the clustering performance, especially on problems involving a mix of non-compact clusters, compared with previous multiple spectral clustering methods.
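
The textbook equipartitioning SDP shown above for the SSC entry can be solved directly with an off-the-shelf modeling tool; the compact cvxpy sketch below is our own illustration, not the authors' implementation, and `sdp_partition_embedding` is a hypothetical helper.

```python
import cvxpy as cp
import numpy as np

def sdp_partition_embedding(W, K):
    """Solve a textbook SDP relaxation of K-way equipartitioning (sketch only) and
    return a K-dimensional spectral embedding of the relaxed solution."""
    n = W.shape[0]
    Y = cp.Variable((n, n), PSD=True)                    # relaxed co-membership matrix
    constraints = [
        cp.diag(Y) == 1,                                  # every node fully assigned
        cp.sum(Y, axis=1) == n / K,                       # equal-size clusters
        Y >= 0,                                           # co-memberships are nonnegative
    ]
    objective = cp.Maximize(cp.sum(cp.multiply(W, Y)))    # equals tr(W Y) for symmetric W
    cp.Problem(objective, constraints).solve()
    # eigen-decompose the relaxed solution; the leading K eigenvectors form the embedding
    _, vecs = np.linalg.eigh(Y.value)
    return vecs[:, -K:]                                   # cluster these rows (e.g., with k-means)
```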


STATISTICAL NOISE BAND REMOVAL FOR SURFACE CLUSTERING OF HYPERSPECTRAL DATA

  • Huan, Nguyen Van;Kim, Hak-Il
    • Korean Society of Remote Sensing Conference Proceedings (대한원격탐사학회 학술대회논문집) / 2008 International Symposium on Remote Sensing / pp. 111-114 / 2008
  • The existence of noise bands may deform the typical shape of the spectrum, degrading the accuracy of clustering. This paper proposes a statistical approach to remove noise bands in hyperspectral data using the correlation coefficient of bands as an indicator. Considering each band as a random variable, two adjacent signal bands in hyperspectral data are highly correlated, whereas a noise band produces a low correlation. For clustering, the unsupervised k-nearest neighbor clustering method is implemented in accordance with three well-accepted spectral matching measures, namely ED, SAM, and SID; furthermore, a hierarchical scheme for combining those measures is proposed. Finally, a separability assessment based on the between-class and within-class scatter matrices is performed to evaluate the applicability of the proposed noise band removal method. The paper also presents a comparison of the spectral matching measures.
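
A minimal sketch of the correlation-based screening idea, flagging bands that correlate poorly with both adjacent bands; the fixed threshold is an illustrative stand-in for the paper's statistical criterion, and the function name is our own.

```python
import numpy as np

def detect_noise_bands(cube, threshold=0.8):
    """Flag hyperspectral bands whose correlation with both adjacent bands is low.

    cube : array of shape (n_pixels, n_bands); threshold is an illustrative cutoff.
    """
    corr = np.corrcoef(cube.T)                               # band-by-band correlation matrix
    adj = np.diag(corr, k=1)                                 # correlation of each band with the next
    n_bands = cube.shape[1]
    noisy = []
    for b in range(n_bands):
        left = adj[b - 1] if b > 0 else -np.inf              # correlation with the previous band
        right = adj[b] if b < n_bands - 1 else -np.inf       # correlation with the next band
        if max(left, right) < threshold:                     # poorly correlated with all neighbors
            noisy.append(b)
    return noisy
```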
