• Title/Summary/Keyword: Hierarchical Clustering

A Survey of Advances in Hierarchical Clustering Algorithms and Applications

  • Munshi, Amr
    • International Journal of Computer Science & Network Security / v.22 no.5 / pp.17-24 / 2022
  • Hierarchical clustering methods have been proposed for more than sixty years and are still used in various disciplines for relation observation and clustering purposes. In 1965, divisive hierarchical methods were proposed in the biological sciences, and they have since been used in disciplines such as anthropology and ecology. More recently, hierarchical methods have been deployed in economy and energy studies. Unlike most clustering algorithms, which require the number of clusters to be specified by the user, hierarchical clustering is well suited to situations where the number of clusters is unknown. This paper presents an overview of hierarchical clustering algorithms, discusses the dissimilarity measures that can be used with them, and highlights the various and recent disciplines where hierarchical clustering is employed.
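
As a concrete illustration of the design choices the survey discusses, the sketch below, assuming SciPy and random toy data, shows where the dissimilarity measure and the linkage criterion enter an agglomerative run, and how the number of clusters can be chosen only after the dendrogram is built:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 2))          # toy data; any (n_samples, n_features) array works

    # the dissimilarity measure and the linkage criterion are the two main design choices
    D = pdist(X, metric="euclidean")      # could also be "cityblock", "cosine", ...
    Z = linkage(D, method="average")      # or "single", "complete", "ward", ...

    # the dendrogram can be cut afterwards, so the number of clusters need not be fixed up front
    labels_3 = fcluster(Z, t=3, criterion="maxclust")
    labels_5 = fcluster(Z, t=5, criterion="maxclust")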

On hierarchical clustering in sufficient dimension reduction

  • Yoo, Chaeyeon;Yoo, Younju;Um, Hye Yeon;Yoo, Jae Keun
    • Communications for Statistical Applications and Methods / v.27 no.4 / pp.431-443 / 2020
  • The K-means clustering algorithm has been applied successfully in sufficient dimension reduction. Unfortunately, the algorithm lacks reproducibility and nestedness, which will be discussed in this paper. These are clear deficits of the K-means clustering algorithm; the hierarchical clustering algorithm, by contrast, has both reproducibility and nestedness, yet an intensive comparison of the two algorithms has not been done in a sufficient dimension reduction context. In this paper, we rigorously study the two clustering algorithms for two popular sufficient dimension reduction methodologies, the inverse mean and clustering mean methods, through intensive numerical studies. Simulation studies and two real data examples confirm that the hierarchical clustering algorithm has a potential advantage over the K-means algorithm.
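
Nestedness here means that the k-cluster and (k+1)-cluster solutions are cuts of the same dendrogram, so one refines the other; K-means solutions at different k are separate optimizations with no such guarantee. A small illustrative sketch of that contrast, assuming scikit-learn, SciPy and synthetic blob data rather than the paper's sufficient dimension reduction setup:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, cut_tree
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=200, centers=4, random_state=1)

    # hierarchical: both partitions are cuts of one dendrogram, hence nested
    Z = linkage(X, method="ward")
    h3 = cut_tree(Z, n_clusters=3).ravel()
    h4 = cut_tree(Z, n_clusters=4).ravel()

    # every 4-cluster group sits wholly inside one 3-cluster group
    nested = all(len(set(h3[h4 == c])) == 1 for c in np.unique(h4))
    print("hierarchical cuts nested:", nested)   # True by construction

    # K-means: separate optimizations, no nesting or reproducibility guarantee
    k3 = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    k4 = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)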

Empirical Comparisons of Clustering Algorithms using Silhouette Information

  • Jun, Sung-Hae;Lee, Seung-Joo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.10 no.1 / pp.31-36 / 2010
  • Many clustering algorithms have been used in diverse fields. When a given data set needs to be grouped into clusters, many clustering algorithms based on similarity or distance measures can be considered. Most clustering work has relied on hierarchical and non-hierarchical algorithms, and researchers have typically chosen among them case by case, deciding on a proper method subjectively from prior knowledge. In this paper, to address this subjectivity, we make empirical comparisons of popular hierarchical and non-hierarchical clustering algorithms using the silhouette measure. We use silhouette information to evaluate clustering results such as the number of clusters and cluster variance, and we verify the comparison through experiments on data sets from the UCI machine learning repository. This allows clustering algorithms to be chosen efficiently and objectively.
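
A minimal sketch of this kind of silhouette-based comparison, assuming scikit-learn and its bundled iris data (which mirrors the UCI version) in place of the paper's exact data sets and algorithm settings:

    from sklearn.datasets import load_iris
    from sklearn.cluster import AgglomerativeClustering, KMeans
    from sklearn.metrics import silhouette_score

    X = load_iris().data

    for k in (2, 3, 4, 5):
        hier = AgglomerativeClustering(n_clusters=k).fit_predict(X)      # hierarchical (Ward by default)
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)  # non-hierarchical
        print(k,
              round(silhouette_score(X, hier), 3),
              round(silhouette_score(X, km), 3))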

An Agglomerative Hierarchical Variable-Clustering Method Based on a Correlation Matrix

  • Lee, Kwangjin
    • Communications for Statistical Applications and Methods / v.10 no.2 / pp.387-397 / 2003
  • Generally, most research that needs a variable-clustering step uses an exploratory factor analysis technique or a divisive hierarchical variable-clustering method based on a correlation matrix. Some researchers instead apply an object-clustering method to a distance matrix transformed from a correlation matrix, though this approach is known to be improper. In this paper, an agglomerative hierarchical variable-clustering method based on the correlation matrix itself is suggested. It is derived from a geometric concept using variate-spaces and a characterizing variate.
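
To make the contrast concrete, the sketch below, assuming SciPy and the common 1 - |r| transform, shows the transform-a-correlation-matrix-into-distances route that the abstract describes as improper for variable clustering; the paper's own agglomerative method works on the correlation matrix itself via a geometric construction and is not reproduced here:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 6))                 # 100 observations of 6 variables

    R = np.corrcoef(X, rowvar=False)              # 6 x 6 correlation matrix
    D = 1.0 - np.abs(R)                           # assumed dissimilarity: 1 - |r|
    np.fill_diagonal(D, 0.0)

    Z = linkage(squareform(D, checks=False), method="average")
    var_groups = fcluster(Z, t=3, criterion="maxclust")   # cluster labels for the 6 variables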

Clustering load patterns recorded from advanced metering infrastructure (AMI로부터 측정된 전력사용데이터에 대한 군집 분석)

  • Ann, Hyojung;Lim, Yaeji
    • The Korean Journal of Applied Statistics / v.34 no.6 / pp.969-977 / 2021
  • We cluster the electricity consumption of households in A-apartment in Seoul, Korea using the Hierarchical K-means clustering algorithm. The data are recorded from the advanced metering infrastructure (AMI), and we focus on electricity consumption during weekday evenings in summer. The Hierarchical K-means clustering algorithm has recently been applied to electricity usage data and, compared to conventional clustering algorithms, can identify usage patterns while reducing dimension. We apply the Hierarchical K-means algorithm to the AMI data and compare the results based on various clustering validity indexes. The results show that the electricity usage patterns are well identified and are expected to serve as a major basis for future applications in various fields.
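
The Hierarchical K-means algorithm itself is not reproduced here; the sketch below, assuming scikit-learn and random placeholder load curves, only shows the surrounding workflow of clustering household-by-hour profiles and comparing partitions with two standard validity indices:

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering, KMeans
    from sklearn.metrics import silhouette_score, davies_bouldin_score

    rng = np.random.default_rng(0)
    X = rng.random((300, 24))                     # 300 households x 24 hourly readings (placeholder)

    partitions = {
        "kmeans": KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X),
        "agglomerative": AgglomerativeClustering(n_clusters=4).fit_predict(X),
    }
    for name, labels in partitions.items():
        print(name,
              round(silhouette_score(X, labels), 3),       # higher is better
              round(davies_bouldin_score(X, labels), 3))   # lower is better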

A Performance Improvement Study On Hierarchical Clustering (Centroid Linkage) Using A Priority Queue (Priority Queue 를 이용한 Hierarchical Clustering (Centroid Linkage) 성능 개선)

  • Jeon, Yongkweon;Yoon, Sungroh
    • Annual Conference of KIPS / 2010.11a / pp.1837-1838 / 2010
  • The time and space complexity of conventional hierarchical clustering are not suitable for clustering large data sets, and it is difficult to run the computation within the memory of an ordinary PC. To overcome this difficulty, this study proposes a new algorithm for centroid linkage, one of the existing hierarchical clustering methods, that uses less memory and runs faster.
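
A minimal sketch of the general priority-queue idea for centroid linkage (not the authors' specific algorithm), assuming Python's heapq with lazy invalidation of pairs that refer to already-merged clusters:

    import heapq
    import numpy as np

    def centroid_linkage_pq(X):
        """Greedy centroid-linkage merging driven by a min-heap of candidate pairs."""
        centroids = {i: X[i].astype(float) for i in range(len(X))}
        sizes = {i: 1 for i in range(len(X))}
        heap = [(np.linalg.norm(X[i] - X[j]), i, j)
                for i in range(len(X)) for j in range(i + 1, len(X))]
        heapq.heapify(heap)

        merges, next_id = [], len(X)
        while len(centroids) > 1:
            d, i, j = heapq.heappop(heap)
            if i not in centroids or j not in centroids:
                continue                           # stale pair: a member was already merged
            # merge i and j; the new centroid is the size-weighted mean of the two centroids
            new_c = (sizes[i] * centroids[i] + sizes[j] * centroids[j]) / (sizes[i] + sizes[j])
            sizes[next_id] = sizes[i] + sizes[j]
            del centroids[i], centroids[j], sizes[i], sizes[j]
            for k, c in centroids.items():         # push distances from the new centroid
                heapq.heappush(heap, (np.linalg.norm(new_c - c), min(k, next_id), max(k, next_id)))
            centroids[next_id] = new_c
            merges.append((i, j, d))
            next_id += 1
        return merges

    merges = centroid_linkage_pq(np.random.rand(50, 3))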

Functional hierarchical clustering using shape distance

  • Kyungmin Ahn
    • Communications for Statistical Applications and Methods / v.31 no.5 / pp.601-612 / 2024
  • A functional clustering analysis is a crucial machine learning technique in functional data analysis. Many functional clustering methods have been developed to enhance clustering performance. Moreover, due to the phase variability between functions, elastic functional clustering methods, such as applying the Fisher-Rao metric, which can manage phase variation during clustering, have been developed to improve model performance. However, aligning functions without considering the phase variation can distort functional information because phase variation can be a natural characteristic of functions. Hence, we propose a state-of-the-art functional hierarchical clustering that can manage phase and amplitude variations of functional data. This approach is based on the phase and amplitude separation method using the norm-preserving time warping of functions. Due to its invariance property, this representation provides robust variability for phase and amplitude components of functions and improves clustering performance compared to conventional functional hierarchical clustering models. We demonstrate this framework using simulated and real data.
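
The elastic amplitude/phase shape distance is the paper's core ingredient and is not reproduced here; the sketch below, assuming a plain L2 grid distance as a placeholder, only shows how a precomputed functional distance matrix feeds the hierarchical clustering step:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    curves = rng.normal(size=(30, 100))        # 30 functions sampled on a common 100-point grid

    def func_distance(f, g):
        # placeholder: plain L2 distance on the grid; the paper would use an
        # elastic amplitude/phase (norm-preserving time-warping) shape distance here
        return np.linalg.norm(f - g) / np.sqrt(len(f))

    n = len(curves)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = func_distance(curves[i], curves[j])

    Z = linkage(squareform(D, checks=False), method="complete")
    labels = fcluster(Z, t=4, criterion="maxclust")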

Development of Clustering Algorithm and Tool for DNA Microarray Data (DNA 마이크로어레이 데이타의 클러스터링 알고리즘 및 도구 개발)

  • 여상수;김성권
    • Journal of KIISE: Computer Systems and Theory / v.30 no.10 / pp.544-555 / 2003
  • Since the result data from DNA microarray experiments contain a great deal of gene expression information, adequate analysis methods are required. Hierarchical clustering is widely used for the analysis of gene expression profiles. In this paper, we study leaf-ordering, a post-processing step applied to the dendrograms output by hierarchical clustering to improve the efficiency of DNA microarray data analysis. We first analyze existing leaf-ordering algorithms and then present new approaches for leaf-ordering. We also introduce HCLO (Hierarchical Clustering & Leaf-Ordering Tool), our implementation of hierarchical clustering, some existing leaf-ordering algorithms, and those presented in this paper.
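
Leaf-ordering rearranges subtrees of a dendrogram, without changing its structure, so that adjacent leaves are as similar as possible, which makes expression heat maps easier to read. The paper's new algorithms are not reproduced here; a minimal sketch using the optimal leaf ordering shipped with SciPy on placeholder expression data:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, optimal_leaf_ordering, leaves_list

    rng = np.random.default_rng(0)
    expr = rng.normal(size=(40, 12))           # 40 genes x 12 conditions (placeholder)

    Z = linkage(expr, method="average", metric="correlation")
    Z_ordered = optimal_leaf_ordering(Z, expr, metric="correlation")

    print(leaves_list(Z))                      # default leaf order
    print(leaves_list(Z_ordered))              # order minimizing distances between adjacent leaves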

Selection of Cluster Topic Words in Hierarchical Clustering using K-Means Algorithm

  • Lee Shin Won;Yi Sang Seon;An Dong Un;Chung Sung Jong
    • Proceedings of the IEEK Conference / 2004.08c / pp.885-889 / 2004
  • Fast and high-quality document clustering algorithms play an important role in data exploration by organizing large amounts of information into a small number of meaningful clusters. Hierarchical clustering improves retrieval performance and makes results easier for users to understand. To improve clustering quality, we build a hierarchical structure with variety and readability by carefully selecting cluster topic words and deciding the number of clusters dynamically. Selecting topic words is important because the hierarchical clustering structure summarizes the search results. We choose noun words as cluster topic words. The quality of topic words increased by 33% as follows: only noun words are extracted as topic words for the top-level cluster, and topic words already used are not reused for the child clusters.
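
The paper's noun extraction and reuse rules are not reproduced here; the sketch below, assuming TF-IDF weights as a stand-in for them and a handful of toy documents, shows one common way to pull representative topic words out of K-means document clusters:

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "hierarchical clustering of documents",
        "k-means clustering for text retrieval",
        "dendrogram visualization of document clusters",
        "query expansion in information retrieval",
        "search engine ranking and retrieval models",
        "evaluation of retrieval effectiveness",
    ]

    vec = TfidfVectorizer()
    X = vec.fit_transform(docs)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    terms = vec.get_feature_names_out()
    for c, centroid in enumerate(km.cluster_centers_):
        top = centroid.argsort()[::-1][:3]     # highest-weight terms act as topic words
        print(c, [terms[t] for t in top])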

Agglomerative Hierarchical Clustering Analysis with Deep Convolutional Autoencoders (합성곱 오토인코더 기반의 응집형 계층적 군집 분석)

  • Park, Nojin;Ko, Hanseok
    • Journal of Korea Multimedia Society / v.23 no.1 / pp.1-7 / 2020
  • Clustering methods essentially take a two-step approach: extracting feature vectors for dimensionality reduction and then employing a clustering algorithm on the extracted feature vectors. However, for clustering images, traditional methods such as stacked autoencoder-based k-means are not effective since they tend to ignore local information. In this paper, we propose a method that first reduces data dimensionality with a convolutional autoencoder, to capture and reflect local information, and then accurately clusters similar data samples using a hierarchical clustering approach. The experimental results confirm that the proposed model improves clustering accuracy and normalized mutual information.
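
A minimal sketch of the two-step pipeline described above, assuming PyTorch for a small convolutional autoencoder and scikit-learn for the agglomerative step, with random 28x28 images standing in for a real image set; the architecture and training loop are illustrative, not the paper's:

    import torch
    import torch.nn as nn
    from sklearn.cluster import AgglomerativeClustering

    class ConvAutoencoder(nn.Module):
        def __init__(self, latent_dim=32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
                nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
                nn.ReLU(),
                nn.Flatten(),
                nn.Linear(32 * 7 * 7, latent_dim),
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 32 * 7 * 7),
                nn.ReLU(),
                nn.Unflatten(1, (32, 7, 7)),
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
                nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), z

    # placeholder data: replace with real images of shape (N, 1, 28, 28)
    x = torch.rand(256, 1, 28, 28)
    model = ConvAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(5):                     # short demo training loop
        recon, _ = model(x)
        loss = loss_fn(recon, x)
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        _, z = model(x)                        # latent codes retain local image structure

    labels = AgglomerativeClustering(n_clusters=10, linkage="ward").fit_predict(z.numpy())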