• Title/Summary/Keyword: k-means

Combined Artificial Bee Colony for Data Clustering (융합 인공벌군집 데이터 클러스터링 방법)

  • Kang, Bum-Su;Kim, Sung-Soo
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.40 no.4
    • /
    • pp.203-210
    • /
    • 2017
  • Data clustering is one of the most difficult and challenging problems and can be formally considered a particular kind of NP-hard grouping problem. The K-means algorithm is one of the most popular and widely used clustering methods because it is easy to implement and very efficient. However, it is prone to becoming trapped in local optima, and its solutions vary widely with different initializations on large data sets. Therefore, efficient computational intelligence methods are needed to find globally optimal solutions to the data clustering problem within limited computational time. The objective of this paper is to propose a combined artificial bee colony (CABC) that uses K-means for initialization and finalization, yielding solutions that are effective on the data clustering optimization problem. The artificial bee colony (ABC) is an algorithm motivated by the intelligent behavior exhibited by honeybees when searching for food. The performance of ABC is better than or similar to that of other population-based algorithms, with the added advantage of requiring fewer control parameters. The proposed CABC method provides near-optimal solutions within reasonable time by balancing converged and diversified search. Experiments and analysis on clustering problems demonstrate that CABC is competitive with previous partitioning approaches with respect to solution quality. We validate the performance of CABC on the Iris, Wine, Glass, Vowel, and Cloud datasets from the UCI machine learning repository, comparing against previous studies. The proposed KABCK (K-means+ABC+K-means) outperforms ABCK (ABC+K-means), KABC (K-means+ABC), ABC, and K-means in our simulations.
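
The KABCK pipeline described above (K-means initialization, ABC search over candidate center sets, K-means finalization) can be illustrated with a minimal sketch. The bee count, perturbation scale, and abandonment limit below are illustrative assumptions, not the authors' settings, and the ABC phases are simplified.

```python
# Hedged sketch of a K-means + ABC + K-means (KABCK-style) pipeline; not the authors' code.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris


def sse(X, centers):
    """Sum of squared distances from each point to its nearest center."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()


def kabck(X, k, n_bees=10, n_iter=50, limit=10, seed=0):
    rng = np.random.default_rng(seed)
    # 1) K-means initialization: seed the food sources with short K-means runs.
    sources = [KMeans(n_clusters=k, n_init=1, max_iter=5,
                      random_state=seed + b).fit(X).cluster_centers_
               for b in range(n_bees)]
    fitness = np.array([sse(X, c) for c in sources])
    trials = np.zeros(n_bees, dtype=int)
    scale = X.std(axis=0)

    for _ in range(n_iter):
        # 2) Employed + onlooker phases: perturb center sets, keep improvements.
        probs = 1.0 / (1.0 + fitness)
        probs = probs / probs.sum()
        for phase in ("employed", "onlooker"):
            for b in range(n_bees):
                i = b if phase == "employed" else rng.choice(n_bees, p=probs)
                candidate = sources[i] + rng.uniform(-1, 1, sources[i].shape) * 0.1 * scale
                f = sse(X, candidate)
                if f < fitness[i]:
                    sources[i], fitness[i], trials[i] = candidate, f, 0
                else:
                    trials[i] += 1
        # 3) Scout phase: abandon stagnant sources and re-draw them from the data.
        for b in range(n_bees):
            if trials[b] > limit:
                sources[b] = X[rng.choice(len(X), k, replace=False)]
                fitness[b], trials[b] = sse(X, sources[b]), 0

    # 4) K-means finalization: polish the best food source found by ABC.
    best = sources[int(np.argmin(fitness))]
    return KMeans(n_clusters=k, init=best, n_init=1).fit(X)


model = kabck(load_iris().data, k=3)
print("final SSE:", model.inertia_)
```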

K-means clustering using a center of gravity for grid-based sample (그리드 기반 표본의 무게중심을 이용한 케이-평균군집화)

  • Lee, Sun-Myung;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.1
    • /
    • pp.121-128
    • /
    • 2010
  • K-means clustering is an iterative algorithm in which items are moved among sets of clusters until the desired partition is reached. K-means clustering has been widely used in many applications, such as market research, pattern analysis and recognition, and image processing. It can identify dense and sparse regions among data attributes or object attributes. However, the k-means algorithm can take many hours to obtain the desired k clusters because it is primitive and exploratory. In this paper we propose a new method of k-means clustering that uses the center of gravity of grid-based samples. It is faster than traditional clustering methods while maintaining accuracy.
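
A minimal sketch of the idea, assuming an equal-width grid and count-weighted cell representatives (the paper's exact grid construction and weighting may differ):

```python
# Hedged sketch of grid-based K-means: bin the data, keep each occupied cell's
# center of gravity (weighted by its count), and cluster only those representatives.
import numpy as np
from sklearn.cluster import KMeans


def grid_kmeans(X, k, bins=20, seed=0):
    # Map every point to a grid cell index.
    mins, maxs = X.min(axis=0), X.max(axis=0)
    cells = np.floor((X - mins) / (maxs - mins + 1e-12) * bins).astype(int)
    cells = np.minimum(cells, bins - 1)
    keys, inverse, counts = np.unique(cells, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.ravel()

    # Center of gravity of each occupied cell.
    sums = np.zeros((len(keys), X.shape[1]))
    np.add.at(sums, inverse, X)
    centers_of_gravity = sums / counts[:, None]

    # Cluster the (much smaller) set of cell representatives, weighted by cell size.
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    km.fit(centers_of_gravity, sample_weight=counts)

    # Assign the original points to the nearest learned cluster center.
    return km.predict(X), km.cluster_centers_


X = np.random.default_rng(0).normal(size=(100_000, 2))
labels, centers = grid_kmeans(X, k=5)
print(centers.shape)
```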

Modified K-means algorithm (수정된 K-means 알고리즘)

  • Kim Hyungcheol;Cho CheHwang
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.115-118
    • /
    • 1999
  • One of the typical methods to design a codebook is the K-means algorithm. This algorithm has the drawbacks that it converges to a locally optimal codebook and that its performance is largely determined by the initial codebook. D. Lee's method is almost the same as the K-means algorithm except for a modification of a distance value. Both methods keep the distance value fixed during all iterations. After many iterations, because the distance between new codevectors and old codevectors is much shorter than in the early stage of iterations, the new codevectors are hardly affected by the distance value, whereas codevectors determined in the early stage of learning are strongly affected by it. Therefore it is not appropriate to fix the distance value during all iterations. In this paper, we propose a new algorithm that uses a different distance value between codevectors for a limited number of iterations in the early stage of learning. In our experiments, the results show that the proposed method designs better codebooks than the conventional K-means algorithm.
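
The sketch below shows generalized Lloyd (K-means) codebook design with an iteration-dependent update weight applied only in the early iterations. The weight schedule is purely an assumption used to illustrate the idea of a varying distance value, since the abstract does not specify the exact modification.

```python
# Hedged sketch of K-means (generalized Lloyd) codebook design for vector quantization.
import numpy as np


def design_codebook(train, codebook_size, n_iter=30, early_iters=5, seed=0):
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), codebook_size, replace=False)].copy()

    for it in range(n_iter):
        # Nearest-codevector assignment (squared Euclidean distortion).
        dist = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = dist.argmin(axis=1)

        # Centroid update for every non-empty cell.
        new_codebook = codebook.copy()
        for j in range(codebook_size):
            members = train[nearest == j]
            if len(members):
                new_codebook[j] = members.mean(axis=0)

        # Illustrative varying "distance value": damp the codevector update in the
        # first few iterations so early codevectors move more conservatively
        # (an assumption, not the paper's exact rule).
        w = 0.5 if it < early_iters else 1.0
        codebook = codebook + w * (new_codebook - codebook)

    return codebook


train = np.random.default_rng(1).normal(size=(2000, 8))
cb = design_codebook(train, codebook_size=16)
print(cb.shape)
```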

Analysis of Brokerage Commission Policy based on the Potential Customer Value (고객의 잠재가치에 기반한 증권사 수수료 정책 연구)

  • Shin, Hyung-Won;Sohn, So-Young
    • IE interfaces
    • /
    • v.16 no.spc
    • /
    • pp.123-126
    • /
    • 2003
  • In this paper, we use three clustering algorithms (K-means, Self-Organizing Map, and Fuzzy K-means) to find appropriate graded stock-market brokerage commission rates based on cumulative transactions in both the stock exchange market and HTS (Home Trading System). Stock trading investors in both modes are classified in terms of their total transactions as well as the corresponding mode of investment. Empirical analysis indicated that fuzzy K-means cluster analysis is the best fit for segmenting customers of both transaction modes in terms of robustness. We then propose decision-tree-based rules for grouping customers into three grades and apply differentiated brokerage commissions of 0.4%, 0.45%, and 0.5% for the exchange market and 0.06%, 0.1%, and 0.18% for HTS.

Geodesic Clustering for Covariance Matrices

  • Lee, Haesung;Ahn, Hyun-Jung;Kim, Kwang-Rae;Kim, Peter T.;Koo, Ja-Yong
    • Communications for Statistical Applications and Methods
    • /
    • v.22 no.4
    • /
    • pp.321-331
    • /
    • 2015
  • The K-means clustering algorithm is a popular and widely used method for clustering. This paper considers a geodesic clustering algorithm for data consisting of symmetric positive definite (SPD) matrices, such as covariance matrices, treating the space of SPD matrices as a Riemannian (non-Euclidean) manifold and building on the K-means clustering framework. A K-means clustering algorithm consists of two main steps, for which we need a dissimilarity measure between two matrix data points and a way of computing centroids of the observations in each cluster. To use the Riemannian structure, we adopt the geodesic distance and the intrinsic mean for symmetric positive definite matrices. We demonstrate the proposed method through simulations as well as an application to real financial data.
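
A minimal sketch of geodesic K-means on SPD matrices, assuming the affine-invariant geodesic distance and a fixed-point (Karcher-mean) iteration for the intrinsic mean; this is an illustrative reading of the framework, not the authors' implementation.

```python
# Hedged sketch of geodesic K-means for SPD (e.g., covariance) matrices.
import numpy as np


def _spd_fun(A, f):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.T


def geodesic_dist(A, B):
    """Affine-invariant distance d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F."""
    A_isqrt = _spd_fun(A, lambda w: 1.0 / np.sqrt(w))
    return np.linalg.norm(_spd_fun(A_isqrt @ B @ A_isqrt, np.log), "fro")


def intrinsic_mean(mats, n_iter=10):
    """Karcher (intrinsic) mean via the standard fixed-point update."""
    M = sum(mats) / len(mats)                      # Euclidean mean as a starting point
    for _ in range(n_iter):
        M_sqrt = _spd_fun(M, np.sqrt)
        M_isqrt = _spd_fun(M, lambda w: 1.0 / np.sqrt(w))
        T = sum(_spd_fun(M_isqrt @ X @ M_isqrt, np.log) for X in mats) / len(mats)
        M = M_sqrt @ _spd_fun(T, np.exp) @ M_sqrt
    return M


def geodesic_kmeans(mats, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = [mats[i] for i in rng.choice(len(mats), k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest center under the geodesic distance.
        dists = np.array([[geodesic_dist(C, X) for C in centers] for X in mats])
        labels = dists.argmin(axis=1)
        # Update step: intrinsic mean of each cluster's members.
        centers = [intrinsic_mean([X for X, l in zip(mats, labels) if l == j])
                   if np.any(labels == j) else centers[j] for j in range(k)]
    return labels, centers


# Tiny usage example on random SPD matrices.
rng = np.random.default_rng(0)
mats = [(lambda A: A @ A.T + np.eye(3))(rng.normal(size=(3, 3))) for _ in range(40)]
labels, centers = geodesic_kmeans(mats, k=2)
print(np.bincount(labels))
```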

Probabilistic reduced K-means cluster analysis (확률적 reduced K-means 군집분석)

  • Lee, Seunghoon;Song, Juwon
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.6
    • /
    • pp.905-922
    • /
    • 2021
  • Cluster analysis is one of the unsupervised learning techniques used for discovering clusters when there is no prior knowledge of group membership. K-means, one of the commonly used cluster analysis techniques, may fail when the number of variables becomes large. In such high-dimensional cases, it is common to perform tandem analysis: K-means cluster analysis after reducing the number of variables with a dimension reduction method. However, there is no guarantee that the reduced dimension reveals the cluster structure properly. Principal component analysis may mask the structure of clusters, especially when variables unrelated to the cluster structure have large variances. To overcome this, techniques that perform dimension reduction and cluster analysis simultaneously have been suggested. This study proposes probabilistic reduced K-means, a transition of reduced K-means (De Soete and Carroll, 1994) into a probabilistic framework. Simulations show that the proposed method performs better than tandem clustering or clustering without any dimension reduction. When the number of variables is larger than the number of samples in each cluster, probabilistic reduced K-means forms clusters better than non-probabilistic reduced K-means. In an application to a real data set, it revealed similar or better cluster structure compared to other methods.
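
For context, here is a minimal sketch of the non-probabilistic reduced K-means model that the paper casts into a probabilistic framework: alternate K-means in the projected space with an eigenvector update of the loading matrix. The probabilistic extension itself is not reproduced here.

```python
# Hedged sketch of (non-probabilistic) reduced K-means: minimize ||X - U F A'||^2 over
# cluster memberships U, reduced-space centroids F, and an orthonormal loading matrix A.
import numpy as np
from sklearn.cluster import KMeans


def reduced_kmeans(X, k, q, n_iter=20, seed=0):
    X = X - X.mean(axis=0)                       # column-center the data
    # Initialize the loadings A with the first q principal components.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    A = Vt[:q].T                                 # p x q, orthonormal columns

    for _ in range(n_iter):
        # Given A: cluster the projected data Y = X A with ordinary K-means.
        Y = X @ A
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Y)
        U = np.eye(k)[km.labels_]                # n x k cluster indicator matrix

        # Given the partition: A spans the top-q eigenvectors of X' P X,
        # where P = U (U'U)^{-1} U' projects onto the indicator column space.
        P = U @ np.linalg.pinv(U)
        w, V = np.linalg.eigh(X.T @ P @ X)
        A = V[:, np.argsort(w)[::-1][:q]]

    F = np.linalg.pinv(U) @ X @ A                # k x q centroids in reduced space
    return km.labels_, A, F


X = np.random.default_rng(1).normal(size=(200, 30))
labels, A, F = reduced_kmeans(X, k=3, q=2)
print(A.shape, F.shape)
```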

Program Development of Integrated Expression Profile Analysis System for DNA Chip Data Analysis (DNA칩 데이터 분석을 위한 유전자발현 통합분석 프로그램의 개발)

  • 양영렬;허철구
    • KSBB Journal
    • /
    • v.16 no.4
    • /
    • pp.381-388
    • /
    • 2001
  • A program for integrated gene expression profile analysis, including hierarchical clustering, K-means, fuzzy c-means, self-organizing maps (SOM), principal component analysis (PCA), and singular value decomposition (SVD), was developed in Matlab for DNA chip data analysis. It also contains normalization methods for the gene expression input data. The integrated analysis program can be used effectively in DNA chip data analysis and helps researchers obtain a more comprehensive view of their gene expression data.

The Effect of Variable Learning Weights in Fuzzy c-means algorithm (Fuzzy c-means 알고리즘에서의 가변학습 가중치의 효과)

  • 박소희;조제황
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2001.06a
    • /
    • pp.109-112
    • /
    • 2001
  • The conventional K-means algorithm assigns each training vector to a single cluster in a crisp manner, so the probability of assignment to other clusters is ignored. Therefore, in the iterative codebook design process involved in clustering, we use Fuzzy c-means, which assigns each training vector to multiple clusters. In addition, by multiplying the prototype of each class obtained during the learning process of the Fuzzy c-means algorithm by a weight and using it as the prototype for the next learning step, we compare the performance of the resulting codebook against the conventional algorithm and thereby provide a basis for finding an improved Fuzzy c-means algorithm.
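
A minimal sketch of fuzzy c-means codebook design; the per-iteration prototype weight w stands in for the variable learning weight studied in the paper, and its use as a blend between old and new prototypes is an assumption.

```python
# Hedged sketch of fuzzy c-means with a variable prototype weight (illustrative only).
import numpy as np


def fuzzy_cmeans(X, c, m=2.0, n_iter=50, w=1.0, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)].copy()

    for _ in range(n_iter):
        # Membership update: soft assignment of every vector to every prototype.
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (dist ** (2.0 / (m - 1.0)))
        u = u / u.sum(axis=1, keepdims=True)

        # Prototype update, scaled by the (variable) learning weight before reuse.
        um = u ** m
        new_centers = (um.T @ X) / um.sum(axis=0)[:, None]
        centers = w * new_centers + (1.0 - w) * centers

    return centers, u


X = np.random.default_rng(2).normal(size=(500, 8))
centers, memberships = fuzzy_cmeans(X, c=4, w=0.9)
print(centers.shape, memberships.shape)
```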

Efficient Image Denoising Method Using Non-local Means Method in the Transform Domain (변환 영역에서 Non-local Means 방법을 이용한 효율적인 영상 잡음 제거 기법)

  • Kim, Dong Min;Lee, Chang Woo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.10
    • /
    • pp.69-76
    • /
    • 2016
  • In this paper, an efficient image denoising method using the non-local means (NL-means) method in the transform domain is proposed. A survey of various image denoising methods is given, and the performance of NL-means-based denoising is analyzed. We propose an efficient implementation of NL-means that calculates its weights in the DCT and LiftLT transform domains. The proposed method reduces the computational complexity and improves denoising performance by efficiently exploiting the characteristics of images in the transform domain. Moreover, it can be applied efficiently to perform image denoising and image rescaling simultaneously. Extensive computer simulations show that the proposed method outperforms conventional methods.
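
A minimal sketch of NL-means with patch similarity computed from a few low-frequency 2-D DCT coefficients; the patch size, search window, number of retained coefficients, and filtering parameter h are illustrative choices, and the LiftLT variant is not covered.

```python
# Hedged sketch of NL-means denoising with weights computed from truncated DCT patch descriptors.
import numpy as np
from scipy.fft import dctn


def nlmeans_dct(img, patch=5, window=7, keep=4, h=0.1):
    half_p, half_w = patch // 2, window // 2
    pad = np.pad(img, half_p, mode="reflect")
    H, W = img.shape

    # Precompute each pixel's truncated DCT patch descriptor.
    desc = np.zeros((H, W, keep * keep))
    for i in range(H):
        for j in range(W):
            block = pad[i:i + patch, j:j + patch]
            desc[i, j] = dctn(block, norm="ortho")[:keep, :keep].ravel()

    out = np.zeros_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - half_w), min(H, i + half_w + 1)
            j0, j1 = max(0, j - half_w), min(W, j + half_w + 1)
            d2 = ((desc[i0:i1, j0:j1] - desc[i, j]) ** 2).sum(axis=2)
            wgt = np.exp(-d2 / (h * h))          # NL-means weights from DCT-domain distances
            out[i, j] = (wgt * img[i0:i1, j0:j1]).sum() / wgt.sum()
    return out


noisy = np.random.default_rng(0).random((32, 32))
print(nlmeans_dct(noisy).shape)
```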

Inverted Index based Modified Version of K-Means Algorithm for Text Clustering

  • Jo, Tae-Ho
    • Journal of Information Processing Systems
    • /
    • v.4 no.2
    • /
    • pp.67-76
    • /
    • 2008
  • This research proposes a new strategy in which documents are encoded into string vectors and a modified version of the k-means algorithm, adapted to string vectors, is used for text clustering. Traditionally, when the k-means algorithm is used for pattern classification, raw data must be encoded into numerical vectors. This encoding may be difficult, depending on the given application area of pattern classification. For example, in text clustering, encoding full texts given as raw data into numerical vectors leads to two main problems: huge dimensionality and sparse distribution. In this research, we encode full texts into string vectors and modify the k-means algorithm to operate on string vectors for text clustering.
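
A heavily simplified sketch of a k-means-style loop over string vectors, assuming term-overlap similarity and cluster prototypes formed from the most frequent member terms; the paper's actual similarity measure and the role of the inverted index are not specified in the abstract and are not reproduced here.

```python
# Hedged sketch of k-means-like clustering over "string vectors" (documents as term lists).
from collections import Counter
import random


def to_string_vector(text, size=10):
    """Encode a document as the list of its `size` most frequent terms."""
    return [t for t, _ in Counter(text.lower().split()).most_common(size)]


def overlap(sv_a, sv_b):
    """Similarity between two string vectors: fraction of shared terms."""
    return len(set(sv_a) & set(sv_b)) / max(len(set(sv_a) | set(sv_b)), 1)


def string_kmeans(docs, k, n_iter=10, seed=0):
    random.seed(seed)
    vectors = [to_string_vector(d) for d in docs]
    prototypes = random.sample(vectors, k)

    for _ in range(n_iter):
        # Assignment: each document joins the most similar prototype.
        labels = [max(range(k), key=lambda j: overlap(v, prototypes[j])) for v in vectors]
        # Update: a cluster's prototype is the most frequent terms of its members.
        for j in range(k):
            members = [v for v, l in zip(vectors, labels) if l == j]
            if members:
                counts = Counter(t for v in members for t in v)
                prototypes[j] = [t for t, _ in counts.most_common(10)]
    return labels


docs = ["the cat sat on the mat", "dogs and cats play", "stocks rose as markets rallied",
        "the market closed higher today"]
print(string_kmeans(docs, k=2))
```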