• Title/Summary/Keyword: Incremental Clustering


High-Dimensional Clustering Technique using Incremental Projection (점진적 프로젝션을 이용한 고차원 클러스터링 기법)

  • Lee, Hye-Myung;Park, Young-Bae
    • Journal of KIISE: Databases / v.28 no.4 / pp.568-576 / 2001
  • Most clustering algorithms tend to degenerate rapidly in high dimensional spaces. Moreover, high dimensional data often contain a significant amount of noise, which makes the algorithms even less effective. It is therefore necessary to develop algorithms adapted to the structure and characteristics of high dimensional data. In this paper, we propose a clustering algorithm, CLIP, that uses projection. CLIP is designed to overcome the efficiency and/or effectiveness problems of high dimensional clustering. It is based on clustering each one dimensional subspace, but it uses incremental projection to recover high dimensional clusters and, at the same time, to reduce the computational cost significantly. To evaluate the performance of CLIP, we demonstrate its efficiency and effectiveness through a series of experiments on synthetic data sets. (A toy sketch of the one-dimensional step appears after this entry.)

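As a rough illustration of the one-dimensional-subspace idea (not the authors' CLIP implementation), the sketch below finds dense intervals on a single dimension by histogram thresholding; the function name, bin count, and density threshold are assumptions.

```python
import numpy as np

def dense_intervals_1d(values, n_bins=50, density_factor=2.0):
    """Find dense intervals on one dimension via histogram thresholding.

    A bin counts as 'dense' if it holds more than density_factor times the
    count expected under a uniform spread of the data.
    """
    counts, edges = np.histogram(values, bins=n_bins)
    threshold = density_factor * len(values) / n_bins
    intervals, start = [], None
    for i, c in enumerate(counts):
        if c >= threshold and start is None:
            start = edges[i]                      # open a dense interval
        elif c < threshold and start is not None:
            intervals.append((start, edges[i]))   # close it
            start = None
    if start is not None:
        intervals.append((start, edges[-1]))
    return intervals

# Example: two dense 1-D regions plus uniform noise
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.2, 500),
                    rng.normal(5, 0.2, 500),
                    rng.uniform(-2, 7, 100)])
print(dense_intervals_1d(x))
```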

Incremental Clustering Algorithm by Modulating Vigilance Parameter Dynamically (경계변수 값의 동적인 변경을 이용한 점층적 클러스터링 알고리즘)

  • 신광철;한상용
    • Journal of KIISE: Software and Applications / v.30 no.11 / pp.1072-1079 / 2003
  • This study proposes a new clustering algorithm that enables incremental categorization of large numbers of documents. The suggested algorithm adopts the natures of the spherical k-means algorithm, which clusters a massive amount of high-dimensional documents, and the fuzzy ART (adaptive resonance theory) neural network, which performs clustering incrementally. In short, the suggested algorithm combines the vector space model and concept vectors of spherical k-means with the vigilance parameter of fuzzy ART. The new algorithm not only supports incremental clustering and automatically sets an appropriate number of clusters, but also solves the problems of overfitting caused by outliers and noise. In addition, with respect to the objective function value, which measures cluster coherence and is used to evaluate the quality of the produced clusters, tests on the CLASSIC3 data set showed that the newly suggested algorithm outperforms spherical k-means by 8.04% on average. (A minimal sketch of the vigilance-driven assignment follows this entry.)
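
A minimal sketch of the vigilance-driven incremental assignment the abstract describes, assuming unit-normalized document vectors and cosine similarity against cluster concept vectors; the fixed vigilance value here stands in for the paper's dynamically modulated parameter.

```python
import numpy as np

def incremental_cluster(docs, vigilance=0.3):
    """Assign document vectors to clusters one at a time.

    A document joins the cluster whose concept vector (normalized mean of
    its members) is most similar to it, provided the cosine similarity
    exceeds the vigilance threshold; otherwise it starts a new cluster.
    """
    concept_vectors, members = [], []
    for d in docs:
        d = d / np.linalg.norm(d)
        if concept_vectors:
            sims = [float(c @ d) for c in concept_vectors]
            best = int(np.argmax(sims))
            if sims[best] >= vigilance:
                members[best].append(d)
                mean = np.mean(members[best], axis=0)
                concept_vectors[best] = mean / np.linalg.norm(mean)
                continue
        members.append([d])                 # open a new cluster
        concept_vectors.append(d.copy())
    return concept_vectors, members
```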

Incremental Conceptual Clustering Using Modified Category Utility (변형된 Category Utility를 이용한 점진 개념학습)

  • Kim Pyo Jae;Choi Jin Young
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2005.04a / pp.193-197 / 2005
  • COBWEB, an incremental concept learning algorithm, classifies instances that carry no class labels by learning from their attributes and values, and builds a classification tree in which each node corresponds to a class, i.e., a set of similar instances. Category utility is used as the criterion for grouping similar instances into the same class; it partitions classes in the direction that maximizes intra-class similarity and inter-class difference. The category utility used in the original COBWEB can be viewed as a trade-off between class size and predictive accuracy, which biases the learning toward larger classes at the cost of slightly lower predictive accuracy. This creates unnecessary class nodes (spurious nodes) in the classification tree and makes the learned class concepts harder to understand. In this paper, we consider the attribute-value distributions of a class and its instances, and propose a modified category utility that adds a weight proportional to the association between the class and each attribute. Through experiments on datasets, we show that the proposed category utility alleviates the existing bias toward large class sizes. (The standard category utility is sketched after this entry.)

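For reference, the standard (unmodified) category utility that COBWEB maximizes, CU = (1/K) Σ_k P(C_k) [ Σ_i Σ_j P(A_i = V_ij | C_k)² − Σ_i Σ_j P(A_i = V_ij)² ], can be computed for nominal attributes as in the sketch below; the attribute-weighted modification proposed in the paper is not reproduced here.

```python
from collections import Counter

def category_utility(partition):
    """Standard COBWEB category utility for nominal attributes.

    `partition` is a list of clusters; each cluster is a list of
    instances, and each instance is a tuple of attribute values.
    """
    instances = [x for cluster in partition for x in cluster]
    n, k = len(instances), len(partition)
    n_attrs = len(instances[0])

    def sum_sq_probs(items):
        total = 0.0
        for a in range(n_attrs):
            counts = Counter(x[a] for x in items)
            total += sum((c / len(items)) ** 2 for c in counts.values())
        return total

    base = sum_sq_probs(instances)        # Sigma_i Sigma_j P(A_i = V_ij)^2
    cu = 0.0
    for cluster in partition:
        p_c = len(cluster) / n            # P(C_k)
        cu += p_c * (sum_sq_probs(cluster) - base)
    return cu / k
```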

Efficient Incremental Learning using the Preordered Training Data (미리 순서가 매겨진 학습 데이타를 이용한 효과적인 증가학습)

  • Lee, Sun-Young;Bang, Sung-Yang
    • Journal of KIISE: Software and Applications / v.27 no.2 / pp.97-107 / 2000
  • Incremental learning generally reduces training time and increases the generalization ability of a neural network by selecting training data incrementally during training. However, existing incremental learning methods repeatedly evaluate the importance of the training data every time they select additional data. In this paper, an incremental learning algorithm is proposed for pattern classification problems. It evaluates the importance of each piece of data only once, before training starts. The importance of the data depends on how close they are to the decision boundary. The paper presents an algorithm that orders the data by their distance to the decision boundary using clustering. Experimental results on two artificial and real-world classification problems show that the proposed incremental learning method significantly reduces the size of the training set without degrading generalization performance. (A sketch of the ordering idea appears after this entry.)

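A hedged sketch of ordering training data by presumed closeness to the decision boundary before incremental training; using the distance to the nearest cluster centre of a different class as the proximity measure is an assumption of this sketch, not necessarily the paper's criterion.

```python
import numpy as np
from sklearn.cluster import KMeans

def preorder_by_boundary_proximity(X, y, clusters_per_class=3):
    """Return indices of X ordered so that points presumed closest to the
    decision boundary come first (closest to a centre of another class)."""
    centres, labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=clusters_per_class, n_init=10).fit(X[y == c])
        centres.append(km.cluster_centers_)
        labels.extend([c] * clusters_per_class)
    centres, labels = np.vstack(centres), np.array(labels)

    scores = []
    for x, c in zip(X, y):
        other = centres[labels != c]          # centres of the other classes
        scores.append(np.min(np.linalg.norm(other - x, axis=1)))
    return np.argsort(scores)                 # small distance = near boundary
```

Training would then feed the network growing prefixes of this ordering, so the importance evaluation happens only once, as the abstract describes.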

Partial Dimensional Clustering based on Projection Filtering in High Dimensional Data Space (대용량의 고차원 데이터 공간에서 프로젝션 필터링 기반의 부분차원 클러스터링 기법)

  • 이혜명;정종진
    • The Journal of Society for e-Business Studies / v.8 no.4 / pp.69-88 / 2003
  • In high dimensional data, most clustering algorithms tend to degrade rapidly in performance because of the inherent sparsity and the amount of noise. Recently, partial dimensional clustering algorithms, which perform well in clustering, have been studied. These algorithms select the dimensions closely related to clustering and discard, from the full set of dimensions, those that are not directly related to clustering. However, the traditional algorithms have some problems. First, they employ grid based techniques, but the large number of grid cells worsens the performance of the algorithms in terms of computation time and memory space. Second, they explore the dimensions related to clustering using k-medoids, but it is very difficult to determine good k-medoids in a large amount of high dimensional data. In this paper, we propose an efficient partial dimensional clustering algorithm called CLIP. CLIP explores dense regions for clusters on a certain dimension. Then the algorithm probes dense regions on the next dimension, restricted to the dense regions of the previously explored dimension, using incremental projection. CLIP repeats this probing over all dimensions. Clustering by incremental projection prunes the search space greatly and reduces the computation time considerably. We evaluate the performance (efficiency, effectiveness, accuracy, etc.) of the proposed algorithm against other algorithms on common synthetic data. (A toy version of this probing appears after this entry.)

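Continuing the one-dimensional sketch shown under the first CLIP entry, the toy routine below probes dense regions dimension by dimension, each time restricting the probe to the points that already fall in a dense region of every dimension processed so far; this mimics the pruning effect of incremental projection in spirit only and is not the published algorithm.

```python
import numpy as np

def probe_dimensions(X, n_bins=50, density_factor=2.0):
    """Toy 'incremental projection': keep only points that lie in a dense
    1-D interval of every dimension processed so far."""
    alive = np.ones(len(X), dtype=bool)
    for d in range(X.shape[1]):
        vals = X[alive, d]
        counts, edges = np.histogram(vals, bins=n_bins)
        threshold = density_factor * len(vals) / n_bins
        dense_bins = counts >= threshold
        # a surviving point must fall into a dense bin on dimension d
        bin_idx = np.clip(np.digitize(X[:, d], edges) - 1, 0, n_bins - 1)
        alive &= dense_bins[bin_idx]
    return np.where(alive)[0]   # indices of points inside a dense hyper-region
```

Because each pass only histograms the surviving points, the work per dimension shrinks as probing proceeds, which is the computational saving the abstract attributes to incremental projection.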

The Study on Improvement of Cohesion of Clustering in Incremental Concept Learning (점진적 개념학습의 클러스터 응집도 개선)

  • Baek, Hey-Jung;Park, Young-Tack
    • The KIPS Transactions: Part B / v.10B no.3 / pp.297-304 / 2003
  • Nowadays, with the explosive growth of web information, web users increasingly demand systems that collect and analyze the web pages relevant to them. The systems developed to meet this demand use clustering methods to improve the quality of the information. Clustering defines the interrelationships of unordered data and groups the data systematically. Systems using clustering provide grouped information to users, who can then understand the information efficiently. We previously proposed a hybrid clustering method to cluster large quantities of data efficiently: initial clusters are generated with the COBWEB algorithm and refined with Etzioni's algorithm. This paper adds two ideas to that hybrid clustering method to increase the accuracy and efficiency of the clusters. First, we propose a clustering method that considers the weights of the data attributes. Second, we redefine the evaluation functions that generate the initial clusters to increase clustering efficiency. The clustering method proposed in this paper handles large quantities of data and reduces the dependency on the input order of the data, so the clusters are useful for building high quality user profiles. Finally, we show that the proposed clustering method outperforms the previous clustering method in terms of precision and execution speed. (A weighted category-utility sketch follows this entry.)
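
Purely as an illustration of folding attribute weights into an evaluation function of this kind (the paper's actual weighting scheme and redefined evaluation functions are not reproduced), a weighted variant of the category utility from the earlier sketch could look like the following; `weighted_category_utility` and its `weights` argument are assumed names.

```python
from collections import Counter

def weighted_category_utility(partition, weights):
    """Category utility in which attribute a's contribution is scaled by weights[a].

    partition: list of clusters, each a list of instances (tuples of nominal values).
    weights:   one non-negative weight per attribute.
    """
    instances = [x for cluster in partition for x in cluster]
    n, k = len(instances), len(partition)

    def score(items):
        total = 0.0
        for a, w in enumerate(weights):
            counts = Counter(x[a] for x in items)
            total += w * sum((c / len(items)) ** 2 for c in counts.values())
        return total

    base = score(instances)
    return sum(len(c) / n * (score(c) - base) for c in partition) / k
```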

Design of Incremental FCM-based Recursive RBF Neural Networks Pattern Classifier for Big Data Processing (빅 데이터 처리를 위한 증분형 FCM 기반 순환 RBF Neural Networks 패턴 분류기 설계)

  • Lee, Seung-Cheol;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.65 no.6 / pp.1070-1079 / 2016
  • In this paper, the design of recursive radial basis function neural networks based on incremental fuzzy c-means is introduced for processing big data. Radial basis function neural networks consist of a condition, a conclusion and an inference phase. A Gaussian function is generally used as the activation function of the condition phase, but in this study incremental fuzzy clustering is used for the activation functions of the radial basis function neural networks, which enables effective big data processing. In the conclusion phase, the connection weights of the networks are given as linear functions, and these connection weights are calculated by recursive least squares estimation. In the inference phase, a final output is obtained by a fuzzy inference method. Machine learning datasets are employed to demonstrate the superiority of the proposed classifier, and the results are described from the viewpoint of algorithm complexity and performance indices. (A compact sketch of this architecture appears after this entry.)
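
A compact sketch of the overall architecture described above, assuming batch fuzzy c-means for the condition phase and ordinary least squares in place of the recursive least squares estimation of the paper; all function and parameter names are illustrative.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-9):
    """Fuzzy c-means membership matrix (n_samples x n_clusters)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def fit_fcm_rbf(X, y, n_clusters=5, n_iter=50, m=2.0):
    """FCM memberships as the hidden-layer activations (condition phase),
    linear output weights fitted by least squares (conclusion phase)."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):                       # batch FCM updates
        U = fcm_memberships(X, centers, m)
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    H = fcm_memberships(X, centers, m)            # hidden activations
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear output weights
    return centers, coef

def predict(X, centers, coef, m=2.0):
    """Inference phase: weighted combination of the linear local models."""
    return fcm_memberships(X, centers, m) @ coef
```

For classification, y would typically be a one-hot matrix and the predicted class taken as the argmax of the output.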

An Incremental Clustering Technique of XML Documents using Cluster Histograms (클러스터의 히스토그램을 이용한 XML 문서의 점진적 클러스터링 기법)

  • Hwang, Jeong-Hee
    • Journal of KIISE: Databases / v.34 no.3 / pp.261-269 / 2007
  • As basic research toward integrating and retrieving XML documents efficiently, this paper proposes a method for clustering XML documents by structure. We apply an algorithm for processing large numbers of transaction data to the clustering of XML documents, which is quite different from previous algorithms that measure structural similarity. Our method clusters XML documents not only by using the cluster histograms that represent the distribution of items in each cluster, but also by considering the global cluster cohesion. We compare the proposed method with existing techniques through experiments. The experiments show that our method not only creates good quality clusters but also improves the processing time. (A small sketch of the histogram-based assignment follows this entry.)
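
A small sketch of the histogram-based incremental assignment the abstract describes, treating each XML document as a set of structural items (e.g. element paths); the coverage measure and the `min_overlap` threshold used here are assumptions, not the paper's cohesion criterion.

```python
from collections import Counter

def cluster_xml_transactions(docs, min_overlap=0.5):
    """Incrementally cluster documents represented as sets of structural
    items, keeping a per-cluster item histogram.

    A document joins the cluster whose histogram already covers the
    largest fraction of its items, if that fraction reaches min_overlap;
    otherwise it starts a new cluster.
    """
    histograms, members = [], []
    for items in docs:
        items = set(items)
        best, best_cover = None, 0.0
        for i, hist in enumerate(histograms):
            cover = sum(1 for it in items if it in hist) / len(items)
            if cover > best_cover:
                best, best_cover = i, cover
        if best is not None and best_cover >= min_overlap:
            histograms[best].update(items)
            members[best].append(items)
        else:
            histograms.append(Counter(items))
            members.append([items])
    return histograms, members

# Example: documents as sets of element paths
docs = [{"/a/b", "/a/c"}, {"/a/b", "/a/d"}, {"/x/y", "/x/z"}]
print(len(cluster_xml_transactions(docs)[0]))   # -> 2 clusters
```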

Gathering Common-word and Document Reclassification to improve Accuracy of Document Clustering (문서 군집화의 정확률 향상을 위한 범용어 수집과 문서 재분류 알고리즘)

  • Shin, Joon-Choul;Ock, Cheol-Young;Lee, Eung-Bong
    • The KIPS Transactions: Part B / v.19B no.1 / pp.53-62 / 2012
  • Clustering technology is used to deal efficiently with the many documents retrieved by an information retrieval system, but the accuracy of the clustering satisfies the requirements of only some domains. This paper proposes two methods to increase the accuracy of the clustering. We define a common-word as a word that is used frequently but should carry low weight during clustering, and we propose a method that automatically gathers common-words from the retrieved documents and calculates their weights. In our experiments, the clustering error rate using common-words is reduced by 34% compared with clustering using only stop-words. After generating initial clusters from the retrieved documents with average-link clustering, we propose an algorithm that reevaluates the similarity between each document and the clusters and reclassifies the document into a more similar cluster. In experiments using the Naver JiSikIn category, the accuracy of the reclassified clusters is 1.81% higher than that of the initial clusters without reclassification. (A sketch of the reclassification pass appears after this entry.)
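
A minimal sketch of the reclassification pass described above, assuming tf-idf style document vectors and cosine similarity to cluster centroids; the common-word weighting and the average-link step are not reproduced, and `reclassify` is an assumed name.

```python
import numpy as np

def reclassify(doc_vectors, labels):
    """Reassign each document to the cluster with the most similar centroid.

    doc_vectors: (n_docs, n_terms) array; labels: initial cluster ids.
    Returns the updated label array after one reclassification pass.
    """
    ids = np.unique(labels)
    centroids = np.vstack([doc_vectors[labels == c].mean(axis=0) for c in ids])
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True) + 1e-12
    docs = doc_vectors / (np.linalg.norm(doc_vectors, axis=1, keepdims=True) + 1e-12)
    sims = docs @ centroids.T                 # cosine similarity to each centroid
    return ids[np.argmax(sims, axis=1)]
```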

A Non-linear Variant of Global Clustering Using Kernel Methods (커널을 이용한 전역 클러스터링의 비선형화)

  • Heo, Gyeong-Yong;Kim, Seong-Hoon;Woo, Young-Woon
    • Journal of the Korea Society of Computer and Information / v.15 no.4 / pp.11-18 / 2010
  • Fuzzy c-means (FCM) is a simple but efficient clustering algorithm using the concept of a fuzzy set that has proved useful in many areas. There are, however, several well known problems with FCM, such as sensitivity to initialization, sensitivity to outliers, and a limitation to convex clusters. In this paper, global fuzzy c-means (G-FCM) and kernel fuzzy c-means (K-FCM) are combined to form a non-linear variant of G-FCM, called kernel global fuzzy c-means (KG-FCM). G-FCM is a variant of FCM that uses an incremental seed selection method and is effective in alleviating the sensitivity to initialization. There are several approaches to reducing the influence of noise and accommodating non-convex clusters, and K-FCM is one of them; it is used in this paper because it can easily be extended with different kernels. By combining G-FCM and K-FCM, KG-FCM can resolve the shortcomings mentioned above. The usefulness of the proposed method is demonstrated by experiments using artificial and real-world data sets. (A kernel FCM sketch follows this entry.)
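
The kernel half of KG-FCM can be illustrated with a plain kernel fuzzy c-means update in which the prototypes live implicitly in feature space; the incremental (global) seed selection of G-FCM is omitted here, and the RBF kernel, parameter names, and random initialization are assumptions of this sketch.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def kernel_fcm(K, n_clusters, m=2.0, n_iter=100, seed=0):
    """Kernel fuzzy c-means with prototypes kept implicitly in feature space.

    K: precomputed (n x n) kernel matrix. Returns the membership matrix U.
    """
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    diag = np.diag(K)
    for _ in range(n_iter):
        W = U ** m
        W /= W.sum(axis=0, keepdims=True)            # normalized weights per cluster
        # squared feature-space distance of each point to each implicit prototype
        d2 = (diag[:, None] - 2 * K @ W
              + np.einsum('jk,jl,lk->k', W, K, W)[None, :])
        d2 = np.maximum(d2, 1e-12)
        U = 1.0 / (d2 ** (1.0 / (m - 1.0)))          # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U
```

In a KG-FCM-style setting, the random initialization above would be replaced by incrementally selected seeds in the spirit of G-FCM.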