• Title/Summary/Keyword: Fisher's iris data

Search Results: 15

A Study on Performance Evaluation of Clustering Algorithms using Neural and Statistical Method (신경망 및 통계적 방법에 의한 클러스터링 성능평가)

  • 윤석환;민준영;신용백
    • Journal of Korean Society of Industrial and Systems Engineering / v.19 no.37 / pp.41-51 / 1996
  • This paper evaluates the clustering performance of a neural network method and a statistical method. The algorithms used are GLVQ (Generalized Learning Vector Quantization) as the neural method and the k-means algorithm as the statistical clustering method. To compare the two methods, we calculate Rand's C statistic. The mean of the C values obtained with GLVQ is higher than that obtained with the k-means algorithm, while the standard deviation is lower. The experimental data sets were Fisher's iris data and patterns extracted from handwritten numerals.
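A minimal sketch of the k-means half of this comparison, assuming scikit-learn's KMeans and Rand index rather than the paper's own implementations (the GLVQ side is omitted because no standard library version is assumed here):

```python
import statistics

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import rand_score

# Fisher's iris data: 150 observations, 4 features, 3 species labels.
X, y = load_iris(return_X_y=True)

# Repeat k-means with different seeds to estimate the mean and spread of the Rand index,
# mirroring the paper's comparison of the mean and standard deviation of the C statistic.
scores = []
for seed in range(20):
    labels = KMeans(n_clusters=3, n_init=10, random_state=seed).fit_predict(X)
    scores.append(rand_score(y, labels))  # agreement with the true species labels

print(f"mean Rand index: {statistics.mean(scores):.3f}, std: {statistics.stdev(scores):.3f}")
```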


3 Steps LVQ Learning Algorithm using Forward C.P. Net. (Forward C-P. Net.을 이용한 3단 LVQ 학습알고리즘)

  • Lee Yong-gu;Choi Woo-seung
    • Journal of the Korea Society of Computer and Information / v.9 no.4 s.32 / pp.33-39 / 2004
  • In this paper, we design a learning algorithm for LVQ that uses a forward counter-propagation network to improve the classification performance of LVQ networks. The weights of the forward counter-propagation network between the input layer and the cluster layer are trained with the SOM algorithm to determine initial reference vectors and with the LVQ algorithm to refine those reference vectors. Pattern vectors are then classified into subclasses by the neurons in the cluster layer, and the weights between the cluster layer and the output layer are trained to map the resulting subclasses onto their enclosing classes. Once the number of classes is fixed, the numbers of neurons in the input, cluster, and output layers can also be determined. To evaluate the proposed learning algorithm, simulations were performed with training and test vectors drawn from Fisher's iris data, and the classification performance of the proposed method was compared with that of conventional LVQ, confirming that the proposed method classifies more successfully than the conventional one.
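For orientation only, here is a bare LVQ1 sketch, assuming NumPy and scikit-learn's iris loader; seeding the prototypes at the class means is a simplification of the paper's SOM-based initialization, and the forward counter-propagation layers are not reproduced:

```python
import numpy as np
from sklearn.datasets import load_iris

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=30):
    """Basic LVQ1: pull the winning prototype toward same-class samples,
    push it away from differently labelled ones."""
    P = prototypes.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            d = np.linalg.norm(P - xi, axis=1)   # distance to every prototype
            w = np.argmin(d)                     # winning (nearest) prototype
            sign = 1.0 if proto_labels[w] == yi else -1.0
            P[w] += sign * lr * (xi - P[w])      # LVQ1 update rule
    return P

# Toy usage on Fisher's iris data: prototypes seeded at the class means
# (a simplification; the paper seeds its reference vectors with SOM instead).
X, y = load_iris(return_X_y=True)
proto_labels = np.unique(y)
prototypes = np.array([X[y == c].mean(axis=0) for c in proto_labels])
prototypes = lvq1_train(X, y, prototypes, proto_labels)
pred = proto_labels[np.argmin(np.linalg.norm(X[:, None] - prototypes, axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```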


Double K-Means Clustering (이중 K-평균 군집화)

  • 허명회
    • The Korean Journal of Applied Statistics / v.13 no.2 / pp.343-352 / 2000
  • In this study, the author proposes a nonhierarchical clustering method, called "Double K-Means Clustering", which clusters multivariate observations with the following algorithm. Step I: carry out ordinary K-means clustering and obtain $k$ temporary clusters with sizes $n_1, \ldots, n_k$, centroids $c_1, \ldots, c_k$, and pooled covariance matrix $S$. Step II-1: allocate observation $x_i$ to cluster $F$ if it satisfies ..., where $N$ is the total number of observations, for $i = 1, \ldots, N$. Step II-2: update the cluster sizes $n_1, \ldots, n_k$, centroids $c_1, \ldots, c_k$, and pooled covariance matrix $S$. Step II-3: repeat Steps II-1 and II-2 until the change becomes negligible. Double K-means clustering is nearly "optimal" under a mixture of $k$ multivariate normal distributions with a common covariance matrix. It is also nearly affine invariant, with the data-analytic implication that standardizing the variables is not really required. The method is numerically demonstrated on Fisher's iris data.
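The allocation criterion in Step II-1 is elided above; as one illustrative reading (an assumption, not the author's stated rule), the sketch below reassigns each observation to the nearest centroid in Mahalanobis distance under the pooled covariance matrix $S$:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

def double_kmeans(X, k, n_iter=50, seed=0):
    # Step I: ordinary K-means gives temporary clusters.
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    for _ in range(n_iter):
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Pooled (within-cluster) covariance matrix S.
        resid = X - centroids[labels]
        S = resid.T @ resid / (len(X) - k)
        S_inv = np.linalg.inv(S)
        # Step II-1 (assumed rule): reassign each observation to the cluster whose
        # centroid is closest in Mahalanobis distance under S.
        diffs = X[:, None, :] - centroids[None, :, :]           # (N, k, p)
        d2 = np.einsum('nkp,pq,nkq->nk', diffs, S_inv, diffs)   # squared distances
        new_labels = d2.argmin(axis=1)
        if np.array_equal(new_labels, labels):                  # Step II-3: stop when stable
            break
        labels = new_labels                                     # Step II-2: update and repeat
    return labels

# Demonstration on Fisher's iris data, as in the paper.
X, _ = load_iris(return_X_y=True)
print(np.bincount(double_kmeans(X, k=3)))
```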


Use of Minimal Spanning Trees on Self-Organizing Maps (자기조직도에서 최소생성나무의 활용)

  • Jang, Yoo-Jin;Huh, Myung-Hoe;Park, Mi-Ra
    • The Korean Journal of Applied Statistics / v.22 no.2 / pp.415-424 / 2009
  • As an unsupervised learning neural network method, the self-organizing map (SOM) is applied in various fields. It reduces the dimension of multidimensional data by representing observations on a low-dimensional manifold. The minimal spanning tree (MST) of a graph, in turn, is the most economical subset of edges that connects all nodes without forming a cycle. In this study, we apply the MST technique to SOMs with subnodes. We propose SOMs with an embedded MST and a distance measure for the optimal choice of the size and shape of the map. We demonstrate the method with Fisher's iris data and a real gene expression data set. Simulated data sets are also analyzed to check the validity of the proposed method.
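A small sketch of the MST step, assuming SciPy; the codebook array is a random stand-in for SOM codebook vectors from any SOM implementation, and the total edge length is shown only as one possible summary, not the paper's proposed distance measure:

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# `codebook` stands for the (n_units, p) array of SOM codebook vectors produced by
# whatever SOM implementation is used; a random stand-in keeps the sketch self-contained.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(25, 4))        # e.g. a 5x5 map trained on 4-d iris data

D = distance_matrix(codebook, codebook)    # pairwise Euclidean distances between units
mst = minimum_spanning_tree(D)             # sparse matrix whose nonzero entries are MST edges

edges = np.transpose(mst.nonzero())        # (unit_i, unit_j) pairs forming the tree
total_length = mst.sum()                   # total edge length, one possible map-quality summary
print(len(edges), "edges, total length", round(float(total_length), 3))
```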

A Weighted Fuzzy Min-Max Neural Network for Pattern Classification (패턴 분류 문제에서 가중치를 고려한 퍼지 최대-최소 신경망)

  • Kim Ho-Joon;Park Hyun-Jung
    • Journal of KIISE: Software and Applications / v.33 no.8 / pp.692-702 / 2006
  • In this study, a weighted fuzzy min-max (WFMM) neural network model for pattern classification is proposed. The model has a modified FMM neural network structure in which a weight concept is added to represent the frequency of feature values in the learning data set. We first present a new activation function for the network, defined as a hyperbox membership function. We then introduce a new learning algorithm for the model consisting of three kinds of processes: hyperbox creation/expansion, a hyperbox overlap test, and hyperbox contraction. A weight adaptation rule that takes the frequency factors into account is defined for the learning process. Finally, we describe a feature analysis technique using the proposed model. Four kinds of relevance factors among feature values, feature types, hyperboxes, and pattern classes are proposed to analyze the relative importance of each feature in a given problem. Two practical applications, Fisher's iris data and the Cleveland medical data, were used for the experiments, and the effectiveness of the proposed method is discussed through the experimental results.
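To illustrate the hyperbox idea, here is the classic (unweighted) fuzzy min-max membership function together with a simplified expansion test; the paper's WFMM model additionally folds frequency-based weights into the membership function and learning rules, which are not reproduced in this sketch:

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Classic fuzzy min-max membership of point x in the hyperbox [v, w]
    (without the paper's frequency weights): 1.0 inside the box, decaying with
    distance outside at a rate set by the sensitivity parameter gamma."""
    below = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - x)))
    above = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, x - w)))
    return float(np.mean((below + above) / 2))

def expand_hyperbox(v, w, x, theta=0.3):
    """Simplified expansion step: grow [v, w] to include x only if every side
    stays within the maximum size theta; otherwise leave the box unchanged."""
    v_new, w_new = np.minimum(v, x), np.maximum(w, x)
    if np.all(w_new - v_new <= theta):
        return v_new, w_new, True
    return v, w, False

# Toy usage with a single 2-d hyperbox.
v, w = np.array([0.2, 0.2]), np.array([0.4, 0.4])
print(hyperbox_membership(np.array([0.3, 0.3]), v, w))   # inside the box -> 1.0
print(hyperbox_membership(np.array([0.9, 0.3]), v, w))   # outside -> less than 1.0
v, w, grown = expand_hyperbox(v, w, np.array([0.45, 0.25]))
print(grown, v, w)
```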