• Title/Abstract/Keyword: classification error

822 search results

Estimation of Classification Error Based on the Bhattacharyya Distance for Data with Multimodal Distribution

  • 최의선;이철희
    • Proceedings of the IEEK Conference / IEEK 2000 Summer Conference Proceedings (4) / pp.85-87 / 2000
  • In pattern classification, the Bhattacharyya distance has been used as a class separability measure and provides useful information for feature selection and extraction. In this paper, we propose a method to predict the classification error for multimodal data based on the Bhattacharyya distance. In our approach, we first approximate the pdf of the multimodal distribution with a Gaussian mixture model and then find the Bhattacharyya distance and the classification error. Experimental results show that there is a strong relationship between the Bhattacharyya distance and the classification error for multimodal data.
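For readers who want to see the relationship concretely, the following is a minimal sketch (not the authors' code; the two Gaussian class models below are invented for illustration) of computing the Bhattacharyya distance between two Gaussian classes and the resulting upper bound on the Bayes error.

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = 0.5 * (cov1 + cov2)
    diff = mu2 - mu1
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

# Hypothetical two-class Gaussian parameters (illustrative only).
mu1, cov1 = np.array([0.0, 0.0]), np.eye(2)
mu2, cov2 = np.array([2.0, 1.0]), np.array([[1.5, 0.3], [0.3, 1.0]])

B = bhattacharyya_gaussian(mu1, cov1, mu2, cov2)
# With equal priors P1 = P2 = 0.5, the Bayes error is bounded above by
# sqrt(P1 * P2) * exp(-B).
error_bound = 0.5 * np.exp(-B)
print(f"Bhattacharyya distance: {B:.3f}, error bound: {error_bound:.3f}")
```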


Utilizing Principal Component Analysis in Unsupervised Classification Based on Remote Sensing Data

  • Lee, Byung-Gul;Kang, In-Joan
    • Proceedings of the Korean Environmental Sciences Society Conference / 2003 International Symposium on Clean Environment / pp.33-36 / 2003
  • Principal component analysis (PCA) was used to improve image classification by an unsupervised classification technique, K-means. To do this, we selected a Landsat TM scene of Jeju Island, Korea, and applied two forms of PCA: unstandardized PCA (UPCA) and standardized PCA (SPCA). The accuracy of the image classification of the Jeju area was estimated with error matrices derived from three unsupervised classification methods. The error matrices indicated that classifications based on the first three principal components of UPCA and SPCA were more accurate than those based on the seven TM bands, and that the UPCA and SPCA results were better than those from the raw Landsat TM data. The classification of the TM data by the K-means algorithm was particularly poor at distinguishing different land covers on the island. From the classification results, we also found that the principal-component-based classifications were largely independent of the unsupervised technique (numerical algorithm), whereas the TM-data-based classifications depended strongly on the technique. This means that PCA data have uniform characteristics for image classification that are less affected by the choice of classification scheme. We also found that the UPCA results were better than the SPCA results, since UPCA retains a wider range of digital numbers in the image.
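The general pipeline the abstract describes, unstandardized vs. standardized PCA followed by K-means, can be sketched as follows; this is an illustrative stand-in using random data in place of the Landsat TM scene, not the authors' processing chain.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a 7-band TM scene: one row per pixel, one column per band.
rng = np.random.default_rng(0)
pixels = rng.normal(size=(10000, 7))

def kmeans_on_pca(pixels, n_components=3, n_clusters=6, standardize=False):
    """Cluster pixels on the leading principal components.

    standardize=False corresponds to unstandardized PCA (covariance matrix),
    standardize=True to standardized PCA (correlation matrix).
    """
    X = StandardScaler().fit_transform(pixels) if standardize else pixels
    scores = PCA(n_components=n_components).fit_transform(X)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(scores)

labels_upca = kmeans_on_pca(pixels, standardize=False)  # UPCA
labels_spca = kmeans_on_pca(pixels, standardize=True)   # SPCA
# With reference land-cover labels, an error (confusion) matrix could then be
# built with sklearn.metrics.confusion_matrix to compare the two variants.
```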


Bootstrap Confidence Intervals of Classification Error Rate for a Block of Missing Observations

  • Chung, Hie-Choon
    • Communications for Statistical Applications and Methods / Vol. 16, No. 4 / pp.675-686 / 2009
  • In this paper, it is assumed that there are two distinct populations which are multivariate normal with a common covariance matrix. We also assume that the two populations are equally likely and that the costs of misclassification are equal. The classification rule depends on whether or not the training samples include missing values. We consider bootstrap confidence intervals for the classification error rate when a block of observations is missing.
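A rough sketch of a percentile-bootstrap confidence interval for the error rate of a linear discriminant under the stated two-population setting; it uses synthetic complete data and omits the paper's treatment of the missing block.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Two multivariate normal populations with a common covariance (synthetic).
n, d = 60, 4
X = np.vstack([rng.normal(0.0, 1.0, size=(n, d)),
               rng.normal(1.0, 1.0, size=(n, d))])
y = np.repeat([0, 1], n)

def bootstrap_error_ci(X, y, n_boot=1000, alpha=0.05):
    """Percentile bootstrap CI for the error rate of a linear discriminant."""
    errors = np.empty(n_boot)
    idx_all = np.arange(len(y))
    for b in range(n_boot):
        idx = rng.choice(idx_all, size=len(y), replace=True)
        clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
        # Error estimated on the observations left out of the bootstrap sample.
        oob = np.setdiff1d(idx_all, idx)
        errors[b] = np.mean(clf.predict(X[oob]) != y[oob])
    return tuple(np.quantile(errors, [alpha / 2, 1 - alpha / 2]))

print("95% bootstrap CI for the error rate:", bootstrap_error_ci(X, y))
```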

Data-Adaptive ECOC for Multicategory Classification

  • Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / Vol. 19, No. 1 / pp.25-36 / 2008
  • Error Correcting Output Codes (ECOC) can improve generalization performance when applied to multicategory classification problems. In this study we propose a new criterion to select the hyperparameters included in the ECOC scheme. Instead of the margins of the data, we propose to use the probability of misclassification, which keeps the criterion simple. Using this criterion we obtain an upper bound on the leave-one-out error of the OVA (one-vs-all) method. Our experiments on real and synthetic data indicate that the bound leads to good estimates of the parameters.
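As a loose illustration of tuning an ECOC hyperparameter, the sketch below selects scikit-learn's `code_size` by cross-validated misclassification rate on a stand-in dataset; the paper's actual criterion (a misclassification-probability bound on the OVA leave-one-out error) is not reproduced here.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

# Stand-in multicategory data (illustrative only).
X, y = load_iris(return_X_y=True)

# code_size controls how many binary problems the ECOC scheme generates
# (as a multiple of the number of classes); treat it as the hyperparameter.
best = None
for code_size in (1.0, 1.5, 2.0, 3.0):
    ecoc = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"),
                                code_size=code_size, random_state=0)
    err = 1.0 - cross_val_score(ecoc, X, y, cv=5).mean()  # misclassification estimate
    if best is None or err < best[1]:
        best = (code_size, err)

print("selected code_size and CV error:", best)
```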


Optimal Feature Extraction for Normally Distributed Multiclass Data

  • 최의선;이철희
    • Proceedings of the IEEK Conference / IEEK 1998 Fall Conference Proceedings / pp.1263-1266 / 1998
  • In this paper, we propose an optimal feature extraction method for normally distributed multiclass data. We search the whole feature space to find a set of features that gives the smallest classification error for the Gaussian ML classifier. Initially, we start with an arbitrary feature vector. Assuming that the feature vector is used for classification, we compute the classification error. Then we move the feature vector slightly and compute the classification error again. Finally we update the feature vector in the direction in which the classification error decreases most rapidly; this is done by taking the gradient. Alternatively, the initial vector can be one found by a conventional feature extraction algorithm. We propose two search methods, sequential search and global search. Experimental results show that the proposed method compares favorably with conventional feature extraction methods.
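The search procedure can be sketched on synthetic two-class Gaussian data: start from an arbitrary projection vector, estimate the Gaussian ML classification error in the projected space, and follow a numerical gradient downhill. This is only an illustration of the idea, not the authors' algorithm; the data, step sizes, and error estimator are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic two-class Gaussian data (illustrative stand-in for real features).
X0 = rng.multivariate_normal([0.0, 0.0, 0.0], np.eye(3), size=500)
X1 = rng.multivariate_normal([1.0, 0.5, 0.0], np.diag([1.0, 2.0, 0.5]), size=500)

def projected_error(w, X0, X1):
    """Error of an equal-prior Gaussian ML classifier on the 1-D projection w,
    estimated from the overlap of the two fitted class densities."""
    w = w / np.linalg.norm(w)
    z0, z1 = X0 @ w, X1 @ w
    grid = np.linspace(-8.0, 8.0, 2001)
    dz = grid[1] - grid[0]
    def pdf(z, m, s):
        return np.exp(-(z - m) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))
    overlap = np.minimum(pdf(grid, z0.mean(), z0.std()),
                         pdf(grid, z1.mean(), z1.std()))
    return 0.5 * overlap.sum() * dz

# Sequential search: start from an arbitrary vector and follow the
# (numerical) gradient of the classification error downhill.
w, step, eps = rng.normal(size=3), 0.5, 1e-3
for _ in range(100):
    grad = np.array([
        (projected_error(w + eps * e, X0, X1) -
         projected_error(w - eps * e, X0, X1)) / (2 * eps)
        for e in np.eye(3)])
    w = w - step * grad

print("error of the learned 1-D feature:", round(projected_error(w, X0, X1), 4))
```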


Comparison Study of Multi-class Classification Methods

  • Bae, Wha-Soo;Jeon, Gab-Dong;Seok, Kyung-Ha
    • Communications for Statistical Applications and Methods / Vol. 14, No. 2 / pp.377-388 / 2007
  • As one of the multi-class classification methods, the ECOC (Error Correcting Output Coding) method is known to have a low classification error rate. This paper aims to suggest an effective multi-class classification method (1) by comparing various encoding and decoding methods within the ECOC framework and (2) by comparing the ECOC method with direct classification methods. Both SVM (Support Vector Machine) and the logistic regression model were used as binary classifiers in the comparison.
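A comparison of this kind can be set up in a few lines; the sketch below crosses two binary learners (SVM and logistic regression) with one-vs-rest, one-vs-one, and a random ECOC coding on a standard stand-in dataset. The specific encoding/decoding variants studied in the paper are not reproduced.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import (OneVsOneClassifier, OneVsRestClassifier,
                                OutputCodeClassifier)
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in multi-class dataset (the paper's data are not available here).
X, y = load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X)

binary_learners = {"SVM": SVC(kernel="rbf", gamma="scale"),
                   "logistic": LogisticRegression(max_iter=2000)}
strategies = {"one-vs-rest": OneVsRestClassifier,
              "one-vs-one": OneVsOneClassifier,
              "ECOC": lambda est: OutputCodeClassifier(est, code_size=2.0,
                                                       random_state=0)}

# Cross-validated misclassification rate for every (binary learner, strategy) pair.
for bname, base in binary_learners.items():
    for sname, wrap in strategies.items():
        err = 1.0 - cross_val_score(wrap(base), X, y, cv=5).mean()
        print(f"{bname:8s} {sname:12s} CV error = {err:.3f}")
```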

Error Estimation Based on the Bhattacharyya Distance for Classifying Multimodal Data

  • 최의선;김재희;이철희
    • Journal of the Institute of Electronics Engineers of Korea SP / Vol. 39, No. 2 / pp.147-154 / 2002
  • In this paper, we propose an error estimation method based on the Bhattacharyya distance for the classification of data with multimodal characteristics. In the proposed approach, the classification error and the Bhattacharyya distance are obtained empirically for multimodal data, and the relationship between the two is examined to assess whether the error can be predicted. To compute the classification error and the Bhattacharyya distance, the probability density function of the multimodal data is estimated as a combination of subclasses with Gaussian characteristics. Experiments with remote sensing data confirm that there is a close relationship between the classification error and the Bhattacharyya distance for multimodal data, and show that the error can be predicted from the Bhattacharyya distance.
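A small sketch of the subclass idea (synthetic bimodal classes, not the remote sensing data of the paper): approximate each class density by a Gaussian mixture and estimate the Bhattacharyya distance between the two mixtures by Monte Carlo.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic bimodal classes (illustrative stand-ins for remote-sensing classes).
X1 = np.vstack([rng.normal([0.0, 0.0], 0.7, size=(300, 2)),
                rng.normal([3.0, 3.0], 0.7, size=(300, 2))])
X2 = np.vstack([rng.normal([1.5, 0.0], 0.7, size=(300, 2)),
                rng.normal([4.0, 2.0], 0.7, size=(300, 2))])

# Approximate each class pdf by a mixture of Gaussian subclasses.
gmm1 = GaussianMixture(n_components=2, random_state=0).fit(X1)
gmm2 = GaussianMixture(n_components=2, random_state=0).fit(X2)

# Monte Carlo estimate of the Bhattacharyya coefficient
#   rho = integral sqrt(p1(x) * p2(x)) dx = E_{x~p1}[ sqrt(p2(x) / p1(x)) ],
# and the distance B = -ln(rho).
samples, _ = gmm1.sample(20000)
log_p1 = gmm1.score_samples(samples)
log_p2 = gmm2.score_samples(samples)
rho = np.mean(np.exp(0.5 * (log_p2 - log_p1)))
B = -np.log(rho)
print(f"estimated Bhattacharyya distance between the two mixtures: {B:.3f}")
# With equal priors, the Bayes error is bounded above by 0.5 * exp(-B).
```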

Learning Reference Vectors by the Nearest Neighbor Network

  • Kim Baek Sep
    • Journal of the Institute of Electronics Engineers of Korea B / Vol. 31B, No. 7 / pp.170-178 / 1994
  • The nearest neighbor classification rule is widely used because it is simple and its error rate is asymptotically less than twice the Bayes theoretical minimum error. However, the method basically uses the whole set of training patterns as reference vectors, so both the storage and the classification time increase as the number of training patterns grows. LVQ (Learning Vector Quantization) resolves this problem by training the reference vectors instead of storing all of the training patterns. But it is a heuristic algorithm with no theoretical background: there is no terminating condition, and it requires many iterations to reach a meaningful result. This paper proposes a new training method for the reference vectors which minimizes a given error function. The nearest neighbor network, a network version of the nearest neighbor classification rule, is proposed. The network is functionally identical to the nearest neighbor classification rule, and the reference vectors are represented by the weights between the nodes. The network is trained to minimize the error function with respect to the weights by the steepest descent method. The learning algorithm is derived, and it is shown that the proposed method can adjust more reference vectors than LVQ in each iteration. Experiments showed that the proposed method requires fewer iterations and that its error rate is smaller than that of LVQ2.
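The following is a loose stand-in for the idea of training reference vectors by steepest descent on an error function; it uses a soft nearest-neighbour surrogate loss and a numerical gradient rather than the paper's network formulation and analytic learning rule, and the data and constants are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy two-class data (illustrative stand-in).
X = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(200, 2)),
               rng.normal([2.5, 2.5], 1.0, size=(200, 2))])
y = np.repeat([0, 1], 200)

n_ref = 3                                    # reference vectors per class
ref_y = np.repeat([0, 1], n_ref)
ref = np.vstack([X[y == c][:n_ref] for c in (0, 1)]).astype(float)

def soft_nn_error(ref_flat, beta=2.0):
    """Differentiable surrogate of the 1-NN error: class posteriors from a
    softmax over negative squared distances to the reference vectors."""
    r = ref_flat.reshape(-1, 2)
    d2 = ((X[:, None, :] - r[None, :, :]) ** 2).sum(-1)
    w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
    w /= w.sum(axis=1, keepdims=True)
    p_true = np.where(y == 1, w[:, ref_y == 1].sum(1), w[:, ref_y == 0].sum(1))
    return -np.mean(np.log(p_true + 1e-12))

# Steepest descent on the error function; a numerical gradient is used here
# for brevity (the paper derives the gradient analytically).
theta, lr, eps = ref.ravel().copy(), 0.5, 1e-4
for _ in range(200):
    grad = np.array([(soft_nn_error(theta + eps * e) - soft_nn_error(theta - eps * e))
                     / (2 * eps) for e in np.eye(theta.size)])
    theta -= lr * grad
ref = theta.reshape(-1, 2)

# The learned reference vectors are then used exactly as in the 1-NN rule.
pred = ref_y[np.argmin(((X[:, None, :] - ref[None, :, :]) ** 2).sum(-1), axis=1)]
print("training error of the 1-NN rule with learned references:", np.mean(pred != y))
```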


Robust Minimum Squared Error Classification Algorithm with Applications to Face Recognition

  • Liu, Zhonghua;Yang, Chunlei;Pu, Jiexin;Liu, Gang;Liu, Sen
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 1 / pp.308-320 / 2016
  • Although a face almost always has an axisymmetric structure, a face image is generally not symmetric. However, the mirror image of a face image can reflect possible variations in pose and illumination opposite to those of the original face image. A robust minimum squared error classification (RMSEC) algorithm is proposed in this paper. Concretely speaking, the original training samples and the mirror images of the original samples are combined to form a new training set, and the generated training set is used to perform a modified minimum squared error classification (MMSEC) algorithm. Extensive experiments show that the accuracy rate of the proposed RMSEC is greatly increased and that the proposed RMSEC is not sensitive to variations of the parameters.
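The core recipe, augmenting the training set with mirror images and then applying a minimum squared error (regularised least squares) classifier, can be sketched as follows on synthetic image data; the paper's specific MMSEC modification is not reproduced, and the ridge parameter is an assumption.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for face images: (n_samples, height, width) with 3 classes.
n, h, w, n_classes = 90, 16, 16, 3
images = rng.random((n, h, w))
labels = rng.integers(0, n_classes, size=n)

# Augment the training set with horizontally mirrored images (same labels).
aug_images = np.concatenate([images, images[:, :, ::-1]], axis=0)
aug_labels = np.concatenate([labels, labels])

# Minimum squared error classification: regularised least squares from the
# vectorised images to one-hot class indicators.
X = aug_images.reshape(len(aug_images), -1)
X = np.hstack([X, np.ones((len(X), 1))])          # bias term
T = np.eye(n_classes)[aug_labels]                 # one-hot targets
lam = 1e-2                                        # ridge parameter (illustrative)
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)

def classify(img):
    """Assign a (new) image to the class with the largest regression output."""
    x = np.append(img.reshape(-1), 1.0)
    return int(np.argmax(x @ W))

print("predicted class of the first image:", classify(images[0]))
```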

Bathymetric mapping in Dong-Sha Atoll using SPOT data

  • Huang, Shih-Jen;Wen, Yao-Chung
    • Proceedings of the Korean Society of Remote Sensing Conference / KSRS 2006: Proceedings of ISRS 2006 PORSEC, Volume II / pp.525-528 / 2006
  • Remote sensing data can be used to calculate water depth, especially in clear and shallow water areas. In this study, SPOT data were used for bathymetric mapping of Dong-Sha Atoll, located in the northern South China Sea. In situ sea depths were collected by echo sounder, and a global positioning system was employed to locate the sampling points accurately. An empirical model relating the measured sea depth to the band digital counts was determined based on least-squares regression analysis. Both non-classification and unsupervised classification were used in this study. The results show that the standard error is less than 0.9 m for non-classification, and a 10% error relative to the measured water depth is satisfied for more than 85% of the in situ data points. Moreover, if supervised classification is applied, a 10% relative error is reached for more than 97%, 69%, and 51% of the data points in classes 4, 5, and 6, respectively. Meanwhile, we also find that the unsupervised classification can estimate the water depth more accurately, with standard errors of less than 0.63, 0.93, and 0.68 m for classes 4, 5, and 6, respectively.
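A schematic of the empirical depth model on fabricated data (the model form, the deep-water correction, and all coefficients are assumptions, not the paper's): regress measured depth on log-transformed digital counts by least squares, then report the standard error and the fraction of points within 10% of the measured depth.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in: in situ depths (m) and digital counts for two visible bands.
n = 200
depth = rng.uniform(1.0, 20.0, size=n)
dn = (np.column_stack([np.exp(-0.12 * depth), np.exp(-0.05 * depth)]) * 200.0
      + rng.normal(0.0, 2.0, size=(n, 2)) + 20.0)   # fake water-leaving counts + offset

# Empirical depth model fitted by least squares on log-transformed counts:
#   depth ~ b0 + b1*ln(DN1 - L1) + b2*ln(DN2 - L2), with Lk the deep-water counts.
deep_water = dn.min(axis=0) - 1.0                    # crude deep-water signal estimate
A = np.column_stack([np.ones(n), np.log(dn - deep_water)])
coef, *_ = np.linalg.lstsq(A, depth, rcond=None)
pred = A @ coef

standard_error = np.sqrt(np.mean((pred - depth) ** 2))
within_10pct = np.mean(np.abs(pred - depth) <= 0.10 * depth)
print(f"standard error: {standard_error:.2f} m, "
      f"within 10% of measured depth: {within_10pct:.0%}")
```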
