• Title/Summary/Keyword: data discriminant analysis

Graphical Methods for the Sensitivity Analysis in Discriminant Analysis

  • Jang, Dae-Heung;Anderson-Cook, Christine M.;Kim, Youngil
    • Communications for Statistical Applications and Methods
    • /
    • v.22 no.5
    • /
    • pp.475-485
    • /
    • 2015
  • As in regression, many measures have been developed to detect influential data points in discriminant analysis, most of them following the same principles as the diagnostic measures used in linear regression. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretable compared with existing methods. We also propose a graphical display showing how the posterior probabilities of the other data points move when a specific data point is omitted, which enables the summaries to capture the overall pattern of the change.
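The leave-one-out idea described in this abstract can be sketched concretely. The snippet below is a minimal pure-Python illustration, not the authors' implementation: it fits a one-dimensional, equal-variance two-class LDA and records how each point's posterior probability moves when one observation is omitted.

```python
import math

def lda_posterior(x, group1, group2):
    """Posterior P(group 1 | x) for one-dimensional, equal-variance LDA."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    pooled_var = (sum((v - m1) ** 2 for v in group1) +
                  sum((v - m2) ** 2 for v in group2)) / (n1 + n2 - 2)

    def density(value, mean):
        # Gaussian kernel; the shared normalising constant cancels in the ratio.
        return math.exp(-(value - mean) ** 2 / (2 * pooled_var))

    # Priors proportional to group sizes.
    w1 = density(x, m1) * n1
    w2 = density(x, m2) * n2
    return w1 / (w1 + w2)

def posterior_shifts(group1, group2, omit_index):
    """How every point's posterior moves when group1[omit_index] is omitted."""
    reduced = group1[:omit_index] + group1[omit_index + 1:]
    return [lda_posterior(x, reduced, group2) - lda_posterior(x, group1, group2)
            for x in group1 + group2]
```

Plotting these shifts point by point gives a display of the kind the paper proposes: a large collective movement flags the omitted point as influential.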

Principal Discriminant Variate (PDV) Method for Classification of Multicollinear Data: Application to Diagnosis of Mastitic Cows Using Near-Infrared Spectra of Plasma Samples

  • Jiang, Jian-Hui;Tsenkova, Roumiana;Yu, Ru-Qin;Ozaki, Yukihiro
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1244-1244
    • /
    • 2001
  • In linear discriminant analysis there are two important properties concerning the effectiveness of discriminant function modeling. The first is the separability of the discriminant function for different classes; separability is optimized by maximizing the ratio of between-class to within-class variance. The second is the stability of the discriminant function against noise present in the measurement variables; stability can be optimized by exploring the discriminant variates in a principal variation subspace, i.e., the directions that account for a majority of the total variation of the data. An unstable discriminant function exhibits inflated variance in the prediction of future unclassified objects, exposing them to a significantly increased risk of erroneous prediction. Therefore, an ideal discriminant function should not only separate different classes with a minimum misclassification rate for the training set, but also possess good stability, so that the prediction variance for unclassified objects is as small as possible. In other words, an optimal classifier should strike a balance between separability and stability. This is of special significance for multivariate spectroscopy-based classification, where multicollinearity always leads to discriminant directions located in low-spread subspaces. A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling the multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes, but also account for a maximized variation present in the data. Three different formulations of the PDV method are suggested, and an effective computing procedure is proposed.
Near-infrared (NIR) spectra of blood plasma samples from mastitic and healthy cows were used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least squares (DPLS), soft independent modeling of class analogies (SIMCA), and Fisher linear discriminant analysis (FLDA). The results demonstrate that the PDV method exhibits improved stability in prediction without significant loss of separability. The NIR spectra of blood plasma samples from mastitic and healthy cows are clearly discriminated by the PDV method. Moreover, the proposed method outperforms PCA, DPLS, SIMCA and FLDA, indicating that PDV is a promising tool in discriminant analysis of spectroscopically characterized samples with only small compositional differences, thereby providing a useful means for spectroscopy-based clinical applications.
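The separability criterion invoked above, the ratio of between-class to within-class variance along a projection direction, is easy to write down. Below is a minimal sketch in plain Python (two classes in two dimensions; the data and direction are hypothetical), with the caveat that the PDV method additionally rewards directions carrying a large share of the total variance.

```python
def project(w, x):
    """Scalar projection of a 2-D point onto direction w."""
    return w[0] * x[0] + w[1] * x[1]

def fisher_ratio(w, class_a, class_b):
    """Between-class over within-class variance of the data projected onto w."""
    za = [project(w, x) for x in class_a]
    zb = [project(w, x) for x in class_b]
    ma, mb = sum(za) / len(za), sum(zb) / len(zb)
    grand = (sum(za) + sum(zb)) / (len(za) + len(zb))
    between = len(za) * (ma - grand) ** 2 + len(zb) * (mb - grand) ** 2
    within = (sum((z - ma) ** 2 for z in za) +
              sum((z - mb) ** 2 for z in zb))
    return between / within
```

Maximizing this ratio over w yields the Fisher discriminant direction; with multicollinear data the maximizer may lie in a low-variance subspace, which is exactly the instability the PDV method is designed to avoid.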

PRINCIPAL DISCRIMINANT VARIATE (PDV) METHOD FOR CLASSIFICATION OF MULTICOLLINEAR DATA WITH APPLICATION TO NEAR-INFRARED SPECTRA OF COW PLASMA SAMPLES

  • Jiang, Jian-Hui;Yuqing Wu;Yu, Ru-Qin;Yukihiro Ozaki
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1042-1042
    • /
    • 2001
  • In linear discriminant analysis there are two important properties concerning the effectiveness of discriminant function modeling. The first is the separability of the discriminant function for different classes; separability is optimized by maximizing the ratio of between-class to within-class variance. The second is the stability of the discriminant function against noise present in the measurement variables; stability can be optimized by exploring the discriminant variates in a principal variation subspace, i.e., the directions that account for a majority of the total variation of the data. An unstable discriminant function exhibits inflated variance in the prediction of future unclassified objects, exposing them to a significantly increased risk of erroneous prediction. Therefore, an ideal discriminant function should not only separate different classes with a minimum misclassification rate for the training set, but also possess good stability, so that the prediction variance for unclassified objects is as small as possible. In other words, an optimal classifier should strike a balance between separability and stability. This is of special significance for multivariate spectroscopy-based classification, where multicollinearity always leads to discriminant directions located in low-spread subspaces. A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling the multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes, but also account for a maximized variation present in the data. Three different formulations of the PDV method are suggested, and an effective computing procedure is proposed.
Near-infrared (NIR) spectra of blood plasma samples from daily monitoring of two Japanese cows were used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least squares (DPLS), soft independent modeling of class analogies (SIMCA), and Fisher linear discriminant analysis (FLDA). The results demonstrate that the PDV method exhibits improved stability in prediction without significant loss of separability. The NIR spectra of blood plasma samples from the two cows are clearly discriminated by the PDV method. Moreover, the proposed method outperforms PCA, DPLS, SIMCA and FLDA, indicating that PDV is a promising tool in discriminant analysis of spectroscopically characterized samples with only small compositional differences.

A Comparison of the Discrimination of Business Failure Prediction Models (부실기업예측모형의 판별력 비교)

  • 최태성;김형기;김성호
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.27 no.2
    • /
    • pp.1-13
    • /
    • 2002
  • In this paper, we compare the business failure prediction accuracy of the Linear Programming Discriminant Analysis (LPDA) model, the Multivariate Discriminant Analysis (MDA) model, and the logit analysis model. The data for the 417 companies analyzed were gathered from KIS-FAS, published by Korea Information Service in 1999. The comparison over four time horizons shows that LPDA is advantageous in prediction accuracy over the other two models when the overall hit ratio and business failure accuracy are considered simultaneously.

THE PERFORMANCE OF THE BINARY TREE CLASSIFIER AND DATA CHARACTERISTICS

  • Park, Jeong-sun
    • Management Science and Financial Engineering
    • /
    • v.3 no.1
    • /
    • pp.39-56
    • /
    • 1997
  • This paper applies the binary tree classifier and discriminant analysis to predicting failures of banks and insurance companies. In this study, discriminant analysis is generally better than the binary tree classifier at classifying bank defaults, while the binary tree is generally better than discriminant analysis at classifying insurance company defaults. This can be explained by the dependence of a classifier's performance on the characteristics of the data: if the data are dispersed in a way suited to the classifier, it will perform well; otherwise, it may perform poorly. The two data sets (bank and insurance) are analyzed to explain why the binary tree performs better on the insurance data and worse on the bank data, and why discriminant analysis performs better on the bank data and worse on the insurance data.

Some Diagnostic Results in Discriminant Analysis

  • Bae, Whasoo;Hwang, Soonyoung
    • Journal of the Korean Statistical Society
    • /
    • v.30 no.1
    • /
    • pp.139-151
    • /
    • 2001
  • Although much work has been done on influence diagnostics, results in multivariate analysis are quite rare. One recent work, by Fung (1995), concerns single-case influence diagnostics in linear discriminant analysis. In this paper we extend Fung's results to multiple-case diagnostics, which are necessary in linear discriminant analysis for two reasons among others: first, the masking effect cannot be detected by single-case diagnostics; second, two populations are involved in discriminant analysis, so influential cases can occur in one or both populations.
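The masking effect mentioned above is easy to demonstrate: two similar outliers can each look unremarkable under single-case deletion, because the remaining one still shifts the mean and inflates the variance. Below is a minimal one-dimensional sketch with hypothetical data; the paper's actual measures are based on discriminant-analysis influence, not this simple standardized distance.

```python
def standardized_distance(data, target, deleted):
    """Distance of data[target] from the mean of the points that remain after
    removing the indices in `deleted` (and the target itself), in units of
    the standard deviation of those remaining points."""
    rest = [x for i, x in enumerate(data) if i != target and i not in deleted]
    mean = sum(rest) / len(rest)
    var = sum((x - mean) ** 2 for x in rest) / (len(rest) - 1)
    return abs(data[target] - mean) / var ** 0.5
```

With two clustered outliers, the single-case value is modest because the second outlier masks the first; deleting the pair exposes both.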

Objective Cloud Type Classification of Meteorological Satellite Data Using Linear Discriminant Analysis (선형판별법에 의한 GMS 영상의 객관적 운형분류)

  • 서애숙;김금란
    • Korean Journal of Remote Sensing
    • /
    • v.6 no.1
    • /
    • pp.11-24
    • /
    • 1990
  • This study concerns the classification of meteorological satellite cloud images by objective methods. For objective cloud classification, linear discriminant analysis was applied: 27 cloud characteristic parameters were retrieved from GMS infrared image data, and a linear cloud classification model was developed from the major parameters and cloud type coefficients. The model was applied to GMS IR images in weather forecasting operations, and the cloud images were classified into five types: Sc, Cu, CiT, CiM and Cb. The classification results compared reasonably well with the real images.

Derivation and Application of Influence Function in Discriminant Analysis for Three Groups (세 집단 판별분석 상황에서의 영향함수 유도 및 그 응용)

  • Lee, Hae-Jung;Kim, Hong-Gie
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.5
    • /
    • pp.941-949
    • /
    • 2011
  • The influence function is used to develop criteria for detecting outliers in discriminant analysis. We derive the influence function of observations that estimate the misclassification probability in discriminant analysis for three groups. The proposed measures are applied to facial image data to identify outliers, and the discriminant analysis is redone excluding them. The study shows that the derived influence function is more efficient than the discriminant probability approach.

CANCER CLASSIFICATION AND PREDICTION USING MULTIVARIATE ANALYSIS

  • Shon, Ho-Sun;Lee, Heon-Gyu;Ryu, Keun-Ho
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.706-709
    • /
    • 2006
  • Cancer is one of the major causes of death; however, the survival rate can be increased if it is discovered at an early stage and treated in time. According to 2002 statistics of the World Health Organization, breast cancer was the most prevalent cancer among women worldwide, and it accounts for 16.8% of all cancers afflicting Korean women today. In order to classify breast cancers as benign or malignant, this study applied discriminant analysis and the decision tree method of data mining to breast cancer data disclosed on the web. Discriminant analysis is a statistical method that seeks discriminant criteria and discriminant functions to separate population groups on the basis of observations obtained from two or more groups, and uses the resulting values to assign a new observation to one of the groups. The decision tree analyzes records of data collected in the past, exhibits the patterns existing among them, namely the combinations of attributes characteristic of each class, and builds a classification tree model. Through this type of analysis, systematic information on the factors that cause breast cancer may be obtained in advance, helping prevent the risk of recurrence after surgery.
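The decision-tree idea sketched in the abstract builds a tree of attribute tests; its basic building block, a single threshold test chosen to minimize misclassification, can be illustrated in a few lines of plain Python. The feature values and labels below are hypothetical, not the breast cancer data used in the paper.

```python
def best_stump(values, labels):
    """Exhaustive search for the one-feature threshold with the fewest errors.

    Returns (threshold, label_if_below, error_count): predict `label_if_below`
    when value <= threshold, and the other label otherwise.
    """
    best = (None, None, len(labels) + 1)
    for t in sorted(set(values)):
        for below in (0, 1):
            errors = sum((below if v <= t else 1 - below) != y
                         for v, y in zip(values, labels))
            if errors < best[2]:
                best = (t, below, errors)
    return best
```

A full tree applies such splits recursively; the discriminant-analysis alternative instead fits a single linear function of all attributes at once.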

A Comparative Study of Classification Methods Using Data with Label Noise (레이블 노이즈가 존재하는 자료의 판별분석 방법 비교연구)

  • Kwon, So Young;Kim, Kyoung Hee
    • Journal of the Korean Data Analysis Society
    • /
    • v.20 no.6
    • /
    • pp.2853-2864
    • /
    • 2018
  • Discriminant analysis predicts the class label of a new observation with an unknown label using information from existing labeled data. Hence, the observed labels play a critical role in the analysis, and we usually assume that they are correct. If an observed label contains an error, the data has label noise. Label noise occurs frequently in real data and can affect classification performance. To investigate this, a comparative study was carried out using simulated data with label noise. In particular, we considered four classification techniques: LDA (linear discriminant analysis), QDA (quadratic discriminant analysis), KNN (k-nearest neighbour), and SVM (support vector machine). We then evaluated each method by its average accuracy on data generated under various scenarios. The effect of label noise was investigated through its occurrence rate and type (noise location). We confirmed that label noise is a significant factor influencing classification performance.
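The simulation design described above can be sketched with any simple classifier. Below, a nearest-centroid rule (a stand-in for LDA with spherical covariance; all names and parameters are illustrative) is trained on one-dimensional two-class data whose labels are flipped at a given rate, and clean-test accuracy is averaged over repetitions.

```python
import random

def noisy_label_accuracy(flip_rate, n_train=100, n_test=200, reps=200, seed=0):
    """Average clean-test accuracy of a nearest-centroid classifier trained
    on two Gaussian classes (means -1 and +1) with randomly flipped labels."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        train = [(rng.gauss(-1.0, 1.0), 0) for _ in range(n_train // 2)] + \
                [(rng.gauss(+1.0, 1.0), 1) for _ in range(n_train // 2)]
        # Inject label noise: flip each training label with probability flip_rate.
        noisy = [(x, 1 - y) if rng.random() < flip_rate else (x, y)
                 for x, y in train]
        class0 = [x for x, y in noisy if y == 0]
        class1 = [x for x, y in noisy if y == 1]
        m0 = sum(class0) / len(class0)
        m1 = sum(class1) / len(class1)
        # Evaluate on freshly drawn, noise-free test data.
        correct = 0
        for _ in range(n_test):
            y = rng.randrange(2)
            x = rng.gauss(-1.0 if y == 0 else 1.0, 1.0)
            correct += (0 if abs(x - m0) < abs(x - m1) else 1) == y
        total += correct / n_test
    return total / reps
```

Repeating this across flip rates, noise types, and classifiers (KNN, SVM, and so on) reproduces the kind of comparison design the study describes.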