• Title/Summary/Keyword: logistic classification

Search results: 376

Logistic Regression Classification by Principal Component Selection

  • Kim, Kiho; Lee, Seokho
    • Communications for Statistical Applications and Methods, Vol. 21 No. 1, pp. 61-68, 2014
  • We propose binary classification methods obtained by modifying logistic regression classification. Rather than selecting among the original variables, we apply variable selection procedures to the principal components. We describe the resulting classifiers and discuss their properties. The performance of our proposals is illustrated numerically and compared with other existing classification methods using synthetic and real datasets.
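The idea above can be sketched minimally: project the data onto a few principal components and fit ordinary logistic regression on those scores. This is a hypothetical illustration on synthetic data; the paper's actual component-selection procedure is not reproduced here.

```python
import numpy as np

def pca_scores(X, k):
    """Project centered X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered matrix: rows of Vt are the principal directions.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fit_logistic(Z, y, lr=0.1, iters=2000):
    """Plain gradient-ascent logistic regression (intercept added internally)."""
    Z1 = np.hstack([np.ones((len(Z), 1)), Z])
    w = np.zeros(Z1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Z1 @ w))
        w += lr * Z1.T @ (y - p) / len(y)
    return w

def predict(Z, w):
    Z1 = np.hstack([np.ones((len(Z), 1)), Z])
    return (Z1 @ w >= 0.0).astype(int)

# Toy demo: two Gaussian clouds in 5 dimensions, classified on 2 components.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)), rng.normal(2.0, 1.0, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
Z = pca_scores(X, k=2)            # keep 2 of the 5 principal components
w = fit_logistic(Z, y)
accuracy = float((predict(Z, w) == y).mean())
```

Replacing the raw variables with a small number of component scores is what sidesteps collinearity and dimensionality issues in the logistic fit.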

Using Classification function to integrate Discriminant Analysis, Logistic Regression and Backpropagation Neural Networks for Interest Rates Forecasting

  • Oh, Kyong-Joo; Han, Ingoo
    • Korea Intelligent Information Systems Society Conference Proceedings, 2000 Fall Conference: Intelligent Technology and CRM, pp. 417-426, 2000
  • This study suggests integrated neural network models for interest rate forecasting using change-point detection, classifiers, and classification functions based on structural change. The proposed model is composed of three phases with three-staged learning. The first phase detects successive and appropriate structural changes in the interest rate dataset. The second phase forecasts the change-point group with classifiers (discriminant analysis, logistic regression, and backpropagation neural networks) and their combined classification functions. The final phase forecasts the interest rate with backpropagation neural networks. We propose classification functions to overcome the problem of two-staged learning, which cannot measure the performance of the first learning stage. We then compare the structured models with a standalone neural network model and, in addition, determine which classifiers and classification functions perform better. The article also examines the predictability of the proposed classification functions for interest rate forecasting using structural change.


Comparison Study for Data Fusion and Clustering Classification Performances

  • 신형원; 손소영
    • Korean Operations Research and Management Science Society / Korean Institute of Industrial Engineers 2000 Spring Joint Conference Proceedings, pp. 601-604, 2000
  • In this paper, we compare the classification performance of data fusion and clustering algorithms (Data Bagging, Variable Selection Bagging, Parameter Combining, Clustering) to logistic regression, in consideration of various characteristics of the input data. Four factors used to simulate the logistic model are (1) correlation among input variables, (2) variance of observations, (3) training data size, and (4) the input-output function. Since the relationship between input and output is typically unknown, we use a Taguchi design to improve the practicality of our study results by treating the input-output function as a noise factor. The experimental results indicate the following: clustering-based logistic regression provides the highest classification accuracy when input variables are weakly correlated and the variance of the data is high. When there is high correlation among input variables, variable selection bagging performs better than logistic regression. When there is strong correlation among input variables and high variance between observations, bagging appears marginally better than logistic regression, but the difference is not significant.


Local Linear Logistic Classification of Microarray Data Using Orthogonal Components

  • 백장선; 손영숙
    • The Korean Journal of Applied Statistics, Vol. 19 No. 3, pp. 587-598, 2006
  • As a way of overcoming the high-dimension, small-sample-size problem that arises when discriminant analysis is applied to microarray data, this paper proposes a nonparametric local linear logistic classification that uses orthogonal components as new feature variables. The proposed method is based on local likelihood and can be applied to multi-class classification; the orthogonal components considered are principal component factors, partial least squares factors, and factor analysis factors. Applied to two representative real microarray datasets, the method using partial least squares factors as feature variables showed better classification performance than classical statistical discriminant analysis.

Comparing Classification Accuracy of Ensemble and Clustering Algorithms Based on Taguchi Design

  • 신형원; 손소영
    • Journal of the Korean Institute of Industrial Engineers, Vol. 27 No. 1, pp. 47-53, 2001
  • In this paper, we compare the classification performances of ensemble and clustering algorithms (Data Bagging, Variable Selection Bagging, Parameter Combining, Clustering) to logistic regression, in consideration of various characteristics of the input data. Four factors used to simulate the logistic model are (1) correlation among input variables, (2) variance of observations, (3) training data size, and (4) the input-output function. In view of the unknown relationship between input and output, we use a Taguchi design to improve the practicality of our study results by treating the input-output function as a noise factor. The experimental results indicate the following: when the level of variance is medium, Bagging and Parameter Combining perform worse than Logistic Regression, Variable Selection Bagging, and Clustering. However, the classification performances of Logistic Regression, Variable Selection Bagging, Bagging, and Clustering do not differ significantly when the variance of the input data is either small or large. When there is strong correlation among input variables, Variable Selection Bagging outperforms both Logistic Regression and Parameter Combining. In general, and to our disappointment, the Parameter Combining algorithm performs worst.

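The "Data Bagging" scheme compared in the study above can be sketched minimally: fit a logistic regression on each bootstrap resample and average the member probabilities. The 1-D synthetic data and all parameter values below are illustrative, not the study's actual simulation setup.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.5, iters=500):
    """1-D logistic regression (intercept + slope) by gradient ascent."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

def bagged_predict(models, x):
    """Average the member probabilities, then threshold at 0.5."""
    p = sum(1.0 / (1.0 + math.exp(-(b0 + b1 * x))) for b0, b1 in models)
    return int(p / len(models) >= 0.5)

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(2, 1) for _ in range(100)]
ys = [0] * 100 + [1] * 100
models = []
for _ in range(25):                       # 25 bootstrap replicates
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    models.append(fit_logistic([xs[i] for i in idx], [ys[i] for i in idx]))
accuracy = sum(bagged_predict(models, x) == y for x, y in zip(xs, ys)) / len(xs)
```

Averaging over resamples mainly reduces the variance of the fitted classifier, which is why the studies above examine its benefit under different variance and correlation settings.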

Performance Comparison of Mahalanobis-Taguchi System and Logistic Regression: A Case Study

  • 이승훈; 임근
    • Journal of the Korean Institute of Industrial Engineers, Vol. 39 No. 5, pp. 393-402, 2013
  • The Mahalanobis-Taguchi System (MTS) is a diagnostic and predictive method for multivariate data. In the MTS, the Mahalanobis space (MS) of the reference group is obtained using the standardized variables of normal data, and this Mahalanobis space can be used for multi-class classification. Once the MS is established, the useful set of variables is identified to assist in model analysis or diagnosis using orthogonal arrays and signal-to-noise ratios. Several other techniques have also been used for classification, such as linear discriminant analysis, logistic regression, decision trees, and neural networks. The goal of this case study is to compare the classification ability of the Mahalanobis-Taguchi System and logistic regression on a common dataset.
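The Mahalanobis-distance step at the core of the MTS can be sketched as follows. This is a simplified illustration on synthetic data: the reference ("normal") group defines a mean and covariance, and observations far from it in Mahalanobis distance are flagged. The orthogonal-array variable screening of the full MTS is omitted.

```python
import numpy as np

def mahalanobis_classifier(reference, threshold):
    """Flag observations whose Mahalanobis distance from the
    reference ("normal") group exceeds the given threshold."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    def is_abnormal(x):
        d = x - mu
        # squared Mahalanobis distance d' S^{-1} d
        return float(d @ cov_inv @ d) > threshold ** 2
    return is_abnormal

rng = np.random.default_rng(0)
normal_group = rng.normal(0.0, 1.0, (200, 2))   # reference group only
clf = mahalanobis_classifier(normal_group, threshold=3.0)
flag_far = clf(np.array([10.0, 10.0]))          # far outside the reference cloud
flag_near = clf(np.array([0.1, -0.2]))          # well inside it
```

Note that, unlike logistic regression, only the reference group is used for fitting; abnormal data enter only when choosing the threshold.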

Comparison of Classification Models for Sequential Flight Test Results

  • 손소영; 조용관; 최성옥; 김영준
    • Journal of the Ergonomics Society of Korea, Vol. 21 No. 1, pp. 1-14, 2002
  • The main purpose of this paper is to present selection criteria for ROK Air Force pilot training candidates in order to save the costs involved in sequential pilot training. We use classification models such as Decision Tree, Logistic Regression, and Neural Network, based on the aptitude test results of 288 ROK Air Force applicants in 1994-1996. The models are compared in terms of classification accuracy, ROC, and lift value. The neural network is evaluated as the best model for each sequential flight test result, while the logistic regression model outperforms the rest in discriminating the last flight test result. We therefore suggest a pilot selection criterion based on this logistic regression. Overall, we find that factors such as Attention Sharing, Speed Tracking, Machine Comprehension, and Instrument Reading Ability have significant effects on the flight results. We expect that the use of our criteria can increase the effectiveness of flight resources.

Neural Networks and Logistic Models for Classification: A Case Study

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society, Vol. 7 No. 1, pp. 13-19, 1996
  • In this paper, we study and compare two types of methods for classification when both continuous and categorical variables are used to describe each individual. One is a neural network (NN) method using backpropagation learning (BPL); the other is a logistic model (LM) method. Both the NN and the LM are based on projections of the data in directions determined from interconnection weights.


Receiver Operating Characteristic (ROC) Curves Using Neural Network in Classification

  • Lee, Jea-Young; Lee, Yong-Won
    • Journal of the Korean Data and Information Science Society, Vol. 15 No. 4, pp. 911-920, 2004
  • We construct receiver operating characteristic (ROC) curves using neural networks with logistic functions. The models arise from classifying normal (diseased) and abnormal (nondiseased) groups in medical research. A few goodness-of-fit test statistics using normality curves are discussed, and the performances of the logistic-function neural networks are evaluated.

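An ROC curve of the kind used in the paper above can be traced from scored cases by sweeping the decision threshold. The scores and labels below are synthetic, not the paper's data.

```python
def roc_points(scores, labels):
    """Sweep decision thresholds over the scores; return (FPR, TPR) pairs."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = [(0.0, 0.0)]
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / neg, tp / pos))
    return pts

def auc(points):
    """Trapezoidal area under the ROC curve."""
    return sum((x1 - x0) * (y0 + y1) / 2.0
               for (x0, y0), (x1, y1) in zip(points, points[1:]))

# Synthetic scores: diseased cases (label 1) tend to score higher.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   0,   1,   0,   0,   0]
curve = roc_points(scores, labels)
area = auc(curve)
```

The resulting area under the curve summarizes how well the scores separate the two groups, independent of any single cutoff.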

A Study of Freshman Dropout Prediction Model Using Logistic Regression with Shift-Sigmoid Classification Function

  • 김동형
    • Journal of the Korea Society of Digital Industry and Information Management, Vol. 19 No. 4, pp. 137-146, 2023
  • The dropout of university freshmen is a very important issue for the finances of universities. Moreover, the dropout rate is one of the important indicators among the external evaluation items of universities. Universities therefore need to predict dropout-prone students in advance and apply various dropout prevention programs to them. This paper proposes a method for predicting such students in advance: logistic regression with a shift-sigmoid classification function, applied only to the quantitative data from the first semester of the first year, which most universities have. Because the method is based on logistic regression and uses the shift sigmoid as the classification function, the number of predicted subjects and the prediction accuracy can be tuned. In the experiments, the number of predicted dropout subjects varied from 100% to 20% of the actual number of dropout subjects, with a prediction accuracy ranging from 75% to 98%.
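The shift-sigmoid idea can be illustrated minimally: shifting the sigmoid's argument by s moves the decision boundary, trading the number of flagged students against accuracy. The linear predictors and shift value below are hypothetical, not taken from the paper.

```python
import math

def shift_sigmoid(z, s):
    """Sigmoid evaluated at z - s: s > 0 flags fewer cases, s < 0 flags more."""
    return 1.0 / (1.0 + math.exp(-(z - s)))

def flag_dropouts(linear_scores, s, cutoff=0.5):
    """Apply the shifted sigmoid as the classification function."""
    return [int(shift_sigmoid(z, s) >= cutoff) for z in linear_scores]

# Hypothetical fitted linear predictors (x'beta) for five students.
scores = [-2.0, -0.5, 0.2, 1.0, 2.5]
n_plain  = sum(flag_dropouts(scores, s=0.0))   # ordinary logistic decision
n_strict = sum(flag_dropouts(scores, s=1.5))   # positive shift flags fewer
```

With the 0.5 cutoff this is equivalent to thresholding the linear predictor at s, so sweeping s lets a university size the flagged group to match its intervention capacity.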