• Title/Summary/Keyword: classifier ensemble

A Comparative Study of Phishing Websites Classification Based on Classifier Ensemble

  • Tama, Bayu Adhi;Rhee, Kyung-Hyune
    • 한국멀티미디어학회논문지 / Vol. 21, No. 5 / pp.617-625 / 2018
  • Phishing websites have become a crucial concern in cyber security. Phishing is carried out by fraudulently deceiving users with the aim of obtaining sensitive information such as bank account details, credit card numbers, usernames, and passwords. The threat has led to huge losses for online retailers, e-business platforms, and financial institutions, among others. One way to build an anti-phishing detection mechanism is to construct a classification algorithm based on machine learning techniques. The objective of this paper is to compare different classifier ensemble approaches, i.e. random forest, rotation forest, gradient boosted machine, and extreme gradient boosting, against single classifiers, i.e. decision tree, classification and regression tree, and credal decision tree, for website phishing detection. The area under the ROC curve (AUC) is employed as the performance metric, whilst statistical tests are used to evaluate the significance of the differences among classifiers. The paper contributes to the existing literature by providing a benchmark of classifier ensembles for web phishing detection.
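
As a rough illustration of the kind of comparison this paper makes, the sketch below scores a single decision tree against two ensembles by cross-validated AUC using scikit-learn. The synthetic data, the 10-fold setup, and the model parameters are stand-ins, not the paper's dataset or configuration (which also covers rotation forest, extreme gradient boosting, and credal decision trees).

```python
# Hypothetical single-vs-ensemble comparison by AUC; make_classification
# stands in for a real phishing-website feature set.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    # 10-fold cross-validated area under the ROC curve
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```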

Multi-classifier Fusion Based Facial Expression Recognition Approach

  • Jia, Xibin;Zhang, Yanhua;Powers, David;Ali, Humayra Binte
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 1 / pp.196-212 / 2014
  • Facial expression recognition is an important part of emotional interaction between humans and machines. This paper proposes a facial expression recognition approach based on multi-classifier fusion with a stacking algorithm. The kappa-error diagram is employed to select the base-level classifiers: it gives insight into which individual classifiers perform best and how diverse they are, so that complementary classifiers can be fused to improve the recognition accuracy. To avoid the influence of chance agreement in algorithm evaluation and to obtain a more reliable picture of algorithm performance, kappa and informedness are used as evaluation criteria alongside accuracy in the comparison experiments. To verify the effectiveness of our approach, two public databases are used in the experiments. The results show that, compared with individual classifiers and two other typical ensemble methods, the proposed stacked ensemble recognizes facial expressions more accurately and with a smaller standard deviation. It overcomes the individual classifiers' bias and achieves more reliable recognition results.
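
A minimal sketch of the stacking idea, assuming scikit-learn's StackingClassifier as the fusion mechanism and synthetic data in place of the expression databases; the base learners, the logistic-regression meta-learner, and the three-class labels are illustrative choices. Cohen's kappa is reported alongside accuracy as in the paper's evaluation (informedness is omitted here).

```python
# Hypothetical stacked ensemble with diverse base-level classifiers.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score

X, y = make_classification(n_samples=600, n_features=40, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base-level classifiers feed their outputs to a meta-learner (stacking).
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("kappa   :", cohen_kappa_score(y_te, pred))
```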

Context-aware Video Surveillance System

  • An, Tae-Ki;Kim, Moon-Hyun
    • Journal of Electrical Engineering and Technology / Vol. 7, No. 1 / pp.115-123 / 2012
  • A video analysis system used to detect events in video streams generally involves several processes, including object detection, analysis of object trajectories, and recognition of the trajectories by comparison with an a priori trained model. However, these processes do not work well in complex environments with many occlusions, mirror effects, and/or shadow effects. We propose a new approach to a context-aware video surveillance system that detects predefined contexts in video streams. The proposed system consists of two modules: a feature extractor and a context recognizer. The feature extractor calculates the moving energy, which represents the amount of moving objects in a video stream, and the stationary energy, which represents the amount of still objects. We represent situations and events as motion changes and stationary energy in video streams. The context recognizer determines whether predefined contexts are present in a video stream using the moving and stationary energies extracted by the feature extractor. To train each context model and recognize predefined contexts, we propose and use DAdaBoost, a new ensemble classifier based on AdaBoost, one of the best-known ensemble classifier algorithms. The proposed approach is expected to be robust in more complex environments that exhibit mirror and/or shadow effects.
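
The sketch below illustrates the feature-extractor side on synthetic frames: moving and stationary energies computed from frame differences feed a boosted context recognizer. The thresholds, the per-clip averaging, the placeholder labels, and the use of scikit-learn's plain AdaBoost in place of the paper's DAdaBoost variant are all assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def energies(frames, motion_thresh=15, still_thresh=5):
    """frames: array of shape (T, H, W), grayscale intensities."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    moving = (diffs > motion_thresh).mean(axis=(1, 2))     # moving energy per frame
    stationary = (diffs < still_thresh).mean(axis=(1, 2))  # stationary energy per frame
    return np.column_stack([moving, stationary])

# Toy training data: each clip is summarized by its mean energies.
rng = np.random.default_rng(0)
clips = [rng.integers(0, 255, size=(30, 64, 64)) for _ in range(40)]
X = np.array([energies(c).mean(axis=0) for c in clips])
y = rng.integers(0, 2, size=len(clips))   # placeholder context labels

recognizer = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
```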

부도예측을 위한 KNN 앙상블 모형의 동시 최적화 (Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction)

  • 민성환
    • 지능정보연구 / Vol. 22, No. 1 / pp.139-157 / 2016
  • An ensemble classifier combines multiple classifiers in order to achieve better performance than any individual classifier, and such ensembles are known to be very useful for improving the generalization performance of a single classifier. The random subspace ensemble method randomly selects, for each base classifier, a subset of the original input variables, thereby diversifying the base classifiers. A random subspace ensemble with k-nearest neighbor (KNN) base classifiers is known to be effective at improving on the performance of a single model, and its performance depends strongly on the input-variable subset randomly chosen for each base classifier and on the value of the KNN parameter k. However, while previous studies have addressed the optimal choice of k or of the input-variable subset for a single model, no prior work has optimized them jointly within a KNN-based ensemble. This study therefore proposes a new ensemble model that simultaneously optimizes the k parameter and the input-variable subset of each base classifier in order to improve the performance of a KNN-based ensemble. The proposed method uses a genetic algorithm to search, for each KNN base classifier constituting the ensemble, the value of k and the input variables that yield the best ensemble performance. To validate the proposed model, various experiments were conducted on bankruptcy data of Korean firms, and the results showed that the proposed model is more effective than existing ensemble models at diversifying the base classifiers and improving prediction performance.
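
A minimal sketch of the proposed idea under stated assumptions: each ensemble member is encoded as a pair (k, feature mask), and all members are tuned jointly for the validation accuracy of their majority vote. A simple mutation-only search stands in for the paper's full genetic algorithm, and synthetic data replaces the Korean bankruptcy dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=30, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)
N_MEMBERS, N_FEAT = 10, X.shape[1]

def random_member():
    return (rng.integers(1, 16), rng.random(N_FEAT) < 0.5)   # (k, feature mask)

def ensemble_accuracy(members):
    votes = []
    for k, mask in members:
        if not mask.any():            # guard against an empty feature subset
            mask = mask.copy()
            mask[0] = True
        clf = KNeighborsClassifier(n_neighbors=int(k)).fit(X_tr[:, mask], y_tr)
        votes.append(clf.predict(X_va[:, mask]))
    majority = (np.mean(votes, axis=0) > 0.5).astype(int)    # majority vote
    return (majority == y_va).mean()

def mutate(members):
    out = []
    for k, mask in members:
        k = int(np.clip(k + rng.integers(-2, 3), 1, 15))      # perturb k
        mask = mask.copy()
        flip = rng.random(N_FEAT) < 0.1                        # flip some features
        mask[flip] = ~mask[flip]
        out.append((k, mask))
    return out

best = [random_member() for _ in range(N_MEMBERS)]
best_fit = ensemble_accuracy(best)
for _ in range(20):                   # mutation-only evolutionary loop
    cand = mutate(best)
    fit = ensemble_accuracy(cand)
    if fit > best_fit:
        best, best_fit = cand, fit
print("validation accuracy of optimized KNN ensemble:", round(best_fit, 3))
```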

머신러닝을 활용한 모돈의 생산성 예측모델 (Forecasting Sow's Productivity using the Machine Learning Models)

  • 이민수;최영찬
    • 농촌지도와개발 / Vol. 16, No. 4 / pp.939-965 / 2009
  • Machine learning has been identified as a promising approach to knowledge-based system development. This study examines the ability of machine learning techniques to support farmers' decision making and develops a reference model for using pig farm data. We compared five machine learning techniques: logistic regression, decision tree, artificial neural network, k-nearest neighbor, and ensemble. All models performed well in predicting the sow's productivity across all parities, with over 87.6% predictability. The predictability of total litter size is highest, at 91.3%, in the third parity and decreases as parity increases. The ensemble performed well in predicting the sow's productivity. The neural network and logistic regression are excellent classifiers for all parities, whereas the decision tree and the k-nearest neighbor are not. Performance varies across models, with up to a 104% difference in lift values. The artificial neural network and ensemble models yield the highest lift values, implying the best performance among the models.
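
A minimal sketch of the five-technique comparison on stand-in tabular data; the real study used pig farm records, parity-specific targets, and lift as an additional metric, none of which are reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier

X, y = make_classification(n_samples=800, n_features=20, random_state=0)

models = {
    "logistic":   LogisticRegression(max_iter=1000),
    "tree":       DecisionTreeClassifier(random_state=0),
    "neural_net": MLPClassifier(max_iter=2000, random_state=0),
    "knn":        KNeighborsClassifier(),
}
# Simple plurality-vote ensemble of the four single models.
models["ensemble"] = VotingClassifier([(n, m) for n, m in models.items()],
                                      voting="hard")

for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```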

An Ensemble Classifier using Two Dimensional LDA

  • Park, Cheong-Hee
    • 한국멀티미디어학회논문지 / Vol. 13, No. 6 / pp.817-824 / 2010
  • Linear Discriminant Analysis (LDA) has been successfully applied for dimension reduction in face recognition. However, LDA requires the transformation of a face image into a one-dimensional vector, and this process can discard the correlation information among neighboring pixels. On the other hand, 2D-LDA uses 2D images directly without such a transformation and has been shown to be superior to traditional LDA. Nevertheless, 2D-LDA has some problems. First, it is difficult to determine the optimal number of feature vectors in the reduced-dimensional space. Second, the size of the rectangular windows used in 2D-LDA has a strong impact on classification accuracy, but there is no reliable way to determine an optimal window size. In this paper, we propose a new algorithm to overcome these problems. We adopt an ensemble approach that combines several classifiers obtained by using various window sizes, and we also present a practical method to determine the number of feature vectors. Experimental results demonstrate that the proposed method overcomes the difficulties of choosing an optimal window size and the number of feature vectors.
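
A minimal sketch of unilateral (right-projection) 2D-LDA followed by a small voting ensemble. Random matrices stand in for face images, and varying the number of retained feature vectors stands in for the paper's varying window sizes, since the exact window scheme is not detailed in the abstract.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_2dlda(images, labels):
    """images: (N, m, n) array. Returns an n x n matrix whose columns are the
    2D-LDA projection vectors, sorted by discriminative power."""
    classes = np.unique(labels)
    M = images.mean(axis=0)
    n = images.shape[2]
    Sb, Sw = np.zeros((n, n)), np.zeros((n, n))
    for c in classes:
        Ac = images[labels == c]
        Mc = Ac.mean(axis=0)
        Sb += len(Ac) * (Mc - M).T @ (Mc - M)        # between-class scatter
        for A in Ac:
            Sw += (A - Mc).T @ (A - Mc)              # within-class scatter
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order].real

rng = np.random.default_rng(0)
imgs = rng.normal(size=(120, 16, 16))       # stand-in face images (N, m, n)
y = rng.integers(0, 4, size=120)            # stand-in identity labels
tr, te = np.arange(0, 90), np.arange(90, 120)

W = fit_2dlda(imgs[tr], y[tr])
votes = []
for d in (2, 4, 6):                          # ensemble over reduced dimensions
    F_tr = (imgs[tr] @ W[:, :d]).reshape(len(tr), -1)
    F_te = (imgs[te] @ W[:, :d]).reshape(len(te), -1)
    clf = KNeighborsClassifier(n_neighbors=1).fit(F_tr, y[tr])
    votes.append(clf.predict(F_te))
votes = np.array(votes)
# plurality vote across the three 2D-LDA classifiers
pred = np.array([np.bincount(col).argmax() for col in votes.T])
print("ensemble accuracy:", (pred == y[te]).mean())
```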

GA-SVM Ensemble 모델에서의 accuracy와 diversity를 고려한 feature subset population 선택 (Feature Subset Population Selection Considering Accuracy and Diversity in GA-SVM Ensemble Models)

  • 성기석;조성준
    • 한국경영과학회:학술대회논문집 / 한국경영과학회·대한산업공학회 2005년도 춘계공동학술대회 발표논문 / pp.614-620 / 2005
  • In an ensemble, feature selection increases diversity by training each classifier on a different set of variables, which generally improves performance. One of the methods used for feature selection is the genetic algorithm (GA), and GA-SVM is a GA-based wrapper feature selection mechanism that has shown good performance in building response models and keystroke-dynamics identity verification models. However, because it cannot guarantee diversity among the candidates in the population, a heuristic parameter setting was required to balance the accuracy and diversity of the classifiers, and this parameter had to be tuned. Building on the GA-SVM algorithm, we introduce a fitness function that considers both accuracy and diversity when evaluating the candidates in the population, removing the additional classifier-selection step while maintaining performance. As a result, the complexity of the algorithm was reduced while the model's performance was preserved.
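
A minimal sketch of the combined fitness idea: each candidate feature subset is scored by the accuracy of its SVM plus its average disagreement with the other candidates' predictions. The 0.7/0.3 weighting, the disagreement measure, and the synthetic data are assumptions, and the surrounding GA loop (selection, crossover, mutation) is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=25, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

# Population of candidate feature masks, one SVM per candidate.
population = [rng.random(X.shape[1]) < 0.5 for _ in range(12)]
preds, accs = [], []
for mask in population:
    clf = SVC().fit(X_tr[:, mask], y_tr)
    p = clf.predict(X_va[:, mask])
    preds.append(p)
    accs.append((p == y_va).mean())

def fitness(i, alpha=0.7):
    # diversity: mean pairwise disagreement with every other population member
    others = [np.mean(preds[i] != preds[j]) for j in range(len(preds)) if j != i]
    return alpha * accs[i] + (1 - alpha) * np.mean(others)

scores = [fitness(i) for i in range(len(population))]
print("best candidate:", int(np.argmax(scores)), "fitness:", round(max(scores), 3))
```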

Double-Bagging Ensemble Using WAVE

  • Kim, Ahhyoun;Kim, Minji;Kim, Hyunjoong
    • Communications for Statistical Applications and Methods / Vol. 21, No. 5 / pp.411-422 / 2014
  • A classification ensemble method aggregates different classifiers obtained from training data to classify new data points. Voting algorithms are typical tools for summarizing the outputs of the classifiers in an ensemble. WAVE, proposed by Kim et al. (2011), is a weight-adjusted voting algorithm for classifier ensembles that uses an optimal weight vector. In this study, we applied the WAVE algorithm to the double-bagging method (Hothorn and Lausen, 2003) when constructing an ensemble, to see whether any significant performance improvement can be achieved. The results showed that double-bagging with the WAVE algorithm performs better than other ensemble methods that employ plurality voting. In addition, double-bagging with the WAVE algorithm is comparable to the random forest ensemble method when the ensemble size is large.
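
A minimal sketch of the double-bagging construction: each tree is trained on a bootstrap sample whose predictors are augmented with LDA scores fitted on the out-of-bag cases. Out-of-bag accuracy supplies the voting weights here as a simplified stand-in for the WAVE weight vector, which Kim et al. (2011) compute differently; the data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=15, random_state=0)
rng = np.random.default_rng(0)
members, weights = [], []

def augment(Z, lda):
    """Append the LDA discriminant score as an extra predictor column."""
    return np.column_stack([Z, lda.decision_function(Z)])

for _ in range(25):
    boot = rng.integers(0, len(X), len(X))                  # bootstrap indices
    oob = np.setdiff1d(np.arange(len(X)), boot)             # out-of-bag indices
    lda = LinearDiscriminantAnalysis().fit(X[oob], y[oob])  # LDA on out-of-bag cases
    tree = DecisionTreeClassifier(random_state=0).fit(augment(X[boot], lda), y[boot])
    members.append((lda, tree))
    # out-of-bag accuracy as a simplified voting weight (stand-in for WAVE)
    weights.append((tree.predict(augment(X[oob], lda)) == y[oob]).mean())

def predict(Z):
    votes = sum(w * tree.predict(augment(Z, lda))
                for (lda, tree), w in zip(members, weights))
    return (votes / sum(weights) > 0.5).astype(int)

print("training accuracy:", (predict(X) == y).mean())
```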

유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택 (Optimal Selection of Classifier Ensemble Using Genetic Algorithms)

  • 김명종
    • 지능정보연구 / Vol. 16, No. 4 / pp.99-112 / 2010
  • Ensemble learning is a machine learning technique proposed to improve the performance of classification and prediction algorithms. However, it has been pointed out that when the diversity of the base classifiers is insufficient, the performance gain of ensemble learning is marginal and performance may even deteriorate, owing to the multicollinearity problem. In this study, we propose a genetic-algorithm-based coverage optimization technique to secure the diversity of the base classifiers and to enhance the performance gain of ensemble learning. Applying the proposed optimization technique to an artificial neural network ensemble for corporate bankruptcy prediction showed that the diversity of the base classifiers was secured and the performance of the neural network ensemble improved significantly.
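
A minimal sketch of the selection problem under stated assumptions: a binary chromosome marks which pre-trained neural networks enter the ensemble, and candidates are scored by the validation accuracy of their majority vote. A plain random search stands in for the genetic algorithm, and synthetic data replaces the corporate bankruptcy set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

# Pool of base neural networks trained on bootstrap samples for diversity.
pool_preds = []
for _ in range(15):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    pool_preds.append(net.fit(X_tr[idx], y_tr[idx]).predict(X_va))
pool_preds = np.array(pool_preds)

def fitness(chromosome):
    """Validation accuracy of the majority vote over the selected members."""
    if chromosome.sum() == 0:
        return 0.0
    vote = (pool_preds[chromosome].mean(axis=0) > 0.5).astype(int)
    return (vote == y_va).mean()

best = max((rng.random(15) < 0.5 for _ in range(200)), key=fitness)
print("selected members:", np.flatnonzero(best),
      "fitness:", round(fitness(best), 3))
```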

랜덤화 배깅을 이용한 재무 부실화 예측 (Randomized Bagging for Bankruptcy Prediction)

  • 민성환
    • 한국IT서비스학회지 / Vol. 15, No. 1 / pp.153-166 / 2016
  • Ensemble classification combines individually trained classifiers in order to improve prediction accuracy over individual classifiers. Ensemble techniques have been shown to be very effective in improving the generalization ability of a classifier, but the base classifiers need to be as accurate and diverse as possible to enhance the generalization ability of an ensemble model. Bagging is one of the most popular ensemble methods. In bagging, different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. In this study, we propose Randomized Bagging (RBagging), a new bagging-variant ensemble model that improves on the standard bagging ensemble model. The proposed model was applied to the bankruptcy prediction problem using a real data set, and the results were compared with those of other models. The experimental results showed that the proposed model outperformed the standard bagging model.
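
A minimal sketch of the standard bagging baseline against which RBagging is evaluated; the abstract does not specify how RBagging's sampling differs, so only the baseline is shown, on imbalanced synthetic data standing in for the bankruptcy dataset.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import BaggingClassifier

# Imbalanced two-class data as a stand-in for bankrupt vs. non-bankrupt firms.
X, y = make_classification(n_samples=600, n_features=20, weights=[0.85],
                           random_state=0)

# Standard bagging: bootstrap samples of the training data, one decision tree
# (the default base estimator) per sample, plurality voting at prediction time.
bagging = BaggingClassifier(n_estimators=100, random_state=0)
print("bagging mean AUC:",
      cross_val_score(bagging, X, y, cv=5, scoring="roc_auc").mean())
```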