• Title/Summary/Keyword: Bagging (배깅)

Machine Learning-based Gesture Recognition Technology for Intelligent IoT Services (지능형 IoT서비스를 위한 기계학습 기반 동작 인식 기술)

  • Choe, Dae-Ung;Jo, Hyeon-Jung
    • The Proceeding of the Korean Institute of Electromagnetic Engineering and Science / v.27 no.4 / pp.19-28 / 2016
  • Recently, as wireless sensing network technologies such as RFID, sensing devices for object tracking, and various computing resources have developed rapidly, the existing web has naturally evolved from the social web into the ubiquitous computing web. In the ubiquitous computing web, the Internet of Things (IoT) can replace the conventional computer; this means that the network connecting a person with surrounding objects expands while the amount of data generated within that network grows exponentially. For more intelligent IoT services, therefore, a person's intention and situation must be identified accurately and in real time from a huge amount of raw data. Gesture recognition for interacting with objects requires no direct contact, so it has the potential to be applied to future human-object interaction. Meanwhile, recent machine learning algorithms often surpass human cognitive ability on a variety of problems; among them, the Decision Forest, built on decision trees, shows superior performance across the board, including classification and regression. This paper therefore surveys gesture recognition techniques for intelligent IoT services and introduces the basic concepts of the Decision Forest along with the training and testing needed to implement it. In particular, the three representative learning methods, bagging (Bagging), boosting (Boosting), and Random Forest, are introduced, and their characteristics for gesture recognition are examined based on existing research results.
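
A minimal sketch of the three learning methods named in the abstract (bagging, boosting, Random Forest), built on decision trees with scikit-learn; the synthetic dataset here is only a stand-in for gesture-recognition features.

```python
# Compare bagging, boosting, and a random forest of decision trees
# on a synthetic classification task (stand-in for gesture features).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, random_state=0)

models = {
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0),
    "boosting": AdaBoostClassifier(n_estimators=100, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```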

Ensemble learning of Regional Experts (지역 전문가의 앙상블 학습)

  • Lee, Byung-Woo;Yang, Ji-Hoon;Kim, Seon-Ho
    • Journal of KIISE: Computing Practices and Letters / v.15 no.2 / pp.135-139 / 2009
  • We present a new ensemble learning method that employs a set of regional experts, each of which learns to handle a subset of the training data. We split the training data and generate experts for different regions in the feature space. When classifying a data instance, we apply weighted voting among the experts whose regions include the instance. We used ten datasets to compare the performance of our new ensemble method with that of single classifiers as well as other ensemble methods such as Bagging and AdaBoost. We used SMO, Naive Bayes, and C4.5 as base learning algorithms. As a result, we found that the performance of our method is comparable to that of AdaBoost and Bagging when the base learner is C4.5; in the remaining cases, our method outperformed the benchmark methods.
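
A rough sketch of the regional-experts idea under stated assumptions: the feature space is partitioned with k-means (the paper does not specify this partitioning), one decision-tree expert is trained per region, and a test point is classified by the expert of its region; the paper's weighted voting over overlapping regions is simplified away.

```python
# Hypothetical regional-experts sketch: partition the feature space,
# train one expert per region, classify each point with its region's expert.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

k = 5
regions = KMeans(n_clusters=k, random_state=0).fit(X_tr)
experts = []
for r in range(k):
    mask = regions.labels_ == r
    experts.append(DecisionTreeClassifier(random_state=0).fit(X_tr[mask], y_tr[mask]))

region_of = regions.predict(X_te)
y_pred = np.array([experts[r].predict(x.reshape(1, -1))[0] for r, x in zip(region_of, X_te)])
print("accuracy:", (y_pred == y_te).mean())
```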

Generating Korean Energy Contours Using Vector-regression Tree (벡터 회귀 트리를 이용한 한국어 에너지 궤적 생성)

  • 이상호;오영환
    • The Journal of the Acoustical Society of Korea / v.22 no.4 / pp.323-328 / 2003
  • This study describes an energy contour generation method for Korean speech synthesis systems. We propose a vector-regression tree, a vector version of the scalar regression tree, which predicts a response vector for an unknown feature vector. In our study, the tree yields a vector containing ten sampled energy values for each phone. After collecting 500 sentences and the corresponding speech corpus, we trained trees on 300 sentences and tested them on 200 sentences. We construct a bagged tree and a born-again tree to improve the performance of contour prediction. In the experiment, we obtained a correlation coefficient of 0.803 between the observed and predicted energy values.
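
A minimal sketch, assuming scikit-learn, of predicting a ten-dimensional energy vector per phone with bagged regression trees; the random feature matrix stands in for the phone-level features used in the paper.

```python
# Bagged regression trees for vector-valued targets:
# one bagged ensemble per output dimension via MultiOutputRegressor.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))   # phone-level feature vectors (stand-in)
Y = rng.normal(size=(500, 10))   # ten sampled energy values per phone

model = MultiOutputRegressor(BaggingRegressor(DecisionTreeRegressor(max_depth=8), n_estimators=50))
model.fit(X, Y)
print(model.predict(X[:1]).shape)   # (1, 10)
```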

Prediction of electricity consumption in A hotel using ensemble learning with temperature (앙상블 학습과 온도 변수를 이용한 A 호텔의 전력소모량 예측)

  • Kim, Jaehwi;Kim, Jaehee
    • The Korean Journal of Applied Statistics / v.32 no.2 / pp.319-330 / 2019
  • Forecasting electricity consumption by analyzing past electricity consumption is advantageous for energy planning and policy. Machine learning is widely used to predict electricity consumption; among machine learning approaches, ensemble learning avoids overfitting and reduces variance to improve prediction accuracy. However, when applied to daily data, ensemble learning tends to predict values near the center and fails to capture peaks, due to the characteristics of ensemble learning. In this study, we overcome this shortcoming of ensemble learning by considering the temperature trend. We compare nine models and propose a model using random forest with the linear trend of temperature.
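
A hedged sketch of one way to pair a linear temperature trend with a random forest, not necessarily the paper's exact specification: here the linear part is fit on temperature and the forest models the residuals, using synthetic daily data.

```python
# Linear temperature trend + random forest on residuals (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_days = 365
temp = 15 + 10 * np.sin(np.arange(n_days) * 2 * np.pi / 365) + rng.normal(0, 2, n_days)
load = 900 + 12 * np.abs(temp - 18) + rng.normal(0, 30, n_days)   # synthetic consumption

X = np.column_stack([temp, np.arange(n_days) % 7])   # temperature + day-of-week index

linear = LinearRegression().fit(temp.reshape(-1, 1), load)        # linear trend in temperature
residual = load - linear.predict(temp.reshape(-1, 1))
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, residual)

pred = linear.predict(temp.reshape(-1, 1)) + forest.predict(X)
print("train RMSE:", np.sqrt(np.mean((pred - load) ** 2)))
```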

PE file malware detection using opcode and IAT (Opcode와 IAT를 활용한 PE 파일 악성코드 탐지)

  • JeongHun Lee;Ah Reum Kang
    • Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.103-106 / 2023
  • Due to the COVID-19 pandemic, working environments have shifted to remote work, and variants of malware are also evolving rapidly. Once a piece of malware is analyzed and an antivirus program is created, new variants appear, and until an antivirus program for the variant is produced, the variant remains a threat to users. In this study, we present a method that uses machine learning algorithms to predict whether a file is malicious. Portable Executable (PE) files, which have the typical structure of malware, were statically analyzed with Python's LIEF library for three kinds of features: Certificate, Imports, and Opcode. The training data consisted of 320 benign files and 530 malicious files. For the Certificate feature set we used hasSignature (digital signature information), isValidcertificate (validity of the digital signature), and isNotExpired (validity of the certificate); for Imports, we built a feature set by comparing the frequencies of functions in the Import Address Table; and for Opcode, we extracted tri-grams and built a feature set by comparing their frequencies. The test data consisted of 360 benign files and 610 malicious files. Using these feature sets, we compared the performance of four machine learning algorithms, random forest, decision tree, bagging, and AdaBoost, and the bagging algorithm achieved an accuracy of about 0.98.
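
A hedged sketch of only the import-frequency part of the pipeline described above, assuming LIEF's PE parsing API (lief.parse, Binary.imports); the file paths and labels are placeholders, and the certificate and opcode tri-gram features are omitted.

```python
# Count imported function names per PE file and train a bagging classifier.
import lief
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

def import_counts(path):
    pe = lief.parse(path)
    counts = {}
    for lib in pe.imports:              # imported DLLs
        for entry in lib.entries:       # imported functions
            if entry.name:
                counts[entry.name] = counts.get(entry.name, 0) + 1
    return counts

paths = ["benign/a.exe", "malware/b.exe"]   # placeholder file lists
labels = [0, 1]                             # 0 = benign, 1 = malicious

vec = DictVectorizer(sparse=False)
X = vec.fit_transform([import_counts(p) for p in paths])
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)
clf.fit(X, labels)
```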

A New Ensemble Machine Learning Technique with Multiple Stacking (다중 스태킹을 가진 새로운 앙상블 학습 기법)

  • Lee, Su-eun;Kim, Han-joon
    • The Journal of Society for e-Business Studies / v.25 no.3 / pp.1-13 / 2020
  • Machine learning refers to a model generation technique that can solve specific problems through a generalization process over given data. In order to generate a high-performance model, high-quality training data and learning algorithms for the generalization process should be prepared. As one way of improving the performance of the learned model, ensemble techniques generate multiple models rather than a single model; these include bagging, boosting, and stacking. This paper proposes a new ensemble technique with multiple stacking that outperforms the conventional stacking technique. The learning structure of the multiple stacking ensemble is similar to that of deep learning: each layer is composed of a combination of stacking models, and the number of layers is increased so as to minimize the misclassification rate of each layer. Through experiments using four types of datasets, we show that the proposed method outperforms the existing ones.
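
A minimal single-layer stacking sketch using scikit-learn's StackingClassifier; the paper's multiple-stacking method stacks further layers on top of this, which is not reproduced here.

```python
# One layer of stacking: base classifiers feed a logistic-regression meta-learner.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
base = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
    ("nb", GaussianNB()),
]
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```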

Developing an Ensemble Classifier for Bankruptcy Prediction (부도 예측을 위한 앙상블 분류기 개발)

  • Min, Sung-Hwan
    • Journal of Korea Society of Industrial Information Systems / v.17 no.7 / pp.139-148 / 2012
  • An ensemble of classifiers employs a set of individually trained classifiers and combines their predictions. It has been found that in most cases ensembles produce more accurate predictions than the base classifiers. Combining outputs from multiple classifiers, known as ensemble learning, is one of the standard and most important techniques for improving classification accuracy in machine learning. An ensemble of classifiers is effective only if the individual classifiers make decisions that are as diverse as possible. Bagging is the most popular ensemble learning method for generating a diverse set of classifiers. Diversity in bagging is obtained by using different training sets: the training data subsets are randomly drawn with replacement from the entire training dataset. The random subspace method is an ensemble construction technique that uses different attribute subsets. In the random subspace method, the training dataset is also modified as in bagging, but the modification is performed in the feature space. Bagging and random subspace are well-known and popular ensemble algorithms, yet few studies have dealt with integrating bagging and random subspace using SVM classifiers, even though there is great potential for useful applications in this area. The focus of this paper is to propose methods for improving SVM performance using a hybrid ensemble strategy for bankruptcy prediction. This paper applies the proposed ensemble model to the bankruptcy prediction problem using a real data set from Korean companies.
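
A sketch of the hybrid idea using scikit-learn's BaggingClassifier, whose bootstrap sampling gives bagging and whose max_features option gives a random-subspace effect for each SVM base classifier; a synthetic dataset stands in for the Korean bankruptcy data.

```python
# Bagging + random subspace over SVM base classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=30, weights=[0.9, 0.1], random_state=0)

base = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
hybrid = BaggingClassifier(base, n_estimators=50,
                           max_samples=0.8,    # bootstrap samples (bagging)
                           max_features=0.5,   # random feature subsets (random subspace)
                           random_state=0)
print("CV accuracy:", cross_val_score(hybrid, X, y, cv=5).mean())
```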

A Comparison of Ensemble Methods Combining Resampling Techniques for Class Imbalanced Data (데이터 전처리와 앙상블 기법을 통한 불균형 데이터의 분류모형 비교 연구)

  • Lee, Hee-Jae;Lee, Sungim
    • The Korean Journal of Applied Statistics / v.27 no.3 / pp.357-371 / 2014
  • There are many studies related to imbalanced data, in which the class distribution is highly skewed. To address the problem of imbalanced data, previous studies use resampling techniques that correct the skewness of the class distribution in each sampled subset by under-sampling, over-sampling, or hybrid sampling such as SMOTE. Ensemble methods have also been used to alleviate the problem of class-imbalanced data. In this paper, we compare around a dozen algorithms that combine ensemble methods with resampling techniques, based on simulated data sets generated by the Backbone model, which can control the imbalance rate. Results on various real imbalanced data sets are also presented to compare the effectiveness of the algorithms. As a result, we highly recommend combining resampling techniques with ensemble methods for imbalanced data in which the proportion of the minority class is less than 10%. We also find that each ensemble method has a well-matched sampling technique: algorithms that combine bagging or random forest ensembles with random undersampling tend to perform well, whereas the boosting ensemble appears to perform better with over-sampling. All ensemble methods combined with SMOTE perform well in most situations.
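
A hedged sketch of two of the pairings reported above, assuming the imbalanced-learn package: bagging with random undersampling (BalancedBaggingClassifier) and boosting after SMOTE over-sampling (an imblearn Pipeline). Synthetic imbalanced data stands in for the paper's data sets.

```python
# Two ensemble + resampling pairings on synthetic imbalanced data.
from imblearn.ensemble import BalancedBaggingClassifier
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)

under_bagging = BalancedBaggingClassifier(n_estimators=50, random_state=0)
smote_boosting = Pipeline([("smote", SMOTE(random_state=0)),
                           ("ada", AdaBoostClassifier(n_estimators=100, random_state=0))])

for name, model in [("undersampling + bagging", under_bagging),
                    ("SMOTE + boosting", smote_boosting)]:
    print(name, cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```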

Performance Comparison of Machine Learning Based on Neural Networks and Statistical Methods for Prediction of Drifter Movement (뜰개 이동 예측을 위한 신경망 및 통계 기반 기계학습 기법의 성능 비교)

  • Lee, Chan-Jae;Kim, Gyoung-Do;Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society / v.8 no.10 / pp.45-52 / 2017
  • A drifter is a device for observing the characteristics of seawater in the ocean, and it can be used to predict effluent oil diffusion and to observe ocean currents. In this paper, we design models for the prediction of drifter trajectories using machine learning. We propose methods for estimating the trajectory of a drifter using support vector regression, radial basis function networks, Gaussian processes, multilayer perceptrons, and recurrent neural networks. When the proposed methods were compared with the existing MOHID numerical model, performance improved in three of the four cases. In particular, LSTM, the best-performing method, showed an improvement of 47.59%. Future work will improve accuracy by weighting with bagging and boosting.

Exploring the Feature Selection Method for Effective Opinion Mining: Emphasis on Particle Swarm Optimization Algorithms

  • Eo, Kyun Sun;Lee, Kun Chang
    • Journal of the Korea Society of Computer and Information / v.25 no.11 / pp.41-50 / 2020
  • Sentiment analysis begins with the search for words that determine the sentiment inherent in the data. Managers can understand market sentiment by analyzing the sentiment words that consumers commonly use. In this study, we explore the performance of feature selection methods based on Particle Swarm Optimization multi-objective evolutionary algorithms. The performance of the feature selection methods was benchmarked with machine learning classifiers such as Decision Tree, Naive Bayesian Network, Support Vector Machine, Random Forest, Bagging, Random Subspace, and Rotation Forest. Our empirical opinion-mining results revealed that the number of features was significantly reduced without hurting performance. Specifically, the Support Vector Machine showed the highest accuracy, and Random Subspace produced the best AUC results.