• Title/Summary/Keyword: Boosting Algorithm (부스팅 알고리즘)


An Empirical Analysis of Boosting of Neural Networks for Bankruptcy Prediction (부스팅 인공신경망학습의 기업부실예측 성과비교)

  • Kim, Myoung-Jong;Kang, Dae-Ki
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.1
    • /
    • pp.63-69
    • /
    • 2010
  • Ensembling is one of the most widely used methods for improving the performance of classification and prediction models. The two popular ensemble methods, Bagging and Boosting, have been applied with great success to various machine learning problems, mostly using decision trees as base classifiers. This paper performs an empirical comparison of boosted neural networks and traditional neural networks on bankruptcy prediction tasks. Experimental results on Korean firms indicated that the boosted neural networks showed improved performance over traditional neural networks.
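Below is a minimal sketch of the kind of comparison the abstract describes: a single neural network versus an AdaBoost-style ensemble of neural networks. Synthetic data stands in for the Korean firm data, and the resampling-based boosting loop is a generic illustration (scikit-learn's MLPClassifier does not accept sample weights, so each round trains on a weighted bootstrap resample), not the authors' exact setup.

```python
# Sketch: a single MLP vs. an AdaBoost-style ensemble of MLPs on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Single (traditional) neural network as the baseline.
single = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X_tr, y_tr)

# AdaBoost-style loop: each round trains on a weighted bootstrap resample.
n_rounds, n = 10, len(X_tr)
w = np.full(n, 1.0 / n)
models, alphas = [], []
for m in range(n_rounds):
    idx = rng.choice(n, size=n, replace=True, p=w)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=m)
    clf.fit(X_tr[idx], y_tr[idx])
    pred = clf.predict(X_tr)
    err = np.sum(w * (pred != y_tr)) / np.sum(w)
    if err == 0:                      # perfect learner: keep it and stop
        models.append(clf); alphas.append(1.0); break
    if err >= 0.5:                    # no better than chance: stop boosting
        break
    alpha = 0.5 * np.log((1 - err) / err)
    w *= np.exp(alpha * np.where(pred != y_tr, 1.0, -1.0))   # up-weight mistakes
    w /= w.sum()
    models.append(clf)
    alphas.append(alpha)

# Weighted vote of the boosted networks (labels mapped to {-1, +1}).
votes = sum(a * (2 * m.predict(X_te) - 1) for m, a in zip(models, alphas))
boosted_pred = (votes > 0).astype(int)
print("single MLP :", accuracy_score(y_te, single.predict(X_te)))
print("boosted MLP:", accuracy_score(y_te, boosted_pred))
```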

An Active Learning-based Method for Composing Training Document Set in Bayesian Text Classification Systems (베이지언 문서분류시스템을 위한 능동적 학습 기반의 학습문서집합 구성방법)

  • 김제욱;김한준;이상구
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.12
    • /
    • pp.966-978
    • /
    • 2002
  • There are two important problems in improving text classification systems based on a machine learning approach. The first, called the "selection problem", is how to select a minimum number of informative documents from a given document collection. The second, called the "composition problem", is how to reorganize the selected training documents so that they fit the adopted learning method. The former problem is addressed by "active learning" algorithms, and the latter by "boosting" algorithms. This paper proposes a new learning method, called AdaBUS, which proactively solves both problems in the context of Naive Bayes classification systems. The proposed method constructs a more accurate classification hypothesis by increasing the variance of the "weak" hypotheses that determine the final classification hypothesis; the resulting perturbation effect makes the boosting algorithm work properly. Through empirical experiments on the Reuters-21578 document collection, we show that the AdaBUS algorithm improves the Naive Bayes-based classification system more significantly than other conventional learning methods.
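The sketch below illustrates, under loose assumptions, the two ingredients the abstract combines: uncertainty-based document selection (active learning) and boosting around a Naive Bayes text classifier. It is not the authors' AdaBUS algorithm; the 20 Newsgroups corpus (downloaded by scikit-learn) stands in for Reuters-21578, and a recent scikit-learn with the `estimator` parameter of AdaBoostClassifier is assumed.

```python
# Sketch: uncertainty sampling ("selection") plus boosted Naive Bayes ("composition").
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score

cats = ["sci.space", "rec.autos"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)
vec = TfidfVectorizer(max_features=5000)
X_pool, y_pool = vec.fit_transform(train.data), np.array(train.target)
X_test, y_test = vec.transform(test.data), np.array(test.target)

# Start from a small labelled seed; the rest of the pool is treated as "unlabelled".
labelled = list(range(40))
unlabelled = list(range(40, X_pool.shape[0]))

for _ in range(10):                                   # 10 active-learning rounds
    nb = MultinomialNB().fit(X_pool[labelled], y_pool[labelled])
    proba = nb.predict_proba(X_pool[unlabelled])
    margin = np.abs(proba[:, 0] - proba[:, 1])        # small margin = uncertain document
    pick = [unlabelled[i] for i in np.argsort(margin)[:10]]
    labelled += pick                                  # "query" their labels
    unlabelled = [i for i in unlabelled if i not in pick]

# Boost Naive Bayes on the actively selected training set (MultinomialNB accepts
# sample weights, so sklearn's AdaBoost can use it as a base learner).
boosted = AdaBoostClassifier(estimator=MultinomialNB(), n_estimators=20)
boosted.fit(X_pool[labelled], y_pool[labelled])
print("boosted NB on actively selected docs:",
      accuracy_score(y_test, boosted.predict(X_test)))
```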

A Study on Smoker Prediction Using Machine Learning Algorithm (기계학습 알고리즘을 이용한 흡연자 예측 연구)

  • Jongwoo Baek;Joonil Bang;Joowon Lee;Hwajong Kim
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.07a
    • /
    • pp.537-538
    • /
    • 2023
  • In this paper, two machine learning algorithms, random forest and gradient boosted trees, were used to analyze the correlation between human biometric characteristics and smoking status. The data used in the study were health checkup records provided by the National Health Insurance Service and compiled and organized on Kaggle. Serum information was expected to show a strong relationship when training the classification model, but the actual results confirmed that sex had the greatest influence.
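A minimal sketch of the comparison described above: a random forest and a gradient boosted tree model trained on health-checkup-style features and inspected for feature importance. The synthetic data frame and its column names are illustrative placeholders for the NHIS/Kaggle body-signal data, and the toy label is constructed so that sex dominates, mirroring the reported finding.

```python
# Sketch: compare random forest and gradient boosting, then rank feature importances.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "sex":          rng.integers(0, 2, n),          # 0 = female, 1 = male
    "age":          rng.integers(20, 80, n),
    "hemoglobin":   rng.normal(14, 1.5, n),
    "triglyceride": rng.normal(120, 40, n),
    "gtp":          rng.normal(35, 20, n),
})
# Toy label: smoking status driven mostly by sex (placeholder, not the real data).
y = (0.8 * df["sex"] + 0.1 * (df["gtp"] > 50) + rng.normal(0, 0.2, n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.3, random_state=0)
for model in (RandomForestClassifier(n_estimators=200, random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "test accuracy:", round(model.score(X_te, y_te), 3))
    ranked = sorted(zip(df.columns, model.feature_importances_), key=lambda t: -t[1])
    print("  feature importances:", [(f, round(v, 3)) for f, v in ranked])
```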


Performance Analysis of Trading Strategy using Gradient Boosting Machine Learning and Genetic Algorithm

  • Jang, Phil-Sik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.11
    • /
    • pp.147-155
    • /
    • 2022
  • In this study, we developed a system to dynamically balance a daily stock portfolio and performed trading simulations using gradient boosting and genetic algorithms. We collected various stock market data from stocks listed on the KOSPI and KOSDAQ markets, including investor-specific transaction data. As a preprocessing step we indexed the data and used feature engineering to modify and generate variables for training. First, we experimentally compared the performance of three popular gradient boosting algorithms, XGBoost, LightGBM, and CatBoost, in terms of accuracy, precision, recall, and F1-score. Based on the results, in a second experiment we used a LightGBM model trained on the collected data, together with genetic algorithms, to predict and select stocks with a high daily probability of profit. We also simulated trading over the test period to compare the proposed approach with the KOSPI and KOSDAQ indices in terms of CAGR (compound annual growth rate), MDD (maximum drawdown), Sharpe ratio, and volatility. The results showed that the proposed strategies outperformed the KOSPI and KOSDAQ indices on all performance metrics, and the proposed LightGBM model with a genetic algorithm exhibited competitive performance in predicting stock price movements.
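The evaluation metrics mentioned above (CAGR, MDD, Sharpe ratio, volatility) can be computed from a daily return series as sketched below; the random returns are only a placeholder for the simulated portfolio, and the LightGBM/genetic-algorithm stock selection itself is not reproduced.

```python
# Sketch: CAGR, maximum drawdown, Sharpe ratio and volatility from daily returns.
import numpy as np

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, 252)          # one trading year, placeholder

equity = np.cumprod(1.0 + daily_returns)               # cumulative portfolio value
years = len(daily_returns) / 252

cagr = equity[-1] ** (1 / years) - 1                   # compound annual growth rate
running_max = np.maximum.accumulate(equity)
mdd = np.max((running_max - equity) / running_max)     # maximum drawdown
volatility = daily_returns.std(ddof=1) * np.sqrt(252)  # annualised volatility
sharpe = daily_returns.mean() / daily_returns.std(ddof=1) * np.sqrt(252)  # rf = 0 assumed

print(f"CAGR {cagr:.2%}  MDD {mdd:.2%}  Sharpe {sharpe:.2f}  Vol {volatility:.2%}")
```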

Comparison of Handball Result Predictions Using Bagging and Boosting Algorithms (배깅과 부스팅 알고리즘을 이용한 핸드볼 결과 예측 비교)

  • Kim, Ji-eung;Park, Jong-chul;Kim, Tae-gyu;Lee, Hee-hwa;Ahn, Jee-Hwan
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.8
    • /
    • pp.279-286
    • /
    • 2021
  • The purpose of this study is to compare the predictive power of the Bagging and Boosting ensemble algorithms based on motion information generated in women's handball matches, and to analyze the usefulness of such motion information. To this end, this study examined how well the Random Forest and Adaboost algorithms predicted the results of 15 practice matches from inertial motion data. The results of the study are as follows. First, the prediction rate of the Random Forest algorithm was 66.9 ± 0.1%, and the prediction rate of the Adaboost algorithm was 65.6 ± 1.6%. Second, Random Forest predicted all of the winning results but none of the losing results, whereas the Adaboost algorithm predicted 91.4% of the wins and 10.4% of the losses. Third, in verifying the suitability of the algorithms, Random Forest showed no overfitting error, but Adaboost did. Based on these results, motion information is highly useful when predicting sports events, and the Random Forest algorithm was confirmed to be superior to the Adaboost algorithm.
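A minimal sketch of the bagging-versus-boosting comparison above: a random forest (bagging) and AdaBoost (boosting) compared on accuracy, with the train/test gap used as a rough overfitting check. Synthetic noisy features stand in for the inertial-motion match data.

```python
# Sketch: random forest vs. AdaBoost, with a simple train/test-gap overfitting check.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           flip_y=0.2, random_state=1)   # noisy, like real match data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for model in (RandomForestClassifier(n_estimators=300, random_state=1),
              AdaBoostClassifier(n_estimators=300, random_state=1)):
    model.fit(X_tr, y_tr)
    train_acc, test_acc = model.score(X_tr, y_tr), model.score(X_te, y_te)
    print(f"{type(model).__name__:24s} train {train_acc:.3f}  test {test_acc:.3f}  "
          f"gap {train_acc - test_acc:.3f}")   # a large gap suggests overfitting
```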

Biological Early Warning System for Toxicity Detection (독성 감지를 위한 생물 조기 경보 시스템)

  • Kim, Sung-Yong;Kwon, Ki-Yong;Lee, Won-Don
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.9
    • /
    • pp.1979-1986
    • /
    • 2010
  • A biological early warning system detects toxicity by observing the behavior of organisms in water. The system uses a classifier to judge the existence and amount of toxicity in the water. Boosting is one possible method for improving the performance of such a classifier. Boosting repeatedly changes the training example set by focusing the base classifier on difficult examples. As a result, prediction performance improves for events that are difficult to classify, but the information contained in events that can be easily classified is discarded. In this paper, an incremental learning method that overcomes this shortcoming is proposed by using an extended data expression. In this algorithm, a decision tree classifier defines class distribution information using the weight parameter of the extended data expression, exploiting the necessary information not only from well classified events but also from weakly classified ones. Experimental results show that the new algorithm outperforms the former Learn++ method, which does not use the weight parameter.
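The reweighting behaviour the abstract describes, namely boosting concentrating weight on difficult examples while the weight of easy examples shrinks, can be seen in the short AdaBoost loop below. This illustrates standard AdaBoost only; the authors' extended data expression and Learn++-style incremental learning are not reproduced.

```python
# Sketch: AdaBoost weight updates shift weight mass onto misclassified examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, flip_y=0.1, random_state=2)
y_pm = np.where(y == 1, 1, -1)                     # labels in {-1, +1}
n = len(X)
w = np.full(n, 1.0 / n)

for t in range(5):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y_pm, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w[pred != y_pm])                  # weighted training error
    alpha = 0.5 * np.log((1 - err) / err)
    w *= np.exp(-alpha * y_pm * pred)              # up-weight mistakes, down-weight hits
    w /= w.sum()
    print(f"round {t}: weighted error {err:.3f}, "
          f"weight share on misclassified examples {w[pred != y_pm].sum():.3f}")
```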

Cancer Diagnosis System using Genetic Algorithm and Multi-boosting Classifier (Genetic Algorithm과 다중부스팅 Classifier를 이용한 암진단 시스템)

  • Ohn, Syng-Yup;Chi, Seung-Do
    • Journal of the Korea Society for Simulation
    • /
    • v.20 no.2
    • /
    • pp.77-85
    • /
    • 2011
  • It is believed that anomalies or diseases of human organs can be identified by analyzing proteome patterns. This paper proposes a new classification technique for identifying cancer using proteome patterns obtained from two-dimensional polyacrylamide gel electrophoresis (2-D PAGE). In the proposed method, three classifiers, a support vector machine (SVM), a multi-layer perceptron (MLP), and a k-nearest neighbor (k-NN) classifier, are extended by a multi-boosting method into an array of subclassifiers, and the results of the subclassifiers are merged by an ensemble method. A genetic algorithm was applied to obtain the optimal feature set for each subclassifier. We applied our method to an empirical data set from cancer research, and it showed better accuracy and more stable performance than a single classifier.
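A rough sketch of the ensemble structure described above: SVM, MLP and k-NN sub-classifiers combined by voting, with a crude random search over feature masks standing in for the genetic algorithm. Synthetic high-dimensional data replaces the 2-D PAGE proteome patterns, and the multi-boosting layer is omitted.

```python
# Sketch: SVM + MLP + k-NN voting ensemble with random-search feature selection
# (a placeholder for the genetic algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=100, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier([
    ("svm", make_pipeline(StandardScaler(), SVC())),
    ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
])

# Keep the random feature mask with the best cross-validated ensemble score.
best_mask, best_score = None, -np.inf
for _ in range(20):
    mask = rng.random(X.shape[1]) < 0.3              # roughly 30 of 100 features
    score = cross_val_score(ensemble, X_tr[:, mask], y_tr, cv=3).mean()
    if score > best_score:
        best_mask, best_score = mask, score

ensemble.fit(X_tr[:, best_mask], y_tr)
print("selected features:", int(best_mask.sum()),
      " test accuracy:", round(ensemble.score(X_te[:, best_mask], y_te), 3))
```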

Study on Fault Detection of a Gas Pressure Regulator Based on Machine Learning Algorithms

  • Seo, Chan-Yang;Suh, Young-Joo;Kim, Dong-Ju
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.4
    • /
    • pp.19-27
    • /
    • 2020
  • In this paper, we propose a machine learning method for diagnosing the failure of a gas pressure regulator. Typically, when implementing a machine learning model for detecting abnormal operation of a facility, sensors are installed to collect data. However, failure of a gas pressure regulator can lead to fatal safety problems, so installing an additional sensor on a gas pressure regulator is not simple. We therefore propose machine learning approaches for diagnosing abnormal operation of a gas pressure regulator using only the flow rate and gas pressure data collected from the regulator itself. Since fault data for a gas pressure regulator are scarce, the training set was balanced across classes by applying an over-sampling method. The classification models were implemented using gradient boosting, 1D convolutional neural networks, and LSTM algorithms, and the gradient boosting model showed the best performance among them, with 99.975% accuracy.
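A minimal sketch of the imbalance handling and gradient boosting classifier described above, assuming only flow-rate and pressure features. Synthetic data stands in for the real regulator measurements, scikit-learn's resample is used for over-sampling the scarce fault class, and the 1D CNN and LSTM baselines are omitted.

```python
# Sketch: over-sample the minority fault class, then train a gradient boosting classifier.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.utils import resample
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
normal = pd.DataFrame({"flow": rng.normal(100, 5, 2000),
                       "pressure": rng.normal(2.5, 0.1, 2000), "fault": 0})
fault = pd.DataFrame({"flow": rng.normal(80, 15, 60),
                      "pressure": rng.normal(2.0, 0.3, 60), "fault": 1})
df = pd.concat([normal, fault], ignore_index=True)

train, test = train_test_split(df, test_size=0.3, stratify=df["fault"], random_state=0)

# Over-sample the minority (fault) class in the training split only.
minority = train[train["fault"] == 1]
oversampled = resample(minority, replace=True,
                       n_samples=int((train["fault"] == 0).sum()), random_state=0)
train_bal = pd.concat([train[train["fault"] == 0], oversampled])

gb = GradientBoostingClassifier(random_state=0)
gb.fit(train_bal[["flow", "pressure"]], train_bal["fault"])
print(classification_report(test["fault"], gb.predict(test[["flow", "pressure"]])))
```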

Ensemble Design of Machine Learning Techniques: Experimental Verification by Prediction of Drifter Trajectory (앙상블을 이용한 기계학습 기법의 설계: 뜰개 이동경로 예측을 통한 실험적 검증)

  • Lee, Chan-Jae;Kim, Yong-Hyuk
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.8 no.3
    • /
    • pp.57-67
    • /
    • 2018
  • An ensemble is a unified approach for obtaining better performance by combining multiple algorithms in machine learning. In this paper, we introduce boosting and bagging, which have been widely used as ensemble techniques, and design a method using support vector regression, a radial basis function network, a Gaussian process, and a multilayer perceptron. In addition, our experiments added a recurrent neural network and the MOHID numerical model. The drifter data used for experimental verification consist of 683 observations from seven regions. The performance of our ensemble technique is verified by comparison with each of the four algorithms, using mean absolute error as the evaluation measure. The presented methods are ensemble models based on bagging, boosting, and machine learning, and the error was calculated by assigning either equal or different weight values to each unit model in the ensemble. The ensemble model using machine learning showed a 61.7% improvement over the average of the four machine learning techniques.
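A minimal sketch of the weighted-averaging ensemble described above: SVR, a Gaussian process and an MLP (with an RBF-kernel ridge model standing in for the radial basis function network) combined with equal and with error-based weights and compared by mean absolute error. Synthetic data replaces the drifter observations and the MOHID model output.

```python
# Sketch: average four regressors with equal and with inverse-error weights, compare MAE.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (400, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 400)   # placeholder target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVR": SVR(),
    "RBF": KernelRidge(kernel="rbf"),        # stand-in for an RBF network
    "GP":  GaussianProcessRegressor(),
    "MLP": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
preds, errors = {}, {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    preds[name] = model.predict(X_te)
    errors[name] = mean_absolute_error(y_te, preds[name])
    print(f"{name}: MAE {errors[name]:.4f}")

stacked = np.vstack(list(preds.values()))
equal = stacked.mean(axis=0)                                    # equal weights
w = np.array([1 / errors[n] for n in models])                   # inverse-error weights
weighted = (w[:, None] * stacked).sum(axis=0) / w.sum()         # (weights would normally
                                                                #  come from a validation split)
print("ensemble (equal weights) MAE:", round(mean_absolute_error(y_te, equal), 4))
print("ensemble (error weights) MAE:", round(mean_absolute_error(y_te, weighted), 4))
```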

Smarter Classification for Imbalanced Data Set and Its Application to Patent Evaluation (불균형 데이터 집합에 대한 스마트 분류방법과 특허 평가에의 응용)

  • Kwon, Ohbyung;Lee, Jonathan Sangyun
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.15-34
    • /
    • 2014
  • Overall accuracy as a performance measure does not fully consider modular accuracy: the accuracy of classifying 1 (or true) as 1 is not the same as that of classifying 0 (or false) as 0. A smarter classification algorithm would optimize the classification rules to match the goals for the modular accuracies according to the nature of the problem. Correspondingly, smarter algorithms must be more generalized with respect to the nature of the problem and free from discretization, which may distort the real performance. Hence, in this paper, we propose a novel vertical boosting algorithm that improves modular accuracies. Rather than discretizing items, we use simple classifiers, such as a regression model, that accept continuous data types. To improve generalization, and to select a classification model well-suited to the nature of the problem domain, we also developed a smart model-selection algorithm. To show the soundness of the proposed method, we performed an experiment on a real-world application, predicting the intellectual properties of e-transaction technology, with a data set of more than 47,000 records.
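The point the abstract makes about modular (per-class) accuracy can be illustrated with a small experiment: on an imbalanced set, overall accuracy can look high while one class is barely detected. The sketch below uses a plain logistic regression with and without class weighting; the authors' vertical boosting and model-selection algorithm are not reproduced.

```python
# Sketch: overall accuracy vs. per-class ("modular") accuracy on imbalanced data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05],
                           flip_y=0.02, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=3)

for name, clf in (("plain", LogisticRegression(max_iter=1000)),
                  ("class-weighted", LogisticRegression(max_iter=1000,
                                                        class_weight="balanced"))):
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    acc = accuracy_score(y_te, pred)
    acc_pos = recall_score(y_te, pred)                 # accuracy on class 1
    acc_neg = recall_score(y_te, pred, pos_label=0)    # accuracy on class 0
    print(f"{name:14s} overall {acc:.3f}  class-1 {acc_pos:.3f}  class-0 {acc_neg:.3f}")
```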