• Title/Summary/Keyword: XGBoost algorithm

54 search results

Analysis on the Determinants of Land Compensation Cost: The Use of the Construction CALS Data (토지 보상비 결정 요인 분석 - 건설CALS 데이터 중심으로)

  • Lee, Sang-Gyu; Seo, Myoung-Bae; Kim, Jin-Uk
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.10 / pp.461-470 / 2020
  • This study analyzed the determinants of land compensation costs using data generated by the CALS (Continuous Acquisition & Life-Cycle Support) system, which supports the construction process (planning, design, building, and management). The analysis used eight variables drawn from related research on land costs: Land Area, Individual Public Land Price, Appraisal & Assessment, Land Category, Use District 1, Terrain Elevation, Terrain Shape, and Road. The variables were analyzed with the machine-learning-based XGBoost algorithm, which identified Individual Public Land Price as the most important variable in determining land cost. A linear multiple regression analysis was then used to verify the determinants of land compensation, with Individual Public Land Price as the dependent variable, Land Area as a numeric independent variable, and Land Category, Use District 1, Terrain Elevation, Terrain Shape, and Road as factor independent variables. The significant variables were Land Category, Use District 1, and Road.
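The core step this abstract describes, training a gradient boosting model on land variables and ranking their importance, can be sketched as follows. This is a minimal illustration on synthetic data: the variable names mirror the paper's but the values are invented, and scikit-learn's GradientBoostingRegressor stands in for the XGBoost library the study actually used.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for the paper's variables (values are made up).
X = np.column_stack([
    rng.uniform(100, 5000, n),   # Land Area
    rng.uniform(1e5, 1e7, n),    # Individual Public Land Price
    rng.integers(0, 5, n),       # Land Category (coded)
    rng.integers(0, 3, n),       # Terrain Elevation (coded)
])
# Compensation cost driven mainly by the public land price.
y = 0.9 * X[:, 1] + 0.05 * X[:, 0] + rng.normal(0, 1e4, n)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
names = ["Land Area", "Individual Public Land Price",
         "Land Category", "Terrain Elevation"]
ranking = sorted(zip(names, model.feature_importances_),
                 key=lambda t: -t[1])
for name, imp in ranking:
    print(f"{name}: {imp:.3f}")
```

With synthetic data constructed this way, the importance ranking recovers the dominant variable, mirroring the paper's finding for Individual Public Land Price.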

Apartment Price Prediction Using Deep Learning and Machine Learning (딥러닝과 머신러닝을 이용한 아파트 실거래가 예측)

  • Hakhyun Kim; Hwankyu Yoo; Hayoung Oh
    • KIPS Transactions on Software and Data Engineering / v.12 no.2 / pp.59-76 / 2023
  • Since the COVID-19 era, apartment prices have risen in unconventional ways, and in this uncertain real estate market price prediction research is very important. In this paper, a model is created to predict the actual transaction prices of future apartments, after building a data set of about 870,000 transactions from 2015 to 2020 by crawling various real estate sites and collecting as many variables as possible. The study first resolved the multicollinearity problem by removing and combining variables. Five variable selection algorithms were then used to extract meaningful independent variables: forward selection, backward elimination, stepwise selection, L1 regularization, and principal component analysis (PCA). Four machine learning and deep learning algorithms, a deep neural network (DNN), XGBoost, CatBoost, and linear regression, were trained after hyperparameter optimization, and their predictive power was compared. In an additional experiment, the number of nodes and layers of the DNN was varied to find the most appropriate configuration. Finally, the best-performing model was used to predict actual apartment transaction prices in 2021, and the predictions were compared with the actual 2021 data. These results suggest that machine learning and deep learning can help investors make sound decisions when purchasing homes in various economic situations.
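The multicollinearity step the authors describe is commonly checked with variance inflation factors (VIF): regress each candidate variable on the others and compute 1/(1-R²). A minimal numpy sketch on synthetic data (the column names below are illustrative, not from the paper's data set):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (intercept included)."""
    n, p = X.shape
    out = []
    for j in range(p):
        yj = X[:, j]
        Xo = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Xo, yj, rcond=None)
        resid = yj - Xo @ beta
        r2 = 1 - resid.var() / yj.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
area = rng.normal(80, 20, 300)                # apartment floor area
rooms = area / 25 + rng.normal(0, 0.1, 300)   # nearly collinear with area
age = rng.normal(15, 5, 300)                  # independent variable

vifs = vif(np.column_stack([area, rooms, age]))
print(vifs)  # high VIF for the collinear pair, near 1 for age
```

A common rule of thumb is to drop or combine variables whose VIF exceeds 10, which matches the paper's "removing and combining" step.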

A Study on Classification of Crown Classes and Selection of Thinned Trees for Major Conifers Using Machine Learning Techniques (머신러닝 기법을 활용한 주요 침엽수종의 수관급 분류와 간벌목 선정 연구)

  • Lee, Yong-Kyu; Lee, Jung-Soo; Park, Jin-Woo
    • Journal of Korean Society of Forest Science / v.111 no.2 / pp.302-310 / 2022
  • This study aimed to classify the crown classes of major coniferous tree species (Pinus densiflora, Pinus koraiensis, and Larix kaempferi) from tree measurement information using machine learning algorithms, in order to establish an efficient forest management plan. We used national forest monitoring information amassed over nine years as the tree measurement data, and random forest (RF), XGBoost (XGB), and LightGBM (LGBM) as the machine learning algorithms. The algorithms were compared and evaluated on accuracy, precision, recall, and F1 score. The RF algorithm had the highest performance evaluation score for all tree species, with its best scores for Pinus densiflora: an accuracy of about 65%, a precision of about 72%, a recall of about 60%, and an F1 score of about 66%. Among the crown classes, classification accuracy for dominant trees was higher than about 80%, but accuracy for co-dominant, intermediate, and overtopped trees was low. The results of this study can serve as reference data for decision-making in the selection of thinning trees for forest management.
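The evaluation metrics used throughout this abstract (accuracy, precision, recall, F1 score) reduce to simple counts over a confusion matrix. A stdlib-only sketch for one class treated as positive, with made-up counts chosen only for illustration:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts (not the paper's data): one species treated as
# the positive class in a one-vs-rest evaluation.
acc, prec, rec, f1 = classification_metrics(tp=60, fp=23, fn=40, tn=77)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
```

Note that F1 is the harmonic mean of precision and recall, so it penalizes a model that trades one heavily for the other.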

Performance Analysis of Trading Strategy using Gradient Boosting Machine Learning and Genetic Algorithm

  • Jang, Phil-Sik
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.147-155 / 2022
  • In this study, we developed a system to dynamically rebalance a daily stock portfolio and performed trading simulations using gradient boosting and genetic algorithms. We collected various stock market data from stocks listed on the KOSPI and KOSDAQ markets, including investor-specific transaction data, indexed the data as a preprocessing step, and used feature engineering to modify and generate variables for training. First, we experimentally compared three popular gradient boosting algorithms, XGBoost, LightGBM, and CatBoost, in terms of accuracy, precision, recall, and F1-score. Based on the results, in a second experiment we used a LightGBM model trained on the collected data, together with genetic algorithms, to predict and select stocks with a high daily probability of profit. We then ran trading simulations over the test period and compared the proposed approach with the KOSPI and KOSDAQ indices in terms of CAGR (Compound Annual Growth Rate), MDD (Maximum Drawdown), Sharpe ratio, and volatility. The proposed strategies outperformed the market indices on all performance metrics, and the LightGBM model with a genetic algorithm exhibited competitive performance in predicting stock price movements.
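The performance metrics named here (CAGR, MDD, Sharpe ratio, volatility) can all be computed from a daily equity curve. A minimal numpy sketch on a toy price series (not the paper's data); it assumes 252 trading days per year and a zero risk-free rate for the Sharpe ratio:

```python
import numpy as np

def performance_metrics(equity, periods_per_year=252):
    """CAGR, maximum drawdown, annualized volatility and Sharpe ratio
    from an equity curve (portfolio value per trading day)."""
    equity = np.asarray(equity, dtype=float)
    returns = equity[1:] / equity[:-1] - 1.0
    years = (len(equity) - 1) / periods_per_year
    cagr = (equity[-1] / equity[0]) ** (1 / years) - 1
    running_max = np.maximum.accumulate(equity)
    mdd = ((equity - running_max) / running_max).min()
    vol = returns.std(ddof=1) * np.sqrt(periods_per_year)
    sharpe = returns.mean() / returns.std(ddof=1) * np.sqrt(periods_per_year)
    return cagr, mdd, vol, sharpe

equity = [100, 102, 101, 105, 98, 104, 110]  # toy equity curve
cagr, mdd, vol, sharpe = performance_metrics(equity)
print(f"CAGR={cagr:.2%} MDD={mdd:.2%} vol={vol:.2%} Sharpe={sharpe:.2f}")
```

MDD is reported as a negative number here (the worst peak-to-trough drop); some sources quote its absolute value instead.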

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon; Choi, HeungSik; Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts those methods handle poorly. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; because it avoids investment risk structurally, it is stable for managing large funds and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but is also very fast to train compared to traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period, thereby reducing the estimation errors of the optimized asset allocation model; in doing so it narrows the gap between theory and practice.
For the empirical test of the suggested model, we used Korean stock market price data covering 17 years, from 2003 to 2019. The data sets consist of the energy, finance, IT, industrial, materials, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative yield and reduction of estimation errors: the total cumulative return is 45.748%, about 5 percentage points higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practice by the fundamental question of whether the past characteristics of assets will persist in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a state-of-the-art algorithm. While there are various studies on parametric estimation methods to reduce estimation errors in portfolio optimization, we suggest a new machine-learning-based method for reducing such errors in an optimized asset allocation model.
This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
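In its simplest ("naive") form, a risk parity allocation weights each asset inversely to its volatility, so that each contributes roughly equal risk when correlations are ignored. A sketch of that step with made-up volatilities; the paper's model goes further by feeding XGBoost-predicted risk into the full covariance estimate, which this sketch does not attempt:

```python
import numpy as np

def naive_risk_parity(vols):
    """Inverse-volatility weights: each asset's weight is proportional
    to 1/sigma, normalized to sum to one."""
    inv = 1.0 / np.asarray(vols, dtype=float)
    return inv / inv.sum()

# Made-up (e.g. forecast) annualized volatilities per sector.
vols = {"energy": 0.30, "finance": 0.22, "IT": 0.35, "utility": 0.12}
weights = naive_risk_parity(list(vols.values()))
for sector, w in zip(vols, weights):
    print(f"{sector}: {w:.3f}")
```

The lowest-volatility sector receives the largest weight, which is exactly the structural risk-avoidance property the abstract attributes to risk parity.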

A Study on the Prediction of Mortality Rate after Lung Cancer Diagnosis for Men and Women in 80s, 90s, and 100s Based on Deep Learning (딥러닝 기반 80대·90대·100대 남녀 대상 폐암 진단 후 사망률 예측에 관한 연구)

  • Kyung-Keun Byun; Doeg-Gyu Lee; Se-Young Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.2 / pp.87-96 / 2023
  • Recently, research on predicting disease treatment outcomes using deep learning technology has also been active in the medical community. However, previous studies used small patient data sets and specific deep learning algorithms, showing meaningful results only under particular conditions. In this study, to generalize those results, the patient population was expanded and subdivided to predict mortality after lung cancer diagnosis for men and women in their 80s, 90s, and 100s. Using large-scale medical information from the Health Insurance Review and Assessment Service and AutoML, which provides various algorithms, five models (Decision Tree, Random Forest, Gradient Boosting, XGBoost, and Logistic Regression) were created to predict mortality rates for 84 months after lung cancer diagnosis. The study found that men in their 80s and 90s had a higher predicted mortality rate than women, while women in their 100s had a higher predicted mortality rate than men, and the factor with the greatest influence on the mortality rate was the treatment period.
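The multi-model comparison this abstract describes (fitting several classifier families and picking the best) can be sketched with scikit-learn. Everything below is invented for illustration: the features, the label-generating rule, and the data are synthetic stand-ins, not the Health Insurance Review and Assessment Service records the study used through AutoML.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 1000
# Invented features: age band (0=80s, 1=90s, 2=100s), sex, treatment months.
X = np.column_stack([
    rng.integers(0, 3, n),
    rng.integers(0, 2, n),
    rng.uniform(1, 84, n),
])
# Invented label: a shorter treatment period raises the mortality odds.
logit = 2.0 - 0.05 * X[:, 2] + 0.3 * X[:, 0]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
best = max(scores, key=scores.get)
print(scores, "best:", best)
```

Holding out a test split, as here, is the minimal safeguard; the study's AutoML pipeline would additionally tune hyperparameters per model.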

A Study on Classification Models for Predicting Bankruptcy Based on XAI (XAI 기반 기업부도예측 분류모델 연구)

  • Jihong Kim; Nammee Moon
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.333-340 / 2023
  • Efficient prediction of corporate bankruptcy is important for financial institutions in making appropriate lending decisions and reducing loan default rates. Many studies have used classification models based on artificial intelligence technology. In the financial industry, however, even when a new predictive model performs excellently, it should be accompanied by an intuitive explanation of the basis for its decisions. The US, the EU, and South Korea have all recently introduced the right to request explanations of algorithmic decisions, so transparency in the use of AI in the financial sector must be secured. In this paper, an interpretable AI-based classification model was proposed using publicly available corporate bankruptcy data. First, data preprocessing and 5-fold cross-validation were performed, and classification performance was compared across 10 optimized supervised learning classifiers, including logistic regression, SVM, XGBoost, and LightGBM. LightGBM was confirmed as the best-performing model, and SHAP, an explainable artificial intelligence technique, was applied to provide a post-hoc explanation of the bankruptcy prediction process.
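The paper's explanation step uses SHAP; as a library-light stand-in, permutation importance gives a similar model-agnostic, post-hoc ranking of which inputs drive a fitted classifier. A sketch on synthetic data (the financial ratio names and the label rule are invented, and gradient boosting stands in for LightGBM):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
n = 600
# Invented financial ratios; "debt_ratio" drives the synthetic label.
debt_ratio = rng.uniform(0, 2, n)
current_ratio = rng.uniform(0.5, 3, n)
roa = rng.normal(0.05, 0.1, n)
X = np.column_stack([debt_ratio, current_ratio, roa])
y = (debt_ratio + rng.normal(0, 0.2, n) > 1.2).astype(int)  # 1 = bankrupt

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
names = ["debt_ratio", "current_ratio", "roa"]
order = np.argsort(-result.importances_mean)
for i in order:
    print(f"{names[i]}: {result.importances_mean[i]:.3f}")
```

Unlike SHAP, permutation importance gives only a global ranking, not per-prediction attributions, but it illustrates the same "explain the fitted model after the fact" idea.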

Injection Process Yield Improvement Methodology Based on eXplainable Artificial Intelligence (XAI) Algorithm (XAI(eXplainable Artificial Intelligence) 알고리즘 기반 사출 공정 수율 개선 방법론)

  • Ji-Soo Hong; Yong-Min Hong; Seung-Yong Oh; Tae-Ho Kang; Hyeon-Jeong Lee; Sung-Woo Kang
    • Journal of Korean Society for Quality Management / v.51 no.1 / pp.55-65 / 2023
  • Purpose: The purpose of this study is to propose an optimization process that improves product yield using process data. Research on low-cost, high-efficiency production in manufacturing using machine learning or deep learning has continued in recent years. This study therefore derives the major variables affecting product defects in the manufacturing process using an eXplainable Artificial Intelligence (XAI) method, and then presents the optimal range of those variables to propose a methodology for improving product yield. Methods: The study uses the injection molding machine AI dataset released on the Korea AI Manufacturing Platform (KAMP) organized by KAIST. Using the XAI-based SHAP method, the major variables affecting product defects are extracted from the process data; XGBoost and LightGBM are used as learning algorithms, and 5-6 variables are extracted as the main process variables for the injection process. The optimal control range of each process variable is then presented using the ICE method, and finally the yield improvement methodology is validated on test data. Results: In the injection process data, XGBoost achieved an improved defect rate of 0.21% and LightGBM of 0.29%, improvements of 0.79%p and 0.71%p, respectively, over the existing defect rate of 1.00%. Conclusion: This study is a case study: a research methodology was proposed for the injection process, and the improvement in product yield was confirmed through verification.
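The ICE (Individual Conditional Expectation) step, sweeping one process variable over a grid while holding each sample's other variables fixed and reading off where predictions are best, can be sketched directly. The data, the variable names (e.g. melt temperature), and the defect-score rule below are all invented; the study used the KAMP injection molding dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
n = 400
melt_temp = rng.uniform(180, 260, n)       # invented process variable
injection_speed = rng.uniform(20, 120, n)  # invented process variable
X = np.column_stack([melt_temp, injection_speed])
# Synthetic defect score: minimized near melt_temp ~= 220.
y = (melt_temp - 220) ** 2 / 100 + 0.01 * injection_speed + rng.normal(0, 0.5, n)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# ICE: for each sample, vary melt_temp over a grid, keeping its other values.
grid = np.linspace(180, 260, 41)
ice = np.empty((n, grid.size))
for j, g in enumerate(grid):
    Xg = X.copy()
    Xg[:, 0] = g
    ice[:, j] = model.predict(Xg)

pdp = ice.mean(axis=0)           # averaging the ICE curves gives the PDP
best_temp = grid[pdp.argmin()]   # grid point with lowest predicted defect score
print(f"suggested melt temperature: {best_temp:.0f}")
```

Reading the minimum of the averaged curve is how a "optimal control range" can be derived per variable, as the Methods section describes.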

A Machine Learning-based Popularity Prediction Model for YouTube Mukbang Content (머신러닝 기반의 유튜브 먹방 콘텐츠 인기 예측 모델)

  • Beomgeun Seo; Hanjun Lee
    • Journal of Internet Computing and Services / v.24 no.6 / pp.49-55 / 2023
  • In this study, models for predicting the popularity of mukbang content on YouTube were proposed, and the factors influencing that popularity were identified through post-hoc analysis. Information on 22,223 pieces of content was collected from the top mukbang channels by subscriber count using APIs and Pretty Scale. Machine learning algorithms such as Random Forest, XGBoost, and LGBM were used to build models predicting views and likes. SHAP analysis showed that subscriber count had the most significant impact in the view prediction model, while the attractiveness of the creator was the most important variable in the likes prediction model, confirming that the precursor factors for views and likes differ. This study holds academic significance in empirically analyzing a large amount of online content, and practical significance in informing mukbang creators about viewer consumption trends and guiding the production of high-quality, marketable content.
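The core finding here, that the top driver differs between the views model and the likes model, can be reproduced in miniature: fit one model per target and compare the top-ranked feature. The data and relationships below are invented to echo the paper's finding, and scikit-learn gradient boosting stands in for XGBoost/LGBM.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
n = 800
subscribers = rng.uniform(1e4, 1e7, n)
attractiveness = rng.uniform(1, 10, n)  # Pretty Scale-style score
video_length = rng.uniform(3, 60, n)
X = np.column_stack([subscribers, attractiveness, video_length])
names = ["subscribers", "attractiveness", "video_length"]

# Invented relationships echoing the paper's finding:
views = 0.01 * subscribers + rng.normal(0, 1e3, n)    # driven by subscribers
likes = 500 * attractiveness + rng.normal(0, 100, n)  # driven by attractiveness

top = {}
for target_name, target in [("views", views), ("likes", likes)]:
    model = GradientBoostingRegressor(random_state=0).fit(X, target)
    top[target_name] = names[int(np.argmax(model.feature_importances_))]
print(top)
```

Fitting a separate model per engagement signal, rather than one joint model, is what lets the precursor factors be compared target by target.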

Development of Long-Term Hospitalization Prediction Model for Minor Automobile Accident Patients (자동차 사고 경상환자의 장기입원 예측 모델 개발)

  • DoegGyu Lee; DongHyun Nam; Sung-Phil Heo
    • Journal of Korea Society of Industrial Information Systems / v.28 no.6 / pp.11-20 / 2023
  • The cost of medical treatment for motor vehicle accidents is increasing every year. In this study, we created a model to predict long-term hospitalization (more than 18 days) among minor-injury patients, the main driver of rising traffic accident medical expenses, using five algorithms including decision trees, and analyzed the factors affecting long-term hospitalization. The accuracy of the prediction models ranged from 91.377 to 91.451, with no significant difference between models; the random forest and XGBoost models had the highest accuracy, 91.451. There were significant differences between the long-stay and non-long-stay groups in the importance of explanatory variables such as hospital location, name of disease, and type of hospital. The models were validated by comparing each model's average cross-validated (10 times) accuracy on the training data with its accuracy on the validation data, and the chi-square test was used for the categorical explanatory variables.
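The chi-square test applied to the categorical explanatory variables reduces to comparing observed cell counts against the counts expected under independence. A stdlib-only sketch for a 2x2 contingency table (the counts are made up, not from the paper):

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a 2D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Made-up counts: hospital type (rows) vs long-stay / non-long-stay (cols).
table = [[30, 10],
         [20, 40]]
stat = chi_square_statistic(table)
print(f"chi-square = {stat:.3f}")
```

For a 2x2 table there is one degree of freedom, so a statistic above the 3.841 critical value rejects independence at the 5% level.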