• Title/Abstract/Keywords: Gradient Boosting Decision Tree

Search results: 50 (processing time: 0.028 s)

A Study on Korean Local Governments' Operation of Participatory Budgeting System: Classification by Support Vector Machine Technique

  • 한준현;유재민;배재연;임충혁
    • 문화기술의 융합
    • /
    • Vol. 10, No. 3
    • /
    • pp.461-466
    • /
    • 2024
  • Korea's participatory budgeting system is operated autonomously by each local government, so this study groups local governments into several similar types and examines the characteristics of each. Using a variety of machine learning techniques, we classified the operation types of basic-level cities (si) for 2022. Among the techniques compared (Neural Network, Rule Induction (CN2), KNN, Decision Tree, Random Forest, Gradient Boosting, SVM, and Naïve Bayes), the SVM technique showed the best performance. The SVM technique identified three operation types: one cluster (C1) of cities with little committee activity but a large secured participatory budget, another cluster (C3) of cities that are very passive toward participatory budgeting, and a final cluster (C2) that is generally active in participatory budgeting and contains the majority of cities. In conclusion, most Korean local governments operate the participatory budgeting system positively, and only a minority are passive. If follow-up research analyzes time-series data from the past decade or so, we expect the reliability of this classification of local governments by participatory budgeting to improve further.
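As a rough illustration of the model comparison described in this abstract, the sketch below cross-validates an SVM against two of the other classifiers on a synthetic three-class dataset; the features, sample size, and model settings are stand-in assumptions, not the study's municipality data.

```python
# Minimal sketch of a 3-cluster typing task: compare SVM against other
# classifiers by cross-validated accuracy (synthetic stand-in data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical features standing in for indicators such as committee
# activity and secured participatory budget.
X, y = make_classification(n_samples=150, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}
# Mean 5-fold cross-validated accuracy per model
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: {s:.3f}")
```

With the real indicators, the same loop would simply swap in the measured feature matrix and the full list of algorithms from the abstract.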

Inhalation Configuration Detection for COVID-19 Patient Secluded Observing using Wearable IoTs Platform

  • Sulaiman Sulmi Almutairi;Rehmat Ullah;Qazi Zia Ullah;Habib Shah
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 18, No. 6
    • /
    • pp.1478-1499
    • /
    • 2024
  • Coronavirus disease (COVID-19) is an infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus. COVID-19 became an active epidemic disease due to its spread around the globe, mainly through person-to-person interaction and the transmission of droplets from coughing and sneezing. The spread can be minimized by isolating susceptible patients; isolation, however, requires remote monitoring of the patient's breathing to keep interactions to a minimum. Thus, in this article, we offer a wearable-IoT-centered framework for remote monitoring and recognition of breathing patterns and detection of abnormal breathing, so that the required oxygen level can be provided in a timely manner. We propose wearable accelerometer- and gyroscope-based acquisition of breathing time-series data, temporal feature extraction, and machine learning algorithms for pattern detection and abnormality identification. The sensors transmit the data over Bluetooth to a server for further processing and recognition. We collected six breathing patterns from twenty subjects, each pattern recorded for about five minutes, and compared the prediction accuracies of all machine learning models under study (i.e., random forest, gradient boosting tree, decision tree, and K-nearest neighbor). Our results show that normal breathing and bradypnea are the most correctly recognized breathing patterns, and in some cases the algorithms also recognize Kussmaul breathing well. Collectively, the classification outcomes of the random forest and gradient boosting trees are better than those of the other two algorithms.
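The pipeline described above (windowed temporal features from inertial streams, then classifier comparison) can be sketched as follows; the sensor values are simulated oscillations, and the sampling rate, window length, and feature set are illustrative choices rather than the paper's configuration.

```python
# Hedged sketch: temporal features per window from a (simulated) breathing
# signal, then cross-validated comparison of two tree ensembles.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 50                 # assumed sampling rate (Hz)
window = 2 * fs         # 2-second windows

def extract_features(signal):
    """Simple temporal features per window: mean, std, min, max."""
    n = len(signal) // window
    w = signal[: n * window].reshape(n, window)
    return np.column_stack([w.mean(1), w.std(1), w.min(1), w.max(1)])

# Simulate two breathing patterns as slow vs. faster oscillations plus noise
t = np.arange(60 * fs) / fs
slow = np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.standard_normal(t.size)
fast = np.sin(2 * np.pi * 0.6 * t) + 0.1 * rng.standard_normal(t.size)

X = np.vstack([extract_features(slow), extract_features(fast)])
y = np.r_[np.zeros(len(X) // 2), np.ones(len(X) // 2)]  # pattern labels

for clf in (RandomForestClassifier(random_state=0),
            GradientBoostingClassifier(random_state=0)):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(acc, 3))
```

Real accelerometer/gyroscope data would replace the simulated signals, with one label per breathing pattern instead of two.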

Machine Learning Methods to Predict Vehicle Fuel Consumption

  • Ko, Kwangho
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 27, No. 9
    • /
    • pp.13-20
    • /
    • 2022
  • This study proposes machine learning techniques for predicting the real-time fuel consumption of a moving vehicle and analyzes their characteristics. For training, on-road driving tests were conducted to measure fuel consumption together with driving speed, acceleration, and road grade. Speed, acceleration, and road grade were specified as features and fuel consumption as the target to train various machine learning models: K-nearest-neighbor regression and linear regression as regression methods, and K-nearest-neighbor classification, logistic regression, decision tree, random forest, and gradient boosting as classification methods. Prediction accuracy for real-time fuel consumption was generally low, at the 0.5-0.6 level, with the regression methods less accurate than the classification methods. The prediction error for total fuel consumption was quite small, at the 0.2-2.0% level, with the regression methods showing lower error than the classification methods. This is because the coefficient of determination (R²) was used as the measure of prediction accuracy: the smaller this value, the more narrowly the predictions are distributed around the target mean. Therefore, the classification methods are suitable for predicting real-time fuel consumption, while the regression methods are suitable for predicting total fuel consumption.
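The abstract contrasts a regression framing with a classification framing of the same prediction problem. The sketch below mirrors that contrast on synthetic speed/acceleration/grade data; the fuel-consumption formula, the quantile bins, and the K-nearest-neighbor settings are illustrative assumptions, not the paper's measurements.

```python
# Minimal sketch: the same fuel-prediction task framed as regression
# (continuous fuel rate, scored by R2) and as classification (binned
# fuel rate, scored by accuracy), on synthetic data.
import numpy as np
from sklearn.metrics import accuracy_score, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

rng = np.random.default_rng(1)
n = 1000
speed = rng.uniform(0, 30, n)       # m/s
accel = rng.normal(0, 1, n)         # m/s^2
grade = rng.normal(0, 3, n)         # percent
X = np.column_stack([speed, accel, grade])

# Hypothetical fuel-rate model: rises with speed, acceleration, grade
fuel = (0.05 * speed + 0.4 * np.maximum(accel, 0)
        + 0.02 * grade + rng.normal(0, 0.2, n))

# Classification target: low / medium / high fuel rate by quantile bins
fuel_bin = np.digitize(fuel, np.quantile(fuel, [0.33, 0.66]))

Xtr, Xte, ytr, yte, btr, bte = train_test_split(X, fuel, fuel_bin,
                                                random_state=0)
r2 = r2_score(yte, KNeighborsRegressor().fit(Xtr, ytr).predict(Xte))
acc = accuracy_score(bte, KNeighborsClassifier().fit(Xtr, btr).predict(Xte))
print(f"regression R2 = {r2:.2f}, classification accuracy = {acc:.2f}")
```

Summing the regressor's per-sample predictions over a trip would give the total-consumption estimate the abstract evaluates separately.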

A Hybrid Multi-Level Feature Selection Framework for prediction of Chronic Disease

  • G.S. Raghavendra;Shanthi Mahesh;M.V.P. Chandrasekhara Rao
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 23, No. 12
    • /
    • pp.101-106
    • /
    • 2023
  • Chronic illnesses are among the most common serious problems affecting human health. Early diagnosis of chronic diseases can help avoid or mitigate their consequences, potentially decreasing mortality rates. Using machine learning algorithms to identify risk factors is a promising strategy. The issue with existing feature selection approaches is that each method yields a distinct set of properties that affect model correctness, and present methods do not perform well on huge multidimensional datasets. We introduce a novel model containing a feature selection approach that selects optimal characteristics from big multidimensional data sets to provide reliable predictions of chronic illnesses without sacrificing data uniqueness. To ensure the success of the proposed model, we balanced the classes by applying hybrid balanced-class sampling methods to the original dataset, along with data pre-processing and data transformation, to provide credible data for the training model. We ran and assessed the model on datasets with binary and multivalued classifications, using multiple datasets (Parkinson, arrhythmia, breast cancer, kidney, and diabetes). Suitable features are selected by a hybrid feature model consisting of LassoCV, decision tree, random forest, gradient boosting, AdaBoost, and stochastic gradient descent, with a vote taken over the attributes that are common outputs of these methods. The accuracy on the original dataset before applying the framework was recorded and evaluated against the accuracy on the reduced attribute set, and the results are shown separately for comparison. Based on the result analysis, we conclude that the proposed model produced higher accuracy on multivalued-class datasets than on binary-class datasets.
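The voting idea above (several selectors each nominate features, and attributes nominated by a majority are kept) can be sketched as follows. The dataset, the subset of selectors, and the nomination thresholds are illustrative assumptions, not the authors' exact framework.

```python
# Hedged sketch of voting-based hybrid feature selection: each method
# votes for the features it considers relevant, and majority winners
# form the reduced attribute set.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
votes = np.zeros(X.shape[1], dtype=int)

# LassoCV votes for features with non-zero coefficients
lasso = LassoCV(cv=3).fit(X, y)
votes += (np.abs(lasso.coef_) > 1e-6).astype(int)

# Tree-based selectors vote for above-median importances
for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    imp = model.fit(X, y).feature_importances_
    votes += (imp > np.median(imp)).astype(int)

# Keep attributes nominated by a majority of the four methods
selected = np.flatnonzero(votes >= 3)
print("selected feature indices:", selected)
```

A downstream classifier would then be trained on `X[:, selected]` and its accuracy compared against training on the full attribute set, as the abstract describes.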

The Effect of Input Variables Clustering on the Characteristics of Ensemble Machine Learning Model for Water Quality Prediction

  • 박정수
    • 한국물환경학회지
    • /
    • Vol. 37, No. 5
    • /
    • pp.335-343
    • /
    • 2021
  • Water quality prediction is essential for the proper management of water supply systems. Increased suspended sediment concentration (SSC) has various effects on water supply systems, such as increased treatment cost, and consequently there have been various efforts to develop a model for predicting SSC. However, SSC is affected by both the natural and anthropogenic environment, making it challenging to predict. Recently, advanced machine learning models have increasingly been used for water quality prediction. This study developed an ensemble machine learning model to predict SSC using the XGBoost (XGB) algorithm. The observed discharge (Q) and SSC at two field monitoring stations were used to develop the model. The input variables were clustered into two groups, with low and high ranges of Q, using the k-means clustering algorithm, and each group of data was used separately to optimize XGB (Model 1). The model performance was compared with that of an XGB model using the entire data set (Model 2). The models were evaluated by the root mean squared error-observation standard deviation ratio (RSR) and the root mean squared error. The RSR values for Model 2 were 0.51 and 0.57 at the two monitoring stations, respectively, while performance improved to RSR values of 0.46 and 0.55 for Model 1.
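The cluster-then-train idea above can be sketched as below, with scikit-learn's `GradientBoostingRegressor` standing in for XGBoost and synthetic Q/SSC data replacing the field observations; the rating-curve-shaped data generator and in-sample evaluation are assumptions for illustration only.

```python
# Minimal sketch: k-means splits discharge (Q) into low/high groups, one
# boosted-tree model per group, compared with a single model on all data,
# scored by RSR (RMSE / std of observations).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
Q = rng.lognormal(2, 1, 500)                    # synthetic discharge
ssc = 0.8 * Q ** 1.3 + rng.normal(0, 5, 500)    # SSC, rating-curve shape

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    return np.sqrt(mean_squared_error(obs, sim)) / obs.std()

X = Q.reshape(-1, 1)

# Model 2: one model trained on the entire data set
single = GradientBoostingRegressor(random_state=0).fit(X, ssc)
print("single model RSR:", round(rsr(ssc, single.predict(X)), 3))

# Model 1: k-means splits Q into two groups, one model per group
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
pred = np.empty_like(ssc)
for k in (0, 1):
    m = labels == k
    pred[m] = GradientBoostingRegressor(random_state=0).fit(
        X[m], ssc[m]).predict(X[m])
print("clustered models RSR:", round(rsr(ssc, pred), 3))
```

A faithful reproduction would hold out a test set and use the `xgboost` package, but the split-by-regime structure is the same.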

Comparative Analysis of Machine Learning Algorithms for Healthy Management of Collaborative Robots

  • 김재은;장길상;임국화
    • 대한안전경영과학회지
    • /
    • Vol. 23, No. 4
    • /
    • pp.93-104
    • /
    • 2021
  • In this paper, we propose a method for diagnosing the overload and working load of collaborative robots through a performance analysis of machine learning algorithms. To this end, an experiment was conducted in which a collaborative robot with a payload capacity of 10 kg performed pick-and-place operations while its payload weight was varied. Motor torque, position, and speed data generated by the robot controller were collected, and t-tests and f-tests revealed different characteristics for each weight relative to the 10 kg payload. In addition, to predict overload and working load from the collected data, machine learning algorithms such as neural network, decision tree, random forest, and gradient boosting models were tested. The neural network, with explanatory power above 99.6%, showed the best prediction and classification performance. The practical contribution of this study is that it suggests a way to collect the data required for analysis from the robot without attaching additional sensors, and demonstrates the usefulness of machine learning algorithms for diagnosing robot overload and working load.

The Analysis of the Activity Patterns of Dog with Wearable Sensors Using Machine Learning

  • ;;김희철
    • 한국정보통신학회:학술대회논문집
    • /
    • 한국정보통신학회 2021년도 춘계학술대회
    • /
    • pp.141-143
    • /
    • 2021
  • The activity patterns of animal species are difficult to access, and the behavior of freely moving individuals cannot be assessed by direct observation, so understanding the activity patterns of animals such as dogs and cats has become a major challenge. One approach to monitoring these behaviors is the continuous collection of data by human observers. In this study, we instead assess the activity patterns of dogs using wearable sensor data from an accelerometer and a gyroscope. A wearable, sensor-based system is suitable for this purpose and can monitor the dogs in real time. The basic goal of this study was to develop a system that detects activities from the accelerometer and gyroscope signals. We therefore propose a method based on data collected from 10 dogs of nine different breeds, of different sizes and ages, and of both genders. We applied six state-of-the-art classifiers: random forest (RF), support vector machine (SVM), gradient boosting machine (GBM), XGBoost, k-nearest neighbors (KNN), and a decision tree classifier. The random forest showed a good classification result, achieving an accuracy of 86.73% when detecting the activity.


In-situ stresses ring hole measurement of concrete optimized based on finite element and GBDT algorithm

  • Chen Guo;Zheng Yang;Yanchao Yue;Wenxiao Li;Hantao Wu
    • Computers and Concrete
    • /
    • Vol. 34, No. 4
    • /
    • pp.477-487
    • /
    • 2024
  • The in-situ stresses of concrete are an essential index for assessing the safety performance of concrete structures. Conventional methods for pore pressure release often face challenges in selecting drilling ring parameters, uncontrollable stress release, and unstable detection accuracy. In this paper, the parameters affecting the results of the concrete ring-hole stress release method are cross-combined, and finite elements are used to simulate the combined parameters and extract the stress release values to establish a training set. The GridSearchCV function is utilized to determine the optimal hyperparameters. The mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R²) are used as evaluation indexes to train the gradient boosting decision tree (GBDT) algorithm, which is compared against three other common algorithms. The RMSE of the GBDT algorithm on the test set is 4.499, and its R² is 0.962, which is 9.66% higher than the R² of the best-performing comparison algorithm. The model generated by the GBDT algorithm can accurately calculate the concrete in-situ stresses from the drilling ring parameters and the corresponding stress release values, and has high accuracy and generalization ability.
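The tuning-and-scoring workflow named above (GridSearchCV over a GBDT, evaluated by MAE, RMSE, and R²) can be sketched as below; the training set here is synthetic regression data, not the paper's finite-element stress-release values, and the hyperparameter grid is an illustrative assumption.

```python
# Hedged sketch: hyperparameter search for a gradient boosting decision
# tree with GridSearchCV, scored by MAE, RMSE, and R2 on a held-out set.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_regression(n_samples=400, n_features=5, noise=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

param_grid = {                # illustrative grid, not the paper's
    "n_estimators": [100, 200],
    "max_depth": [2, 3],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                      param_grid, cv=3)
search.fit(Xtr, ytr)

pred = search.predict(Xte)
print("best params:", search.best_params_)
print("MAE :", round(mean_absolute_error(yte, pred), 2))
print("RMSE:", round(np.sqrt(mean_squared_error(yte, pred)), 2))
print("R2  :", round(r2_score(yte, pred), 3))
```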

Evaluation of Rainfall Erosivity Factor Estimation Using Machine and Deep Learning Models

  • 이지민;이서로;이관재;김종건;임경재
    • 한국수자원학회:학술대회논문집
    • /
    • 한국수자원학회 2021년도 학술발표회
    • /
    • pp.450-450
    • /
    • 2021
  • According to climate change reports, the intensity and frequency of heavy rainfall events are projected to keep increasing over the coming years. More frequent heavy rainfall increases rainfall erosivity, making topsoil more vulnerable to erosion. The rainfall erosivity factor, one of the input parameters of the Universal Soil Loss Equation (USLE), expresses the influence of rainfall intensity when predicting soil loss. Previous studies estimated the rainfall erosivity factor with the USLE method, but because they used 60-minute rainfall data, they could not accurately compute the maximum 30-minute rainfall intensity. The purpose of this study is to develop machine learning models that predict the rainfall erosivity factor faster and more accurately than previous approaches, requiring only total monthly rainfall, maximum daily rainfall, and maximum hourly rainfall as inputs. To improve the accuracy of the estimated factor, 1-minute rainfall data were used, covering 2013-2019 to reflect recent rainfall patterns. First, monthly rainfall erosivity factors were computed with the USLE procedure to capture monthly characteristics, and the values computed for 50 stations in Korea were taken as observations for training the machine learning models. The models used were Decision Tree, Random Forest, K-Nearest Neighbors, Gradient Boosting, eXtreme Gradient Boost, and a Deep Neural Network. Cross-validation showed that the Deep Neural Network predicted the rainfall erosivity factor most accurately, with a Nash-Sutcliffe Efficiency (NSE) and coefficient of determination (R²) of 0.87 demonstrating its predictive power; six stations in Korea were then selected at random to test the validated model. With the Deep Neural Network developed here, monthly rainfall erosivity factors can be predicted for any desired station with far less effort and time, and Korean rainfall patterns can be analyzed efficiently. This is expected to be useful not only for indexing future soil erosion risk but also for establishing soil conservation plans and for identifying and prioritizing at-risk areas.
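The model comparison in this abstract is scored with the Nash-Sutcliffe Efficiency (NSE); a small self-contained implementation, with toy values rather than the study's erosivity data, looks like this.

```python
# Nash-Sutcliffe Efficiency: 1 means a perfect fit; 0 means the model is
# no better than always predicting the observed mean; negative is worse.
import numpy as np

def nse(obs, sim):
    """NSE = 1 - sum of squared errors / variance of observations."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([10.0, 20.0, 30.0, 40.0])
print(nse(obs, obs))                      # perfect fit -> 1.0
print(nse(obs, np.full(4, obs.mean())))   # mean predictor -> 0.0
```

Any of the models listed in the abstract could be ranked by passing their predictions for the 50 stations through this function.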


An Intelligent Game Theoretic Model With Machine Learning For Online Cybersecurity Risk Management

  • Alharbi, Talal
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 6
    • /
    • pp.390-399
    • /
    • 2022
  • Cyber security and resilience are phrases that describe the safeguarding of ICTs (information and communication technologies) from cyber-attacks and the mitigation of cyber event impacts. The sole purpose of risk models is the detection, analysis, and handling of risks, considering all relevant perceptions of them. The current research effort has resulted in a new paradigm for safeguarding services offered online that can be utilized by both service providers and customers. Rather than relying on detailed studies, this approach emphasizes task selection and execution that lead to successful risk treatment outcomes. The focus of this research was modelling intelligent CSGs (Cyber Security Games) using MLTs (machine learning techniques). By limiting mission risk, CSGs maximize the ability of systems to operate unhindered in cyber environments. The main components of the suggested framework are the threat and risk models, tailored to the special characteristics of online services and of the cyberspace environment. A risk management procedure is included in the framework: risk scores are computed by combining the probabilities of successful attacks with the findings of impact models that predict the consequences of cyber catastrophes. To assess successful attacks, models emulating defense against threats can be used in topologies. CSGs account for the widespread interconnectivity of cyber systems, which forces the defender to cover all multi-step attack paths, whereas attackers need only one path to succeed. CSGs are game-theoretic methods for identifying defense measures and reducing risks for systems, probing for maximum cyber risk using MiniMax game formulations. To detect impacts, the attacker player creates an attack tree for each state of the game using a modified Extreme Gradient Boosting Decision Tree (which looks numerous compromises ahead). Based on the findings, the proposed model provides a high level of security for the web sources used in the experiment.