• Title/Summary/Keyword: Decision Tree Technique


Development of Hypertension Predictive Model (고혈압 발생 예측 모형 개발)

  • Yong, Wang-Sik;Park, Il-Su;Kang, Sung-Hong;Kim, Won-Joong;Kim, Kong-Hyun;Kim, Kwang-Kee;Park, No-Yai
    • Korean Journal of Health Education and Promotion
    • /
    • v.23 no.4
    • /
    • pp.13-28
    • /
    • 2006
  • Objectives: This study used knowledge discovery and data mining algorithms to develop a hypertension predictive model for hypertension management, using the Korea National Health Insurance Corporation database (the insureds' screening and health care benefit data). Methods: The predictive power of the data mining algorithms was validated by comparing the performance of logistic regression, decision tree, and ensemble techniques. On the basis of internal and external validation, the logistic regression model performed best among the three techniques. Results: The logistic regression analysis suggested that the probability of hypertension was: lower for females than for males (OR=0.834); higher for persons aged 60 or above than for those below 40 (OR=4.628); higher for obese persons than for persons of normal weight (OR=2.103); higher for persons with high glucose levels than for those with normal levels (OR=1.086); higher for persons with a family history of hypertension than for those without (OR=1.512); and higher for persons who drank alcohol periodically than for those who did not (OR=1.037~1.291). Conclusions: This study identified several factors affecting the onset of hypertension using screening data. It is expected to contribute to building a national Hypertension Management System by providing representative results on the occurrence and care of hypertension.
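
As a rough illustration of the kind of three-way comparison the abstract describes, the sketch below fits a logistic regression, a decision tree, and a random forest (standing in for the unspecified ensemble technique) on a hypothetical screening table and reads odds ratios off the logistic coefficients. The file name and column names are assumptions, not the study's NHIC data.

```python
# Hypothetical sketch: comparing logistic regression, a decision tree, and an
# ensemble on a binary hypertension label. Data and columns are illustrative.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("screening_data.csv")          # hypothetical file
X = df[["sex", "age_group", "bmi", "glucose", "family_history", "alcohol"]]
y = df["hypertension"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=5),
    "ensemble": RandomForestClassifier(n_estimators=200),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(name, round(auc, 3))

# Odds ratios from the fitted logistic model: exp(coefficient) per feature.
print(dict(zip(X.columns, np.exp(models["logistic"].coef_[0]))))
```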

Algorithms for Handling Incomplete Data in SVM and Deep Learning (SVM과 딥러닝에서 불완전한 데이터를 처리하기 위한 알고리즘)

  • Lee, Jong-Chan
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.3
    • /
    • pp.1-7
    • /
    • 2020
  • This paper introduces two different techniques for dealing with incomplete data and algorithms for learning from such data. The first method handles the incomplete data by assigning the missing value with equal probability over the values the missing variable can take, and then learns the data with an SVM. With this technique, the higher the frequency of missing values for a variable, the higher its entropy, so that the variable is not selected in a decision tree. This method ignores all remaining information about the missing variable and simply assigns a new value. In contrast, the new method calculates the entropy-based probability from the remaining information, excluding the missing value, and uses it as an estimate of the missing variable. In other words, it uses the large amount of information that is not lost in the incomplete training data to recover some of the missing information, and then learns with deep learning. The two methods are evaluated by selecting one variable at a time from the training data and repeatedly comparing the results while varying the proportion of data lost in that variable.
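
The sketch below is a loose interpretation of the two imputation ideas described above, assuming categorical variables: uniform random filling that discards the observed information, versus filling from the empirical distribution of the observed values. It is a simplification, not the paper's exact entropy-based algorithm.

```python
# Rough sketch of two imputation strategies for a categorical variable:
# (1) replace a missing value by a uniformly random choice over its possible
#     values, ignoring how often each value actually occurs,
# (2) estimate it from the empirical distribution of the observed values.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def impute_uniform(col: pd.Series) -> pd.Series:
    values = col.dropna().unique()
    fill = rng.choice(values, size=col.isna().sum())   # every value equally likely
    out = col.copy()
    out[col.isna()] = fill
    return out

def impute_empirical(col: pd.Series) -> pd.Series:
    probs = col.value_counts(normalize=True)           # keep observed information
    fill = rng.choice(probs.index.to_numpy(), size=col.isna().sum(), p=probs.values)
    out = col.copy()
    out[col.isna()] = fill
    return out

col = pd.Series(["a", "b", "a", None, "a", None, "b"])
print(impute_uniform(col).tolist())
print(impute_empirical(col).tolist())
```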

Utilizing Purely Symmetric J Measure for Association Rules (연관성 규칙의 탐색을 위한 순수 대칭적 J 측도의 활용)

  • Park, Hee-Chang
    • Journal of the Korean Data Analysis Society
    • /
    • v.20 no.6
    • /
    • pp.2865-2872
    • /
    • 2018
  • In the field of data mining, there are various techniques such as association rules, cluster analysis, decision trees, and neural networks. Among them, association rules are defined by using various evaluation criteria such as support, confidence, and lift. Agrawal et al. (1993) first proposed association rules, and research on them has since been conducted by many scholars. Recently, studies related to crossover entropy have also been published (Park, 2016b). In this paper, we propose a purely symmetric J measure that incorporates directionality and purity into the previously published J measure, and examine its usefulness through examples. As a result, we find that the purely symmetric J measure changes more clearly than the conventional J measure, the symmetric J measure, and the pure crossover entropy measure as the frequency of coincidence increases. The variation of the purely symmetric J measure is also larger depending on the magnitude of discordance, and the presence or absence of an association is grasped more clearly.
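
For context, the snippet below computes support, confidence, lift, and the conventional J measure (Smyth and Goodman) from a toy 2x2 contingency table; the purely symmetric J measure proposed in the paper is not reproduced here.

```python
# Illustrative association-rule metrics for A -> B from a 2x2 contingency table.
import math

# counts: n11 = A and B, n10 = A and not B, n01 = not A and B, n00 = neither
n11, n10, n01, n00 = 40, 10, 20, 30
n = n11 + n10 + n01 + n00

p_a = (n11 + n10) / n          # support of antecedent A
p_b = (n11 + n01) / n          # support of consequent B
p_ab = n11 / n                 # joint support of the rule
conf = p_ab / p_a              # confidence of A -> B
lift = conf / p_b

# Conventional J measure (Smyth & Goodman) for the rule A -> B.
j = p_a * (conf * math.log2(conf / p_b)
           + (1 - conf) * math.log2((1 - conf) / (1 - p_b)))

print(f"support={p_ab:.2f} confidence={conf:.2f} lift={lift:.2f} J={j:.4f}")
```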

Ensemble Machine Learning Model Based YouTube Spam Comment Detection (앙상블 머신러닝 모델 기반 유튜브 스팸 댓글 탐지)

  • Jeong, Min Chul;Lee, Jihyeon;Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.5
    • /
    • pp.576-583
    • /
    • 2020
  • This paper proposes a technique for detecting spam comments on YouTube, which has recently seen tremendous growth. Because videos can be monetized through advertising, spammers post comments in popular videos to promote their own channels or videos, or leave comments unrelated to the video. YouTube operates its own spam blocking system, but it still fails to block spam properly and efficiently. Therefore, we reviewed related studies on YouTube spam comment screening and conducted classification experiments with six machine learning techniques (decision tree, logistic regression, Bernoulli Naive Bayes, random forest, support vector machine with a linear kernel, and support vector machine with a Gaussian kernel) and an ensemble model combining them, using comment data from popular music videos by Psy, Katy Perry, LMFAO, Eminem, and Shakira.
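
A minimal sketch of such a six-model voting ensemble in scikit-learn is shown below; the toy comments, TF-IDF features, and hyperparameters are illustrative assumptions rather than the authors' setup.

```python
# Hedged sketch of a comment-classification ensemble over the six model types
# listed in the abstract, combined by hard (majority) voting.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC

comments = ["check out my channel!!!", "great song", "free gift cards here", "love this video"]
labels = [1, 0, 1, 0]                     # 1 = spam, 0 = ham (toy data)

base = [
    ("dt", DecisionTreeClassifier()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("bnb", BernoulliNB()),
    ("rf", RandomForestClassifier(n_estimators=100)),
    ("svm_lin", SVC(kernel="linear")),
    ("svm_rbf", SVC(kernel="rbf")),
]
ensemble = make_pipeline(TfidfVectorizer(), VotingClassifier(base, voting="hard"))
ensemble.fit(comments, labels)
print(ensemble.predict(["subscribe to my channel", "nice beat"]))
```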

An effective automated ontology construction based on the agriculture domain

  • Deepa, Rajendran;Vigneshwari, Srinivasan
    • ETRI Journal
    • /
    • v.44 no.4
    • /
    • pp.573-587
    • /
    • 2022
  • The agricultural sector differs from other sectors in that it relies entirely on natural and climatic factors. Climate change has many effects on land and agriculture, including reduced annual rainfall, pests, heat waves, changes in sea level, and fluctuations in global ozone and atmospheric CO2, and it also affects the environment. Based on these factors, farmers choose their crops to increase productivity in their fields. Many existing agricultural ontologies are either domain-specific or have been created with minimal vocabulary, and no proper evaluation framework has been implemented. A new agricultural ontology focused on subdomains is designed to assist farmers using a Jaccard relative extractor (JRE) and the Naïve Bayes algorithm. The JRE is used to find the similarity between two sentences and words in the agricultural documents, and the relationship between two terms is identified via the Naïve Bayes algorithm. In the proposed method, the data are preprocessed with natural language processing techniques, and the dimension-reduced tags are subjected to rule-based formal concept analysis and mapping. The subdomain ontologies of weather, pest, and soil are built separately, and the overall agricultural ontology is built around them. The gold standard for the lexical layer is used to evaluate the proposed technique, and its performance is analyzed by comparing it with different state-of-the-art systems. Precision, recall, F-measure, Matthews correlation coefficient, receiver operating characteristic curve area, and precision-recall curve area are used as performance metrics. The proposed methodology gives a precision of 94.40% for agricultural ontology construction, compared with the decision tree (83.94%) and the K-nearest neighbor algorithm (86.89%).
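
As a small illustration of the Jaccard idea behind the JRE, the snippet below computes Jaccard similarity between the token sets of two sentences; it is not the authors' extractor.

```python
# Jaccard similarity between token sets: |intersection| / |union|.
def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

print(jaccard("heavy rainfall damages paddy crops",
              "paddy crops need heavy rainfall"))   # shared terms -> high similarity
```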

Nakdong River Estuary Salinity Prediction Using Machine Learning Methods (머신러닝 기법을 활용한 낙동강 하구 염분농도 예측)

  • Lee, Hojun;Jo, Mingyu;Chun, Sejin;Han, Jungkyu
    • Smart Media Journal
    • /
    • v.11 no.2
    • /
    • pp.31-38
    • /
    • 2022
  • Promptly predicting changes in river salinity is an important task for estimating the damage to agriculture and ecosystems caused by salinity intrusion and for establishing disaster prevention measures. Because machine learning (ML) methods have a much lower computational cost than physics-based hydraulic models, they can predict river salinity in a relatively short time. Owing to their shorter training time, ML methods have been studied as a complementary technique to physics-based hydraulic models. Many studies on ML-based salinity prediction have been conducted around the world, but few exist in South Korea. Using a massive amount of publicly available data, we evaluated the performance of various machine learning techniques for predicting the salinity of the Nakdong River Estuary Basin. As a result, the LightGBM algorithm achieved an average RMSE of 0.37 and a 2-20 times faster training speed than the other algorithms. This indicates that machine learning techniques can be applied to predict the salinity of rivers in Korea.
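
A minimal sketch of a LightGBM regressor evaluated by RMSE, in the spirit of the experiment above, is given below; the input file and predictor columns are assumptions, not the study's dataset.

```python
# Sketch: train a LightGBM regressor on assumed estuary observations and report RMSE.
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

df = pd.read_csv("nakdong_estuary.csv")            # hypothetical observation table
X = df[["discharge", "tide_level", "water_temp"]]  # assumed predictor columns
y = df["salinity"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"RMSE: {rmse:.2f}")
```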

Study on Soil Moisture Predictability using Machine Learning Technique (머신러닝 기법을 활용한 토양수분 예측 가능성 연구)

  • Jo, Bongjun;Choi, Wanmin;Kim, Youngdae;Kim, Kisung;Kim, Jonggun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.248-248
    • /
    • 2020
  • Soil moisture is one of the key variables closely related to water balance components such as evapotranspiration, runoff, and infiltration. Soil moisture varies spatially with soil characteristics, land use, and meteorological conditions, and shows temporal variability driven especially by weather conditions. Conventional soil moisture measurement relies on laboratory analysis of soil samples and field surveys with measurement equipment, both of which have time and cost limitations, while remote sensing covers wide areas but suffers from low temporal resolution. In addition, model-based soil moisture prediction requires expert knowledge and the construction of complex input data. Recently, machine learning techniques have been widely used to derive desired outputs by learning from large amounts of data. Accordingly, this study analyzes the predictability of soil moisture through iterative learning of machine learning methods using various meteorological factors related to soil moisture (precipitation, wind speed, humidity, etc.). To this end, machine learning techniques were applied to the Cheongmicheon and Seolmacheon watersheds, where observed soil moisture data are well established in space and time. Hydrological data for 2008-2012 were obtained for both sites, and meteorological data were collected from the Open MET Data Portal and WAMIS. Soil moisture and meteorological data were used to train the machine learning algorithms, and soil moisture was then predicted from the 2012 meteorological data. The machine learning methods used were Decision Tree, Multi-Layer Perceptron (MLP), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Random Forest, and Gradient Boosting. A heat map was used to analyze the correlation between soil moisture and meteorological factors. The heat map analysis showed that, among the meteorological variables, precipitation and relative humidity had the greatest influence on the temporal variation of soil moisture. In addition, when the machine learning methods were applied with the various meteorological factors, all methods except MLP produced results generally similar to the observations at both sites, and the comparison graphs showed similar trends between observed and predicted values. Therefore, it is judged that the temporal variation of soil moisture can be predicted with machine learning methods using correlated historical meteorological data.
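
The sketch below illustrates the workflow described above with scikit-learn and seaborn: a correlation heat map of assumed weather columns against soil moisture, followed by the six listed regressors trained on pre-2012 records and scored on 2012. The file and column names are hypothetical.

```python
# Illustrative soil-moisture workflow: correlation heat map, then six regressors.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

df = pd.read_csv("cheongmicheon_daily.csv")        # hypothetical 2008-2012 records
features = ["precipitation", "wind_speed", "humidity"]

sns.heatmap(df[features + ["soil_moisture"]].corr(), annot=True)  # correlation heat map
plt.show()

train, test = df[df.year < 2012], df[df.year == 2012]             # assumed 'year' column
models = {
    "DT": DecisionTreeRegressor(),
    "MLP": MLPRegressor(max_iter=2000),
    "KNN": KNeighborsRegressor(),
    "SVM": SVR(),
    "RF": RandomForestRegressor(),
    "GB": GradientBoostingRegressor(),
}
for name, m in models.items():
    m.fit(train[features], train["soil_moisture"])
    print(name, m.score(test[features], test["soil_moisture"]))   # R^2 on held-out year
```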


Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, DT ensembles have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown gains comparable to DT ensembles. Recently, several works have reported that ensemble performance can be degraded when the classifiers of an ensemble are highly correlated, resulting in a multicollinearity problem that leads to performance degradation, and have proposed differentiated learning strategies to cope with this problem. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that it contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms, but yields no remarkable improvement for stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learning algorithms can therefore guarantee some diversity among the classifiers. In contrast, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation causes a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared bankruptcy prediction for Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, whereas, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically shows that the performance degradation of the ensemble is due to the multicollinearity problem, and it proposes that ensemble optimization is needed to cope with such a problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble to guarantee the diversity of the classifiers. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction show that CO-NN stably enhances the performance of NN ensembles by choosing classifiers while considering their correlations. The classifiers with potential multicollinearity problems are removed by the coverage optimization process of CO-NN, and thereby CO-NN shows higher performance than a single NN classifier and an NN ensemble at the 1% significance level, and than a DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find an optimal combination function should be considered in further research. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
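
The sketch below illustrates the general idea of GA-based coverage optimization under a VIF constraint, applied to toy classifier outputs: a binary chromosome selects a sub-ensemble, fitness is majority-vote accuracy, and subsets whose member predictions are highly collinear are penalized. It is a simplified stand-in for CO-NN, not the authors' Excel/Evolver implementation.

```python
# Simplified GA-based sub-ensemble selection with a VIF penalty (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def vif_max(preds: np.ndarray) -> float:
    """Largest variance inflation factor among the prediction columns."""
    k = preds.shape[1]
    if k < 2:
        return 1.0
    vifs = []
    for i in range(k):
        y = preds[:, i]
        X = np.column_stack([np.delete(preds, i, axis=1), np.ones(len(y))])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        ss_res, ss_tot = resid @ resid, ((y - y.mean()) ** 2).sum()
        r2 = 1 - ss_res / ss_tot if ss_tot > 0 else 0.0
        vifs.append(1.0 / max(1 - r2, 1e-6))
    return max(vifs)

def fitness(mask, preds, y_true, vif_limit=10.0):
    if mask.sum() == 0:
        return 0.0
    sel = preds[:, mask.astype(bool)]
    vote = (sel.mean(axis=1) >= 0.5).astype(int)          # majority vote of selected members
    acc = (vote == y_true).mean()
    return acc if vif_max(sel) <= vif_limit else acc - 0.5  # penalize collinear subsets

def ga_select(preds, y_true, pop=20, gens=30):
    n = preds.shape[1]
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(ind, preds, y_true) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]   # truncation selection
        children = parents.copy()
        cx = rng.integers(1, n, size=len(children))
        for c, point, mate in zip(children, cx, rng.permutation(len(parents))):
            c[point:] = parents[mate][point:]                  # one-point crossover
            flip = rng.random(n) < 0.05                        # bit-flip mutation
            c[flip] ^= 1
        population = np.vstack([parents, children])
    scores = np.array([fitness(ind, preds, y_true) for ind in population])
    return population[scores.argmax()]

# Toy example: 8 "classifiers" output 0/1 predictions for 200 validation cases.
y = rng.integers(0, 2, 200)
preds = np.array([(y ^ (rng.random(200) < err)).astype(int)
                  for err in np.linspace(0.1, 0.4, 8)]).T
print("selected members:", ga_select(preds, y))
```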

Landslide Susceptibility Mapping by Comparing GIS-based Spatial Models in the Java, Indonesia (GIS 기반 공간예측모델 비교를 통한 인도네시아 자바지역 산사태 취약지도 제작)

  • Kim, Mi-Kyeong;Kim, Sangpil;Nho, Hyunju;Sohn, Hong-Gyoo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.37 no.5
    • /
    • pp.927-940
    • /
    • 2017
  • Landslides have been a major disaster in Indonesia, and recent climate change and indiscriminate urban development around the mountains have increased landslide risks. Java Island, where more than half of Indonesia's population lives, suffers a great deal of damage from frequent landslides. Even in this dangerous situation, the number of inhabitants residing in landslide-prone areas increases year by year, so techniques for analyzing landslide-hazardous and vulnerable areas need to be developed. In this regard, this study evaluates the landslide susceptibility of Java using GIS-based spatial prediction models. We constructed a geospatial database of landslide locations, topography, hydrology, soil type, and land cover over the study area and created spatial prediction models by applying Weight of Evidence (WoE), a decision tree algorithm, and an artificial neural network. The three models showed prediction accuracies of 66.95%, 67.04%, and 69.67%, respectively. The results are expected to be useful for preventing future landslide damage and for landslide disaster management policies in Indonesia.
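
As a small worked example of the Weight of Evidence calculation used here, the snippet below computes W+, W-, and the contrast for a single evidence class (e.g. a slope bin) from made-up cell counts.

```python
# Weight of Evidence for one evidence class: compare landslide frequency
# inside vs. outside the class. Counts are illustrative.
import math

def weight_of_evidence(n_class_slide, n_class, n_slide, n_total):
    """W+, W-, and contrast for a single evidence class over n_total cells."""
    p_b_given_l = n_class_slide / n_slide                          # class present, landslide
    p_b_given_not_l = (n_class - n_class_slide) / (n_total - n_slide)
    w_plus = math.log(p_b_given_l / p_b_given_not_l)
    w_minus = math.log((1 - p_b_given_l) / (1 - p_b_given_not_l))
    return w_plus, w_minus, w_plus - w_minus                        # contrast

print(weight_of_evidence(n_class_slide=60, n_class=1000, n_slide=200, n_total=10000))
```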

Evaluation of Suitable REDD+ Sites Based on Multiple-Criteria Decision Analysis (MCDA): A Case Study of Myanmar

  • Park, Jeongmook;Sim, Woodam;Lee, Jungsoo
    • Journal of Forest and Environmental Science
    • /
    • v.34 no.6
    • /
    • pp.461-471
    • /
    • 2018
  • In this study, deforestation and forest degradation areas in Myanmar were obtained using a land cover map (LCM) and a tree cover map (TCM) to estimate the CO2 reduction potential, and the strength of occurrence was evaluated using a geostatistical technique. By applying a multiple-criteria decision-making method to the regions with a high strength of occurrence of CO2 reduction potential in the deforestation and forest degradation areas, priority candidate sites for the REDD+ project were selected. The areas of deforestation and forest degradation from 2010 to 2015 were 609,690 ha and 43,515 ha, respectively. By township, Mong Kung had the largest deforestation area at 3,069 ha, while Thlangtlang had the largest forest degradation area at 9,213 ha. There were 15 CO2 reduction potential hotspot areas among the deforestation areas, with an average CO2 reduction potential of 192,000 tons, 6 times higher than that of all target areas. In particular, the township of Hsipaw in the Shan region had a CO2 reduction potential of about 772,000 tons, the largest among the hotspot areas. Many CO2 reduction potential hotspot areas among the forest degradation areas were located in the eastern part of the target region, with a CO2 reduction potential of 1,164,000 tons, 27 times higher than that of the total area. The AHP importance analysis gave the topographic characteristics a weight of 0.41 (0.40 for height above the surface, 0.29 for slope, and 0.31 for distance from water) and the geographic characteristics a weight of 0.59 (0.56 for distance from road, 0.56 for distance from settlement area, and 0.19 for distance from the capital). Yawunghwe, Kalaw, and Hsi Hseng were selected as the preferred REDD+ candidate locations for the deforestation area, while Einme, Tiddim, and Falam were selected as the preferred locations for the forest degradation area.
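
As a brief illustration of how AHP importance weights like those above are derived, the sketch below computes priority weights from an example pairwise comparison matrix via the principal eigenvector; the matrix values are illustrative, not the study's judgments.

```python
# AHP priority weights from a pairwise comparison matrix (example values).
import numpy as np

# pairwise comparisons among three criteria (e.g. slope, elevation, distance to water)
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                         # normalized priority weights

ci = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index
print("weights:", np.round(weights, 3), "CI:", round(ci, 3))
```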