• Title/Summary/Keyword: Random forests

Study on the ensemble methods with kernel ridge regression

  • Kim, Sun-Hwa; Cho, Dae-Hyeon; Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / v.23 no.2 / pp.375-383 / 2012
  • The purpose of ensemble methods is to increase prediction accuracy by combining many classifiers. Recent studies have shown that random forests and forward stagewise regression achieve good accuracy in classification problems. However, they suffer large prediction errors near separation boundaries because they use decision trees as base learners. In this study, we use kernel ridge regression instead of decision trees in random forests and boosting. The usefulness of our proposed ensemble methods is demonstrated through simulation results on the prostate cancer and Boston housing data.
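The core idea, swapping the tree base learner for kernel ridge regression inside a bagging ensemble, can be sketched as follows. The synthetic data, kernel, and hyperparameters are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Toy 1-D regression problem; tree ensembles tend to show larger
# errors near sharp boundaries, which motivates a smooth base learner.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

def bagged_kernel_ridge(X, y, n_estimators=25):
    """Bagging with kernel ridge regression as the base learner
    (in place of the usual decision tree)."""
    models = []
    n = len(y)
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)  # bootstrap resample
        m = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5)
        m.fit(X[idx], y[idx])
        models.append(m)
    return models

def ensemble_predict(models, X):
    # Average the base-learner predictions, as in bagging.
    return np.mean([m.predict(X) for m in models], axis=0)

models = bagged_kernel_ridge(X, y)
X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
pred = ensemble_predict(models, X_test)
```

Boosting with a kernel ridge base learner would follow the same pattern, fitting each new model to the residuals of the current ensemble.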

Medical Image Classification and Keyword Annotation Using Combination of Random Forests and Relation Weight (Random Forests와 관계 가중치 결합을 이용한 의료 영상 분류 및 주석 자동 생성)

  • Lee, Ji-hyun; Kim, Seong-hoon; Ko, Byoung-chul; Nam, Jae-Yeal
    • Proceedings of the Korea Information Processing Society Conference / 2010.11a / pp.596-598 / 2010
  • This paper presents a method that classifies X-ray medical images and generates multiple keywords according to the classification results. Since most X-ray images are grayscale, Local Binary Patterns (LBP) are used to extract inter-pixel correlations as features, and a Random Forests classifier capable of real-time training and classification assigns the images to 30 classes. In addition, predefined relation weights between body parts are combined with the classification scores to produce confidence values, based on which multiple annotations are attached to each image. These multiple annotations enable keyword-based medical image retrieval, providing an easier and more efficient search environment.
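A minimal sketch of the pipeline described above: a hand-rolled 8-neighbour LBP histogram as the feature, a random forest classifier, and classification scores multiplied by predefined relation weights. The synthetic images, two-class setup, and weight values are illustrative assumptions, not the paper's 30-class X-ray data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def lbp_histogram(img):
    """Minimal 8-neighbour Local Binary Pattern: each interior pixel is
    coded by thresholding its 8 neighbours against it; the 256-bin
    histogram of the codes serves as the texture feature."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

# Synthetic stand-ins for two image classes: smooth gradients vs noise
# (the paper classifies real X-ray images into 30 classes).
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))
imgs_a = [smooth + rng.normal(0, 0.01, (32, 32)) for _ in range(40)]
imgs_b = [rng.uniform(0, 1, (32, 32)) for _ in range(40)]
X = np.array([lbp_histogram(im) for im in imgs_a + imgs_b])
y = np.array([0] * 40 + [1] * 40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Combine classification scores with predefined relation weights
# (illustrative values) to obtain annotation confidences.
relation_weight = np.array([1.0, 0.8])
confidence = clf.predict_proba(X[:1])[0] * relation_weight
```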

A Prediction Model for the Development of Cataract Using Random Forests (Random Forests 기법을 이용한 백내장 예측모형 - 일개 대학병원 건강검진 수검자료에서 -)

  • Han, Eun-Jeong; Song, Ki-Jun; Kim, Dong-Geon
    • The Korean Journal of Applied Statistics / v.22 no.4 / pp.771-780 / 2009
  • Cataract is the main cause of blindness and visual impairment; in particular, age-related cataract accounts for about half of the 32 million cases of blindness worldwide. As life expectancy increases and the elderly population expands, cataract cases increase as well, imposing a serious economic and social burden nationwide. However, the incidence of cataract can be reduced dramatically through early diagnosis and prevention. In this study, we developed a prediction model for early diagnosis of cataract using hospital data on 3,237 subjects who first received a screening test and later visited the medical center for cataract check-ups between 1994 and 2005. To develop the prediction model, we used random forests and compared its predictive performance with that of other common discriminant models such as logistic regression, discriminant analysis, decision trees, naive Bayes, and two popular ensemble models, bagging and arcing. The accuracy of random forests was 67.16%, its sensitivity was 72.28%, and the main factors included in the model were age, diabetes, WBC, platelet count, triglycerides, BMI, and so on. The results showed that about 70% of cataract cases could be predicted from the screening test alone, without any information from a direct eye examination by an ophthalmologist. We expect that our model may contribute to diagnosing cataract and help prevent it in its early stages.
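The comparison described above can be sketched with scikit-learn; synthetic data stand in for the screening records, and the metrics mirror the reported accuracy and sensitivity. All numbers produced here are illustrative, not the study's results:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the screening data (the real study used 3,237
# health-screening records with features such as age, diabetes, WBC).
X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
lr = LogisticRegression(max_iter=1000)
for model in (rf, lr):
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(type(model).__name__,
          "accuracy=%.3f" % accuracy_score(y_te, pred),
          "sensitivity=%.3f" % recall_score(y_te, pred))

# Variable importance plays the role of the study's "main factors":
# rank the screening features by their contribution to the forest.
ranking = np.argsort(rf.feature_importances_)[::-1]
```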

Compromising Multiple Objectives in Production Scheduling: A Data Mining Approach

  • Hwang, Wook-Yeon; Lee, Jong-Seok
    • Management Science and Financial Engineering / v.20 no.1 / pp.1-9 / 2014
  • In multi-objective scheduling problems, the objectives are usually in conflict. To obtain a satisfactory compromise and cope with NP-hardness, most existing works suggest meta-heuristic methods such as genetic algorithms. In this research, we propose a novel data-driven approach for generating a single solution that compromises multiple rules pursuing different objectives. The proposed method uses a data mining technique, namely random forests, to extract the logic of several historic schedules and aggregate it. Since it involves learning predictive models, future schedules with the same objectives can be obtained easily and quickly by applying new production data to the models. The proposed approach is illustrated with a simulation study, where it successfully produces a new solution with balanced scheduling performance.
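One way to realize the idea of learning dispatch logic from historic schedules is to pool decision examples from schedules built under different rules and train a random forest to score queued jobs. This is a sketch under assumed features and rules (SPT, EDD), not the paper's exact formulation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Each row describes one queued job at a dispatch decision point:
# (processing time, due-date slack, arrival rank), scaled to [0, 1];
# the label is 1 if that job was the one dispatched in the historic
# schedule. Rows generated under different rules (SPT minimizes flow
# time, EDD minimizes tardiness) are pooled so the learned model
# compromises both objectives.
def historic_rows(rule, n=300):
    feats = rng.uniform(0, 1, size=(n, 3))
    key = feats[:, 0] if rule == "SPT" else feats[:, 1]  # smaller = dispatch
    labels = (key < np.quantile(key, 0.2)).astype(int)
    return feats, labels

Xa, ya = historic_rows("SPT")
Xb, yb = historic_rows("EDD")
X, y = np.vstack([Xa, Xb]), np.concatenate([ya, yb])

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Aggregated dispatch rule: among the currently queued jobs, pick the
# one the model considers most likely to be dispatched.
queue = rng.uniform(0, 1, size=(5, 3))
next_job = int(np.argmax(rf.predict_proba(queue)[:, 1]))
```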

Purchase Prediction by Analyzing Users' Online Behaviors Using Machine Learning and Information Theory Approaches

  • Kim, Minsung; Im, Il; Han, Sangman
    • Asia Pacific Journal of Information Systems / v.26 no.1 / pp.66-79 / 2016
  • The availability of detailed data on customers' online behaviors and advances in big-data analysis techniques enable us to predict consumer behavior. In the past, researchers built purchase prediction models by analyzing clickstream data; however, these clickstream-based prediction models have had several limitations. In this study, we propose a new method for purchase prediction that combines information theory with machine learning techniques. Clickstreams from 5,000 panel members and data on their purchases of electronics, fashion, and cosmetics products were analyzed. Clickstreams were summarized using the 'entropy' concept from information theory, while the 'random forests' method was applied to build the prediction models. The results show that the prediction accuracy of this new method ranges from 0.56 to 0.83, a significant improvement over values reported for past clickstream-based prediction models. The results further indicate that consumers' information search behaviors differ significantly across product categories.
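The entropy summary of a clickstream can be computed directly from a user's page-visit distribution; such values would then serve as features for the random forest. A minimal sketch (page names are made up):

```python
import numpy as np
from collections import Counter

def clickstream_entropy(pages):
    """Shannon entropy H = -sum p_i * log2(p_i) of a user's page-visit
    distribution: low entropy means focused browsing, high entropy
    means diffuse browsing across many pages."""
    counts = np.array(list(Counter(pages).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical clickstreams.
focused = ["tv", "tv", "tv", "tv", "cart"]
diffuse = ["tv", "shoes", "lipstick", "phone", "sofa"]
print(clickstream_entropy(focused))  # ≈ 0.722 bits
print(clickstream_entropy(diffuse))  # = log2(5) ≈ 2.322 bits, maximal
```

Per-session entropy values, together with purchase labels, would then form the training table for a random forest classifier.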

Generalized Partially Linear Additive Models for Credit Scoring

  • Shim, Ju-Hyun; Lee, Young-K.
    • The Korean Journal of Applied Statistics / v.24 no.4 / pp.587-595 / 2011
  • Credit scoring is an objective and automatic system for assessing the credit risk of each customer. The logistic regression model is one of the popular credit-scoring methods for predicting the default probability; however, despite its advantages of interpretability and low computational cost, it may fail to detect possible nonlinear features of the predictors. In this paper, we propose a generalized partially linear model as an alternative to logistic regression. We also introduce modern ensemble techniques such as bagging, boosting, and random forests. We compare these methods via a simulation study and illustrate them on a German credit dataset.

Visualizing Multi-Variable Prediction Functions by Segmented k-CPG's

  • Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods / v.16 no.1 / pp.185-193 / 2009
  • Machine learning methods such as support vector machines and random forests yield nonparametric prediction functions of the form $y = f(x_1,\ldots,x_p)$. As a sequel to the previous article (Huh and Lee, 2008) on visualizing nonparametric functions, I propose more sensible graphs for visualizing $y = f(x_1,\ldots,x_p)$, which have two clear advantages over the previous simple graphs. The new graphs show a small number of prototype curves of $f(x_1,\ldots,x_{j-1},x_j,x_{j+1},\ldots,x_p)$, revealing the statistically plausible portion of the function over the interval of $x_j$ as ($x_1,\ldots,x_{j-1},x_{j+1},\ldots,x_p$) varies. To complement the visual display, matching importance measures for each of the p predictor variables are produced. The proposed graphs and importance measures are validated in simulated settings and demonstrated in an environmental study.
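The idea of tracing f over the interval of $x_j$ while the remaining predictors are held at observed values can be sketched as ICE-style curves; this is a generic illustration, not the segmented k-CPG algorithm itself:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Fit a black-box prediction function y = f(x1, x2, x3).
X = rng.uniform(-2, 2, size=(300, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def curves_over_xj(model, X, j, grid_size=30, n_curves=5):
    """For a few sampled observations, vary predictor x_j over its
    observed range while holding the other predictors fixed, producing
    one prediction curve per observation."""
    grid = np.linspace(X[:, j].min(), X[:, j].max(), grid_size)
    rows = X[rng.choice(len(X), size=n_curves, replace=False)]
    out = []
    for row in rows:
        Xg = np.tile(row, (grid_size, 1))
        Xg[:, j] = grid
        out.append(model.predict(Xg))
    return grid, np.array(out)

grid, curves = curves_over_xj(model, X, j=0)
# A crude matching importance measure: how much the prediction moves
# over the range of x_j, averaged across the sampled curves.
importance_j = float(np.ptp(curves, axis=1).mean())
```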

A review of tree-based Bayesian methods

  • Linero, Antonio R.
    • Communications for Statistical Applications and Methods / v.24 no.6 / pp.543-559 / 2017
  • Tree-based regression and classification ensembles form a standard part of the data-science toolkit. Many commonly used methods take an algorithmic view, proposing greedy methods for constructing decision trees; examples include the classification and regression trees algorithm, boosted decision trees, and random forests. Recent history has seen a surge of interest in Bayesian techniques for constructing decision tree ensembles, with these methods frequently outperforming their algorithmic counterparts. The goal of this article is to survey the landscape surrounding Bayesian decision tree methods, and to discuss recent modeling and computational developments. We provide connections between Bayesian tree-based methods and existing machine learning techniques, and outline several recent theoretical developments establishing frequentist consistency and rates of convergence for the posterior distribution. The methodology we present is applicable for a wide variety of statistical tasks including regression, classification, modeling of count data, and many others. We illustrate the methodology on both simulated and real datasets.

Identifying SDC-Causing Instructions Based on Random Forests Algorithm

  • Liu, LiPing; Ci, LinLin; Liu, Wei; Yang, Hui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1566-1582 / 2019
  • Silent Data Corruptions (SDCs) are a serious reliability issue in many domains of computer systems. Identifying and protecting the program instructions that cause SDCs is currently a research hotspot in the computer-reliability field. Many solutions have already been proposed for this problem; however, most are hard to apply widely because they are time-consuming and expensive. This paper proposes an intelligent approach named SDCPredictor to identify the instructions that cause SDCs. SDCPredictor identifies SDC-causing instructions by analyzing the static and dynamic features of instructions rather than by fault injection. The experimental results demonstrate that SDCPredictor is highly accurate in predicting SDC proneness and achieves higher fault coverage than previous similar techniques at a moderate time cost.

Assessment of Carbon Sequestration Potential in Degraded and Non-Degraded Community Forests in Terai Region of Nepal

  • Joshi, Rajeev; Singh, Hukum; Chhetri, Ramesh; Yadav, Karan
    • Journal of Forest and Environmental Science / v.36 no.2 / pp.113-121 / 2020
  • This study was carried out in degraded and non-degraded community forests (CF) in the Terai region of Kanchanpur district, Nepal. A total of 63 concentric sample plots of 500 m² each were laid out in the inventory for estimating the above- and below-ground biomass of the forests, using systematic random sampling with a sampling intensity of 0.5%. Mallotus philippinensis and Shorea robusta were the most dominant species in the degraded and non-degraded CF, with Importance Value Index (IVI) scores of 97.16 and 178.49, respectively. Above-ground tree biomass carbon in the degraded and non-degraded community forests was 74.64±16.34 t ha⁻¹ and 163.12±20.23 t ha⁻¹, respectively. Soil carbon sequestration in the degraded and non-degraded community forests was 42.55±3.10 t ha⁻¹ and 54.21±3.59 t ha⁻¹, respectively. Hence, the estimated total carbon stock was 152.68±22.95 t ha⁻¹ and 301.08±27.07 t ha⁻¹ in the degraded and non-degraded community forests, respectively; carbon sequestration in the non-degraded community forest was thus 1.97 times higher than in the degraded one. The CO2 equivalent in the degraded and non-degraded community forests was 553 t ha⁻¹ and 1105 t ha⁻¹, respectively. Statistical analysis showed a significant difference between the degraded and non-degraded community forests in total biomass and carbon sequestration potential (p<0.05). The results indicate that community forests have huge potential and can secure economic benefits from carbon trading under the REDD+/CDM mechanism, promoting the sustainable conservation of community forests.
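The CO2-equivalent figures follow from the standard molecular-weight conversion CO2/C = 44/12 ≈ 3.67; applying it to the reported carbon stocks approximately reproduces the paper's values (small differences presumably reflect rounding of intermediate figures):

```python
# Standard conversion from carbon stock to CO2 equivalent: multiply by
# the molecular-weight ratio CO2/C = 44/12 ≈ 3.67.
CO2_PER_C = 44.0 / 12.0

degraded_c = 152.68      # t/ha, total carbon stock, degraded CF
non_degraded_c = 301.08  # t/ha, non-degraded CF

print(round(degraded_c * CO2_PER_C))      # 560, vs. 553 reported
print(round(non_degraded_c * CO2_PER_C))  # 1104, vs. 1105 reported
```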