
Tree-Structured Nonlinear Regression

  • Received : 2011.06
  • Reviewed : 2011.08
  • Published : 2011.10.31

Abstract

Tree algorithms have been widely developed for regression problems. One attractive feature of a regression tree is its flexibility of fitting: it can capture the nonlinearity in data well. In particular, data with sudden structural breaks, such as oil prices and exchange rates, can be fitted well by a simple mixture of a few piecewise linear regression models. Because the split points are determined by chi-squared statistics computed from the residuals of the piecewise linear fits, and the split variable is chosen by an objective criterion, the resulting fit agrees well with a visual interpretation of the data. Piecewise linear regression by a regression tree is therefore a useful fitting method and can be applied to datasets with large fluctuations.
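The split mechanism described in the abstract can be sketched in code. The following Python snippet is a minimal illustration, not the authors' implementation: in the spirit of GUIDE (Loh, 2002), it fits a linear model at each node, selects the split variable by a chi-squared test of association between the residual signs and each candidate variable dichotomized at its median, and splits at that median. The function names, stopping rules, and cut-point choice are illustrative assumptions.

```python
# Minimal sketch (assumption, not the authors' code) of a piecewise linear
# regression tree: fit a linear model at each node, pick the split variable
# by a chi-squared test on residual signs, split at the median, and recurse.
import numpy as np
from scipy.stats import chi2_contingency


def fit_linear(X, y):
    # Least-squares fit with an intercept term.
    Xa = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return beta


def predict_linear(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta


def choose_split_variable(X, resid):
    # Chi-squared statistic of residual sign vs. each variable dichotomized
    # at its median; the variable with the largest statistic is selected.
    sign = resid >= 0
    best_j, best_stat = None, 0.0
    for j in range(X.shape[1]):
        group = X[:, j] >= np.median(X[:, j])
        table = np.array([[np.sum(sign & group), np.sum(sign & ~group)],
                          [np.sum(~sign & group), np.sum(~sign & ~group)]])
        if (table.sum(axis=0) == 0).any() or (table.sum(axis=1) == 0).any():
            continue  # degenerate table: no evidence of lack of fit
        stat = chi2_contingency(table)[0]
        if stat > best_stat:
            best_j, best_stat = j, stat
    return best_j


def grow(X, y, depth=0, max_depth=3, min_leaf=20):
    beta = fit_linear(X, y)
    resid = y - predict_linear(beta, X)
    if depth >= max_depth or len(y) < 2 * min_leaf:
        return {"beta": beta}                      # leaf: one linear piece
    j = choose_split_variable(X, resid)
    if j is None:
        return {"beta": beta}
    cut = np.median(X[:, j])
    left = X[:, j] < cut
    if left.sum() < min_leaf or (~left).sum() < min_leaf:
        return {"beta": beta}
    return {"var": j, "cut": cut,
            "left": grow(X[left], y[left], depth + 1, max_depth, min_leaf),
            "right": grow(X[~left], y[~left], depth + 1, max_depth, min_leaf)}


def predict(node, x):
    # Route a single observation to its leaf and apply that leaf's model.
    while "var" in node:
        node = node["left"] if x[node["var"]] < node["cut"] else node["right"]
    return float(np.concatenate(([1.0], x)) @ node["beta"])
```

Applied to, say, an exchange-rate series with a time index as the predictor, such a tree returns a small number of linear segments whose cut points play the role of the structural breaks mentioned in the abstract.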

Keywords

References

  1. Breiman, L. (1996). Bagging predictors, Machine Learning, 24, 123-140.
  2. Breiman, L. (2001). Random forests, Machine Learning, 45, 5-32. https://doi.org/10.1023/A:1010933404324
  3. Breiman, L., Friedman, J., Stone, C. and Olshen, R. A. (1984). Classification and Regression Trees, 1st Edition, Chapman & Hall/CRC.
  4. Chang, Y. (2010). The analysis of factors which affect business survey index using regression trees, The Korean Journal of Applied Statistics, 23, 63-71. https://doi.org/10.5351/KJAS.2010.23.1.063
  5. Kim, H., Loh, W.-Y., Shih, Y.-S. and Chaudhuri, P. (2006). A visualizable and interpretable regression model with good prediction power, IIE Transactions, Special Issue on Data Mining and Web Mining.
  6. Loh, W.-Y. (2002). Regression trees with unbiased variable selection and interaction detection, Statistica Sinica, 12, 361-386.
  7. Loh, W.-Y. (2008). Regression by parts: Fitting visually interpretable models with GUIDE, In Handbook of Data Visualization, C. Chen, W. Hardle, and A. Unwin, Eds. Springer, 447-469.
  8. Strobl, C., Boulesteix, A.-L., Zeileis, A. and Hothorn, T. (2007). Bias in random forest variable importance measures: Illustrations, sources and a solution, BMC Bioinformatics, 8, 25. https://doi.org/10.1186/1471-2105-8-25

Cited by

  1. Panel data analysis with regression trees, vol.25, no.6, 2014, https://doi.org/10.7465/jkdi.2014.25.6.1253
  2. Variable selection with quantile regression tree, vol.29, no.6, 2016, https://doi.org/10.5351/KJAS.2016.29.6.1095
  3. An analysis of changes in the influence of GDP gap on inflation, vol.26, no.6, 2015, https://doi.org/10.7465/jkdi.2015.26.6.1377
  4. Estimation of Nonlinear Impulse Responses of Stock Indices by Asset Class, vol.25, no.2, 2012, https://doi.org/10.5351/KJAS.2012.25.2.239