1. Kim, H., Kim, H., Moon, H. and Ahn, H. (2011). A weight-adjusted voting algorithm for ensembles of classifiers, Journal of the Korean Statistical Society, 40, 437-449.
2. Efron, B. and Tibshirani, R. (1986). Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy, Statistical Science, 1, 54-75.
3. Freund, Y. and Schapire, R. (1996). Experiments with a new boosting algorithm, In Proceedings of the Thirteenth International Conference on Machine Learning, 148-156.
4. Heinz, G., Peterson, L. J., Johnson, R. W. and Kerk, C. J. (2003). Exploring relationships in body dimensions, Journal of Statistics Education, 11, http://www.amstat.org/publications/jse/v11n2/datasets.heinz.html.
5. Ho, T. K., Hull, J. J. and Srihari, S. N. (1994). Decision combination in multiple classifier systems, IEEE Transactions on Pattern Analysis and Machine Intelligence, 16, 66-75.
6. Hothorn, T. and Lausen, B. (2003). Double-bagging: Combining classifiers by bootstrap aggregation, Pattern Recognition, 36, 1303-1309.
7. Kim, H. and Loh, W. Y. (2001). Classification trees with unbiased multiway splits, Journal of the American Statistical Association, 96, 589-604.
8. Kim, H. and Loh, W. Y. (2003). Classification trees with bivariate linear discriminant node models, Journal of Computational and Graphical Statistics, 12, 512-530.
9. Liaw, A. and Wiener, M. (2002). Classification and regression by randomForest, R News, 2, 18-22.
10. Bauer, E. and Kohavi, R. (1999). An empirical comparison of voting classification algorithms: Bagging, boosting, and variants, Machine Learning, 36, 105-139.
11. Asuncion, A. and Newman, D. J. (2007). UCI machine learning repository, University of California, Irvine, School of Information and Computer Sciences, http://archive.ics.uci.edu/ml/.
12. Breiman, L. (1996a). Bagging predictors, Machine Learning, 24, 123-140.
13. Breiman, L. (2001). Random forests, Machine Learning, 45, 5-32.
14. Breiman, L., Friedman, J. H., Olshen, R. A. and Stone, C. J. (1984). Classification and Regression Trees, Chapman and Hall, New York.
15. Dietterich, T. (2000). Ensemble Methods in Machine Learning, Springer, Berlin.
16. Skurichina, M. and Duin, R. P. (1998). Bagging for linear classifiers, Pattern Recognition, 31, 909-930.
17. Loh, W. Y. (2009). Improving the precision of classification trees, The Annals of Applied Statistics, 3, 1710-1737.
18. Oza, N. C. and Tumer, K. (2008). Classifier ensembles: Select real-world applications, Information Fusion, 9, 4-20.
19. Therneau, T. and Atkinson, E. (1997). An introduction to recursive partitioning using the RPART routines, Mayo Foundation, Rochester, Minnesota. http://eric.univ-lyon2.fr/ricco/cours/didacticiels/r/longdocrpart.pdf.
20. StatLib (2010). Datasets archive, Carnegie Mellon University, Department of Statistics, http://lib.stat.cmu.edu.
21. Terhune, J. M. (1994). Geographical variation of harp seal underwater vocalisations, Canadian Journal of Zoology, 72, 892-897.
22. Zhu, J., Zou, H., Rosset, S. and Hastie, T. (2009). Multi-class AdaBoost, Statistics and Its Interface, 2, 349-360.
23. Breiman, L. (1996b). Out-of-bag estimation, Technical Report, Statistics Department, University of California Berkeley, Berkeley, California 94708, http://www.stat.berkeley.edu/~breiman/OOBestimation.pdf.
24. Tumer, K. and Oza, N. C. (2003). Input decimated ensembles, Pattern Analysis and Applications, 6, 65-77.
25. Opitz, D. and Maclin, R. (1999). Popular ensemble methods: An empirical study, Journal of Artificial Intelligence Research, 11, 169-198.