Acknowledgement
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (NRF-2021R1C1C1004562 and RS-2023-00218231). This research was also supported by the New Faculty Startup Fund from Seoul National University.
References
- Breiman L (2001). Random forests, Machine Learning, 45, 5-32. https://doi.org/10.1023/A:1010933404324
- Burges C, Shaked T, Renshaw E, Lazier A, Deeds M, Hamilton N, and Hullender G (2005). Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning. Association for Computing Machinery, New York, NY, 89-96.
- Burges C, Ragno R, and Le Q (2006). Learning to rank with nonsmooth cost functions, Advances in Neural Information Processing Systems, 19.
- Burges CJ (2010). From RankNet to LambdaRank to LambdaMART: An overview, Microsoft Research Technical Report MSR-TR-2010-82.
- Chen T and Guestrin C (2016). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, San Francisco, CA, USA, 785-794.
- Choe H, Hwang N, Hwang C, and Song J (2015). Analysis of horse races: Prediction of winning horses in horse races using statistical models, The Korean Journal of Applied Statistics, 28, 1133-1146. https://doi.org/10.5351/KJAS.2015.28.6.1133
- Grinsztajn L, Oyallon E, and Varoquaux G (2022). Why do tree-based models still outperform deep learning on typical tabular data?, Advances in Neural Information Processing Systems, 35, 507-520.
- Hu Z, Wang Y, Peng Q, and Li H (2019). Unbiased LambdaMART: An unbiased pairwise learning-to-rank algorithm. In Proceedings of The World Wide Web Conference. Association for Computing Machinery, New York, NY, USA, 2830-2836.
- Järvelin K and Kekäläinen J (2017). IR evaluation methods for retrieving highly relevant documents, ACM SIGIR Forum, 51, 243-250. https://doi.org/10.1145/3130348.3130374
- Ke G, Meng Q, Finley T, Wang T, Chen W, Ma W, Ye Q, and Liu TY (2017). LightGBM: A highly efficient gradient boosting decision tree, Advances in Neural Information Processing Systems, 30.
- Kholkine L, Servotte T, De Leeuw AW, De Schepper T, Hellinckx P, Verdonck T, and Latre S (2021). A learn-to-rank approach for predicting road cycling race outcomes, Frontiers in Sports and Active Living, 3, 714107.
- Li P, Qin Z, Wang X, and Metzler D (2019). Combining decision trees and neural networks for learning-to-rank in personal search. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 2032-2040.
- Liu TY (2009). Learning to rank for information retrieval, Foundations and Trends® in Information Retrieval, 3, 225-331. https://doi.org/10.1561/1500000016
- Park G, Park R, and Song J (2017). Analysis of cycle racing ranking using statistical prediction models, The Korean Journal of Applied Statistics, 30, 25-39. https://doi.org/10.5351/KJAS.2017.30.1.025
- Prokhorenkova L, Gusev G, Vorobev A, Dorogush AV, and Gulin A (2018). CatBoost: Unbiased boosting with categorical features, Advances in Neural Information Processing Systems, 31.
- Pudaruth S, Medard N, and Dookhun ZB (2013). Horse racing prediction at the Champ de Mars using a weighted probabilistic approach, International Journal of Computer Applications, 72, 39-42. https://doi.org/10.5120/12493-9048
- Soldaini L and Goharian N (2017). Learning to rank for consumer health search: A semantic approach. In Advances in Information Retrieval: 39th European Conference on IR Research (ECIR 2017), Aberdeen, UK, 640-646. Springer International Publishing.
- Wang X, Li C, Golbandi N, Bendersky M, and Najork M (2018). The LambdaLoss framework for ranking metric optimization. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, Torino, Italy, 1313-1322.