A Prediction Method of Learning Outcomes based on Regression Model for Effective Peer Review Learning

  • Hyojoung Shin (Samsung Electronics) ;
  • Hye-Wuk Jung (Department of Computer Engineering, Sungkyunkwan University) ;
  • Kwangsu Cho (Department of Interaction Science, Sungkyunkwan University) ;
  • Jee-Hyong Lee (Department of Computer Engineering, Sungkyunkwan University)
  • Received : 2012.06.28
  • Accepted : 2012.09.26
  • Published : 2012.10.25

Abstract

Peer review learning is a method that improves students' learning outcomes through the exchange of feedback among learners and the observation and analysis of other students' work. An important problem in a peer review system is to assign evaluators who are appropriate for each learner, taking the students' characteristics into account, so that learning outcomes improve. Existing peer review systems, however, assign evaluators to learners at random or select them according to limited strategies, and therefore do not consider the various characteristics of the learners and evaluators who participate in the review. In this paper, we propose a method that predicts how much a learner's outcome will improve when peer review is carried out by a particular pairing of learner and evaluator, considering the various characteristics of both. The proposed approach extracts representative attributes from the students' profiles and predicts learning outcomes with several regression models. To examine how much outliers affect the prediction of learning outcomes, we also apply several outlier removal methods to the regression models and compare their predictive performance. The experimental results show that the SVR model without outlier removal achieves the best predictive performance, with an average error rate of 0.47%.
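
To make the pipeline described in the abstract concrete, the sketch below fits several regression models to (learner, evaluator) profile features and compares their cross-validated error with and without outlier removal. Everything in it is illustrative: the synthetic data, the scikit-learn estimators, and the IsolationForest filter are assumptions standing in for the paper's own profile attributes, dataset, and outlier-detection methods (e.g., SVDD).

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical profile attributes of a (learner, evaluator) pair, e.g. prior
# scores or review history; the number of features (8) is an assumption.
X = rng.normal(size=(200, 8))
# Hypothetical learning-outcome gain to predict (synthetic linear signal + noise).
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=200)

models = {
    "linear regression": LinearRegression(),
    "regression tree": DecisionTreeRegressor(max_depth=4),
    "SVR": SVR(kernel="rbf", C=1.0, epsilon=0.1),
}

def cv_error(model, X, y):
    # 10-fold cross-validated mean absolute error.
    scores = cross_val_score(model, X, y, cv=10, scoring="neg_mean_absolute_error")
    return -scores.mean()

# Stand-in outlier filter: keep samples that IsolationForest labels as inliers (+1).
inliers = IsolationForest(random_state=0).fit_predict(np.c_[X, y]) == 1

for name, model in models.items():
    print(f"{name}: all samples {cv_error(model, X, y):.3f}, "
          f"outliers removed {cv_error(model, X[inliers], y[inliers]):.3f}")
```

What matters in the paper's setting is the comparison itself: the same regression models are scored with and without outlier removal, and the reported result is that SVR on the unfiltered data performed best.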

References

  1. Cho, K., Chung, T. R., King, W. R., and Schunn, C., "Peer-based computer-supported knowledge refinement: an empirical investigation," Communications of the ACM, vol. 51, no. 3, pp. 83-88, 2008. https://doi.org/10.1145/1325555.1325571
  2. Eric Zhi-Feng Liu, Sunny S. J. Lin, Chi-Huang Chiu, and Shyan-Ming Yuan, "Web-Based Peer Review: The Learner as both Adapter and Reviewer," IEEE Transactions on Education, vol. 44, no. 3, pp. 246-251, 2001. https://doi.org/10.1109/13.940995
  3. R.M. Crespo, A. Pardo, J.P. Somolinos and C. Delgado-Kloos, "An algorithm for peer review matching using students profiles based on fuzzy classification and genetic algorithms," Lecture Notes in Computer Science, vol. 3533, pp. 685-69, 2005.
  4. E. F. Gehringer, "Assignment and quality control of peer reviewers," Proceedings of the ASEE Annual Conference, Session 3230, 2001.
  5. Hye-Wuk Jung, Kwangsu Cho, and Jee-Hyong Lee, "The Analysis of student pattern using peer review information of writing instruction," Proceedings of the Korean Institute of Intelligent Systems, vol. 21, no. 1, pp. 75-77, 2011.
  6. Machine Learning Group at University of Waikato, "Weka 3: Data Mining Software in Java," Available: http://www.cs.waikato.ac.nz/ml/weka/, 2008. [Accessed: May 1, 2012]
  7. Kenji Kira and Larry A. Rendell, "A Practical Approach to Feature Selection," Proceedings of the Ninth International Workshop on Machine Learning, pp. 249-256, 1992.
  8. Mark A. Hall and Lloyd A. Smith, "Feature Subset Selection: A Correlation Based Filter Approach," International Conference on Neural Information Processing and Intelligent Information Systems, pp. 855-858, 1997.
  9. Michael H. Kutner, John Neter, and Chris J. Nachtsheim, Applied Linear Statistical Models, Richard D. Irwin, Inc., 1990.
  10. Elias Masry, "Multivariate regression estimation: Local polynomial fitting for time series," Stochastic Processes and their Applications, vol. 65, pp. 81-101, Dec. 1996. https://doi.org/10.1016/S0304-4149(96)00095-6
  11. L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and Regression Trees, Wadsworth International Group, 1984.
  12. Vladimir Vapnik, Steven E. Golowich, and Alex Smola, "Support vector method for function approximation, regression estimation, and signal processing," Advances in Neural Information Processing Systems, vol. 9, pp. 281-287, 1996.
  13. SPSS Inc., SPSS for Windows. Available: http://www.spss.co.kr/trial/trial_main.asp#
  14. R.-E. Fan, P.-H. Chen, and C.-J. Lin, "Working set selection using second order information for training SVM," Journal of Machine Learning Research, vol. 6, pp. 1889-1918, 2005.
  15. Hyojoung Shin, Hye-Wuk Jung, Kwangsu Cho, and Jee-Hyong Lee, "Inference Model for Learning outcomes based on Support Vector Regression with Outlier Detection," Proceedings of the Korean Institute of Intelligent Systems, vol. 21, no. 2, pp. 224-225, 2011. https://doi.org/10.5391/JKIIS.2011.21.2.224
  16. M. Stone, "An Asymptotic Equivalence of Choice of Model by Cross-Validation and Akaike's Criterion," Journal of the Royal Statistical Society, Series B (Methodological), vol. 39, no. 1, pp. 44-47, 1977.
  17. David M. J. Tax and Robert P. W. Duin, "Support Vector Data Description," Machine Learning, vol. 54, no. 1, pp. 45-66, January 2004. https://doi.org/10.1023/B:MACH.0000008084.60811.49
  18. Franck Dufrenois, Johan Colliez, and Denis Hamad, "Bounded Influence Support Vector Regression for Robust Single-Model Estimation," IEEE Transactions on Neural Networks, vol. 20, no. 11, pp. 1689-1705, 2009. https://doi.org/10.1109/TNN.2009.2024202

Cited by

  1. A Big Data Learning for Patent Analysis, vol. 23, no. 5, 2013, https://doi.org/10.5391/JKIIS.2013.23.5.406
  2. Genetic Outlier Detection for a Robust Support Vector Machine, vol. 15, no. 2, 2015, https://doi.org/10.5391/IJFIS.2015.15.2.96