Analysis of the impact of mathematics education research using explainable AI

  • Oh, Sejun (Ewha-Keumran High School attached to the College of Education, Ewha Womans University)
  • Received : 2023.07.26
  • Accepted : 2023.08.18
  • Published : 2023.08.31

Abstract

This study developed an Explainable Artificial Intelligence (XAI) model to identify and analyze papers with significant impact in the field of mathematics education. To achieve this, meta-information from 29 domestic and international mathematics education journals was used to construct a comprehensive academic research network for mathematics education. This network was built by integrating five sub-networks: 'paper citation network', 'paper-author network', 'paper-journal network', 'co-authorship network', and 'author-affiliation network'. A Random Forest machine learning model was employed to evaluate the impact of individual papers within the mathematics education research network, and SHAP, an XAI method, was used to analyze the reasons behind the model's assessments of impactful papers. The key features the XAI identified for discerning impactful papers in mathematics education were 'paper network PageRank', 'change in citations per paper', 'total citations', 'change in the author's h-index', and 'citations per paper of the journal', indicating that characteristics of the paper, its authors, and its journal all play significant roles in evaluating an individual paper. Comparing domestic and international mathematics education research revealed variations in these discernment patterns; notably, 'co-authorship network PageRank' carried greater weight in domestic mathematics education research. The XAI model proposed in this study serves as a tool for determining the impact of papers using AI, providing researchers with strategic direction when writing papers: expanding the paper network, presenting at academic conferences, and activating the author network through co-authorship were identified as major elements enhancing a paper's impact.
Based on these findings, researchers can clearly understand how their work is perceived and evaluated in academia and identify the key factors influencing those evaluations. This study offers a novel, explainable-AI-based approach to evaluating the impact of mathematics education papers, a process that has traditionally consumed significant time and resources. The approach presents a new paradigm applicable to evaluation in academic fields beyond mathematics education and is expected to substantially enhance the efficiency and effectiveness of research activities.
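Since 'paper network PageRank' is named as the top discriminating feature, a minimal sketch of how such a score is computed over a citation network may help. This is an illustration only, not the authors' implementation; the toy graph, damping factor, and iteration count are assumptions.

```python
# Minimal PageRank via power iteration over a toy citation network.
# Illustrative sketch only; the study's actual network integrates
# meta-information from 29 mathematics education journals.

def pagerank(links, damping=0.85, iterations=100):
    """links maps each paper to the list of papers it cites."""
    nodes = set(links)
    for cited in links.values():
        nodes.update(cited)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for paper in nodes:
            cited = links.get(paper, [])
            if cited:
                # Distribute this paper's rank over the papers it cites.
                share = damping * rank[paper] / len(cited)
                for target in cited:
                    new_rank[target] += share
            else:
                # Dangling paper (cites nothing): spread its rank uniformly.
                for target in nodes:
                    new_rank[target] += damping * rank[paper] / n
        rank = new_rank
    return rank

# Toy network: B, C, D cite A; D also cites B. A, the most-cited
# paper, ends up with the highest PageRank.
citations = {"A": [], "B": ["A"], "C": ["A"], "D": ["A", "B"]}
scores = pagerank(citations)
```

In the full study this score is one input feature among several (citation counts, h-index changes, journal metrics) fed to the Random Forest model.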

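SHAP, used above to explain the Random Forest's assessments, is grounded in Shapley values: a feature's contribution to one prediction is its average marginal contribution over all subsets of the other features. The sketch below computes exact Shapley values for a toy three-feature "impact" model; the model, feature names, and values are invented for illustration (real SHAP uses optimized estimators such as TreeExplainer for tree ensembles).

```python
from itertools import combinations
from math import factorial

# Toy "impact score" model over three features; missing features fall
# back to a baseline of 0, mimicking "feature absent". Invented for
# illustration -- not the study's trained Random Forest.
def toy_model(features):
    return (3.0 * features.get("pagerank", 0.0)
            + 2.0 * features.get("citations", 0.0)
            + 1.0 * features.get("h_index", 0.0))

def shapley_values(model, instance):
    """Exact Shapley values: average each feature's marginal
    contribution over all subsets of the remaining features."""
    names = list(instance)
    n = len(names)
    values = {}
    for i in names:
        others = [j for j in names if j != i]
        total = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Standard Shapley weight |S|! (n-|S|-1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                without = {j: instance[j] for j in subset}
                with_i = dict(without, **{i: instance[i]})
                total += weight * (model(with_i) - model(without))
        values[i] = total
    return values

paper = {"pagerank": 0.4, "citations": 0.8, "h_index": 0.5}
phi = shapley_values(toy_model, paper)
# Efficiency property: contributions sum to f(x) - f(baseline).
```

For a linear model like this toy one, each feature's Shapley value reduces to coefficient times value; for the study's Random Forest the attributions are non-trivial, which is what makes the SHAP analysis informative.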

References

  1. Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160. https://doi.org/10.1109/access.2018.2870052
  2. Ahn, D., & Lee, K. (2022). Analysis of achievement predictive factors and predictive AI model development - Focused on blended math classes. The Mathematical Education, 61(2), 257-271. https://doi.org/10.7468/mathedu.2022.61.2.257
  3. Alonso, J. M. (2020). Teaching explainable artificial intelligence to high school students. International Journal of Computational Intelligence Systems, 13(1), 974-987. https://doi.org/10.2991/ijcis.d.200715.003
  4. Andrade-Molina, M., Montecino, A., & Aguilar, M. S. (2020). Beyond quality metrics: defying journal rankings as the philosopher's stone of mathematics education research. Educational Studies in Mathematics, 103(3), 359-374. https://doi.org/10.1007/s10649-020-09932-9
  5. Arik, S. O., & Pfister, T. (2021, May). TabNet: Attentive interpretable tabular learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6679-6687.
  6. Arrieta, A. B., Diaz-Rodriguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115. https://doi.org/10.1016/j.inffus.2019.12.012
  7. Bollen, J., Rodriguez, M. A., & Van de Sompel, H. (2006). Journal status. Scientometrics, 69(3), 669-687. https://doi.org/10.1007/s11192-006-0176-z
  8. Brin, S., & Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30(1-7), 107-117. https://doi.org/10.1016/s0169-7552(98)00110-x
  9. Chung, K. (2022). The effects of explainable artificial intelligence education program based on AI literacy. Journal of The Korean Association of Artificial Intelligence Education, 3(1), 1-12. https://doi.org/10.52618/aied.2022.3.1.1
  10. DARPA. (2016). Broad Agency Announcement, Explainable Artificial Intelligence (XAI). DARPA-BAA-16-53, 7-8.
  11. Datta, A., Sen, S., & Zick, Y. (2016, May). Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In 2016 IEEE Symposium on Security and Privacy (SP) (pp. 598-617). IEEE. https://doi.org/10.1109/sp.2016.42
  12. Funk, R. J., & Owen-Smith, J. (2017). A dynamic network measure of technological change. Management Science, 63(3), 791-817. https://doi.org/10.1287/mnsc.2015.2366
  13. Garfield, E. (2009). From the science of science to Scientometrics visualizing the history of science with HistCite software. Journal of Informetrics, 3(3), 173-179. https://doi.org/10.1016/j.joi.2009.03.009
  14. Gonzalez-Alcaide, G., Valderrama-Zurian, J. C., & Aleixandre-Benavent, R. (2012). The impact factor in non-English-speaking countries. Scientometrics, 92(2), 297-311. https://doi.org/10.1007/s11192-012-0692-y
  15. Haensly, P. J., Hodges, P. E., & Davenport, S. A. (2008). Acceptance rates and journal quality: An analysis of journals in economics and finance. Journal of Business & Finance Librarianship, 14(1), 2-31. https://doi.org/10.1080/08963560802176330
  16. Hamilton, W. L., Ying, R., & Leskovec, J. (2017). Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584.
  17. Inhaber, H., & Przednowek, K. (1976). Quality of research and the Nobel prizes. Social Studies of Science, 6(1), 33-50. https://doi.org/10.1142/9789814299381_0002
  18. Kim, S., & Choi, M. K. (2022). AI-Based educational platform analysis supporting personalized mathematics learning. Communication of Mathematics Education, 36(3), 417-438. https://doi.org/10.7468/jksmee.2022.36.3.417
  19. Kim, S., Kim, W., Jang, Y., & Kim, H. (2021). Development of explainable AI-based learning support system. The Journal of Korean Association of Computer Education, 24(1), 107-115. https://doi.org/10.5392/JKCA.2021.21.12.013
  20. Lee, J. (2014). A comparative study on the centrality measures for analyzing research collaboration networks. Journal of the Korean Society for Information Management, 31(3), 153-179. https://doi.org/10.3743/kosim.2014.31.3.153
  21. Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.
  22. Mariani, M. S., Medo, M., & Zhang, Y. C. (2016). Identification of milestone papers through time-balanced network centrality. Journal of Informetrics, 10(4), 1207-1223. https://doi.org/10.1016/j.joi.2016.10.005
  23. Moed, H. F. (2005). Citation analysis of scientific journals and journal impact measures. Current Science, 1990-1996. https://doi.org/10.1007/1-4020-3714-7_6
  24. Molnar, C. (2020). Interpretable machine learning. Lulu.com.
  25. Nivens, R. A., & Otten, S. (2017). Assessing journal quality in mathematics education. Journal for Research in Mathematics Education, 48(4), 348-368. https://doi.org/10.5951/jresematheduc.48.4.0348
  26. Oh, S., & Kwon, O. (2023). Development of an impact identification program in mathematical education research using machine learning and network. Communications of Mathematical Education, 37(1), 21-45. https://doi.org/10.7468/jksmee.2023.37.1.21
  27. Park, D., & Shin, S. (2021). A study on the educational meaning of eXplainable artificial intelligence for elementary artificial intelligence education. Journal of the Korean Association of Information Education, 25(5), 803-812. https://doi.org/10.14352/jkaie.2021.25.5.803
  28. Park, H. Y., Son, B. E., & Ko, H. K. (2022). Study on the mathematics teaching and learning artificial intelligence platform analysis. Journal of the Korean Society of Mathematics Education Series E: Communication of Mathematics Education, 36(1), 1-21. https://doi.org/10.7468/jksmee.2022.36.1.1
  29. Price, D. (1963). Little science, big science... and beyond (Vol. 480). Columbia University Press.
  30. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). https://doi.org/10.1145/2939672.2939778
  31. Sarigol, E., Pfitzner, R., Scholtes, I., Garas, A., & Schweitzer, F. (2014). Predicting scientific success based on coauthorship networks. EPJ Data Science, 3, 1-16. https://doi.org/10.1140/epjds/s13688-014-0009-x
  32. Stolerman, I. P., & Stenius, K. (2008). The language barrier and institutional provincialism in science. Drug and Alcohol Dependence, 92(1-3), 1-2. https://doi.org/10.1016/j.drugalcdep.2007.07.010
  33. Weihs, L., & Etzioni, O. (2017, June). Learning to predict citation-based impact measures. In 2017 ACM/IEEE joint conference on digital libraries (JCDL) (pp. 1-10). IEEE. https://doi.org/10.1109/jcdl.2017.7991559
  34. Weis, J. W., & Jacobson, J. M. (2021). Learning on knowledge graph dynamics provides an early warning of impactful research. Nature Biotechnology, 39(10), 1300-1307. https://doi.org/10.1038/s41587-021-00907-6
  35. Williams, S. R., & Leatham, K. R. (2017). Journal quality in mathematics education. Journal for Research in Mathematics Education, 48(4), 369-396. https://doi.org/10.5951/jresematheduc.48.4.0369
  36. Zhou, Y., Li, Q., Yang, X., & Cheng, H. (2021). Predicting the popularity of scientific publications by an age-based diffusion model. Journal of Informetrics, 15(4), 101177. https://doi.org/10.1016/j.joi.2021.101177
  37. Zhu, X., Turney, P., Lemire, D., & Vellino, A. (2015). Measuring academic influence: Not all citations are equal. Journal of the Association for Information Science and Technology, 66(2), 408-427. https://doi.org/10.1002/asi.23179