Performance Evaluation of Re-ranking and Query Expansion Using Citation Metrics: Based on Citation Index Databases

  • Lee, Hye-Kyung (Department of Library and Information Science, Kyungpook National University) ;
  • Lee, Yong-Gu (Department of Library and Information Science, Kyungpook National University)
  • Received : 2023.07.22
  • Accepted : 2023.08.12
  • Published : 2023.08.31

Abstract

The purpose of this study is to explore the potential of citation metrics to contribute to improving the retrieval performance of citation index databases. To this end, ten queries in the field of library and information science were run against the Web of Science, and relevance judgments were made over the 3,467 documents retrieved as well as 60,734 documents published from 2000 to 2021 in 85 SSCI journals in library and information science. On this test collection, experiments were conducted on the performance of the top 100 search results, on re-ranking using the retrieval method and citation metrics, and on query expansion with a vector space model retrieval system built for the study. The results are as follows. First, the performance of re-ranking using citation metrics alone differed from the retrieval performance of Web of Science, indicating that citation metrics act as independent signals not incorporated into the existing Web of Science system. Second, combining the number of unique query terms with the total frequency of query-term occurrences, with citation counts used as a supplementary signal, had a positive effect on performance. Third, query expansion generally improved performance over the baseline of the vector space model retrieval system. Fourth, query expansion based on user relevance judgments outperformed expansion based on system relevance. Fifth, using citation counts together with relevant documents showed the potential to change the ranking of relevant documents within the top-ranked results.
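The second finding describes a ranking signal built from the number of unique query terms matched, the total frequency of query-term occurrences, and citation counts used as a supplementary factor. The Python sketch below is a minimal, hypothetical illustration of such a combination; the weighted-sum form, the weights (alpha, beta, gamma), the whitespace tokenization, and the sample documents are assumptions for illustration only, not the scheme reported in the study.

    # Minimal sketch (hypothetical): re-rank a top-k result list by combining
    # query-term signals with citation counts. The weighted sum and weights
    # below are illustrative assumptions, not the study's weighting scheme.
    from dataclasses import dataclass

    @dataclass
    class Doc:
        doc_id: str      # e.g., a record identifier from the citation index
        text: str        # title/abstract text used for term matching
        citations: int   # times-cited count

    def rerank(docs: list[Doc], query_terms: list[str],
               alpha: float = 1.0, beta: float = 0.5, gamma: float = 0.1) -> list[Doc]:
        """Sort documents by a combined score of term matches and citations."""
        def score(doc: Doc) -> float:
            tokens = doc.text.lower().split()                                 # query terms assumed lowercase
            unique_matches = sum(1 for t in set(query_terms) if t in tokens)  # distinct query terms present
            total_freq = sum(tokens.count(t) for t in query_terms)            # total query-term occurrences
            return alpha * unique_matches + beta * total_freq + gamma * doc.citations
        return sorted(docs, key=score, reverse=True)

    # Usage: re-rank the top-ranked documents returned by the original search.
    top_results = [
        Doc("REC-0001", "query expansion for citation index retrieval", 42),
        Doc("REC-0002", "topic models in information retrieval", 310),
    ]
    for d in rerank(top_results, ["query", "expansion", "citation"]):
        print(d.doc_id, d.citations)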
