
Research trends in statistics for domestic and international journal using paper abstract data

  • Yang, Jong-Hoon (Department of Applied Statistics, Chung-Ang University)
  • Kwak, Il-Youp (Department of Applied Statistics, Chung-Ang University)
  • Received : 2021.01.11
  • Accepted : 2021.01.27
  • Published : 2021.04.30

Abstract

The amount of data continues to grow across governments and businesses, both domestically and abroad, and research on big data is increasing accordingly in academia. Statistics is one of the core disciplines of big data research, and the growing body of statistics papers itself offers an interesting opportunity to examine research trends in the field. In this study, we analyzed what kinds of research are being conducted using the abstract data of statistics papers published in Korea and abroad. Domestic and international research trends were analyzed through the frequency of the papers' keyword data, and the relationships between keywords were visualized using word embedding methods. In addition to the author-selected keywords, words identified by TextRank as important in statistics papers were also visualized. Finally, ten topics were extracted by applying LDA to the abstract data. Through the analysis of each topic, we examined which research topics are studied most often and which words play an important role.

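The sketch below is a minimal illustration of the kind of pipeline the abstract describes (word embeddings projected to 2D with t-SNE, plus LDA topic modeling on abstract text), not the authors' actual code. The toy abstracts, the gensim/scikit-learn tooling, and all parameter values (vector size, perplexity, number of topics) are assumptions made for the example; the paper itself reports ten LDA topics on the full corpus.

```python
# A minimal sketch (not the authors' code) of an abstract-mining pipeline:
# word2vec embeddings visualized with t-SNE, and LDA topics over abstracts.
from gensim.models import Word2Vec
from sklearn.manifold import TSNE
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical input: each abstract already tokenized into a list of words.
abstracts = [
    ["bayesian", "inference", "posterior", "mcmc"],
    ["regression", "variable", "selection", "lasso"],
    ["clustering", "high", "dimensional", "data"],
]

# 1) Word embeddings for keyword relationships (gensim 4.x API).
w2v = Word2Vec(sentences=abstracts, vector_size=50, window=5, min_count=1, seed=1)
words = w2v.wv.index_to_key
vectors = w2v.wv[words]

# 2) Project the embeddings to 2D with t-SNE for visualization.
coords = TSNE(n_components=2, perplexity=2, random_state=1).fit_transform(vectors)
for word, (x, y) in zip(words, coords):
    print(f"{word}: ({x:.2f}, {y:.2f})")

# 3) LDA on the abstract texts; the paper uses 10 topics, but with only
#    three toy documents we fit 2 topics to keep the example runnable.
texts = [" ".join(tokens) for tokens in abstracts]
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(texts)
lda = LatentDirichletAllocation(n_components=2, random_state=1).fit(dtm)

# Inspect the top words per topic, analogous to interpreting each topic.
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:3]]
    print(f"topic {k}: {top}")
```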

Keywords

Acknowledgement

This work was supported by the Chung-Ang University Research Scholarship Grants in 2020.

References

  1. Blei DM, Ng AY, and Jordan MI (2003). Latent Dirichlet allocation, Journal of Machine Learning Research, 3, 993-1022.
  2. Brin S and Page L (1998). The anatomy of a large-scale hypertextual web search engine. In Proceedings of the Seventh International Conference on World Wide Web, 107-117.
  3. Brownlee J (2020). A gentle introduction to the bag-of-words model. In Deep Learning for Natural Language Processing.
  4. Choi CH and Lee JB (2017). The knowledge structure analysis on Taekwondo researches: application of keyword network analysis, The Korean Journal of Physical Education, 56, 627-644. https://doi.org/10.23949/kjpe.2017.05.56.3.47
  5. Cox TF and Cox MAA (2000). Multidimensional Scaling (2nd ed), Chapman and Hall.
  6. Goldberg Y and Levy O (2014). word2vec explained: deriving Mikolov et al.'s negative-sampling word-embedding method, arXiv preprint.
  7. Jeon YB, Ryu SR, Song JH, and Kim HJ (2017). Analysis of research trends in artificial intelligence using text mining techniques, Proceedings of the Korea Intelligent Information Systems Society, 39-40.
  8. Jolliffe IT (1986). Principal Component Analysis, Springer-Verlag.
  9. Joulin A, Grave E, Bojanowski P, Douze M, Jegou H, and Mikolov T (2016). FastText.zip: compressing text classification models, arXiv preprint arXiv:1612.03651.
  10. Kim SY (2020). Analysis on status and trends of SIAM journal papers using text mining, Journal of the Korea Contents Association, 20, 212-222.
  11. Landauer TK, Foltz PW, and Laham D (1998). An introduction to latent semantic analysis, Discourse Processes, 25, 259-284. https://doi.org/10.1080/01638539809545028
  12. Lee IS, Park SH, and Baek JG (2015). Identification of research trends in the manufacturing system field through text mining, Proceedings of the Spring Conference of the Korean Institute of Industrial Engineers, 4201-4205.
  13. van der Maaten L and Hinton G (2008). Visualizing data using t-SNE, Journal of Machine Learning Research, 9, 2579-2605.
  14. Mai F, Galke L, and Scherp A (2019). CBOW is not all you need: combining CBOW with the compositional matrix space model, CoRR.
  15. Mihalcea R and Tarau P (2004). TextRank: bringing order into texts. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP 2004).
  16. Mikolov T, Sutskever I, Chen K, Corrado G, and Dean J (2013). Distributed representations of words and phrases and their compositionality, Advances in Neural Information Processing Systems (NIPS).
  17. Papadimitriou C, Raghavan P, Tamaki H, and Vempala S (1998). Latent semantic indexing: a probabilistic analysis. In Proceedings of ACM PODS, 159-168.
  18. Pennington J, Socher R, and Manning CD (2014). GloVe: global vectors for word representation, EMNLP, 14, 1532-1543.
  19. Rong X (2014). word2vec parameter learning explained, arXiv.
  20. Roweis ST and Saul LK (2000). Nonlinear dimensionality reduction by locally linear embedding, Science, 290, 2323-2326. https://doi.org/10.1126/science.290.5500.2323
  21. Sievert C and Shirley K (2014). LDAvis: a method for visualizing and interpreting topics, Proceedings of the Workshop on Interactive Language Learning, Visualization, and Interfaces, 63-70.
  22. Yin Z and Shen Y (2018). On the dimensionality of word embedding, Advances in Neural Information Processing Systems, 31, 895-906.