Use Cases of Program Task using Tools based on Machine Learning and Deep Learning

  • Received : 2024.10.23
  • Reviewed : 2024.11.03
  • Published : 2024.11.30

Abstract

The distinguishing feature of this paper is its analysis of the latest machine learning and deep learning tools for various program tasks, such as program search, understanding, completion, and review. In addition, the purpose of this study is to deepen the understanding of these program tasks by examining specific cases in which they are carried out with such tools. Recently, machine learning (ML) and deep learning (DL) technologies have contributed to the automation and improved efficiency of various software development tasks, including program search, understanding, completion, and review. This study examines the characteristics of the latest ML and DL tools implemented for these program tasks. Although these tools have many strengths, they still show weaknesses in generalizing across different programming languages and program structures and in the efficient use of computational resources. In this study, we evaluated the characteristics of these tools in a real environment.
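
As an illustration of the kind of program-completion tooling discussed in this study, the sketch below uses the Hugging Face transformers library with the publicly released Salesforce/codet5-base checkpoint to fill a masked span in a short Python function. The library, checkpoint name, and snippet are assumptions chosen for illustration only and are not part of the paper's own evaluation setup.

    # Minimal sketch of ML-based code completion (masked-span infilling),
    # assuming the Hugging Face transformers library and the public
    # Salesforce/codet5-base checkpoint are available.
    from transformers import RobertaTokenizer, T5ForConditionalGeneration

    tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
    model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

    # Ask the model to complete the masked span marked by <extra_id_0>.
    code = "def greet(user): print(f'hello <extra_id_0>!')"
    input_ids = tokenizer(code, return_tensors="pt").input_ids

    generated_ids = model.generate(input_ids, max_length=10)
    # Prints the model's proposed completion for the masked span,
    # e.g. an attribute access on the `user` argument.
    print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))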

Keywords
