Is ChatGPT a "Fire of Prometheus" for Non-Native English-Speaking Researchers in Academic Writing?

  • Sung Il Hwang (Department of Radiology, Seoul National University Bundang Hospital) ;
  • Joon Seo Lim (Scientific Publications Team, Clinical Research Center, Asan Medical Center, University of Ulsan College of Medicine) ;
  • Ro Woon Lee (Department of Radiology, Inha University Hospital) ;
  • Yusuke Matsui (Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University) ;
  • Toshihiro Iguchi (Department of Radiological Technology, Faculty of Health Sciences, Okayama University) ;
  • Takao Hiraki (Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University) ;
  • Hyungwoo Ahn (Department of Radiology, Seoul National University Bundang Hospital)
  • Received : 2023.08.17
  • Accepted : 2023.08.18
  • Published : 2023.10.01

Abstract

Large language models (LLMs) such as ChatGPT have garnered considerable interest for their potential to aid non-native English-speaking researchers. These models can function as personal, round-the-clock English tutors, akin to how Prometheus in Greek mythology bestowed fire upon humans for their advancement. LLMs can be particularly helpful for non-native researchers in writing the Introduction and Discussion sections of manuscripts, where they often encounter challenges. However, using LLMs to generate text for research manuscripts entails concerns such as hallucination, plagiarism, and privacy issues; to mitigate these risks, authors should verify the accuracy of generated content, employ text similarity detectors, and avoid inputting sensitive information into their prompts. Consequently, it may be more prudent to utilize LLMs for editing and refining text rather than generating large portions of text. Journal policies concerning the use of LLMs vary, but transparency in disclosing artificial intelligence tool usage is emphasized. This paper aims to summarize how LLMs can lower the barrier to academic writing in English, enabling researchers to concentrate on domain-specific research, provided they are used responsibly and cautiously.

Acknowledgement

The authors deeply appreciate Prof. Joo Hyeong Oh of Kyung Hee University and Prof. Seung Eun Jung of Catholic University for providing valuable insights and the motivation to contemplate the appropriate use of ChatGPT by radiologists. We would also like to thank Prof. Emeritus Susumu Kanazawa of Okayama University for the international collaboration.

References

  1. Flowerdew J. Some thoughts on English for research publication purposes (ERPP) and related issues. Lang Teach 2015;48:250-262 https://doi.org/10.1017/S0261444812000523
  2. Cho DW. Science journal paper writing in an EFL context: the case of Korea. English Specif Purp 2009;28:230-239 https://doi.org/10.1016/j.esp.2009.06.002
  3. Flowerdew J. Problems in writing for scholarly publication in English: the case of Hong Kong. J Second Lang Writ 1999;8:243-264 https://doi.org/10.1016/S1060-3743(99)80116-7
  4. Okamura A. Two types of strategies used by Japanese scientists, when writing research articles in English. System 2006;34:68-79 https://doi.org/10.1016/j.system.2005.03.006
  5. Dong YR. Non-native graduate students' thesis/dissertation writing in science: self-reports by students and their advisors from two U.S. institutions. English Specif Purp 1998;17:369-390 https://doi.org/10.1016/S0889-4906(97)00054-9
  6. Shaw P. Science research students' composing processes. English Specif Purp 1991;10:189-206 https://doi.org/10.1016/0889-4906(91)90024-Q
  7. Rogerson AM, McCarthy G. Using internet based paraphrasing tools: original work, patchwriting or facilitated plagiarism? Int J Educ Integr 2017;13:2
  8. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature 2023;613:620-621
  9. Doskaliuk B, Zimba O. Beyond the keyboard: academic writing in the era of ChatGPT. J Korean Med Sci 2023;38:e207
  10. Majovsky M, Cerny M, Kasal M, Komarc M, Netuka D. Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora's box has been opened. J Med Internet Res 2023;25:e46924
  11. Flowerdew J. Writing for scholarly publication in English: the case of Hong Kong. J Second Lang Writ 1999;8:123-145 https://doi.org/10.1016/S1060-3743(99)80125-8
  12. Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care 2023;27:75
  13. Altmae S, Sola-Leyva A, Salumets A. Artificial intelligence in scientific writing: a friend or a foe? Reprod Biomed Online 2023;47:3-9 https://doi.org/10.1016/j.rbmo.2023.04.009
  14. Noy S, Zhang W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 2023;381:187-192 https://doi.org/10.1126/science.adh2586
  15. Athaluri SA, Manthena SV, Kesapragada VSRKM, Yarlagadda V, Dave T, Duddumpudi RTS. Exploring the boundaries of reality: investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus 2023;15:e37432
  16. Brainard J. As scientists explore AI-written text, journals hammer out policies [accessed on August 14, 2023]. Available at: https://www.science.org/content/article/scientists-explore-ai-written-text-journals-hammer-policies
  17. Bouville M. Plagiarism: words and ideas. Sci Eng Ethics 2008;14:311-322 https://doi.org/10.1007/s11948-008-9057-6
  18. OpenAI. Privacy policy [accessed on August 14, 2023]. Available at: https://openai.com/policies/privacy-policy
  19. Taylor J. AMA calls for stronger AI regulations after doctors use ChatGPT to write medical notes [accessed on August 14, 2023]. Available at: https://www.theguardian.com/technology/2023/jul/27/chatgpt-health-industry-hospitals-ai-regulations-ama
  20. Edwards B. Why AI detectors think the US constitution was written by AI [accessed on August 14, 2023]. Available at: https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai/
  21. OpenAI. New AI classifier for indicating AI-written text [accessed on August 14, 2023]. Available at: https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text
  22. Bartz D, Hu K. OpenAI, Google, others pledge to watermark AI content for safety, White House says [accessed on August 14, 2023]. Available at: https://www.reuters.com/technology/openai-google-others-pledge-watermark-ai-content-safety-white-house-2023-07-21/
  23. Miller K, Gunn E, Cochran A, Burstein H, Friedberg JW, Wheeler S, et al. Use of large language models and artificial intelligence tools in works submitted to Journal of Clinical Oncology. J Clin Oncol 2023;41:3480-3481 https://doi.org/10.1200/JCO.23.00819
  24. National Institutes of Health. The use of generative artificial intelligence technologies is prohibited for the NIH peer review process [accessed on August 8, 2023]. Available at: https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-149.html
  25. Kaiser J. Science funding agencies say no to using AI for peer review [accessed on August 8, 2023]. Available at: https://www.science.org/content/article/science-funding-agencies-say-no-using-ai-peer-review
  26. JAMA. Instructions for authors [accessed on August 8, 2023]. Available at: https://jamanetwork.com/journals/jama/pages/instructions-for-authors
  27. Park SH. Use of generative artificial intelligence, including large language models such as ChatGPT, in scientific publications: policies of KJR and prominent authorities. Korean J Radiol 2023;24:715-718 https://doi.org/10.3348/kjr.2023.0643
  28. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 2023;613:612
  29. Nature. For authors: initial submission [accessed on July 10, 2023]. Available at: https://www.nature.com/nature/for-authors/initial-submission
  30. American Association for the Advancement of Science. Science journals: editorial policies [accessed on August 17, 2023]. Available at: https://www.science.org/content/page/science-journals-editorial-policies
  31. Committee on Publication Ethics. Authorship and AI tools: COPE position statement [accessed on August 17, 2023]. Available at: https://publicationethics.org/cope-position-statements/ai-author
  32. International Committee of Medical Journal Editors. Recommendations [accessed on August 17, 2023]. Available at: https://www.icmje.org/recommendations
  33. Zielinski C, Winker MA, Aggarwal R, Ferris LE, Heinemann M, Lapena JF, et al. Chatbots, generative AI, and scholarly manuscripts. WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications [accessed on August 17, 2023]. Available at: https://wame.org/page3.php?id=106
  34. Vessal K, Habibzadeh F. Rules of the game of scientific writing: fair play and plagiarism. Lancet 2007;369:641