• Title/Summary/Keyword: chatGPT

247 search results

Can ChatGPT Pass the National Korean Occupational Therapy Licensure Examination? (ChatGPT는 한국작업치료사면허시험에 합격할 수 있을까?)

  • Hong, Junhwa; Kim, Nayeon; Min, Hyemin; Yang, Hamin; Lee, Sihyun; Choi, Seojin; Park, Jin-Hyuck
    • Therapeutic Science for Rehabilitation / v.13 no.1 / pp.65-74 / 2024
  • Objective : This study assessed whether ChatGPT, an artificial intelligence system based on a large language model, can pass the National Korean Occupational Therapy Licensure Examination (NKOTLE). Methods : Using NKOTLE questions from 2018 to 2022 provided by the Korea Health and Medical Personnel Examination Institute, this study employed English prompts to determine the accuracy of ChatGPT in providing correct answers. Two researchers independently conducted the entire process, and the average of their accuracy scores was used to determine whether ChatGPT passed in each of the five years. The agreement between the ChatGPT answers obtained by the two researchers was also assessed. Results : ChatGPT passed the 2020 examination but failed the examinations of the other four years. Specifically, its accuracy on questions related to medical regulations ranged from 25% to 57%, whereas its accuracy on other questions exceeded 60%. ChatGPT exhibited strong agreement between researchers, except on medical regulation questions, and this agreement was significantly correlated with accuracy. Conclusion : There are still limitations to applying ChatGPT to questions influenced by language or culture. Future studies should explore its potential as an educational tool for students majoring in occupational therapy through optimized prompts and continuous learning from the data.
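
A minimal sketch of the kind of scoring this abstract describes: per-year accuracy averaged over two researchers and simple percent agreement between their recorded ChatGPT answers. The data layout, the 60% pass threshold, and the use of percent agreement (rather than a specific agreement statistic) are assumptions for illustration, not details taken from the paper.

```python
# Sketch: per-year accuracy and inter-researcher agreement for recorded ChatGPT answers.
# Data layout, the 60% pass threshold, and percent agreement are illustrative assumptions.

def accuracy(answers, key):
    """Fraction of questions where the recorded ChatGPT answer matches the answer key."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

def percent_agreement(answers_a, answers_b):
    """Fraction of questions where both researchers obtained the same ChatGPT answer."""
    return sum(a == b for a, b in zip(answers_a, answers_b)) / len(answers_a)

def evaluate_year(researcher_a, researcher_b, answer_key, pass_threshold=0.60):
    mean_acc = (accuracy(researcher_a, answer_key) +
                accuracy(researcher_b, answer_key)) / 2   # average over researchers
    return {"mean_accuracy": mean_acc,
            "agreement": percent_agreement(researcher_a, researcher_b),
            "passed": mean_acc >= pass_threshold}

# Toy example: five questions with answer options 1-5
key = [1, 3, 2, 5, 4]
print(evaluate_year([1, 3, 2, 5, 1], [1, 3, 4, 5, 4], key))
```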

Analysis of the scholastic capability of ChatGPT utilizing the Korean College Scholastic Ability Test (대학입시 수능시험을 평가 도구로 적용한 ChatGPT의 학업 능력 분석)

  • WEN HUILIN; Kim Jinhyuk; Han Kyonghee; Kim Shiho
    • Journal of Platform Technology / v.11 no.5 / pp.72-83 / 2023
  • ChatGPT, commercially launched in late 2022, has shown successful results on various professional exams, including the US Bar Exam and the United States Medical Licensing Examination (USMLE), demonstrating its ability to pass qualifying exams in professional domains. However, further experimentation and analysis are required to assess ChatGPT's scholastic capability, such as logical inference and problem-solving skills. This study evaluated ChatGPT's scholastic performance using subjects of the Korean College Scholastic Ability Test (KCSAT), including Korean, English, and Mathematics. The experimental results revealed that ChatGPT achieved a relatively high accuracy rate of 69% on the English exam but relatively lower rates of 34% and 19% in the Korean Language and Mathematics domains, respectively. By analyzing the results of the Korean language exam, the English exam, and TOPIK II, we evaluated ChatGPT's strengths and weaknesses in comprehension and logical inference abilities. Although ChatGPT, as a generative language model, can understand and respond to general Korean, English, and Mathematics problems, it appears weak in tasks involving higher-level logical inference and complex mathematical problem solving. This study may provide simple yet accurate and effective evaluation criteria for assessing the performance of generative artificial intelligence through the analysis of KCSAT scores.


A Study on the Data Literacy Education in the Library of the Chat GPT, Generative AI Era (ChatGPT, 생성형 AI 시대 도서관의 데이터 리터러시 교육에 대한 연구)

  • Jeong-Mee Lee
    • Journal of the Korean Society for Library and Information Science / v.57 no.3 / pp.303-323 / 2023
  • The purpose of this study is to introduce language models such as ChatGPT in the era of generative AI and to provide direction for the components of data literacy education in libraries that use them. To this end, the following three research questions are proposed. First, the technical features of ChatGPT-like language models are examined; it is then argued that data literacy education is necessary for the proper and accurate use of information by users of service platforms based on generative AI technology. Finally, for library data literacy education in the ChatGPT era, a data literacy education scheme is proposed that includes seven components: data understanding, data generation, data collection, data verification, data management, data use and sharing, and data ethics. In conclusion, since generative AI technologies such as ChatGPT are expected to have a significant impact on users' information utilization, libraries should first consider the advantages, disadvantages, and problems of these technologies and use that assessment as a basis for further improving library information services.

A Study on SQL Practice Model for Data Analysis Using Chat GPT in Insurance Claims Database (보험 청구 데이터베이스에서 Chat GPT를 이용한 데이터 분석을 위한 SQL 실습 모델 연구)

  • Joon-Young Choi
    • Journal of the Health Care and Life Science / v.11 no.1 / pp.11-23 / 2023
  • In this study, a practice model that can improve healthcare information management skills using Chat GPT and SQL was examined. For SQL practice, learners were asked to use Chat GPT to easily access the database and write SQL for data extraction. The analyses of the claims database covered the total insurance claim amount, claim amounts by treatment item, claim details corresponding to a specific amount, other diagnoses of patients with a specific prescription, examination details for a specific diagnosis, and totals by item. Executing the SQL statements written with Chat GPT for each topic confirmed that the analysis results were identical. Using ChatGPT as demonstrated in this study is expected to improve healthcare data management skills, increasing accuracy and efficiency in database management and analysis work rather than simply simplifying or automating tasks.
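
A minimal sketch of the kind of claims-database query the practice model targets, here executed against an in-memory SQLite database from Python; the table name, columns, and sample rows are hypothetical and do not reflect the insurance claims schema used in the study.

```python
# Sketch: the kind of claims-database query a learner might write with Chat GPT's help.
# The schema (a "claims" table with hypothetical columns) is illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claims (claim_id INTEGER, patient_id INTEGER,
                     treatment_item TEXT, diagnosis_code TEXT, amount INTEGER);
INSERT INTO claims VALUES
  (1, 101, 'Examination', 'J20', 15000),
  (2, 101, 'Injection',   'J20',  8000),
  (3, 102, 'Examination', 'E11', 20000);
""")

# Total claim amount, and claim amounts grouped by treatment item
total = conn.execute("SELECT SUM(amount) FROM claims").fetchone()[0]
by_item = conn.execute(
    "SELECT treatment_item, SUM(amount) FROM claims GROUP BY treatment_item"
).fetchall()

print("total claimed:", total)
print("by treatment item:", by_item)
conn.close()
```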

Data Augmentation of English Reading Comprehension Tutoring Dialogs using ChatGPT (ChatGPT 를 이용한 독해 튜터링 대화 데이터 확장)

  • Hyunyou Kwon; Sung-Kwon Choi; Jinxia Huang; Oh-Woog Kwon
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.43-44 / 2023
  • We evaluated the potential of ChatGPT for creating and expanding a student-driven dialog dataset for a conversational reading comprehension tutoring system. Comparing an existing, purely manually constructed dataset with one semi-automatically expanded using ChatGPT showed ChatGPT's usefulness in terms of the amount of data built, time required, cost, and repetitive work. However, limitations also appeared, such as an uneven distribution across dialog types and the generation of inappropriate data. As ChatGPT is expected to advance rapidly, we expect semi-automatic data expansion with ChatGPT to be widely adopted in the field of conversational tutoring.
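
A minimal sketch of semi-automatic dialog expansion with the OpenAI Python SDK, assuming the current chat-completions interface; the model name, prompt wording, and filtering are illustrative assumptions, and generated candidates would still need the manual review the abstract notes.

```python
# Sketch: asking ChatGPT to paraphrase a seed student utterance into new dialog
# candidates for a reading-comprehension tutoring dataset.
# Model name, prompt wording, and post-filtering are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def expand_utterance(seed: str, n: int = 3) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Paraphrase the student's utterance from a reading tutoring "
                        "dialog. Return one paraphrase per line."},
            {"role": "user", "content": seed},
        ],
        temperature=0.9,
    )
    lines = resp.choices[0].message.content.splitlines()
    return [l.strip() for l in lines if l.strip()][:n]

# Candidates would be reviewed manually before being added to the dataset.
print(expand_utterance("I don't understand what the second paragraph means."))
```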

Analysis of ChatGPT's Coding Capabilities in Foundational Programming Courses (기초 프로그래밍 과목에서의 ChatGPT의 코딩 역량 분석)

  • Nah, Jae-Ho
    • Journal of Engineering Education Research / v.26 no.6 / pp.71-78 / 2023
  • ChatGPT significantly broadens the application of artificial intelligence (AI) services across various domains, and one of its primary functions is assistance with programming and coding. Nevertheless, because of ChatGPT's short history, few studies have analyzed its coding capabilities in Korean higher education. In this paper, we evaluate it using exam questions from three foundational programming courses at S University. According to the experimental results, ChatGPT successfully generated Python, C, and Java programs, and the code quality was on par with that of high-achieving students. ChatGPT's powerful coding capabilities imply that its use in coding tests must be strictly prohibited; however, they also suggest significant potential for enhancing practical exercises from an educational perspective.

User Factors and Trust in ChatGPT: Investigating the Relationship between Demographic Variables, Experience with AI Systems, and Trust in ChatGPT (사용자 특성과 ChatGPT 신뢰의 관계 : 인구통계학적 변수와 AI 경험의 영향)

  • Park Yeeun; Jang Jeonghoon
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.4 / pp.53-71 / 2023
  • This study explores the relationship between various user factors and the level of trust in ChatGPT, a sophisticated language model exhibiting human-like capabilities. Specifically, we considered demographic characteristics such as age, education, gender, and major, along with factors related to previous AI experience, including duration, frequency, proficiency, perception, and familiarity. Through a survey of 140 participants (71 females and 69 males), we collected and analyzed data to examine how these user factors relate to trust in ChatGPT. Both descriptive and inferential statistical methods, including multiple linear regression models, were employed in the analysis. Our findings reveal significant relationships between trust in ChatGPT and user factors such as gender, the perception of prior AI interactions, and self-evaluated proficiency. This research not only enhances our understanding of trust in artificial intelligence but also offers valuable insights for AI developers and practitioners in the field.
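
A minimal sketch of the multiple linear regression analysis mentioned above, using statsmodels with toy data; the variable names, coding, and scales are hypothetical and may differ from the study's survey instrument.

```python
# Sketch: multiple linear regression of trust in ChatGPT on user factors.
# Variable names, coding, and the toy data are hypothetical illustrations.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "trust":          [4.2, 3.8, 2.9, 4.5, 3.1, 3.9, 2.5, 4.0],  # self-reported trust
    "gender":         ["F", "M", "F", "M", "F", "M", "F", "M"],
    "age":            [23, 31, 27, 45, 22, 38, 29, 34],
    "ai_frequency":   [5, 3, 2, 4, 1, 4, 2, 5],    # frequency of prior AI use
    "ai_proficiency": [4, 3, 2, 5, 1, 4, 2, 4],    # self-evaluated proficiency
    "ai_perception":  [5, 4, 3, 4, 2, 4, 3, 5],    # perception of prior AI interactions
})

# Trust regressed on demographics and prior-AI-experience factors
model = smf.ols(
    "trust ~ C(gender) + age + ai_frequency + ai_proficiency + ai_perception",
    data=df,
).fit()
print(model.summary())   # coefficients, p-values, R-squared
```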

Quality Evaluation of Automatically Generated Metadata Using ChatGPT: Focusing on Dublin Core for Korean Monographs (ChatGPT가 자동 생성한 더블린 코어 메타데이터의 품질 평가: 국내 도서를 대상으로)

  • SeonWook Kim; HyeKyung Lee; Yong-Gu Lee
    • Journal of the Korean Society for Information Management / v.40 no.2 / pp.183-209 / 2023
  • The purpose of this study is to evaluate the Dublin Core metadata generated by ChatGPT from the book covers, title pages, and colophons of a collection of books. To achieve this, we collected the covers, title pages, and colophons of 90 books and input them into ChatGPT to generate Dublin Core metadata. Performance was evaluated in terms of completeness and accuracy. The overall results showed a satisfactory level of completeness at 0.87 and accuracy at 0.71. Among the individual elements, Title, Creator, Publisher, Date, Identifier, Rights, and Language exhibited higher performance. The Subject and Description elements showed relatively lower completeness and accuracy, but they confirmed the generative capability regarded as an inherent strength of ChatGPT. On the other hand, books in the DDC social sciences and technology classes showed slightly lower accuracy for the Contributor element; this was attributed to ChatGPT's attribution extraction errors, omissions in the original bibliographic descriptions used for the metadata, and the language composition of ChatGPT's training data.
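
A minimal sketch of per-element completeness and accuracy scoring for generated Dublin Core records; the element list is standard Dublin Core, but the scoring rules here (non-empty = complete, exact match to a reference record = accurate) are simplifying assumptions rather than the paper's evaluation rubric.

```python
# Sketch: per-element completeness and accuracy for ChatGPT-generated Dublin Core
# records. Scoring rules (non-empty = complete, match to reference = accurate)
# are simplifying assumptions, not the paper's rubric.
DC_ELEMENTS = ["Title", "Creator", "Subject", "Description", "Publisher",
               "Contributor", "Date", "Identifier", "Rights", "Language"]

def score(generated_records, reference_records):
    completeness, accuracy = {}, {}
    for el in DC_ELEMENTS:
        gen = [g.get(el, "").strip() for g in generated_records]
        ref = [r.get(el, "").strip() for r in reference_records]
        completeness[el] = sum(bool(v) for v in gen) / len(gen)
        accuracy[el] = sum(g == r and bool(r) for g, r in zip(gen, ref)) / len(ref)
    return completeness, accuracy

# Toy example with two records
gen = [{"Title": "한국의 새", "Creator": "Kim, C.", "Date": "2020"},
       {"Title": "데이터 과학 입문", "Creator": "", "Date": "2019"}]
ref = [{"Title": "한국의 새", "Creator": "Kim, C.", "Date": "2020"},
       {"Title": "데이터 과학 입문", "Creator": "Lee, H.", "Date": "2019"}]
print(score(gen, ref))
```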

Evaluation of the applicability of ChatGPT in biological nursing science education (ChatGPT의 기초간호학교육 활용 가능성 평가)

  • Sunmi Kim; Jihun Kim; Myung Jin Choi; Seok Hee Jeong
    • Journal of Korean Biological Nursing Science / v.25 no.3 / pp.183-204 / 2023
  • Purpose: The purpose of this study was to evaluate the applicability of ChatGPT in biological nursing science education. Methods: This study was conducted by entering questions about the field of biological nursing science into ChatGPT versions GPT-3.5 and GPT-4 and evaluating the answers. Three questions each related to microbiology and pharmacology were entered, and the generated content was analyzed to determine its applicability to the field of biological nursing science. The questions were of a level that could be presented to nursing students as written test questions. Results: The answers generated in English had 100.0% accuracy in both GPT-3.5 and GPT-4. For the sentences generated in Korean, the accuracy rate of GPT-3.5 was 62.7%, and that of GPT-4 was 100.0%. The total number of Korean sentences in GPT-3.5 was 51, while the total number of Korean sentences in GPT-4 was 68. Likewise, the total number of English sentences in GPT-3.5 was 70, while the total number of English sentences in GPT-4 was 75. This showed that even for the same Korean or English question, GPT-4 tended to be more detailed than GPT-3.5. Conclusion: This study confirmed the advantages of ChatGPT as a tool to improve understanding of various complex concepts in the field of biological nursing science. However, as the answers were based on data collected up to 2021, a guideline reflecting the most up-to-date information is needed. Further research is needed to develop a reliable and valid scale to evaluate ChatGPT's responses.

A Qualitative Research on Exploring Consideration Factors for Educational Use of ChatGPT (ChatGPT의 교육적 활용 고려 요소 탐색을 위한 질적 연구)

  • Hyeongjong Han
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.659-666 / 2023
  • Among tools based on generative artificial intelligence, the possibility of using ChatGPT is being explored. However, few studies have examined, based on learners' actual perceptions, which factors should be considered when using it for education. Using a qualitative research method, this study derived factors to consider when using ChatGPT in education. The results identified five key factors: thinking critically about generated information, recognizing ChatGPT as a tool to support learning and avoiding dependent use, providing prior training on ethical usage, generating clear and appropriate questions, and reviewing and synthesizing answers. It is necessary to develop an instructional design model that comprehensively incorporates these factors.