• Title/Summary/Keyword: GPT-4 based ChatGPT

Exploring automatic scoring of mathematical descriptive assessment using prompt engineering with the GPT-4 model: Focused on permutations and combinations (프롬프트 엔지니어링을 통한 GPT-4 모델의 수학 서술형 평가 자동 채점 탐색: 순열과 조합을 중심으로)

  • Byoungchul Shin;Junsu Lee;Yunjoo Yoo
    • The Mathematical Education / v.63 no.2 / pp.187-207 / 2024
  • In this study, we explored the feasibility of automatically scoring descriptive assessment items with GPT-4 based ChatGPT by comparing and analyzing the scoring results of teachers and of GPT-4 based ChatGPT. For this purpose, three descriptive items from the permutation and combination unit for first-year high school students were selected from the KICE (Korea Institute for Curriculum and Evaluation) website. Items 1 and 2 had only one problem-solving strategy, while Item 3 had more than two strategies. Two teachers, each with over eight years of teaching experience, graded the answers of 204 students, and their scores were compared with those from GPT-4 based ChatGPT. Various prompting techniques, such as Few-Shot-CoT, SC (self-consistency), structured prompts, and iterative prompting, were used to construct the scoring prompts, which were then entered into GPT-4 based ChatGPT. For Items 1 and 2, the scoring results showed a strong correlation between the teachers' and GPT-4's scores. For Item 3, which involved multiple problem-solving strategies, the student answers were first classified by strategy using classification prompts entered into GPT-4 based ChatGPT; scoring prompts tailored to each strategy type were then applied, and these results also correlated strongly with the teachers' scores. This confirmed the potential of GPT-4 models combined with prompt engineering to assist teachers' scoring, and the limitations of this study and directions for future research are presented.
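
As an illustration of the kind of Few-Shot-CoT scoring prompt described in the abstract above, the minimal Python sketch below assembles a rubric, two worked example answers, and a student answer into a single call to the OpenAI Chat Completions API. The item, rubric, examples, and model name are placeholders for illustration only, not the study's actual materials; self-consistency (SC) could be layered on top by sampling the same prompt several times and taking the majority score.

```python
# Hypothetical sketch of a Few-Shot-CoT scoring prompt for a descriptive math item.
# The item, rubric, example answers, and scores are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """Item: In how many ways can 5 distinct books be arranged on a shelf?
Scoring rubric (3 points total):
- 1 pt: recognizes the task as a permutation of 5 distinct objects
- 1 pt: sets up 5! (or 5 x 4 x 3 x 2 x 1)
- 1 pt: computes the correct value, 120"""

FEW_SHOT = """Example answer: "Each position can hold any remaining book, so 5*4*3*2*1 = 120."
Reasoning: permutation recognized (1), factorial set up (1), value correct (1). Score: 3

Example answer: "5 + 4 + 3 + 2 + 1 = 15 ways."
Reasoning: counts additively, no permutation, wrong value. Score: 0"""

def score_answer(student_answer: str) -> str:
    """Ask the model to grade step by step and end with 'Score: N'."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system", "content": "You are a careful grader. Follow the rubric exactly."},
            {"role": "user", "content": f"{RUBRIC}\n\n{FEW_SHOT}\n\n"
                                        f'Student answer: "{student_answer}"\n'
                                        "Reason step by step, then end with 'Score: N'."},
        ],
    )
    return response.choices[0].message.content

print(score_answer("There are 5! = 120 possible orders."))
```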

A Study on the University Education Plan Using ChatGPT for University Students (ChatGPT를 활용한 대학 교육 방안 연구)

  • Hyun-ju Kim;Jinyoung Lee
    • The Journal of the Convergence on Culture Technology / v.10 no.1 / pp.71-79 / 2024
  • ChatGPT, an interactive artificial intelligence (AI) chatbot developed by OpenAI in the U.S., has gained popularity with great repercussions around the world. Some academics are concerned that ChatGPT can be used by students for plagiarism, but ChatGPT is also widely used in positive ways, such as writing marketing copy or website text. There is also an opinion that ChatGPT could be a new future for "search," and some analysts say that the focus should be on fostering it rather than on excessive regulation. This study analyzed college students' awareness of ChatGPT through a survey of their perceptions, and a plagiarism inspection system was prepared to establish an education support model using ChatGPT. Based on this, a university education support model using ChatGPT was constructed. The education model was built around text, digital, and art, with the detailed strategies required for the era of the Fourth Industrial Revolution composed beneath these strands. In addition, the model guides students to use ChatGPT within a permitted range: the instructor of each class determines the allowable range of ChatGPT-generated content according to the learning goals, and the ChatGPT detection function provided by the plagiarism inspection system is then used to check compliance. By linking ChatGPT and the plagiarism inspection system in this way, it is expected that situations in which ChatGPT's excellent capabilities are abused in education can be prevented.

Game System for Autonomous Level Design Based on ChatGPT (ChatGPT 기반의 자율형 레벨 디자인을 위한 게임 시스템)

  • Do-Hoon Jung;Jun-Gyeong Lee;Sung-Jun Park
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.4 / pp.113-119 / 2024
  • In this paper, a model was devised that uses ChatGPT for game balancing by changing the numerical values that affect the game balance. Based on the usability of ChatGPT shown in several prior studies and use cases, ChatGPT is automated to directly adjust detailed, objective in-game numerical values. The format of ChatGPT's responses was kept consistent so that the numerical values required for game balancing could be obtained directly from its answers. In the experiment, four players autonomously designed the game level over five rounds of balance adjustment, confirming the approach. This study suggests the possibility that games can be produced using ChatGPT in the future.
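
The abstract above describes fixing the format of ChatGPT's responses so that balance values can be read off directly from its answers. A minimal sketch of one way this might be done is shown below, assuming a JSON reply parsed in Python; the parameter names, prompt wording, and model name are illustrative assumptions, not the system described in the paper.

```python
# Hypothetical sketch: request game-balance values in a fixed JSON shape so they can be
# parsed and applied automatically. Field names and prompts are illustrative only.
import json
from openai import OpenAI

client = OpenAI()

def request_balance_update(play_log: str) -> dict:
    """Send one round's play log and get back adjusted balance parameters as JSON."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system", "content":
                "You tune game balance. Reply ONLY with JSON of the form "
                '{"enemy_hp": <int>, "enemy_damage": <int>, "spawn_interval_sec": <float>}.'},
            {"role": "user", "content":
                f"Play log for this round:\n{play_log}\n"
                "Adjust the values so the next round is neither too easy nor too hard."},
        ],
    )
    return json.loads(response.choices[0].message.content)

params = request_balance_update("4 players cleared the level in 90 seconds with no deaths.")
print(params["enemy_hp"], params["spawn_interval_sec"])
```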

A Study on the Service Integration of Traditional Chatbot and ChatGPT (전통적인 챗봇과 ChatGPT 연계 서비스 방안 연구)

  • Cheonsu Jeong
    • Journal of Information Technology Applications and Management / v.30 no.4 / pp.11-28 / 2023
  • This paper proposes a method of integrating ChatGPT with traditional chatbot systems to enhance conversational artificial intelligence (AI) and create more efficient conversational systems. Traditional chatbot systems are primarily based on classification models and are limited to intent classification and simple response generation. In contrast, ChatGPT is a state-of-the-art AI technology for natural language generation, which can generate more natural and fluent conversations. In this paper, we analyze the business service areas that can be integrated with ChatGPT and traditional chatbots, and present methods for conducting conversational scenarios through case studies of service types. Additionally, we suggest ways to integrate ChatGPT with traditional chatbot systems for intent recognition, conversation flow control, and response generation. We provide a practical implementation example of how to integrate ChatGPT with traditional chatbots, making it easier to understand and build such integrations and to actively utilize ChatGPT alongside existing chatbots.
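
To make the intent-recognition and response-generation split described above concrete, the following sketch routes an utterance through a stand-in traditional classifier first and falls back to ChatGPT only when the classifier is unsure. The intents, confidence threshold, and canned responses are hypothetical; the paper's actual integration scenarios may differ.

```python
# Hypothetical sketch: traditional intent classifier first, ChatGPT as generative fallback.
# Intents, threshold, and canned responses are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

CANNED_RESPONSES = {
    "check_balance": "Your current balance is shown in the Accounts tab.",
    "reset_password": "You can reset your password under Settings > Security.",
}

def classify_intent(utterance: str) -> tuple[str, float]:
    """Stand-in for the traditional classification model (e.g., a fine-tuned text classifier)."""
    text = utterance.lower()
    if "password" in text:
        return "reset_password", 0.95
    if "balance" in text:
        return "check_balance", 0.90
    return "unknown", 0.20

def respond(utterance: str) -> str:
    intent, confidence = classify_intent(utterance)
    if confidence >= 0.8 and intent in CANNED_RESPONSES:
        return CANNED_RESPONSES[intent]           # deterministic, rule-based path
    completion = client.chat.completions.create(  # generative fallback via ChatGPT
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a polite customer-service assistant."},
            {"role": "user", "content": utterance},
        ],
    )
    return completion.choices[0].message.content

print(respond("How do I reset my password?"))        # handled by the traditional chatbot
print(respond("Tell me about your premium plans."))  # handled by ChatGPT
```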

Evaluation of the applicability of ChatGPT in biological nursing science education (ChatGPT의 기초간호학교육 활용 가능성 평가)

  • Sunmi Kim;Jihun Kim;Myung Jin Choi;Seok Hee Jeong
    • Journal of Korean Biological Nursing Science / v.25 no.3 / pp.183-204 / 2023
  • Purpose: The purpose of this study was to evaluate the applicability of ChatGPT in biological nursing science education. Methods: This study was conducted by entering questions about the field of biological nursing science into ChatGPT versions GPT-3.5 and GPT-4 and evaluating the answers. Three questions each related to microbiology and pharmacology were entered, and the generated content was analyzed to determine its applicability to the field of biological nursing science. The questions were of a level that could be presented to nursing students as written test questions. Results: The answers generated in English had 100.0% accuracy in both GPT-3.5 and GPT-4. For the sentences generated in Korean, the accuracy rate of GPT-3.5 was 62.7%, and that of GPT-4 was 100.0%. The total number of Korean sentences in GPT-3.5 was 51, while the total number of Korean sentences in GPT-4 was 68. Likewise, the total number of English sentences in GPT-3.5 was 70, while the total number of English sentences in GPT-4 was 75. This showed that even for the same Korean or English question, GPT-4 tended to be more detailed than GPT-3.5. Conclusion: This study confirmed the advantages of ChatGPT as a tool to improve understanding of various complex concepts in the field of biological nursing science. However, as the answers were based on data collected up to 2021, a guideline reflecting the most up-to-date information is needed. Further research is needed to develop a reliable and valid scale to evaluate ChatGPT's responses.

The Impact of User Trust and Anthropomorphism on the Continuance Intention to Use ChatGPT (사용자 신뢰와 의인화가 ChatGPT의 지속적인 사용 의도에 미치는 영향)

  • Jang, Ji Yeong;Suh, Chang Kyo
    • The Journal of Information Systems / v.33 no.1 / pp.91-114 / 2024
  • Purpose The purpose of this study is to empirically investigate the factors that influence users' continuance intention to use ChatGPT, based on the Expectation Confirmation Model (ECM). Drawing from the literature, this study identifies anthropomorphism and trust as key characteristics of generative AI and ChatGPT. Design/methodology/approach The research model was developed based on the ECM and prior literature to investigate the impacts of anthropomorphism and trust on the continuance intention to use ChatGPT. To test the hypotheses, a total of 193 questionnaires were collected and analyzed with structural equation modeling using SmartPLS 4.0. Findings The study's findings show that all proposed hypotheses were supported, suggesting that the ECM is a valid framework for examining the continuance intention to use ChatGPT. Moreover, the study stressed the crucial role of anthropomorphism in the model, showing its positive impact on expectation confirmation, perceived usefulness, and trust in ChatGPT. Trust also positively affects perceived usefulness. These findings provide valuable insights for enhancing user satisfaction and continuance intention, serving as a foundation for development strategies for ChatGPT and similar AI-based systems.

An Exploratory Study on Issues Related to chatGPT and Generative AI through News Big Data Analysis

  • Jee Young Lee
    • International Journal of Advanced Culture Technology / v.11 no.4 / pp.378-384 / 2023
  • In this study, we explore social awareness, interest, and acceptance of generative AI, including ChatGPT, which has revolutionized web search three decades after its advent. For this purpose, we performed a machine learning-based topic modeling analysis of Korean news big data collected from November 30, 2022, when ChatGPT was released, to August 31, 2023. As a result, we identified seven topics related to ChatGPT and generative AI: (1) growth of the high-performance hardware market, (2) service contents using generative AI, (3) technology development competition, (4) human resource development, (5) instructions for use, (6) revitalizing the domestic ecosystem, and (7) expectations and concerns. We also examined monthly changes in topic frequency to trace social interest related to ChatGPT and generative AI. Based on these results, we discuss the high social interest in and issues surrounding generative AI. We expect that the results of this study can serve as a precursor to research that analyzes and predicts the diffusion of innovation in generative AI.
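
The abstract above reports a machine learning-based topic modeling analysis that yielded seven topics. As a rough illustration, the sketch below runs LDA with gensim on a few placeholder headlines; LDA, the English toy documents, and whitespace tokenization are assumptions made for brevity (the study's Korean news corpus would require a Korean morphological analyzer), not the authors' actual pipeline.

```python
# Hypothetical sketch: LDA topic modeling with gensim on placeholder headlines.
# num_topics=7 mirrors the seven topics reported in the abstract; everything else is a toy.
from gensim import corpora
from gensim.models import LdaModel

documents = [
    "generative AI service contents launched by domestic portal",
    "GPU demand surges as the high-performance hardware market grows",
    "government announces plan for AI human resource development",
    "companies compete in generative AI technology development",
]
# Whitespace tokenization keeps the sketch short; real Korean articles would need
# a morphological analyzer and stop-word filtering before building the dictionary.
texts = [doc.lower().split() for doc in documents]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=7, random_state=42, passes=10)
for topic_id, terms in lda.print_topics(num_words=5):
    print(topic_id, terms)
```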

A Qualitative Research on Exploring Consideration Factors for Educational Use of ChatGPT (ChatGPT의 교육적 활용 고려 요소 탐색을 위한 질적 연구)

  • Hyeongjong Han
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.659-666 / 2023
  • Among tools based on generative artificial intelligence, the possibilities of using ChatGPT in education are being explored. However, studies that identify, based on learners' actual perceptions, which factors should be considered in its educational use are still insufficient. Using a qualitative research method, this study derived factors to consider when using ChatGPT in education. The results showed five key factors: critical thinking about the generated information, recognizing it as a tool to support learning and avoiding dependent use, conducting prior training on ethical usage, generating clear and appropriate questions, and reviewing and synthesizing the answers. It is necessary to develop an instructional design model that comprehensively incorporates these factors.

Performance of ChatGPT 3.5 and 4 on U.S. dental examinations: the INBDE, ADAT, and DAT

  • Mahmood Dashti;Shohreh Ghasemi;Niloofar Ghadimi;Delband Hefzi;Azizeh Karimian;Niusha Zare;Amir Fahimipour;Zohaib Khurshid;Maryam Mohammadalizadeh Chafjiri;Sahar Ghaedsharaf
    • Imaging Science in Dentistry / v.54 no.3 / pp.271-275 / 2024
  • Purpose: Recent advancements in artificial intelligence (AI), particularly tools such as ChatGPT developed by OpenAI, a U.S.-based AI research organization, have transformed the healthcare and education sectors. This study investigated the effectiveness of ChatGPT in answering dentistry exam questions, demonstrating its potential to enhance professional practice and patient care. Materials and Methods: This study assessed the performance of ChatGPT 3.5 and 4 on U.S. dental exams - specifically, the Integrated National Board Dental Examination (INBDE), Dental Admission Test (DAT), and Advanced Dental Admission Test (ADAT) - excluding image-based questions. Using customized prompts, ChatGPT's answers were evaluated against official answer sheets. Results: ChatGPT 3.5 and 4 were tested with 253 questions from the INBDE, ADAT, and DAT exams. For the INBDE, both versions achieved 80% accuracy in knowledge-based questions and 66-69% in case history questions. On the ADAT, they scored 66-83% in knowledge-based and 76% in case history questions. ChatGPT 4 excelled on the DAT, with 94% accuracy in knowledge-based questions, 57% in mathematical analysis items, and 100% in comprehension questions, surpassing ChatGPT 3.5's rates of 83%, 31%, and 82%, respectively. The difference was significant for knowledge-based questions (P = 0.009). Both versions showed similar patterns in incorrect responses. Conclusion: Both ChatGPT 3.5 and 4 effectively handled knowledge-based, case history, and comprehension questions, with ChatGPT 4 being more reliable and surpassing the performance of 3.5. ChatGPT 4's perfect score in comprehension questions underscores its trainability in specific subjects. However, both versions exhibited weaker performance in mathematical analysis, suggesting this as an area for improvement.

Evaluating ChatGPT's Competency in BIM Related Knowledge via the Korean BIM Expertise Exam (BIM 운용 전문가 시험을 통한 ChatGPT의 BIM 분야 전문 지식 수준 평가)

  • Choi, Jiwon;Koo, Bonsang;Yu, Youngsu;Jeong, Yujeong;Ham, Namhyuk
    • Journal of KIBIM / v.13 no.3 / pp.21-29 / 2023
  • ChatGPT, a chatbot based on GPT large language models, has gained immense popularity among the general public as well as domain professionals. To assess its proficiency in specialized fields, ChatGPT was tested on mainstream exams like the bar exam and medical licensing tests. This study evaluated ChatGPT's ability to answer questions related to Building Information Modeling (BIM) by testing it on Korea's BIM expertise exam, focusing primarily on multiple-choice problems. Both GPT-3.5 and GPT-4 were tested by prompting them to provide the correct answers to three years' worth of exams, totaling 150 questions. The results showed that both versions passed the test with average scores of 68 and 85, respectively. GPT-4 performed particularly well in categories related to 'BIM software' and 'Smart Construction technology'. However, it did not fare well in 'BIM applications'. Both versions were more proficient with short-answer choices than with sentence-length answers. Additionally, GPT-4 struggled with questions related to BIM policies and regulations specific to the Korean industry. Such limitations might be addressed by using tools like LangChain, which allow for feeding domain-specific documents to customize ChatGPT's responses. These advancements are anticipated to enhance ChatGPT's utility as a virtual assistant for BIM education and modeling automation.
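
As a rough sketch of how a multiple-choice exam evaluation like the one above might be automated, the code below prompts each model for a single answer letter and scores it against an answer key. The two questions, the key, and the prompt wording are invented placeholders, not items from the Korean BIM expertise exam.

```python
# Hypothetical sketch: scoring GPT-3.5 and GPT-4 on multiple-choice questions.
# Questions and answer key are invented placeholders, not the actual BIM exam items.
from openai import OpenAI

client = OpenAI()

QUESTIONS = [
    {"q": "Which file format is the ISO standard for openBIM data exchange?\n"
          "A) DWG  B) IFC  C) OBJ  D) STL",
     "answer": "B"},
    {"q": "Which process checks a federated BIM model for physical conflicts between disciplines?\n"
          "A) Clash detection  B) Rendering  C) Sheet indexing  D) Georeferencing",
     "answer": "A"},
]

def ask(model: str, question: str) -> str:
    """Return the single answer letter the model picks."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system", "content": "Answer with only the letter of the correct choice."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()[:1].upper()

def score(model: str) -> float:
    """Percentage of questions answered correctly."""
    correct = sum(ask(model, item["q"]) == item["answer"] for item in QUESTIONS)
    return 100 * correct / len(QUESTIONS)

for model in ("gpt-3.5-turbo", "gpt-4"):
    print(model, score(model))
```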