
ChatGPT-based Software Requirements Engineering (ChatGPT 기반 소프트웨어 요구공학)

  • Jongmyung Choi
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.6
    • /
    • pp.45-50
    • /
    • 2023
  • In software development, requirements elicitation and analysis is a crucial phase that demands considerable time and effort because of the many stakeholders involved. ChatGPT, a large language model trained on a diverse array of documents, can not only generate and debug code but also be applied to software analysis and design. This paper proposes a requirements engineering method that leverages ChatGPT to elicit software requirements, analyze them against system goals, and document them as use cases. It suggests a collaborative model in which stakeholders, analysts, and ChatGPT work together: ChatGPT's outputs serve as initial requirements, which analysts and stakeholders then review and augment. As ChatGPT's capability improves, the accuracy of requirements elicitation and analysis is expected to increase, yielding time and cost savings in software requirements engineering.
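
The collaborative loop summarized above (ChatGPT drafts, analysts and stakeholders review) could be sketched as a prompt-construction step; the wording, function name, and use-case fields below are illustrative assumptions, not taken from the paper:

```python
def build_elicitation_prompt(system_goal: str, stakeholder_notes: list[str]) -> str:
    """Assemble a prompt asking an LLM to draft initial use-case requirements.

    The prompt wording is hypothetical; the paper does not publish its prompts.
    """
    notes = "\n".join(f"- {n}" for n in stakeholder_notes)
    return (
        f"System goal: {system_goal}\n"
        f"Stakeholder notes:\n{notes}\n\n"
        "Draft candidate requirements as use cases (actor, precondition, "
        "main flow, postcondition). Mark every assumption explicitly so "
        "analysts and stakeholders can review and revise it."
    )
```

The LLM's reply would then be treated only as a first draft for the analyst review step described in the abstract.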

Exploring automatic scoring of mathematical descriptive assessment using prompt engineering with the GPT-4 model: Focused on permutations and combinations (프롬프트 엔지니어링을 통한 GPT-4 모델의 수학 서술형 평가 자동 채점 탐색: 순열과 조합을 중심으로)

  • Byoungchul Shin;Junsu Lee;Yunjoo Yoo
    • The Mathematical Education
    • /
    • v.63 no.2
    • /
    • pp.187-207
    • /
    • 2024
  • In this study, we explored the feasibility of automatically scoring descriptive assessment items with GPT-4-based ChatGPT by comparing and analyzing the scoring results of teachers and of GPT-4-based ChatGPT. For this purpose, three descriptive items from the permutations and combinations unit for first-year high school students were selected from the KICE (Korea Institute for Curriculum and Evaluation) website. Items 1 and 2 had only one problem-solving strategy, while Item 3 had more than two. Two teachers, each with over eight years of educational experience, graded answers from 204 students, and these were compared with the results from GPT-4-based ChatGPT. Techniques such as Few-Shot-CoT, SC, structured, and iterative prompts were utilized to construct the scoring prompts, which were then input into GPT-4-based ChatGPT. The scoring results for Items 1 and 2 showed a strong correlation between the teachers' and GPT-4's scoring. For Item 3, which involved multiple problem-solving strategies, student answers were first classified by strategy using prompts input into GPT-4-based ChatGPT; scoring prompts tailored to each type were then applied, and these results also showed a strong correlation with the teachers' scoring. This confirmed the potential of GPT-4 models with prompt engineering to assist teachers' scoring; the study's limitations and directions for future research are also presented.
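
The prompting techniques named above could be combined roughly as follows; this is a minimal sketch assuming SC denotes self-consistency (majority voting over several sampled completions), and the prompt wording and example data are illustrative, not from the study:

```python
from collections import Counter

def build_scoring_prompt(examples, rubric, answer):
    """Few-shot chain-of-thought scoring prompt (illustrative wording).

    examples: list of (student_answer, worked_reasoning, score) triples.
    """
    shots = "\n\n".join(
        f"Student answer: {a}\nReasoning: {r}\nScore: {s}" for a, r, s in examples
    )
    return (
        f"Rubric:\n{rubric}\n\n{shots}\n\n"
        f"Student answer: {answer}\nReason step by step, then give a score."
    )

def self_consistency(scores):
    """SC-style aggregation: keep the majority score across sampled runs."""
    return Counter(scores).most_common(1)[0][0]
```

Each student answer would be scored several times with the same prompt, and `self_consistency` would pick the modal score.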

Evaluation of the applicability of ChatGPT in biological nursing science education (ChatGPT의 기초간호학교육 활용 가능성 평가)

  • Sunmi Kim;Jihun Kim;Myung Jin Choi;Seok Hee Jeong
    • Journal of Korean Biological Nursing Science
    • /
    • v.25 no.3
    • /
    • pp.183-204
    • /
    • 2023
  • Purpose: The purpose of this study was to evaluate the applicability of ChatGPT in biological nursing science education. Methods: This study was conducted by entering questions about the field of biological nursing science into ChatGPT versions GPT-3.5 and GPT-4 and evaluating the answers. Three questions each related to microbiology and pharmacology were entered, and the generated content was analyzed to determine its applicability to the field of biological nursing science. The questions were of a level that could be presented to nursing students as written test questions. Results: The answers generated in English had 100.0% accuracy in both GPT-3.5 and GPT-4. For the sentences generated in Korean, the accuracy rate of GPT-3.5 was 62.7%, and that of GPT-4 was 100.0%. The total number of Korean sentences in GPT-3.5 was 51, while the total number of Korean sentences in GPT-4 was 68. Likewise, the total number of English sentences in GPT-3.5 was 70, while the total number of English sentences in GPT-4 was 75. This showed that even for the same Korean or English question, GPT-4 tended to be more detailed than GPT-3.5. Conclusion: This study confirmed the advantages of ChatGPT as a tool to improve understanding of various complex concepts in the field of biological nursing science. However, as the answers were based on data collected up to 2021, a guideline reflecting the most up-to-date information is needed. Further research is needed to develop a reliable and valid scale to evaluate ChatGPT's responses.

Software Education Class Model using Generative AI - Focusing on ChatGPT (생성형 AI를 활용한 소프트웨어교육 수업모델 연구 - ChatGPT를 중심으로)

  • Myung-suk Lee
    • Journal of Practical Engineering Education
    • /
    • v.16 no.3_spc
    • /
    • pp.275-282
    • /
    • 2024
  • This study examined a teaching model for software education using generative AI. Its purpose is to use ChatGPT as an instructor's assistant in programming classes for non-major students. ChatGPT was also set up to enable individualized learning and to provide immediate feedback when students need it. The method was to use ChatGPT as an assistant for non-computer majors taking a liberal arts Python class and to assess whether ChatGPT has potential as an assistant in programming education for non-major students. Students actively used ChatGPT for writing assignments, correcting errors, writing code, and acquiring knowledge, and various advantages were confirmed, such as being able to focus on understanding the program rather than spending much time resolving errors. ChatGPT showed potential to increase students' learning efficiency, and more research on its use in education is needed. Future work will address the development, supplementation, and evaluation methods of educational models using ChatGPT.

A Study on the impact of ChatGPT Quality and Satisfaction on Intention to Continuous Use (ChatGPT 품질과 활용만족이 지속적 이용의도에 미치는 영향)

  • Park Cheol Woo;Kang Gyung Lan
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.6
    • /
    • pp.191-199
    • /
    • 2023
  • The purpose of this study is to examine the impact of ChatGPT's quality on users' satisfaction and their intention to continue using it. For this purpose, a survey was conducted targeting college students in the Busan and Gyeongnam regions, and responses from a total of 155 people were analyzed using the SPSS 28.0 program. Among ChatGPT quality factors, reliability and stability were found to have a positive effect on satisfaction with use and on intention of continued use. Satisfaction with the use of ChatGPT was found to have a positive effect on intention of continued use, and satisfaction with use had a positive mediating effect between the reliability and stability of ChatGPT quality and intention of continued use. By presenting the quality factors that affect users' intention to keep using ChatGPT, this study aims to contribute to the educational and policy directions needed to promote its use.

A Study on the ChatGPT: Focused on the News Big Data Service and ChatGPT Use Cases (ChatGPT에 관한 연구: 뉴스 빅데이터 서비스와 ChatGPT 활용 사례를 중심으로)

  • Lee Yunhee;Kim Chang-Sik;Ahn Hyunchul
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.19 no.1
    • /
    • pp.139-151
    • /
    • 2023
  • This study aims to gain insights into ChatGPT, which has recently received significant attention. The study utilized a mixed method involving case studies and news big data analysis. ChatGPT can be described as a language model optimized for dialogue. The question arises whether ChatGPT will replace Google search services, posing a potential threat to Google: it could hurt Google's advertising business, which is the foundation of its profits. With AI-based chatbots like ChatGPT likely to disrupt the web search industry, Google is establishing a new AI strategy. The study used the BIG KINDS service and analyzed 2,136 articles over six months, from August 23, 2022, to February 22, 2023. Thirty of these articles were published in 2022, while 2,106 appeared in 2023, as of February 22. The study also examined ChatGPT through literature research, news big data analysis, and use cases. Despite limitations such as the potential for false information, the analysis of news big data and use cases suggests that ChatGPT is worth using.

Factors Influencing User's Satisfaction in ChatGPT Use: Mediating Effect of Reliability (ChatGPT 사용 만족도에 미치는 영향 요인: 신뢰성의 매개효과)

  • Ki Ho Park;Jun Hu Li
    • Journal of Information Technology Services
    • /
    • v.23 no.2
    • /
    • pp.99-116
    • /
    • 2024
  • Recently, interest in ChatGPT has been increasing. This study investigated the factors influencing the satisfaction of users of the ChatGPT service, a chatbot system based on artificial intelligence technology. It empirically analyzed the causal relationships between four major factors (service quality, system quality, information quality, and security) as independent variables and user satisfaction with ChatGPT as the dependent variable, along with the mediating effect of reliability between the independent variables and user satisfaction. As a result, among the quality factors, security and reliability had a positive causal relationship with use satisfaction, while information quality did not. Reliability played a mediating role between the quality factors, security, and user satisfaction; however, the mediating effect of reliability between service quality and user satisfaction was not significant. In conclusion, to increase user satisfaction with new technology-based services, it is important to build trust among users. The results emphasize the importance of user trust in establishing development and operation strategies for artificial intelligence systems, including ChatGPT.

A Study on the Web Building Assistant System Using GUI Object Detection and Large Language Model (웹 구축 보조 시스템에 대한 GUI 객체 감지 및 대규모 언어 모델 활용 연구)

  • Hyun-Cheol Jang;Hyungkuk Jang
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2024.05a
    • /
    • pp.830-833
    • /
    • 2024
  • As Large Language Models (LLMs) like OpenAI's ChatGPT[1] continue to grow in popularity, new applications and services are expected to emerge. This paper introduces an experimental study of a smart web-builder assistance system that combines computer vision-based GUI object recognition with ChatGPT. The research strategy employed computer vision technology in conjunction with the design strategy of Microsoft's "ChatGPT for Robotics: Design Principles and Model Abilities"[2]. The research also explores the capabilities of large language models like ChatGPT in various application design tasks, specifically in assisting with web-builder tasks, and examines ChatGPT's ability to synthesize code through both directed prompts and free-form conversation strategies. The researchers further explored ChatGPT's ability to perform various tasks within the builder domain, including function and closed-loop inference and basic logical and mathematical reasoning. Overall, this research proposes an efficient way to perform various application system tasks by combining natural-language commands with computer vision technology and an LLM (ChatGPT), allowing users to build applications through natural-language interaction.
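
The combination described above (detector output fed to an LLM as text) could be sketched as follows; the object labels, field names, and prompt wording are assumptions for illustration, not details published in the paper:

```python
from dataclasses import dataclass

@dataclass
class GuiObject:
    label: str  # e.g. "button", "text_field" (label vocabulary is assumed)
    x: int
    y: int
    w: int
    h: int

def describe_scene(objects: list[GuiObject]) -> str:
    """Serialize detected GUI elements into text an LLM can reason over."""
    lines = [f"{o.label} at ({o.x},{o.y}) size {o.w}x{o.h}" for o in objects]
    return "Detected GUI elements:\n" + "\n".join(lines)

def build_builder_prompt(scene: str, command: str) -> str:
    """Pair the scene description with a natural-language build command."""
    return f"{scene}\n\nUser command: {command}\nRespond with the edit to apply."
```

The LLM's reply would then be parsed back into concrete web-builder operations, closing the loop between vision and language.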

Analysis of the scholastic capability of ChatGPT utilizing the Korean College Scholastic Ability Test (대학입시 수능시험을 평가 도구로 적용한 ChatGPT의 학업 능력 분석)

  • WEN HUILIN;Kim Jinhyuk;Han Kyonghee;Kim Shiho
    • Journal of Platform Technology
    • /
    • v.11 no.5
    • /
    • pp.72-83
    • /
    • 2023
  • ChatGPT, commercially launched in late 2022, has shown successful results on various professional exams, including the US Bar Exam and the United States Medical Licensing Examination (USMLE), demonstrating its ability to pass qualifying exams in professional domains. However, further experimentation and analysis are required to assess ChatGPT's scholastic capability, such as logical inference and problem-solving skills. This study evaluated ChatGPT's scholastic performance using the Korean College Scholastic Ability Test (KCSAT) subjects of Korean, English, and Mathematics. The experimental results revealed that ChatGPT achieved a relatively high accuracy rate of 69% on the English exam but lower rates of 34% and 19% in the Korean language and Mathematics domains, respectively. By analyzing the results of the Korean language exam, the English exam, and TOPIK II, we evaluated ChatGPT's strengths and weaknesses in comprehension and logical inference. Although ChatGPT, as a generative language model, can understand and respond to general Korean, English, and Mathematics problems, it appears weak at higher-level logical inference and complex mathematical problem solving. This study may provide simple yet accurate and effective evaluation criteria for generative artificial intelligence performance assessment through the analysis of KCSAT scores.

Performance of ChatGPT on the Korean National Examination for Dental Hygienists

  • Soo-Myoung Bae;Hye-Rim Jeon;Gyoung-Nam Kim;Seon-Hui Kwak;Hyo-Jin Lee
    • Journal of dental hygiene science
    • /
    • v.24 no.1
    • /
    • pp.62-70
    • /
    • 2024
  • Background: This study aimed to evaluate ChatGPT's accuracy in responding to questions from the national dental hygienist examination and, through an analysis of its incorrect responses, to pinpoint the predominant types of errors. Methods: To evaluate ChatGPT-3.5's performance by question type, the researchers classified the 200 questions of the 49th National Dental Hygienist Examination into recall, interpretation, and solving types. The questions were strategically modified to counteract potential misunderstandings arising from implied meanings or Korean technical terminology. To assess ChatGPT-3.5's ability to apply previously acquired knowledge, each question was first converted to a subjective (open-ended) form; if ChatGPT-3.5 generated an incorrect response, the original multiple-choice format was then provided. The 200 questions were input into ChatGPT-3.5 and the generated responses were analyzed. The researchers evaluated the accuracy of each response by question type and categorized the incorrect responses (logical, information, and statistical errors). Finally, a response was counted as a hallucination when ChatGPT presented something untrue as if it were true. Results: ChatGPT's responses to the national examination were 45.5% accurate. Accuracy by question type was 60.3% for recall and 13.0% for problem-solving questions. The accuracy rate for the subjective solving questions was 13.0%, which rose to 43.5% in the objective format. The most common type of incorrect response was logical errors, accounting for 65.1% of the total. Of the 102 incorrectly answered questions, 100 were categorized as hallucinations. Conclusion: ChatGPT-3.5 was found to be limited in its ability to provide evidence-based correct responses to the Korean national dental hygienist examination. Therefore, dental hygienists in education and clinical fields should view artificial intelligence-generated materials critically.
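
The per-type accuracy breakdown reported in this study (e.g. recall vs. solving questions) amounts to a simple aggregation over graded responses; a minimal sketch, with the type labels and data shape assumed for illustration:

```python
from collections import defaultdict

def accuracy_by_type(graded):
    """Compute accuracy per question type.

    graded: list of (question_type, is_correct) pairs, e.g.
    [("recall", True), ("solving", False), ...].
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for qtype, ok in graded:
        totals[qtype] += 1
        if ok:
            correct[qtype] += 1
    return {t: correct[t] / totals[t] for t in totals}
```

Running this over the 200 graded responses would reproduce figures like 60.3% for recall and 13.0% for solving questions.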