• Title/Summary/Keyword: patient chatbot

Search Results: 6

Usability and Educational Effectiveness of AI-based Patient Chatbot for Clinical Skills Training in Korean Medicine (한의학 임상실습교육을 위한 인공지능 기반 환자 챗봇의 사용성과 교육적 효과성)

  • Yejin Han
    • Korean Journal of Acupuncture / v.41 no.1 / pp.27-32 / 2024
  • Objectives: This study developed an AI-based patient chatbot and examined its usability and educational effectiveness in the context of Korean medicine education. Methods: The patient chatbot was built with the AI chatbot builder 'Danbee', and five experts were surveyed and interviewed about the chatbot's usability, effectiveness, advantages, disadvantages, and points for improvement. Results: The patient chatbot showed high usability and educational effectiveness. Its advantages were that 1) it gave students practical experience in performing clinical skills, 2) it provided instructors with assessment materials while reducing their teaching burden, and 3) it could be used effectively for horizontal and vertical integration education. The suggested improvements were 1) raising the accuracy of intention inference, 2) giving students specific instructions for problem-solving activities, and 3) providing assessment results and feedback on students' activities. Conclusions: This study is significant in that it proposes a new training method that overcomes the limitations of existing doctor-patient simulation. It is hoped that this study will stimulate further research on improving students' clinical skills with artificial intelligence.
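The improvement this study flags most strongly, the accuracy of intention inference, can be illustrated with a deliberately minimal keyword-scoring intent matcher. This is a hedged stand-in for a chatbot builder's NLU step, not the Danbee builder's actual mechanism; the intent names and keywords are hypothetical.

```python
# Minimal keyword-scoring intent matcher: a simplified stand-in for the
# intention-inference step in a patient chatbot. All intent names and
# keyword sets below are hypothetical illustrations.
INTENT_KEYWORDS = {
    "ask_pain_location": {"where", "location", "hurt", "pain"},
    "ask_symptom_duration": {"long", "since", "when", "started"},
    "ask_medical_history": {"history", "illness", "medication", "previous"},
}

def infer_intent(utterance: str) -> str:
    """Return the intent whose keyword set overlaps the utterance most."""
    tokens = set(utterance.lower().replace("?", "").split())
    scores = {
        intent: len(tokens & keywords)
        for intent, keywords in INTENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to a catch-all intent when nothing matches.
    return best if scores[best] > 0 else "fallback"
```

Production builders replace this keyword overlap with a trained NLU model; misclassifications at exactly this step are what the experts asked to have improved.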

AIMS: AI based Mental Healthcare System

  • Ibrahim Alrashide;Hussain Alkhalifah;Abdul-Aziz Al-Momen;Ibrahim Alali;Ghazy Alshaikh;Atta-ur Rahman;Ashraf Saadeldeen;Khalid Aloup
    • International Journal of Computer Science & Network Security / v.23 no.12 / pp.225-234 / 2023
  • In this era of information and communication technology (ICT), tremendous improvements have been witnessed in our daily lives, and their impact can be positive or negative. ICT has brought ease and versatility to our lifestyles; on the other hand, its excessive use raises issues of physical and mental health. This study bridges both aspects by proposing an AI-based mental healthcare system (AIMS). The platform lets a patient register and receive consultancy by completing an assessment through a chatbot. The chatbot sends the gathered information to a machine learning block, where a pre-trained model classifies the patient and predicts whether treatment is needed. This prediction is provided to a mental health practitioner (doctor, psychologist, psychiatrist, or therapist) as clinical decision support, and the practitioner returns his or her suggestions to the patient via the proposed system. The system prioritizes care, support, privacy, and patient autonomy through a friendly chatbot interface. By using natural language processing and machine learning, it can predict a patient's condition and recommend the right professional for further help, including in-person appointments if necessary. This not only raises awareness about mental health but also makes it easier for patients to start therapy.
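The AIMS pipeline's central step, classifying an assessment as needing referral or not, can be sketched as follows. This is a hedged illustration only: the paper uses a trained machine learning model, which is replaced here by a simple threshold rule, and the 0-3 rating scale and cutoff value are illustrative placeholders.

```python
# Hedged sketch of the AIMS triage step: decide whether a chatbot-gathered
# assessment suggests referral to a practitioner. The paper's trained ML
# classifier is replaced by a simple sum-and-threshold rule; the rating
# scale (0-3 per question) and the cutoff of 10 are illustrative.
def needs_referral(answers: list[int], threshold: int = 10) -> bool:
    """answers: per-question severity ratings (0-3), questionnaire style.
    Returns True when the summed score reaches the referral threshold."""
    if not all(0 <= a <= 3 for a in answers):
        raise ValueError("each answer must be rated 0-3")
    return sum(answers) >= threshold
```

In the proposed system, a True result would route the case to a doctor, psychologist, psychiatrist, or therapist as clinical decision support.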

Pilot Development of a 'Clinical Performance Examination (CPX) Practicing Chatbot' Utilizing Prompt Engineering (프롬프트 엔지니어링(Prompt Engineering)을 활용한 '진료수행시험 연습용 챗봇(CPX Practicing Chatbot)' 시범 개발)

  • Jundong Kim;Hye-Yoon Lee;Ji-Hwan Kim;Chang-Eop Kim
    • The Journal of Korean Medicine / v.45 no.1 / pp.203-214 / 2024
  • Objectives: In the context of competency-based education emphasized in Korean Medicine, this study aimed to develop a pilot version of a CPX (Clinical Performance Examination) Practicing Chatbot utilizing large language models with prompt engineering. Methods: A standardized patient scenario was acquired from the National Institute of Korean Medicine and transformed into text format. Prompt engineering was then conducted using role prompting and few-shot prompting techniques. The GPT-4 API was employed, and a web application was created using the gradio package. An internal evaluation criterion was established for the quantitative assessment of the chatbot's performance. Results: The chatbot was implemented and evaluated based on the internal evaluation criterion. It demonstrated relatively high correctness and compliance. However, there is a need for improvement in confidentiality and naturalness. Conclusions: This study successfully piloted the CPX Practicing Chatbot, revealing the potential for developing educational models using AI technology in the field of Korean Medicine. Additionally, it identified limitations and provided insights for future developmental directions.
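The role-prompting and few-shot prompting setup this study describes can be sketched as an OpenAI-style chat message list. The scenario text, example turns, and function name below are hypothetical; the study's actual standardized-patient scenario came from the National Institute of Korean Medicine, and the resulting messages would be sent to the GPT-4 chat API inside a gradio web app.

```python
# Sketch of role prompting + few-shot prompting for a CPX practice chatbot,
# expressed as an OpenAI-style chat message list. Scenario text and example
# turns are hypothetical placeholders.
def build_cpx_messages(scenario: str, student_question: str) -> list[dict]:
    # Role prompt: instruct the model to stay in character as the patient.
    role_prompt = (
        "You are a standardized patient in a CPX exam. "
        "Answer only as the patient described below.\n" + scenario
    )
    # Few-shot examples steer the answer style and scope.
    few_shot = [
        {"role": "user", "content": "Hello, what brings you in today?"},
        {"role": "assistant",
         "content": "I've had a dull headache for three days."},
    ]
    return (
        [{"role": "system", "content": role_prompt}]
        + few_shot
        + [{"role": "user", "content": student_question}]
    )
```

A list built this way would be passed as the `messages` argument of a GPT-4 chat completion call, with the exchange loop wrapped in a gradio interface as the study describes.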

Software to promote patient-to-doctor communication based on 'chatbot' (인공지능 챗봇을 기반으로 한 환자-의사 소통 증진 소프트웨어)

  • Ryu, Yeon-Jun;Park, Se-Ri;Sung, Hyun-Gyu;Lee, Jyu-Su;Kim, Woongsup
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.501-504 / 2020
  • This project aims to improve shortcomings of Korean medical care services by using an AI-based chatbot to promote communication between patients and doctors. We built a chatbot capable of sending messages and images using the Rasa X chatbot tool, which provides a web UI. We also developed a more efficient application by integrating AI features such as dental caries detection through YOLO model training. With contactless services in the spotlight due to COVID-19, the chatbot model is among the most economical and efficient technologies to apply in everyday life.
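A typical post-processing step for the YOLO-based caries detection mentioned above can be sketched as confidence filtering of detection results. This is an assumption-laden illustration: the detection tuple shape `(label, confidence, box)` and the class name "caries" are hypothetical, and real output would come from the project's trained model.

```python
# Illustrative post-processing for YOLO-style detections, as would follow
# the project's cavity-detection model. The (label, confidence, box) tuple
# layout and the "caries" class name are hypothetical placeholders.
def filter_detections(detections, min_conf=0.5):
    """Keep (label, confidence, box) detections at or above a cutoff."""
    return [d for d in detections if d[1] >= min_conf]
```

Dropping low-confidence boxes like this is what keeps a chatbot reply from reporting spurious cavities to the patient.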

Evaluating the Accuracy of Artificial Intelligence-Based Chatbots on Pediatric Dentistry Questions in the Korean National Dental Board Exam

  • Yun Sun Jung;Yong Kwon Chae;Mi Sun Kim;Hyo-Seol Lee;Sung Chul Choi;Ok Hyung Nam
    • Journal of the Korean Academy of Pediatric Dentistry / v.51 no.3 / pp.299-309 / 2024
  • This study aimed to assess the competency of artificial intelligence (AI) in pediatric dentistry and compare it with that of dentists. We used open-source data obtained from the Korea Health Personnel Licensing Examination Institute. A total of 32 multiple-choice pediatric dentistry exam questions were included. Two AI-based chatbots (ChatGPT 3.5 and Gemini) were evaluated. Each chatbot received the same questions seven times in separate chat sessions initiated on April 25, 2024. Accuracy was assessed by measuring the percentage of correct answers, and consistency was evaluated using Cronbach's alpha coefficient. Both ChatGPT 3.5 and Gemini demonstrated similar accuracy, with no significant differences observed between them. However, neither chatbot achieved the minimum passing score set by the Pediatric Dentistry National Examination. Nevertheless, both chatbots exhibited acceptable consistency in their responses. Within the limits of this study, neither AI-based chatbot answered the pediatric dentistry exam questions sufficiently well. This finding suggests that pediatric dentists should be aware of the advantages and limitations of this new tool and use it effectively to promote patient health.
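The consistency statistic this study used, Cronbach's alpha, can be computed directly with the standard formula α = k/(k−1) · (1 − Σσᵢ²/σ_total²). The sketch below treats each repeated chat session as an "item" scored 0/1 per question; the scores in the test are made-up values, not the study's data.

```python
from statistics import pvariance

# Cronbach's alpha, the consistency measure the study applied to repeated
# chatbot sessions. Each row of item_scores is one item (e.g. one repeated
# session); each column is one observation (e.g. one exam question).
def cronbach_alpha(item_scores: list[list[float]]) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(item_scores)
    item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(col) for col in zip(*item_scores)]
    return k / (k - 1) * (1 - item_vars / pvariance(totals))
```

Values near 1 indicate that the seven repeated sessions scored the questions consistently; the study reports "acceptable" consistency for both chatbots despite sub-passing accuracy.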

The new frontier: utilizing ChatGPT to expand craniofacial research

  • Andi Zhang;Ethan Dimock;Rohun Gupta;Kevin Chen
    • Archives of Craniofacial Surgery / v.25 no.3 / pp.116-122 / 2024
  • Background: Given the importance of evidence-based research in plastic surgery, the authors of this study aimed to assess the accuracy of ChatGPT in generating novel systematic review ideas within the field of craniofacial surgery. Methods: ChatGPT was prompted to generate 20 novel systematic review ideas for each of 10 subcategories within the field of craniofacial surgery. For each topic, the chatbot was told to give 10 "general" and 10 "specific" ideas related to the concept. To determine the accuracy of ChatGPT, a literature review was conducted using PubMed, CINAHL, Embase, and Cochrane. Results: In total, 200 systematic review research ideas were generated by ChatGPT. We found that the algorithm had an overall 57.5% accuracy at identifying novel systematic review ideas: 39% for general topics and 76% for specific topics. Conclusion: Craniofacial surgeons should use ChatGPT as a tool. ChatGPT provided more precise answers to specific research questions than to general ones and helped narrow the search scope, leading to more relevant and accurate responses. Beyond research purposes, ChatGPT can augment patient consultations, improve healthcare equity, and assist in clinical decision-making. With rapid advancements in artificial intelligence (AI), it is important for plastic surgeons to consider using AI in clinical practice to improve patient-centered outcomes.
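The reported figures are internally consistent, which a short arithmetic check makes explicit: 100 general ideas at 39% accuracy plus 100 specific ideas at 76% accuracy yield the stated 57.5% overall.

```python
# Arithmetic check of the reported accuracies: 10 subcategories x 10 general
# and 10 specific ideas each = 100 + 100 = 200 ideas in total.
general_correct = 0.39 * 100    # correct general ideas
specific_correct = 0.76 * 100   # correct specific ideas
overall = (general_correct + specific_correct) / 200
```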