• Title/Summary/Keyword: Chat-GPT


Analysis of AI Content Detector Tools

  • Yo-Seob Lee;Phil-Joo Moon
    • International journal of advanced smart convergence / v.12 no.4 / pp.154-163 / 2023
  • With the rapid development of AI technology, ChatGPT and other AI content creation tools are becoming common, and curious users are adopting them. Unlike search engines, these tools generate results from user prompts, which puts them at risk of inaccuracy or plagiarism. This enables unethical users to create inappropriate content and raises significant data security concerns for education and business. AI content detection is therefore needed, and AI-generated text must be identified to address misinformation and trust issues. Alongside the positive use of AI tools, monitoring and regulation of their ethical use is essential. When detecting AI-created content, a detection tool can be used efficiently by choosing the appropriate tool for the usage environment and purpose. In this paper, we collect data on AI content detection tools and compare and analyze their functions and characteristics to help meet these needs.

Alzheimer's disease recognition from spontaneous speech using large language models

  • Jeong-Uk Bang;Seung-Hoon Han;Byung-Ok Kang
    • ETRI Journal / v.46 no.1 / pp.96-105 / 2024
  • We propose a method to automatically predict Alzheimer's disease from speech data using the ChatGPT large language model. Alzheimer's disease patients often exhibit distinctive characteristics when describing images, such as difficulty recalling words, grammar errors, repetitive language, and incoherent narratives. For prediction, we first employ a speech recognition system to transcribe participants' speech into text. We then gather opinions by inputting the transcribed text into ChatGPT together with a prompt designed to solicit fluency evaluations. Subsequently, we extract embeddings from the speech, text, and opinions using pretrained models. Finally, we use a classifier consisting of transformer blocks and linear layers to identify participants with this type of dementia. Experiments are conducted on the widely used ADReSSo dataset. The results yield a maximum accuracy of 87.3% when speech, text, and opinions are used in conjunction. This finding suggests the potential of leveraging evaluation feedback from language models to address challenges in Alzheimer's disease recognition.
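
The pipeline in this abstract (transcribe, prompt the LLM for a fluency opinion, embed the three streams, classify) can be sketched roughly as below. This is an illustrative outline, not the authors' code: the prompt wording, the function names, and the toy nearest-centroid classifier standing in for their transformer-based classifier are all assumptions.

```python
# Illustrative sketch of a three-stream late-fusion pipeline for dementia
# prediction. The paper uses pretrained encoders and a transformer classifier;
# here toy vectors and a nearest-centroid rule stand in for both.

def build_fluency_prompt(transcript: str) -> str:
    """Prompt soliciting a fluency evaluation of the transcribed speech."""
    return (
        "Evaluate the fluency of the following picture description. "
        "Note word-finding difficulties, grammar errors, repetition, "
        f"and incoherence:\n{transcript}"
    )

def fuse(speech_emb, text_emb, opinion_emb):
    """Late fusion: concatenate speech, text, and LLM-opinion embeddings."""
    return list(speech_emb) + list(text_emb) + list(opinion_emb)

def nearest_centroid_predict(x, centroid_ad, centroid_control):
    """Toy classifier: assign the class whose centroid is closer."""
    d_ad = sum((a - b) ** 2 for a, b in zip(x, centroid_ad))
    d_cn = sum((a - b) ** 2 for a, b in zip(x, centroid_control))
    return "AD" if d_ad < d_cn else "control"
```

In the paper, the opinion stream is what distinguishes the approach: the LLM's textual judgment of fluency is itself embedded and fused with the raw speech and transcript features.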

Evaluating Conversational AI Systems for Responsible Integration in Education: A Comprehensive Framework

  • Utkarch Mittal;Namjae Cho;Giseob Yu
    • Journal of Information Technology Applications and Management / v.31 no.3 / pp.149-163 / 2024
  • As conversational AI systems such as ChatGPT become more advanced, researchers are exploring ways to use them in education. However, effective ways to evaluate these systems are needed before allowing them to help teach students. This study proposes a detailed framework for testing conversational AI across three criteria. First, specialized benchmarks measure skills such as giving clear explanations, adapting to context during long dialogues, and maintaining a consistent teaching personality. Second, adaptive standards check whether the systems meet the ethical requirements of privacy, fairness, and transparency; these standards are regularly updated to match societal expectations. Third, evaluations are conducted from three perspectives: technical accuracy on test datasets, performance in simulations with groups of virtual students, and feedback from real students and teachers using the system. This framework provides a robust methodology for identifying the strengths and weaknesses of conversational AI before its deployment in schools. It emphasizes assessments tailored to the critical qualities of dialogic intelligence, user-centric metrics capturing real-world impact, and ethical alignment through participatory design. Responsible innovation with AI assistants requires evidence that they can enhance accessible, engaging, and personalized education without disrupting teaching effectiveness or student agency.
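
The three evaluation perspectives described above could be combined into a single report roughly as follows. This is only an illustrative sketch; the field names and weights are assumptions, not values from the paper.

```python
# Illustrative aggregation of the framework's three evaluation perspectives:
# technical accuracy on benchmarks, performance in virtual-student
# simulations, and feedback from real students and teachers.

def evaluate(technical_acc: float, sim_score: float, user_score: float,
             weights=(0.4, 0.3, 0.3)) -> dict:
    """Combine the three scores (each in [0, 1]) into a weighted composite."""
    scores = (technical_acc, sim_score, user_score)
    composite = sum(w * s for w, s in zip(weights, scores))
    return {
        "technical": technical_acc,
        "simulation": sim_score,
        "user_feedback": user_score,
        "composite": round(composite, 3),
    }
```

A real deployment decision would of course also gate on the ethical standards (privacy, fairness, transparency) rather than fold them into one number.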

Application of ChatGPT text extraction model in analyzing rhetorical principles of COVID-19 pandemic information on a question-and-answer community

  • Hyunwoo Moon;Beom Jun Bae;Sangwon Bae
    • International journal of advanced smart convergence / v.13 no.2 / pp.205-213 / 2024
  • This study uses a large language model (LLM) to identify Aristotle's rhetorical principles (ethos, pathos, and logos) in COVID-19 information on Naver Knowledge-iN, South Korea's leading question-and-answer community. The research analyzed differences in these rhetorical elements between the most upvoted answers and randomly selected answers. A total of 193 answer pairs were randomly selected, with 135 pairs for training and 58 for testing. These answers were coded according to the rhetorical principles to fine-tune GPT-3.5-based models. The models achieved F1 scores of .88 (ethos), .81 (pathos), and .69 (logos). Subsequent analysis of 128 new answer pairs revealed that logos, particularly factual information and logical reasoning, was used more frequently in the most upvoted answers than in the random answers, whereas there were no differences in ethos or pathos between the answer groups. The results suggest that health information consumers value information that includes logos, while ethos and pathos were not associated with consumers' preference for health information. By utilizing an LLM to analyze persuasive content, a task typically conducted manually with much labor and time, this study not only demonstrates the feasibility of using an LLM for latent content analysis but also contributes to expanding the horizon of the field of AI text extraction.
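
The per-principle F1 scores reported here (.88, .81, .69) are the standard harmonic mean of precision and recall, computed by comparing the fine-tuned model's labels against the human coding. A minimal sketch (the example labels are made up):

```python
# Standard binary F1: harmonic mean of precision and recall, computed per
# rhetorical principle (ethos, pathos, or logos) against human-coded labels.

def f1_score(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Running this once per principle over the 58 held-out test pairs would yield the three reported scores.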

Exploring Predictive Models for Student Success in National Physical Therapy Examination: Machine Learning Approach

  • Bokyung Kim;Yeonseop Lee;Jang-hoon Shin;Yusung Jang;Wansuk Choi
    • Journal of the Korea Society of Computer and Information / v.29 no.10 / pp.113-120 / 2024
  • This study aims to assess the effectiveness of machine learning models in predicting the pass rates of physical therapy students in national exams. Traditional grade prediction methods primarily rely on past academic performance or demographic data. However, this study employed machine learning and deep learning techniques to analyze mock test scores with the goal of improving prediction accuracy. Data from 1,242 students across five Korean universities were collected and preprocessed, followed by analysis using various models. Models, including those generated and fine-tuned with the assistance of ChatGPT-4, were applied to the dataset. The results showed that H2OAutoML (GBM2) performed the best with an accuracy of 98.4%, while TabNet, LightGBM, and RandomForest also demonstrated high performance. This study demonstrates the exceptional effectiveness of H2OAutoML (GBM2) in predicting national exam pass rates and suggests that these AI-assisted models can significantly contribute to medical education and policy.
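
The prediction task described above — mapping mock-test scores to a pass/fail outcome and scoring models by accuracy — can be illustrated with a trivial baseline. This is not the paper's H2OAutoML (GBM2) model; the threshold of 60 and the function names are assumptions for illustration only.

```python
# Illustrative baseline for the exam-prediction task: threshold the mean
# mock-test score, then measure classification accuracy (the metric used
# to report the 98.4% result).

def predict_pass(mock_scores, threshold=60.0) -> bool:
    """Predict a pass when the average mock-test score meets the threshold."""
    return sum(mock_scores) / len(mock_scores) >= threshold

def accuracy(y_true, y_pred) -> float:
    """Fraction of predictions matching the actual pass/fail outcomes."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

The gradient-boosting and deep tabular models in the study (GBM, LightGBM, TabNet, RandomForest) replace the single threshold with learned decision boundaries over the full set of mock-test features, which is where the accuracy gains come from.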