• Title/Summary/Keyword: Generative pretrained transformer


Artificial intelligence application UX/UI study for language learning of children with articulation disorder (조음장애 아동의 언어학습을 위한 인공지능 애플리케이션 UX/UI 연구)

  • Yang, Eun-mi;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.174-176 / 2022
  • In this paper, we present a mobile application for 'personalized, customized learning' for children with articulation disorders, built on artificial intelligence (AI) algorithms. A dataset is used to analyze, judge, and predict each learner's articulation situation and degree. In particular, we designed a prototype model by examining, from the UX/UI (GUI) perspective, how AI can improve on and advance beyond existing applications. Applications have so far focused on the visual experience; it is now important to process data and deliver that processing to users through the UX/UI (GUI) as well. The UX/UI (GUI) of the proposed application adapts to the learner's articulation level and situation by using a deep-learning CRNN (Convolutional Recurrent Neural Network) together with an autoencoder and GPT-3 (Generative Pretrained Transformer); a minimal sketch of such a CRNN follows this entry. Applying these AI algorithms should give children with articulation disorders a more polished learning environment and thereby enhance the learning effect, so that improved articulation through 'personalized, customized learning' leaves them without fear or discomfort in conversation.

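The abstract names a CRNN and GPT-3 but publishes no architectural details. Below is a minimal sketch of what a convolutional-recurrent classifier for articulation audio could look like, assuming a 40-bin mel-spectrogram input and a five-level articulation score; the layer sizes and the task head are illustrative guesses, not the authors' design.

```python
# Minimal CRNN sketch for scoring articulation from audio. Hypothetical:
# the paper names CRNN + GPT-3 but gives no architecture, so every size
# here (mel bins, channels, five-level head) is an assumption.
import torch
import torch.nn as nn

class ArticulationCRNN(nn.Module):
    def __init__(self, n_mels=40, n_levels=5):
        super().__init__()
        # Convolutional front end over the mel-spectrogram.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 2)),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 2)),
        )
        # Recurrent layer over the (downsampled) time axis.
        self.gru = nn.GRU(input_size=32 * (n_mels // 4),
                          hidden_size=64, batch_first=True)
        # Assumed 5-way articulation-level head.
        self.head = nn.Linear(64, n_levels)

    def forward(self, spec):                  # spec: (batch, 1, n_mels, time)
        x = self.conv(spec)                   # (batch, 32, n_mels/4, time/4)
        x = x.permute(0, 3, 1, 2).flatten(2)  # (batch, time/4, features)
        _, h = self.gru(x)
        return self.head(h[-1])               # logits over articulation levels

logits = ArticulationCRNN()(torch.randn(2, 1, 40, 128))  # smoke test: (2, 5)
```

A model like this would supply the per-learner articulation level that the proposed UX/UI adapts to; the GPT-3 component would sit downstream, generating the personalized learning content itself.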

Empirical Study for Automatic Evaluation of Abstractive Summarization by Error-Types (오류 유형에 따른 생성요약 모델의 본문-요약문 간 요약 성능평가 비교)

  • Seungsoo Lee;Sangwoo Kang
    • Korean Journal of Cognitive Science / v.34 no.3 / pp.197-226 / 2023
  • Abstractive text summarization is a natural language processing task that generates a short summary while preserving the content of a long source text. ROUGE, a lexical-overlap-based metric, is widely used to evaluate summarization models on generative summarization benchmarks; a worked example of this overlap computation follows this entry. Yet even for models that score highly on ROUGE, studies report that about 30% of generated summaries remain inconsistent with the source text. This paper proposes a methodology for evaluating a summarization model without a reference ("correct") summary. AggreFACT, a human-annotated dataset, classifies the error types of neural text summarization models. Among the test candidates, the highest correlations were observed for generated summaries and for cases where errors occurred throughout the summary. We observed that the proposed evaluation score correlated highly with models fine-tuned from BART and PEGASUS, which are pretrained with large-scale Transformer architectures.
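Since the abstract's argument hinges on ROUGE being a purely lexical-overlap metric, a toy ROUGE-1 computation may make the limitation concrete. This is an illustration of the general idea, not the evaluation code used in the study.

```python
# Toy ROUGE-1 (unigram overlap) to illustrate the lexical-overlap metric
# the paper critiques; not the study's evaluation code.
from collections import Counter

def rouge1(candidate: str, reference: str) -> dict:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())          # clipped unigram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return {"precision": precision, "recall": recall, "f1": f1}

print(rouge1("the model was released in 2020",
             "the model was released in 2023"))
```

The candidate scores an F1 of about 0.83 despite contradicting the reference on the year, which is exactly the kind of factual inconsistency a pure overlap score cannot detect.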

A Study on Performance Improvement of GVQA Model Using Transformer (트랜스포머를 이용한 GVQA 모델의 성능 개선에 관한 연구)

  • Park, Sung-Wook;Kim, Jun-Yeong;Park, Jun;Lee, Han-Sung;Jung, Se-Hoon;Sim, Cun-Bo
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.749-752 / 2021
  • One of the most difficult capabilities to implement in artificial intelligence (AI) today is reasoning. Recently, AI models have been released for the Visual Question Answering (VQA) task, a reasoning problem set in a multi-modal environment that combines vision and language. Shortly afterward, the GVQA (Grounded Visual Question Answering) model, which improves on the performance of VQA models, was also released. However, GVQA still falls short of perfect performance. In this paper, to improve the performance of the GVQA model, we replace its VCC (Visual Concept Classifier) with ViT-G (Vision Transformer-Giant)/14 and its ACP (Answer Cluster Predictor) with GPT (Generative Pretrained Transformer)-3; a sketch of this component swap follows this entry. We believe these changes can contribute substantially to improving performance.
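The abstract describes a modular component swap rather than a new architecture. Below is a hypothetical sketch of how the two replaced components could plug into a GVQA-style pipeline; every class and method name is an assumption for illustration, since the paper does not publish code.

```python
# Hypothetical sketch of the proposed swap: GVQA's Visual Concept
# Classifier (VCC) replaced by a ViT-G/14 encoder, its Answer Cluster
# Predictor (ACP) by a GPT-3-style model. Names are illustrative only.
from dataclasses import dataclass

class ViTG14Encoder:
    def visual_concepts(self, image) -> list[str]:
        # Placeholder: a real VCC replacement would run ViT-G/14 on the image.
        return ["dog", "frisbee", "grass"]

class GPT3ClusterPredictor:
    def answer_cluster(self, question: str) -> str:
        # Placeholder: a real ACP replacement would prompt a GPT-3-style model.
        return "object" if question.lower().startswith("what") else "yes/no"

@dataclass
class GVQAPipeline:
    vcc: ViTG14Encoder          # swapped-in visual concept classifier
    acp: GPT3ClusterPredictor   # swapped-in answer cluster predictor

    def answer(self, image, question: str) -> str:
        concepts = self.vcc.visual_concepts(image)
        cluster = self.acp.answer_cluster(question)
        # A full GVQA head would ground the final answer in both signals;
        # here we only show how the swapped components plug in.
        return f"({cluster}) grounded in {concepts}"

pipeline = GVQAPipeline(ViTG14Encoder(), GPT3ClusterPredictor())
print(pipeline.answer(image=None, question="What is the dog catching?"))
```

The appeal of GVQA's grounded design is precisely this modularity: each stage can be upgraded to a stronger pretrained model without retraining the whole pipeline.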

Is ChatGPT a "Fire of Prometheus" for Non-Native English-Speaking Researchers in Academic Writing?

  • Sung Il Hwang;Joon Seo Lim;Ro Woon Lee;Yusuke Matsui;Toshihiro Iguchi;Takao Hiraki;Hyungwoo Ahn
    • Korean Journal of Radiology / v.24 no.10 / pp.952-959 / 2023
  • Large language models (LLMs) such as ChatGPT have garnered considerable interest for their potential to aid non-native English-speaking researchers. These models can function as personal, round-the-clock English tutors, akin to how Prometheus in Greek mythology bestowed fire upon humans for their advancement. LLMs can be particularly helpful to non-native researchers in writing the Introduction and Discussion sections of manuscripts, where they often encounter challenges. However, using LLMs to generate text for research manuscripts entails risks such as hallucination, plagiarism, and privacy breaches; to mitigate them, authors should verify the accuracy of generated content, employ text-similarity detectors, and avoid entering sensitive information into their prompts. Consequently, it may be more prudent to use LLMs for editing and refining text rather than for generating large portions of it (a minimal sketch of such an editing workflow follows this entry). Journal policies on the use of LLMs vary, but transparency in disclosing the use of artificial intelligence tools is consistently emphasized. This paper summarizes how LLMs can lower the barrier to academic writing in English, enabling researchers to concentrate on domain-specific research, provided the tools are used responsibly and cautiously.
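As one concrete reading of the "edit rather than generate" advice, here is a minimal sketch using the OpenAI Python client; the model name, prompt wording, and helper function are assumptions for illustration, not something the paper prescribes.

```python
# Sketch of the "edit, don't generate" workflow the paper recommends,
# using the OpenAI Python client as one possible interface. The model
# name and prompt are assumptions; never put sensitive data in prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def polish_paragraph(draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        messages=[
            {"role": "system",
             "content": "Edit the user's paragraph for grammar and clarity. "
                        "Do not add new claims, citations, or data."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# The returned text should still be verified by the author and its use
# disclosed according to the target journal's policy on AI tools.
```

Constraining the model to editing, as the system prompt does here, keeps the scientific claims entirely the author's own and limits the hallucination and plagiarism risks the paper warns about.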