• Title/Summary/Keyword: language intelligence

634 search results

Data Augmentation of English Reading Comprehension Tutoring Dialogs using ChatGPT (ChatGPT 를 이용한 독해 튜터링 대화 데이터 확장)

  • Hyunyou Kwon;Sung-Kwon Choi;Jinxia Huang;Oh-Woog Kwon
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.43-44 / 2023
  • We evaluate the applicability of ChatGPT to creating and expanding a student-led dialog dataset for a conversational reading comprehension tutoring system. Comparing an existing, fully manually built dataset with one semi-automatically expanded by ChatGPT shows ChatGPT's usefulness in terms of volume built, time required, cost, and repetitive work. However, limitations also emerged, such as a skewed distribution across dialog types and the generation of inappropriate data. Given the rapid progress expected of ChatGPT, we anticipate that semi-automatic data expansion with ChatGPT will be widely adopted in conversational tutoring.
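
To make the semi-automatic expansion concrete, below is a minimal sketch of the kind of ChatGPT-driven step the abstract describes, assuming the OpenAI Python client; the prompt wording, model name, and seed-dialog format are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: expand seed tutoring dialogs with ChatGPT, then screen manually.
# The prompt and the seed-dialog format are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def expand_dialog(seed_dialog: str, n_variants: int = 3) -> list[str]:
    """Ask the model for student-led variants of one seed tutoring dialog."""
    prompt = (
        "Rewrite the following English reading-comprehension tutoring dialog "
        f"into {n_variants} student-led variants. Keep the tutor's teaching goal.\n\n"
        + seed_dialog
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    # Generated variants still need human review: the abstract reports skewed
    # type distributions and occasional inappropriate generations.
    return [v.strip() for v in resp.choices[0].message.content.split("\n\n") if v.strip()]
```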

Toon Image Generation of Main Characters in a Comic from Object Diagram via Natural Language Based Requirement Specifications

  • Janghwan Kim;Jihoon Kong;Hee-Do Heo;Sam-Hyun Chun;R. Young Chul Kim
    • International journal of advanced smart convergence / v.13 no.1 / pp.85-91 / 2024
  • Generative artificial intelligence is currently a hot topic around the world, creating images, art, video clips, advertisements, and more. The problem is that verifying the internal workings of such models is very difficult. From a requirements engineering perspective, we attempt to create a toon image by applying linguistic mechanisms to this issue: requirement sentences are combined with a UML object model through the semantic role analysis techniques of the linguists Chomsky and Fillmore, and the derived properties are then linked to a toon creation template. The aim is to ensure productivity based on reusability, rather than creativity, in toon engineering. In future work, we plan to increase toon image productivity further by incorporating software development processes and reusability.
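
As a rough illustration of linking semantic roles to a reusable template (the paper's exact role inventory, object model, and template are not given here), Fillmore-style case roles extracted from a requirement sentence might be mapped onto template slots like this:

```python
# Illustrative only: map Fillmore-style case roles to toon-template slots.
from dataclasses import dataclass

@dataclass
class CharacterSpec:      # stands in for one object in the UML object diagram
    agent: str            # who acts (Fillmore's Agentive case)
    action: str           # the main verb of the requirement sentence
    obj: str              # what is acted on (Objective case)
    location: str         # where it happens (Locative case)

def to_template(spec: CharacterSpec) -> str:
    """Fill a fixed toon-panel description from the derived properties."""
    return (f"Panel: {spec.agent} {spec.action} {spec.obj} "
            f"in {spec.location}, cartoon style")

# Roles as they might be extracted from: "The knight guards the castle gate at night."
print(to_template(CharacterSpec("knight", "guards", "the castle gate", "a night scene")))
```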

Reducing Toxic Response Generation in Conversational Models using Plug and Play Language Model (Plug and Play Language Model을 활용한 대화 모델의 독성 응답 생성 감소)

  • Kim, Byeong-Joo;Lee, Geun-Bae
    • Annual Conference on Human and Language Technology / 2021.10a / pp.433-438 / 2021
  • Dialogue systems are broadly divided into those in which the user and the system converse toward a specific goal and those that converse on open topics. As research on open-domain dialogue systems has intensified, controlling toxic utterance generation in open-topic counseling and everyday-conversation systems has become increasingly important. In this paper, we apply the Plug-and-Play Language Model (PPLM) method to a BART model trained on an everyday-conversation dataset in order to control the generation of toxic responses. A toxic-response classifier trained on a public toxic-dialogue classification dataset is used as the PPLM attribute model to reduce toxic response generation, and the difference is quantitatively compared through experiments. The results confirm that toxic response generation decreases in every experiment that uses the attribute model.
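
A toy sketch of the PPLM mechanism follows, on stand-in tensors rather than the paper's BART decoder: gradients from the attribute model (a toxicity classifier) nudge a hidden state toward non-toxic continuations. The dimensions and step size are arbitrary assumptions.

```python
# Toy PPLM-style step: gradients from a toxicity "attribute model" perturb a
# decoder hidden state so the continuation scores less toxic.
import torch
import torch.nn.functional as F

hidden = torch.randn(1, 16, requires_grad=True)   # stand-in decoder state
attribute_model = torch.nn.Linear(16, 1)          # stand-in toxic-response classifier

step_size = 0.02
for _ in range(3):                                # a few perturbation steps
    toxic_logit = attribute_model(hidden)
    # Loss is high when the classifier predicts "toxic"; descending it steers
    # the state toward non-toxic continuations, as in PPLM.
    loss = F.binary_cross_entropy_with_logits(toxic_logit, torch.zeros_like(toxic_logit))
    grad, = torch.autograd.grad(loss, hidden)
    hidden = (hidden - step_size * grad).detach().requires_grad_(True)
```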

Named entity recognition using transfer learning and small human- and meta-pseudo-labeled datasets

  • Kyoungman Bae;Joon-Ho Lim
    • ETRI Journal / v.46 no.1 / pp.59-70 / 2024
  • We introduce a high-performance named entity recognition (NER) model for written and spoken language. To overcome challenges related to labeled-data scarcity and domain shifts, we use transfer learning to leverage our previously developed KorBERT as the base model. We also adopt a meta-pseudo-label method using a teacher/student framework with labeled and unlabeled data. Our model introduces two modifications. First, the student model is updated with an average loss over both human- and pseudo-labeled data. Second, the influence of noisy pseudo-labeled data is mitigated by monitoring feedback scores and updating the teacher model only when the score is below a threshold (0.0005). We achieve the target NER performance in the spoken-language domain and improve performance in the written-language domain by proposing a straightforward rollback method that reverts to the best model based on the scarce human-labeled data. Further improvement is achieved by adjusting the label vector weights in the named entity dictionary.
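
The two modifications can be sketched with stand-in models (simple linear layers rather than the KorBERT-based NER taggers); the feedback-score definition here is a placeholder, with only the loss averaging and the 0.0005 gate taken from the abstract.

```python
# Toy sketch of the two modifications, with linear layers standing in for the
# teacher/student NER models.
import torch
import torch.nn.functional as F

student, teacher = torch.nn.Linear(8, 5), torch.nn.Linear(8, 5)
opt = torch.optim.SGD(student.parameters(), lr=0.1)

x_human, y_human = torch.randn(4, 8), torch.randint(0, 5, (4,))
x_unlab = torch.randn(4, 8)

# (1) Student update: average the human-labeled and pseudo-labeled losses.
pseudo = teacher(x_unlab).argmax(dim=-1)
loss = (F.cross_entropy(student(x_human), y_human)
        + F.cross_entropy(student(x_unlab), pseudo)) / 2
opt.zero_grad(); loss.backward(); opt.step()

# (2) Teacher update gated by a feedback score and the 0.0005 threshold.
feedback = F.cross_entropy(student(x_human), y_human).item()  # placeholder score
if feedback < 0.0005:
    pass  # update the teacher here; skipped when the pseudo-labels look noisy
```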

Research Trends in Large Language Models and Mathematical Reasoning (초거대 언어모델과 수학추론 연구 동향)

  • O.W. Kwon;J.H. Shin;Y.A. Seo;S.J. Lim;J. Heo;K.Y. Lee
    • Electronics and Telecommunications Trends / v.38 no.6 / pp.1-11 / 2023
  • Large language models seem promising for handling reasoning problems, but their underlying solving mechanisms remain unclear. Large language models will establish a new paradigm in artificial intelligence and in society as a whole. However, a major challenge of large language models is the massive resources required for training and operation. To address this issue, researchers are actively exploring compact models that retain the capabilities of large language models while notably reducing the model size. These research efforts mainly focus on improving pretraining, instruction tuning, and alignment. Chain-of-thought prompting, on the other hand, is a technique aimed at enhancing the reasoning ability of large language models: given a problem, the model produces an answer through a series of intermediate reasoning steps. By guiding the model through a multistep problem-solving process, chain-of-thought prompting may improve its reasoning skills. Mathematical reasoning, a fundamental aspect of human intelligence, has played a crucial role in advancing large language models toward human-level performance, and it is therefore being widely explored in the context of large language models. This research extends to domains such as geometry problem solving, tabular mathematical reasoning, visual question answering, and other areas.
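
A generic chain-of-thought prompt looks like the following sketch; the worked example is the commonly cited arithmetic style, not one drawn from the surveyed papers.

```python
# A generic chain-of-thought prompt: the few-shot example spells out the
# intermediate steps so the model imitates multistep reasoning before answering.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each. How many balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. 5 + 6 = 11. The answer is 11.

Q: A library has 120 books, lends out 45, and then receives 30 more. How many books does it have now?
A:"""
# Sending cot_prompt to a large language model elicits step-by-step reasoning
# (e.g., "120 - 45 = 75, 75 + 30 = 105") before the final answer.
```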

English-Korean speech translation corpus (EnKoST-C): Construction procedure and evaluation results

  • Jeong-Uk Bang;Joon-Gyu Maeng;Jun Park;Seung Yun;Sang-Hun Kim
    • ETRI Journal / v.45 no.1 / pp.18-27 / 2023
  • We present an English-Korean speech translation corpus named EnKoST-C. End-to-end model training for speech translation tasks often suffers from a lack of parallel data, such as speech data in the source language paired with equivalent text data in the target language. Most available public speech translation corpora were developed for European languages, and there is currently no public corpus for English-Korean end-to-end speech translation. We therefore built EnKoST-C, centered on TED Talks, enhancing the sentence alignment approach with subtitle time information and bilingual sentence-embedding information. The result is a 559-h English-Korean speech translation corpus, and the proposed sentence alignment approach achieved an excellent F-measure of 0.96. We also report the baseline performance of an English-Korean speech translation model trained on EnKoST-C. EnKoST-C is freely available on a Korean government open-data hub site.
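
The combined alignment score might be sketched as below, assuming the sentence-transformers library and a LaBSE-style multilingual embedding model; the 0.5 weighting and the overlap measure are illustrative guesses, not the paper's configuration.

```python
# Sketch of combining subtitle time overlap with bilingual embedding similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")  # multilingual embeddings

def time_overlap(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Fraction of overlap between two subtitle time spans (seconds)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def align_score(en: str, ko: str, t_en, t_ko, w: float = 0.5) -> float:
    """Weighted mix of embedding similarity and subtitle-time overlap."""
    sim = util.cos_sim(model.encode(en), model.encode(ko)).item()
    return w * sim + (1 - w) * time_overlap(t_en, t_ko)

print(align_score("Thank you.", "감사합니다.", (0.0, 1.2), (0.1, 1.3)))
```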

The Relationship Among Domain-General Creativity, Linguistic Intelligence, Korean Language Grade and Linguistic Creativity of Elementary School Student (초등학생의 일반창의성, 언어지능, 국어성적과 언어창의성 간의 관계연구)

  • Park, Jung-Hwan;Hong, Mi-Sun;Lew, Kyoung-Hoon
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.8 / pp.3760-3767 / 2013
  • The purpose of this study is to investigate the relationship among domain-general creativity, linguistic intelligence, Korean language grades, and the linguistic creativity of elementary school students, and to determine the relative predictive power of domain-general creativity variables for students' linguistic creativity. The instruments used were the TTCT, an essay-writing task, a linguistic intelligence measure, and school grades in Korean language. Self-reported response data from 338 fourth-grade elementary school students in Seoul were analyzed with descriptive statistics, Pearson correlations, stepwise multiple regression, and ANOVA using SPSS 18.0. The major results were as follows. First, the correlations among domain-general creativity, Korean language grade, and linguistic creativity were significant. Second, abstractness of titles was the best predictor of linguistic creativity in elementary school students.

Updated Primer on Generative Artificial Intelligence and Large Language Models in Medical Imaging for Medical Professionals

  • Kiduk Kim;Kyungjin Cho;Ryoungwoo Jang;Sunggu Kyung;Soyoung Lee;Sungwon Ham;Edward Choi;Gil-Sun Hong;Namkug Kim
    • Korean Journal of Radiology / v.25 no.3 / pp.224-242 / 2024
  • The emergence of Chat Generative Pre-trained Transformer (ChatGPT), a chatbot developed by OpenAI, has garnered interest in the application of generative artificial intelligence (AI) models in the medical field. This review summarizes different generative AI models and their potential applications in medicine and explores the evolving landscape of generative adversarial networks and diffusion models since the introduction of generative AI; these models have made valuable contributions to radiology. The review also examines the significance of synthetic data in addressing privacy concerns and augmenting data diversity and quality within the medical domain, emphasizes the role of inversion in the investigation of generative models, and outlines an approach to replicating this process. We provide an overview of large language models, such as GPT and bidirectional encoder representations from transformers (BERT), focusing on prominent representatives, and discuss recent initiatives involving language-vision models in radiology, including the large language and vision assistant for biomedicine (LLaVA-Med), to illustrate their practical application. This comprehensive review offers insights into the wide-ranging applications of generative AI models in clinical research and emphasizes their transformative potential.

Korean Phoneme Sequence based Word Embedding (한국어 음소열 기반 워드 임베딩 기술)

  • Chung, Euisok;Jeon, Hwa Jeon;Lee, Sung Joo;Park, Jeon-Gue
    • Annual Conference on Human and Language Technology / 2017.10a / pp.225-227 / 2017
  • This paper addresses subword-based word embedding for Korean. To apply to Korean a new word embedding technique that can replace existing techniques suffering from the out-of-vocabulary problem, we validate phoneme-sequence-based subword features. Conventional subword features use character n-grams; in Korean, however, the pronunciation of certain single syllables varies with the word, so phoneme-sequence n-grams can provide discriminative power that character features lack. We reimplement subword embedding and, in an English setting, confirm a performance advantage over an existing word embedding baseline. Experiments with Korean phoneme-sequence features also show that semantically more similar words are placed closer together in the vector space.
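
In the spirit of fastText-style character n-grams, phoneme-sequence n-grams can be sketched as follows; the phoneme list in the example is illustrative, since a Korean grapheme-to-phoneme module would supply it in practice.

```python
# Sketch: phoneme-sequence n-grams as subword features. A word's embedding
# would then be the sum of its n-gram vectors, as in fastText.
def phoneme_ngrams(phonemes: list[str], n_min: int = 2, n_max: int = 4) -> list[str]:
    padded = ["<"] + phonemes + [">"]  # word-boundary markers, as in fastText
    grams = []
    for n in range(n_min, n_max + 1):
        grams += ["".join(padded[i:i + n]) for i in range(len(padded) - n + 1)]
    return grams

# Illustrative phoneme sequence for a Korean word; a G2P step produces this.
print(phoneme_ngrams(["d", "o", "k", "ae"]))
```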

Developing an Adaptive Dialogue System Using External Information (외부 상황 정보를 활용하는 적응적 대화 모델의 구현)

  • Jang, Jin Yea;Jung, Minyoung;Park, Hanmu;Shin, Saim
    • Annual Conference on Human and Language Technology / 2019.10a / pp.456-459 / 2019
  • A dialogue act is more than a simple exchange of utterances; it can be seen as the result of a comprehensive judgment that takes the speakers' surrounding information into account. This paper introduces a deep learning dialogue model that generates adaptive utterances based on six types of external context information. Using dialogue data that we built and tagged with context information, we compare models with various neural architectures that exploit external context alongside the user's utterance against a model that does not use external context. The experimental results show the importance of using context information in a dialogue model's utterance generation.
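
One simple way to condition generation on external context is sketched below, with assumed sizes and a concatenation-based fusion that may differ from the paper's architectures.

```python
# Sketch: embed one of six external context types and concatenate it with the
# utterance encoding before decoding a response. Sizes are assumptions.
import torch

utt_encoder = torch.nn.GRU(input_size=32, hidden_size=64, batch_first=True)
ctx_embed = torch.nn.Embedding(num_embeddings=6, embedding_dim=16)  # 6 context types
fuse = torch.nn.Linear(64 + 16, 64)

utt = torch.randn(1, 10, 32)     # encoded user utterance (toy features)
ctx = torch.tensor([2])          # index of one of the six context types
_, h = utt_encoder(utt)          # final utterance state: (1, 1, 64)
fused = fuse(torch.cat([h[-1], ctx_embed(ctx)], dim=-1))
# `fused` would seed the response decoder in place of the plain utterance state.
```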
