• Title/Summary/Keyword: pretrained language model

Search Results: 27

Dialog-based multi-item recommendation using automatic evaluation

  • Euisok Chung;Hyun Woo Kim;Byunghyun Yoo;Ran Han;Jeongmin Yang;Hwa Jeon Song
    • ETRI Journal / v.46 no.2 / pp.277-289 / 2024
  • In this paper, we describe a neural network-based application that recommends multiple items from dialog context input while simultaneously generating a response sentence. We frame the multi-item recommendation task concretely as recommending a set of clothing items, which requires a multimodal fusion approach that can process both clothing-related text and images. We also examine how a pretrained language model can satisfy the requirements of the downstream models, and we propose gate-based multimodal fusion and multiprompt learning on top of a pretrained language model. In addition, we propose an automatic evaluation technique to address the one-to-many mapping problem of multi-item recommendation. A Korean fashion-domain multimodal dataset is constructed and used for testing, and various experimental settings are verified with the automatic evaluation method. The results show that our proposed method yields confidence scores for multi-item recommendation results, unlike traditional accuracy-based evaluation.
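
The abstract does not spell out the fusion architecture, but a gate-based fusion layer of the general kind it mentions can be sketched as follows; the dimensions, module names, and the sigmoid-gate formulation are assumptions for illustration, not the paper's exact design:

```python
import torch
import torch.nn as nn

class GatedMultimodalFusion(nn.Module):
    """Fuse a text vector and an image vector with a learned sigmoid gate."""
    def __init__(self, text_dim: int, image_dim: int, hidden_dim: int):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # The gate looks at both modalities and decides how much of each to keep.
        self.gate = nn.Linear(text_dim + image_dim, hidden_dim)

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([text_feat, image_feat], dim=-1)))
        return g * self.text_proj(text_feat) + (1.0 - g) * self.image_proj(image_feat)

# Toy usage with random features standing in for PLM text and image-encoder outputs.
fusion = GatedMultimodalFusion(text_dim=768, image_dim=512, hidden_dim=768)
fused = fusion(torch.randn(2, 768), torch.randn(2, 512))
print(fused.shape)  # torch.Size([2, 768])
```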

Zero-anaphora resolution in Korean based on deep language representation model: BERT

  • Kim, Youngtae;Ra, Dongyul;Lim, Soojong
    • ETRI Journal / v.43 no.2 / pp.299-312 / 2021
  • Achieving high performance on zero-anaphora resolution (ZAR) is necessary for fully understanding texts in Korean, Japanese, Chinese, and various other languages. Owing to the success of deep learning in recent years, deep-learning-based models are being employed to build ZAR systems; however, the goal of a high-quality ZAR system is still far from being achieved even with these models. To improve on current ZAR techniques, we fine-tuned pretrained bidirectional encoder representations from transformers (BERT). Notably, BERT is a general language representation model that enables systems to utilize deep bidirectional contextual information in natural language text, and it extensively exploits the attention mechanism of the sequence-transduction model Transformer. In our model, classification is performed simultaneously for all words in the input word sequence to decide whether each word can be an antecedent. We pursue end-to-end learning by disallowing any use of hand-crafted or dependency-parsing features. Experimental results show that, compared with other models, our approach significantly improves ZAR performance.
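
As a rough illustration of the described setup, i.e. token-level classification over BERT outputs to mark possible antecedents, here is a minimal sketch; the checkpoint name, the binary tag set, and the toy sentence are assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # stand-in; the paper fine-tunes BERT for Korean

class AntecedentTagger(nn.Module):
    """Binary decision for every token: can this token be (part of) the antecedent?"""
    def __init__(self, model_name: str = MODEL_NAME):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)  # (batch, seq_len, 2) logits per token

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
batch = tokenizer(["철수는 학교에 갔다. 그리고 밥을 먹었다."], return_tensors="pt")
logits = AntecedentTagger()(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # (1, seq_len, 2)
```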

Towards Korean-Centric Token-free Pretrained Language Model (한국어 중심의 토큰-프리 언어 이해-생성 모델 사전학습 연구)

  • Jong-Hun Shin;Jeong Heo;Ji-Hee Ryu;Ki-Young Lee;Young-Ae Seo;Jin Seong;Soo-Jong Lim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.711-715 / 2023
  • This study addresses a token-free pretrained language model that operates directly on byte-level encodings, without the subword tokenization step used by most language models. A token-free language model has no explicit out-of-vocabulary tokens, needs only simple preprocessing, and can handle diverse languages and writing systems. However, related work is scarce, and token-free models have been reported to be harder to train and to perform worse than subword models. In this study, we pretrain a Korean-centric token-free language understanding-and-generation model and compare it with a subword-based model to assess its potential. We also apply a gradient-based subword tokenizer to reduce the excessive computation for which token-free language models are criticized, improving processing speed by a factor of 2.7 for training and 1.46 for inference.
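
A minimal sketch of what byte-level ("token-free") input encoding looks like, assuming a ByT5-style scheme in which each UTF-8 byte maps to one ID after a small offset reserved for special tokens; the paper's actual encoding details are not given in the abstract:

```python
# Byte-level encoding: no vocabulary, no subword tokenizer, no OOV tokens.
PAD, EOS, UNK = 0, 1, 2
OFFSET = 3  # byte value b maps to ID b + OFFSET

def encode(text: str) -> list[int]:
    return [b + OFFSET for b in text.encode("utf-8")] + [EOS]

def decode(ids: list[int]) -> str:
    data = bytes(i - OFFSET for i in ids if i >= OFFSET)
    return data.decode("utf-8", errors="ignore")

ids = encode("한국어 토큰-프리 모델")
print(len(ids), ids[:6])  # Korean syllables cost 3 bytes each, so sequences get long
print(decode(ids))        # round-trips without any vocabulary or unknown-token handling
```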

HeavyRoBERTa: Pretrained Language Model for Heavy Industry (HeavyRoBERTa: 중공업 특화 사전 학습 언어 모델)

  • Lee, Jeong-Doo;Na, Seung-Hoon
    • Annual Conference on Human and Language Technology / 2021.10a / pp.602-604 / 2021
  • Recently, pretrained language models have been applied to various downstream tasks in natural language processing and have improved their performance. However, a language model pretrained on a general corpus does not perform well on downstream tasks in specialized domains such as heavy industry. To address this problem, this paper proposes HeavyRoBERTa, a RoBERTa-based language model specialized for the heavy-industry domain and pretrained on a heavy-industry corpus, which improves perplexity on the heavy-industry corpus as well as performance on a zero-shot synonym extraction task.
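
A minimal sketch of continued (domain-adaptive) masked-language-model pretraining on an in-domain corpus, in the spirit described above; the checkpoint name, the toy corpus lines, and the hyperparameters are placeholders, not the paper's setup:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling

model_name = "klue/roberta-base"  # assumed Korean RoBERTa checkpoint to adapt
corpus = ["용접부 결함 검사 절차를 따른다.", "선체 블록 탑재 공정을 점검한다."]  # toy in-domain lines

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

enc = tokenizer(corpus, truncation=True, padding=True)
batch = collator([{"input_ids": ids} for ids in enc["input_ids"]])  # random masking + labels

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(**batch).loss        # MLM loss on the masked in-domain tokens
loss.backward()
optimizer.step()
print("perplexity:", float(torch.exp(loss)))  # the paper evaluates in-domain perplexity
```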

Korean Dependency Parsing using Pretrained Language Model and Specific-Abstraction Encoder (사전 학습 모델과 Specific-Abstraction 인코더를 사용한 한국어 의존 구문 분석)

  • Kim, Bongsu;Whang, Taesun;Kim, Jungwook;Lee, Saebyeok
    • Annual Conference on Human and Language Technology / 2020.10a / pp.98-102 / 2020
  • Dependency parsing is a natural language processing task that predicts the dependency relations between the eojeols (word units) of an input sentence. Recently, dependency parsing models based on pretrained models such as BERT have shown high performance. In this paper, for further gains, we train ALBERT and ELECTRA language models with morphological analysis and BPE applied and use them in the encoding step. We also propose a dependency parsing model that adds two transformer encoder stacks to abstract the specific features of dependent eojeols and head eojeols separately. Experiments show that the proposed model achieves a UAS of 94.77 and an LAS of 94.06 on the Sejong corpus.
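
The abstract only names the idea of two extra encoder stacks that abstract dependent and head features separately; the sketch below shows one generic way such a specific-abstraction parser could be wired (a bilinear-style arc scorer is assumed, and all dimensions are placeholders, not the authors' configuration):

```python
import torch
import torch.nn as nn

class SpecificAbstractionParser(nn.Module):
    """Two transformer stacks abstract the dependent view and the head view of each
    eojeol representation; a bilinear-style scorer rates every (dependent, head) pair."""
    def __init__(self, hidden: int = 768, layers: int = 2, heads: int = 8):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(d_model=hidden, nhead=heads, batch_first=True)
        self.dep_encoder = nn.TransformerEncoder(make_layer(), num_layers=layers)
        self.head_encoder = nn.TransformerEncoder(make_layer(), num_layers=layers)
        self.arc_proj = nn.Linear(hidden, hidden, bias=False)  # arc-scoring weight

    def forward(self, word_reprs: torch.Tensor) -> torch.Tensor:
        dep = self.dep_encoder(word_reprs)    # dependent-specific abstraction
        head = self.head_encoder(word_reprs)  # head-specific abstraction
        # scores[b, i, j] = dep_i . W . head_j  (how likely word j governs word i)
        return torch.matmul(self.arc_proj(dep), head.transpose(1, 2))

# Toy usage: 5 eojeol vectors, e.g. pooled subword outputs of a pretrained encoder.
scores = SpecificAbstractionParser()(torch.randn(1, 5, 768))
print(scores.shape, scores.argmax(dim=-1))  # (1, 5, 5) and a predicted head per eojeol
```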

Compressing intent classification model for multi-agent in low-resource devices (저성능 자원에서 멀티 에이전트 운영을 위한 의도 분류 모델 경량화)

  • Yoon, Yongsun;Kang, Jinbeom
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.45-55 / 2022
  • Recently, large-scale pretrained language models (LPLMs) have shown state-of-the-art performance on various natural language processing tasks, including intent classification. However, fine-tuning an LPLM requires substantial computational cost for training and inference, which is not appropriate for dialog systems. In this paper, we propose a compressed intent classification model for multi-agent operation on low-resource hardware such as CPUs. Our method consists of two stages. First, we train a sentence encoder from the LPLM and then compress it through knowledge distillation. Second, we train an agent-specific adapter for intent classification. Results on three intent classification datasets show that our method achieves 98% of the accuracy of the LPLM with only 21% of its size.
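
A minimal sketch of the two-stage recipe described above, with dummy encoders standing in for the LPLM teacher and the compressed student; the distillation objective (embedding MSE), the adapter shape, and all dimensions are assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(300, 768), nn.Tanh())                      # stand-in for the LPLM encoder
student = nn.Sequential(nn.Linear(300, 256), nn.Tanh(), nn.Linear(256, 768))  # small distilled encoder

def distill_step(x, optimizer):
    """Stage 1: match the teacher's sentence embeddings with the small student."""
    with torch.no_grad():
        target = teacher(x)                  # teacher embedding, no gradient
    loss = F.mse_loss(student(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

class AgentAdapter(nn.Module):
    """Stage 2: a tiny per-agent head on top of the frozen, distilled encoder."""
    def __init__(self, num_intents: int, dim: int = 768):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, num_intents))

    def forward(self, x):
        with torch.no_grad():
            emb = student(x)                 # shared encoder stays frozen per agent
        return self.head(emb)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
print(distill_step(torch.randn(8, 300), opt))
print(AgentAdapter(num_intents=5)(torch.randn(8, 300)).shape)  # per-agent intent logits
```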

Comparative study of text representation and learning for Persian named entity recognition

  • Pour, Mohammad Mahdi Abdollah;Momtazi, Saeedeh
    • ETRI Journal / v.44 no.5 / pp.794-804 / 2022
  • Transformer models have had a great impact on natural language processing (NLP) in recent years by realizing outstanding and efficient contextualized language models. Recent studies have used transformer-based language models for various NLP tasks, including Persian named entity recognition (NER). However, in complex tasks such as NER, it is difficult to determine which contextualized embedding will produce the best representation for the task. Given the lack of comparative studies investigating different contextualized pretrained models combined with sequence modeling classifiers, we conducted a comparative study of different classifiers and embedding models. In this paper, we pair different transformer-based language models with different classifiers and evaluate these models on the Persian NER task. We perform a comparative analysis to assess the impact of text representation and text classification methods on Persian NER performance. We train and evaluate the models on three Persian NER datasets, namely MoNa, Peyma, and Arman. Experimental results demonstrate that XLM-R with a linear layer and a conditional random field (CRF) layer exhibited the best performance. This model achieved phrase-based F-measures of 70.04, 86.37, and 79.25 and word-based F-scores of 78, 84.02, and 89.73 on the MoNa, Peyma, and Arman datasets, respectively. These results represent state-of-the-art performance on the Persian NER task.
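
A minimal sketch of the best-performing configuration reported above, XLM-R followed by a linear emission layer and a CRF; the CRF here comes from the pytorch-crf package (an assumed dependency), and the tag count and example sentence are placeholders:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer
from torchcrf import CRF  # pytorch-crf package (assumed dependency)

class XLMRCrfTagger(nn.Module):
    """XLM-R encoder -> linear emission scores -> CRF decoding (generic sketch)."""
    def __init__(self, num_tags: int, model_name: str = "xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.emissions = nn.Linear(self.encoder.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        h = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        e = self.emissions(h)
        mask = attention_mask.bool()
        if tags is not None:  # training: negative log-likelihood under the CRF
            return -self.crf(e, tags, mask=mask, reduction="mean")
        return self.crf.decode(e, mask=mask)  # inference: best tag sequence per sentence

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
batch = tok(["تهران پایتخت ایران است"], return_tensors="pt")
model = XLMRCrfTagger(num_tags=7)  # e.g. BIO tags for 3 entity types + O
print(model(batch["input_ids"], batch["attention_mask"]))
```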

DG-based SPO tuple recognition using self-attention M-Bi-LSTM

  • Jung, Joon-young
    • ETRI Journal / v.44 no.3 / pp.438-449 / 2022
  • This study proposes a dependency grammar-based self-attention multilayered bidirectional long short-term memory (DG-M-Bi-LSTM) model for subject-predicate-object (SPO) tuple recognition from natural language (NL) sentences. To add recent knowledge to the knowledge base autonomously, it is essential to extract knowledge from numerous NL data. Therefore, this study proposes a high-accuracy SPO tuple recognition model that requires only a small amount of training data to extract knowledge from NL sentences. The accuracy of SPO tuple recognition using DG-M-Bi-LSTM is compared with that using NL-based self-attention multilayered bidirectional LSTM, DG-based bidirectional encoder representations from transformers (BERT), and NL-based BERT to evaluate its effectiveness. The DG-M-Bi-LSTM model achieves the best recognition accuracy for extracting SPO tuples from NL sentences even though it has fewer deep neural network (DNN) parameters than BERT. In particular, its accuracy is better than that of BERT when the training data are limited. Additionally, its pretrained DNN parameters can be applied to other domains because it learns the structural relations in NL sentences.
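
The abstract describes the model only at a high level; the sketch below shows one generic reading of it, a multilayered Bi-LSTM over word plus dependency-label embeddings with a self-attention layer and per-token S/P/O/None tagging. All sizes and the exact feature scheme are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class SelfAttentiveBiLSTMTagger(nn.Module):
    """Multilayered Bi-LSTM over (word + dependency-label) embeddings, a self-attention
    layer on top, then per-token classification into S/P/O/None."""
    def __init__(self, vocab: int, dep_labels: int, emb: int = 128, hidden: int = 256, tags: int = 4):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, emb)
        self.dep_emb = nn.Embedding(dep_labels, emb)     # dependency-grammar features
        self.bilstm = nn.LSTM(emb * 2, hidden, num_layers=2, bidirectional=True, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden * 2, num_heads=4, batch_first=True)
        self.out = nn.Linear(hidden * 2, tags)

    def forward(self, words, deps):
        x = torch.cat([self.word_emb(words), self.dep_emb(deps)], dim=-1)
        h, _ = self.bilstm(x)
        a, _ = self.attn(h, h, h)       # self-attention over the Bi-LSTM states
        return self.out(a)              # (batch, seq_len, 4) tag logits

model = SelfAttentiveBiLSTMTagger(vocab=1000, dep_labels=40)
logits = model(torch.randint(0, 1000, (2, 12)), torch.randint(0, 40, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 4])
```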

Building robust Korean speech recognition model by fine-tuning large pretrained model (대형 사전훈련 모델의 파인튜닝을 통한 강건한 한국어 음성인식 모델 구축)

  • Changhan Oh;Cheongbin Kim;Kiyoung Park
    • Phonetics and Speech Sciences / v.15 no.3 / pp.75-82 / 2023
  • Automatic speech recognition (ASR) has been revolutionized by deep learning-based approaches, among which self-supervised learning methods have proven particularly effective. In this study, we aim to enhance the performance of OpenAI's Whisper model, a multilingual ASR system, on the Korean language. Whisper was pretrained on a large corpus (around 680,000 hours) of web speech data and has demonstrated strong recognition performance for major languages. However, it struggles with languages such as Korean that were not major languages during training. We address this issue by fine-tuning the Whisper model on an additional dataset comprising about 1,000 hours of Korean speech, and we compare its performance against a Transformer model trained from scratch on the same dataset. Our results indicate that fine-tuning the Whisper model significantly improves its Korean speech recognition capability in terms of character error rate (CER), and the improvement grows with model size. However, the Whisper model's performance on English deteriorated after fine-tuning, emphasizing the need for further research on robust multilingual models. Our study demonstrates the potential of a fine-tuned Whisper model for Korean ASR applications. Future work will focus on multilingual recognition and optimization for real-time inference.
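
A minimal sketch of one fine-tuning step for Whisper with Hugging Face transformers, in the spirit of the study; the random audio, the tiny checkpoint, and the hyperparameters are placeholders rather than the paper's 1,000-hour setup:

```python
import numpy as np
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

audio = np.random.randn(16000 * 3).astype(np.float32)  # 3 s of fake 16 kHz audio
transcript = "안녕하세요"                                 # paired Korean transcript

inputs = processor(audio, sampling_rate=16000, return_tensors="pt")   # log-mel features
labels = processor.tokenizer(transcript, return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss = model(input_features=inputs.input_features, labels=labels).loss
loss.backward()
optimizer.step()
print(float(loss))  # cross-entropy on the Korean transcript for this single example
```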

The Verification of the Transfer Learning-based Automatic Post Editing Model (전이학습 기반 기계번역 사후교정 모델 검증)

  • Moon, Hyeonseok;Park, Chanjun;Eo, Sugyeong;Seo, Jaehyung;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.10 / pp.27-35 / 2021
  • Automatic post-editing (APE) is a research field that aims to automatically correct errors in machine translation output. APE research has mainly focused on high-resource language pairs such as English-German. Recent APE studies mostly adopt transfer learning, in which pretrained language models or translation models obtained through self-supervised learning are utilized. Although translation-based APE models show superior performance in recent studies, those studies were conducted on high-resource languages, so their findings cannot be applied directly to low-resource languages. In this work, we apply two transfer learning strategies to Korean-English APE and show that transfer learning with a translation model can significantly improve APE performance.
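
A minimal sketch of the "transfer from a translation model" strategy: start from a pretrained Korean-to-English MT checkpoint and fine-tune it to map a (source, machine translation) pair to the post-edited output. The checkpoint name, the "|||" separator, and the single toy triplet are assumptions for illustration, not the paper's data or setup:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "Helsinki-NLP/opus-mt-ko-en"   # assumed pretrained Korean->English MT model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

src = "그는 어제 학교에 갔다."          # source sentence
mt = "He go to school yesterday."      # raw machine translation to be corrected
pe = "He went to school yesterday."    # human post-edit used as the training target

inputs = tokenizer(src + " ||| " + mt, return_tensors="pt")          # concatenated APE input
labels = tokenizer(text_target=pe, return_tensors="pt").input_ids    # post-edit as target

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss = model(**inputs, labels=labels).loss   # seq2seq cross-entropy toward the post-edit
loss.backward()
optimizer.step()
print(float(loss))
```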