• Title/Summary/Keyword: Cross-Lingual Language Model

Korean and Multilingual Language Models Study for Cross-Lingual Post-Training (XPT) (Cross-Lingual Post-Training (XPT)을 위한 한국어 및 다국어 언어모델 연구)

  • Son, Suhyune;Park, Chanjun;Lee, Jungseob;Shim, Midan;Lee, Chanhee;Park, Kinam;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.13 no.3 / pp.77-89 / 2022
  • Many previous studies have shown that a language model pretrained on a large corpus improves performance across a wide range of natural language processing tasks. However, building such a large-scale training corpus is difficult in a language environment where resources are scarce. We apply the Cross-lingual Post-Training (XPT) method and analyze its efficiency for Korean, a low-resource language. XPT selectively reuses the parameters of an English pretrained language model, a high-resource language, and uses an adaptation layer to learn the relationship between the two languages. We confirm that, with only a small amount of target-language data, the resulting model outperforms a language model pretrained directly on the target language in relation extraction. In addition, we analyze the characteristics of the Korean monolingual and multilingual language models released by domestic and foreign researchers and companies.
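
A minimal PyTorch sketch of the XPT idea summarized above: the English pretrained encoder is reused and frozen, and only new target-language embeddings plus a small adaptation layer are trained. The class name, layer sizes, and the single linear adaptation layer are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class XPTSketch(nn.Module):
    def __init__(self, src_model_name="bert-base-uncased",
                 tgt_vocab_size=32000, hidden=768):
        super().__init__()
        # Reuse the high-resource (English) pretrained encoder and freeze it.
        self.encoder = AutoModel.from_pretrained(src_model_name)
        for p in self.encoder.parameters():
            p.requires_grad = False
        # New target-language (Korean) embeddings, learned from scratch.
        self.tgt_embeddings = nn.Embedding(tgt_vocab_size, hidden)
        # Adaptation layer mapping Korean embeddings into the space the
        # frozen English encoder expects.
        self.adaptation = nn.Linear(hidden, hidden)

    def forward(self, input_ids, attention_mask=None):
        embeds = self.adaptation(self.tgt_embeddings(input_ids))
        out = self.encoder(inputs_embeds=embeds, attention_mask=attention_mask)
        return out.last_hidden_state
```

Only `tgt_embeddings` and `adaptation` receive gradients, which is what makes post-training with a comparatively small Korean corpus feasible.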

Cross-Lingual Style-Based Title Generation Using Multiple Adapters (다중 어댑터를 이용한 교차 언어 및 스타일 기반의 제목 생성)

  • Yo-Han Park;Yong-Seok Choi;Kong Joo Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.341-354 / 2023
  • The title of a document is a brief summary of the document. Readers can understand a document more easily when its title is provided in their preferred style and language. In this research, we propose a cross-lingual and style-based title generation model that uses multiple adapters. Training such a model requires a parallel corpus covering several languages and different styles, which is quite difficult to construct; a monolingual title generation corpus in a single style, however, can be built easily. We therefore apply a zero-shot strategy to generate a title in a different language and with a different style for an input document. The baseline model is a Transformer consisting of an encoder and a decoder, pretrained on several languages. The model is then equipped with multiple adapters for translation, languages, and styles. It first learns a translation task from the parallel corpus and then learns a title generation task from the monolingual title generation corpus. While training on a task, we activate only the adapter that corresponds to that task; when generating a cross-lingual, style-based title, we activate only the adapters corresponding to the target language and target style. Experimental results show that our proposed model performs on par with a pipeline model that first translates into the target language and then generates a title. Although the emergence of large-scale language models has brought significant changes to natural language generation, research on improving generation performance with limited resources and limited data is still needed, and this study explores the significance of such research.
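
A minimal sketch of the multiple-adapter mechanism described above: small bottleneck adapters sit on top of a shared pretrained model, and only the adapters matching the current task, language, or style are activated and trained. The adapter names, sizes, and the `AdapterBank` helper are illustrative assumptions rather than the paper's exact configuration.

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck adapter inserted on top of hidden states."""
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

class AdapterBank(nn.Module):
    """Holds named adapters; only the activated ones are applied and trained."""
    def __init__(self, names, hidden=768):
        super().__init__()
        self.adapters = nn.ModuleDict({n: Adapter(hidden) for n in names})
        self.active = []

    def activate(self, *names):
        self.active = list(names)
        for n, adapter in self.adapters.items():
            adapter.requires_grad_(n in names)  # train only active adapters

    def forward(self, hidden_states):
        for n in self.active:
            hidden_states = self.adapters[n](hidden_states)
        return hidden_states

# Usage: bank = AdapterBank(["translation", "korean", "english", "news_style"])
#        bank.activate("korean", "news_style")  # cross-lingual + style decoding
```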

Korean language model construction and comparative analysis with Cross-lingual Post-Training (XPT) (Cross-lingual Post-Training (XPT)을 통한 한국어 언어모델 구축 및 비교 실험)

  • Suhyune Son;Chanjun Park;Jungseob Lee;Midan Shim;Sunghyun Lee;JinWoo Lee;Aram So;Heuiseok Lim
    • Annual Conference on Human and Language Technology / 2022.10a / pp.295-299 / 2022
  • In a low-resource language environment, there are limits to building the large corpus needed to train a pretrained language model. This paper applies the Cross-lingual Post-Training (XPT) methodology, which can overcome this limitation, and analyzes its efficiency for Korean, a comparatively low-resource language. Using only small amounts of Korean data (400K and 4M), we conduct an overall performance comparison and analysis against various Korean pretrained models (KLUE-BERT, KLUE-RoBERTa, Albert-kor) and mBERT. We evaluate Korean downstream tasks on the KLUE benchmark, the representative Korean benchmark dataset; in five of the seven tasks, the XPT-4M model achieves the best or second-best performance among the Korean language models compared. This shows that XPT not only performs on par with Korean language models trained on far more data, but also that its training process is highly efficient.

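A sketch of the kind of KLUE comparison reported above: each candidate Korean model is fine-tuned and evaluated on one KLUE task (topic classification, `ynat`) with the same recipe, so that scores are directly comparable. The baseline identifiers are public Hugging Face checkpoints; `path/to/xpt-4m` is a placeholder, since the paper's XPT checkpoints are not assumed to be public.

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

def accuracy(eval_pred):
    preds = np.argmax(eval_pred.predictions, axis=-1)
    return {"accuracy": (preds == eval_pred.label_ids).mean()}

def evaluate_on_klue_ynat(model_name):
    ds = load_dataset("klue", "ynat")                 # 7-class topic task
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=7)
    enc = ds.map(lambda b: tok(b["title"], truncation=True), batched=True)
    args = TrainingArguments("klue-ynat-out",
                             per_device_train_batch_size=32,
                             num_train_epochs=3)
    trainer = Trainer(model=model, args=args,
                      train_dataset=enc["train"],
                      eval_dataset=enc["validation"],
                      tokenizer=tok, compute_metrics=accuracy)
    trainer.train()
    return trainer.evaluate()

for name in ["klue/bert-base", "klue/roberta-base", "path/to/xpt-4m"]:
    print(name, evaluate_on_klue_ynat(name))
```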

Study on Zero-shot based Quality Estimation (Zero-Shot 기반 기계번역 품질 예측 연구)

  • Eo, Sugyeong;Park, Chanjun;Seo, Jaehyung;Moon, Hyeonseok;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.11 / pp.35-43 / 2021
  • Recently, there has been growing interest in zero-shot cross-lingual transfer, which leverages cross-lingual language models (CLLMs) to perform downstream tasks in languages they were not trained on. In this paper, we point out the limitations of the data-centric aspect of quality estimation (QE) and perform zero-shot cross-lingual transfer even in environments where constructing QE data is difficult. Few studies have addressed zero-shot QE; we fine-tune CLLMs on an English-German QE dataset and then perform zero-shot transfer, comparing several CLLMs against one another. We also transfer to language pairs with different amounts of resources and analyze the results in terms of the linguistic characteristics of each language. Experimental results show the highest performance with multilingual BART and multilingual BERT, demonstrating that QE can be performed even when no QE training for a specific language pair has been carried out.
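
A minimal sketch of the zero-shot QE transfer described above: a multilingual encoder is given (source, machine translation) pairs and fine-tuned as a regressor on English-German QE labels, then applied unchanged to language pairs it never saw QE data for. The checkpoint, the regression head, and the example sentences are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"   # any CLLM could be swapped in
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=1, problem_type="regression")

def qe_score(src, mt):
    """Predict a quality score for a (source, machine translation) pair."""
    batch = tok(src, mt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**batch).logits.squeeze().item()

# 1) Fine-tune the regression head on English-German QE labels (e.g. HTER)
#    with a standard MSE loss -- omitted here for brevity.
# 2) Zero-shot: apply the same model to a pair never seen during QE training.
print(qe_score("기계번역 품질을 예측한다.", "Predict machine translation quality."))
```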

Burmese Sentiment Analysis Based on Transfer Learning

  • Mao, Cunli;Man, Zhibo;Yu, Zhengtao;Wu, Xia;Liang, Haoyuan
    • Journal of Information Processing Systems / v.18 no.4 / pp.535-548 / 2022
  • Using a resource-rich language to classify sentiment in a resource-poor language is a popular subject of research in natural language processing. Burmese is a low-resource language. Given the scarcity of labeled training data for sentiment classification in Burmese, we propose a transfer learning method that transfers sentiment features from English. The method generates cross-lingual word embeddings for the Burmese vocabulary, mapping Burmese text into the semantic space of English text. A sentiment classification model for English is then pretrained using a convolutional neural network and an attention mechanism. The parameters of this network are used to learn cross-language sentiment features, which are transferred to the model that classifies sentiment in Burmese. Finally, the model is fine-tuned on the labeled Burmese data. Experimental results show that the proposed method significantly improves Burmese sentiment classification compared with a model trained only on a Burmese corpus.
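
A minimal PyTorch sketch of the transfer scheme described above: a CNN with a simple attention layer is trained for English sentiment over cross-lingual word embeddings, and the same network is then reused and fine-tuned on Burmese text mapped into the shared embedding space. Layer names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNNAttentionSentiment(nn.Module):
    def __init__(self, emb_dim=300, n_filters=100, n_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.attn = nn.Linear(n_filters, 1)     # scores each token position
        self.out = nn.Linear(n_filters, n_classes)

    def forward(self, embeds):                  # embeds: (batch, seq, emb_dim)
        h = torch.relu(self.conv(embeds.transpose(1, 2))).transpose(1, 2)
        weights = torch.softmax(self.attn(h), dim=1)    # (batch, seq, 1)
        context = (weights * h).sum(dim=1)              # attention pooling
        return self.out(context)

# Train on English sentences embedded with cross-lingual word vectors, then
# feed Burmese sentences embedded in the same shared space and fine-tune the
# network on the small labeled Burmese set.
```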

Combination of Classifiers Decisions for Multilingual Speaker Identification

  • Nagaraja, B.G.;Jayanna, H.S.
    • Journal of Information Processing Systems / v.13 no.4 / pp.928-940 / 2017
  • State-of-the-art speaker recognition systems tend to work best for English. However, when the same system is used to recognize speakers of other languages, performance can degrade noticeably. In this work, the decisions of a Gaussian mixture model-universal background model (GMM-UBM) and a learning vector quantization (LVQ) classifier are combined to improve the recognition performance of a multilingual speaker identification system. The two classifiers differ in their modeling techniques: the former is based on a probabilistic approach, while the latter is based on the fine-tuning of neurons. Because the approaches differ, each modeling technique identifies different sets of speakers for the same database, so combining their decisions can improve performance. In this study, multitaper mel-frequency cepstral coefficients (MFCCs) are used as features, and monolingual and cross-lingual speaker identification experiments are conducted on NIST-2003 and our own database. The experimental results show that the combined system improves performance by nearly 10% compared with the individual classifiers.
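
A minimal sketch of the decision combination described above: each classifier scores every enrolled speaker, the scores are normalized to a common range, and the speaker with the best fused score is chosen. The min-max normalization and equal weighting are illustrative assumptions, not necessarily the fusion rule used in the paper.

```python
import numpy as np

def combine_decisions(gmm_scores, lvq_scores, weight=0.5):
    """gmm_scores, lvq_scores: per-speaker scores (higher = better match)."""
    def minmax(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-9)
    fused = weight * minmax(gmm_scores) + (1 - weight) * minmax(lvq_scores)
    return int(np.argmax(fused))            # index of the identified speaker

# Example: per-speaker log-likelihood scores from the GMM-UBM and similarity
# scores from the LVQ codebooks for three enrolled speakers.
print(combine_decisions([-40.5, -39.8, -41.2], [0.55, 0.71, 0.40]))
```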

Word-level Korean-English Quality Estimation (단어 수준 한국어-영어 기계번역 품질 예측)

  • Eo, Sugyeong;Park, Chanjun;Seo, Jaehyung;Moon, Hyeonseok;Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2021.10a / pp.9-15 / 2021
  • Machine translation quality estimation (QE) is the task of annotating the quality of a machine translation output at a given granularity from the source sentence and the translation alone, without reference to a gold-standard sentence, and it has been studied steadily because of its many applications. However, constructing data to train a QE model requires sentences in which a professional translator has post-edited the machine translation output, and producing them incurs considerable labor and time costs. In this paper, we build synthetic Korean-English QE data in a fully automated way, using only parallel or monolingual corpora together with machine translation systems and no professional translators, and we build the first word-level quality estimation model for Korean-English machine translation. In building the QE model, we run comparative experiments with multilingual models such as the Cross-lingual language model (XLM), XLM-RoBERTa (XLM-R), and multilingual BART (mBART). To verify the objectivity of the quality predictions, we evaluate the model on translations from Google, Amazon, Microsoft, and Systran. The experiments show that the QE model fine-tuned from XLM-R performs best, and by securing objective quality predictions we make the various advantages of QE available for Korean-English machine translation as well.

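A minimal sketch of the word-level QE formulation described above: a multilingual encoder (XLM-R here) tags each token of the machine translation as OK or BAD given the Korean source as context. The public `xlm-roberta-base` checkpoint, the two-label head, and the example sentences are illustrative; the model would need to be fine-tuned on the synthetic QE data before its tags are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "xlm-roberta-base"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=2)                      # 0 = OK, 1 = BAD

def tag_translation(src_ko, mt_en):
    batch = tok(src_ko, mt_en, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**batch).logits             # (1, seq_len, 2)
    labels = logits.argmax(-1).squeeze(0).tolist()
    tokens = tok.convert_ids_to_tokens(batch["input_ids"][0])
    return list(zip(tokens, ["OK" if l == 0 else "BAD" for l in labels]))

# After fine-tuning on the synthetic Korean-English QE data, this would mark
# which target-side words the model considers mistranslated.
print(tag_translation("그는 어제 학교에 갔다.", "He went to school yesterday."))
```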