• Title/Summary/Keyword: XLM-RoBERTa

Sentiment analysis of Korean movie reviews using XLM-R

  • Shin, Noo Ri; Kim, TaeHyeon; Yun, Dai Yeol; Moon, Seok-Jae; Hwang, Chi-gon
    • International Journal of Advanced Culture Technology, v.9 no.2, pp.86-90, 2021
  • Sentiment refers to a person's thoughts, opinions, and feelings toward an object. Sentiment analysis is the process of collecting opinions on a specific target and classifying them by emotion; it is applied in opinion mining, which analyzes product reviews and other reviews on the web, and allows companies and users to gauge public opinion and respond to it. Recently, natural language processing models based on the Transformer architecture have appeared, Google's BERT being a representative example, and various models have since been derived from BERT. Among them, the Facebook AI team released XLM-R (XLM-RoBERTa), an upgraded version of the XLM model. XLM-R addressed XLM's data limitations and the curse of multilinguality by training on more than 2TB of filtered CommonCrawl (CC) data instead of Wikipedia data, and showed that a multilingual model can perform comparably to a monolingual model when the model size and the amount of training data are scaled appropriately. In this paper, we therefore study improving Korean sentiment analysis using this pretrained XLM-R model.
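
A minimal sketch of the core pipeline described above, sentiment classification with a pretrained XLM-R checkpoint via Hugging Face Transformers: the checkpoint name, the two-label head, and the example reviews are illustrative assumptions, not the paper's exact configuration, and the head must be fine-tuned on labeled Korean reviews before its predictions mean anything.

```python
# Sketch: score two Korean movie reviews with an XLM-R classifier head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=2,  # assumed label set: 0 = negative, 1 = positive
)
model.eval()

reviews = [
    "이 영화 정말 재미있어요",   # "This movie is really fun"
    "시간이 아까운 영화였다",    # "A movie that wasted my time"
]
batch = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits  # shape: (num_reviews, 2)

# The freshly initialized head is untrained; predictions become meaningful
# only after fine-tuning on labeled Korean review data.
print(logits.argmax(dim=-1).tolist())
```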

Research on Recent Quality Estimation (최신 기계번역 품질 예측 연구)

  • Eo, Sugyeong; Park, Chanjun; Moon, Hyeonseok; Seo, Jaehyung; Lim, Heuiseok
    • Journal of the Korea Convergence Society, v.12 no.7, pp.37-44, 2021
  • Quality estimation (QE) evaluates the quality of machine translation output even for users who do not know the target language, and its wide applicability highlights the need for QE research. A QE shared task is held every year at the Conference on Machine Translation (WMT), and recent work has focused mainly on applying pretrained language models (PLMs). In this paper, we survey the QE task and its research trends and summarize the characteristics of PLMs. In addition, we apply a multilingual BART model, which had not previously been used for QE, and compare it with existing approaches based on XLM, multilingual BERT, and XLM-RoBERTa. The experiments show which PLM is most effective when applied to QE and demonstrate the feasibility of applying the multilingual BART model to the QE task.
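
As a minimal sketch of how a PLM is typically applied to sentence-level QE, the (source, translation) pair can be fed to XLM-R with a single-output regression head, in the spirit of common PLM-based QE baselines; the checkpoint and example sentences are assumptions for illustration, not the paper's setup.

```python
# Sketch: sentence-level QE as regression over a (source, translation) pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=1,  # single-output regression head for a quality score
)
model.eval()

src = "그는 어제 서울에 도착했다."          # source sentence
hyp = "He arrived in Seoul yesterday."  # machine translation output
batch = tokenizer(src, hyp, truncation=True, return_tensors="pt")

with torch.no_grad():
    score = model(**batch).logits.squeeze().item()

# Meaningful only after fine-tuning on QE data (e.g., WMT HTER/DA labels).
print(f"predicted quality score: {score:.3f}")
```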

Word-level Korean-English Quality Estimation (단어 수준 한국어-영어 기계번역 품질 예측)

  • Eo, Sugyeong; Park, Chanjun; Seo, Jaehyung; Moon, Hyeonseok; Lim, Heuiseok
    • Annual Conference on Human and Language Technology, 2021.10a, pp.9-15, 2021
  • Machine translation quality estimation (Quality Estimation, QE) is the task of annotating the quality of machine translation output at different granularities, given only the source sentence and the translation output and without any reference sentence, and it has been studied steadily because of its broad applicability. However, constructing training data for a QE model requires translations post-edited by professional translators, and producing them incurs substantial labor and time costs. In this paper, we construct synthetic Korean-English QE data in a fully automated way, using only parallel or monolingual corpora and machine translation systems and no professional translators, and we build the first word-level Korean-English machine translation QE model. For the QE model, we run comparative experiments with multilingual models such as the cross-lingual language model (XLM), XLM-RoBERTa (XLM-R), and multilingual BART (mBART). To verify the objectivity of the quality predictions, we also evaluate the models on output from the Google, Amazon, Microsoft, and Systran translation engines. The experiments show that the QE model fine-tuned from XLM-R performs best, and by securing objective quality predictions, this work makes the various advantages of QE available for Korean-English machine translation as well.
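
A minimal sketch of word-level QE as token tagging, where each subword of the concatenated source and MT output is labeled OK or BAD in the style of the WMT word-level task; the checkpoint, tag set, and example sentences are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch: word-level QE as OK/BAD token tagging over source + MT output.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=2,  # assumed tag set: 0 = OK, 1 = BAD
)
model.eval()

src = "그는 어제 서울에 도착했다."          # source sentence
mt = "He arrived in Seoul yesterday."   # machine translation output
batch = tokenizer(src, mt, truncation=True, return_tensors="pt")

with torch.no_grad():
    tags = model(**batch).logits.argmax(dim=-1)[0]  # one tag per subword

tokens = tokenizer.convert_ids_to_tokens(batch["input_ids"][0])
for token, tag in zip(tokens, tags):
    # Untrained head: the printed tags are placeholders until fine-tuning.
    print(f"{token:>12}  {'OK' if int(tag) == 0 else 'BAD'}")
```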

Explaining the Translation Error Factors of Machine Translation Services Using Self-Attention Visualization (Self-Attention 시각화를 사용한 기계번역 서비스의 번역 오류 요인 설명)

  • Zhang, Chenglong; Ahn, Hyunchul
    • Journal of Information Technology Services, v.21 no.2, pp.85-95, 2022
  • This study analyzed the translation error factors of machine translation services such as Naver Papago and Google Translate through Self-Attention path visualization. Self-Attention is a key mechanism of the Transformer and BERT NLP models and has recently been widely used in machine translation. We propose a method to explain the translation error factors of machine translation algorithms by comparing the Self-Attention paths of a source text (ST) and a transformed source text (ST') whose meaning is unchanged but whose translation output is more accurate. This method provides explainability into the internal workings of a machine translation algorithm, which are otherwise as opaque as a black box. In our experiment, we could identify the factors that caused translation errors by analyzing differences in the attention paths of key words. The study used the XLM-RoBERTa multilingual NLP model provided through exBERT for Self-Attention visualization and applied the method to two examples, Korean-Chinese and Korean-English translation.
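
The paper renders these paths with exBERT; as a substitute sketch, the raw self-attention matrices that such tools visualize can be pulled directly from an XLM-R checkpoint via Hugging Face Transformers by requesting attention outputs. The layer and head indices below are arbitrary illustrative choices.

```python
# Sketch: pull raw self-attention matrices from XLM-R for inspection.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base", output_attentions=True)
model.eval()

text = "그는 어제 서울에 도착했다."
batch = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    out = model(**batch)

# out.attentions is a tuple with one (batch, heads, seq, seq) tensor per layer.
attn = out.attentions[5][0, 3]  # layer 6, head 4: arbitrary illustrative pick
tokens = tokenizer.convert_ids_to_tokens(batch["input_ids"][0])
for i, token in enumerate(tokens):
    target = tokens[int(attn[i].argmax())]
    print(f"{token:>12} -> {target}")  # strongest attention target per token
```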

A study on the aspect-based sentiment analysis of multilingual customer reviews (다국어 사용자 후기에 대한 속성기반 감성분석 연구)

  • Sungyoung Ji; Siyoon Lee; Daewoo Choi; Kee-Hoon Kang
    • The Korean Journal of Applied Statistics, v.36 no.6, pp.515-528, 2023
  • With the growth of the e-commerce market, consumers increasingly rely on user reviews to make purchasing decisions, so researchers are actively studying how to analyze these reviews effectively. Among the various approaches to sentiment analysis, aspect-based sentiment analysis, which examines user reviews from multiple angles rather than reducing them to a simple positive or negative label, is attracting widespread attention. One such approach uses transformer-based models, the latest natural language processing technology. In this paper, we conduct aspect-based sentiment analysis on multilingual user reviews with transformer-based models, using two real datasets: restaurant data from the SemEval 2016 public dataset and multilingual user review data from the cosmetics domain. We compare the performance of transformer-based models for aspect-based sentiment analysis and apply various techniques to improve it. Models trained on multilingual data are expected to be highly useful in that a single model can analyze multiple languages without building a separate model for each language.
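
A minimal sketch of one common formulation of aspect-based sentiment analysis, pairing the review with an aspect phrase for sentence-pair classification with a multilingual transformer; the checkpoint, aspect strings, and three-way label set are assumptions, not the paper's exact method.

```python
# Sketch: ABSA as (review, aspect) sentence-pair classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=3,  # assumed label set: 0 = negative, 1 = neutral, 2 = positive
)
model.eval()

review = "The food was great but the service was slow."
for aspect in ["food", "service"]:  # hypothetical aspect categories
    batch = tokenizer(review, aspect, truncation=True, return_tensors="pt")
    with torch.no_grad():
        pred = model(**batch).logits.argmax(dim=-1).item()
    # Untrained head: labels become meaningful after fine-tuning on ABSA data.
    print(aspect, ["negative", "neutral", "positive"][pred])
```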