• Title/Summary/Keyword: multilingual BERT

13 search results

A Multi-task Self-attention Model Using Pre-trained Language Models on Universal Dependency Annotations

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.39-46 / 2022
  • In this paper, we propose a multi-task model that simultaneously predicts general-purpose tasks such as part-of-speech tagging, lemmatization, and dependency parsing on the UD Korean Kaist v2.3 corpus. The proposed model applies the self-attention mechanism of BERT together with a graph-based biaffine attention mechanism, fine-tuning multilingual BERT and two Korean-specific BERT models, KR-BERT and KoBERT. The performance of the proposed model is compared and analyzed across the multilingual BERT and the two Korean-specific BERT language models.
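The graph-based biaffine attention this abstract refers to is the arc scorer of Dozat and Manning, applied on top of the fine-tuned BERT encoder states. Below is a minimal PyTorch sketch of that scorer only; the class name, `arc_dim`, and the usage example are our own illustration, not the paper's code.

```python
import torch
import torch.nn as nn

class BiaffineAttention(nn.Module):
    """Graph-based biaffine arc scorer (Dozat & Manning style)."""
    def __init__(self, hidden_dim: int, arc_dim: int = 256):
        super().__init__()
        # Separate MLPs project each token into "head" and "dependent" roles.
        self.head_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        # Bilinear weight plus a bias term on the head representation.
        self.W = nn.Parameter(torch.empty(arc_dim, arc_dim))
        self.b = nn.Parameter(torch.zeros(arc_dim))
        nn.init.xavier_uniform_(self.W)

    def forward(self, encoder_states: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, seq_len, hidden), e.g. BERT's last layer.
        heads = self.head_mlp(encoder_states)           # (B, T, arc_dim)
        deps = self.dep_mlp(encoder_states)             # (B, T, arc_dim)
        # scores[b, i, j] = dep_i . W . head_j + head_j . b
        scores = deps @ self.W @ heads.transpose(1, 2)  # (B, T, T)
        scores = scores + (heads @ self.b).unsqueeze(1)
        return scores  # argmax over the last axis predicts each token's head

scorer = BiaffineAttention(hidden_dim=768)
arc_scores = scorer(torch.randn(2, 10, 768))  # stand-in for BERT states
print(arc_scores.shape)  # torch.Size([2, 10, 10])
```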

Research on Recent Quality Estimation (최신 기계번역 품질 예측 연구)

  • Eo, Sugyeong;Park, Chanjun;Moon, Hyeonseok;Seo, Jaehyung;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.7 / pp.37-44 / 2021
  • Quality estimation (QE) can evaluate the quality of machine translation output even for those who do not know the target language, and its broad applicability highlights the need for it. A QE shared task is held every year at the Conference on Machine Translation (WMT), and recent research has mainly applied pretrained language models (PLMs). In this paper, we survey the QE task and its research trends, and we summarize the characteristics of PLMs. In addition, we apply a multilingual BART model, which had not previously been used for QE, and compare it with existing approaches such as XLM, multilingual BERT, and XLM-RoBERTa. The experiments show which PLM is most effective when applied to QE and demonstrate the potential of applying the multilingual BART model to the QE task.
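Sentence-level QE is typically cast as regression: a pretrained multilingual encoder reads the (source, MT output) pair and predicts a quality score. Below is a hedged sketch of that formulation using the HuggingFace transformers API; the choice of `xlm-roberta-base` and the untrained regression head are illustrative assumptions, not the paper's exact setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# num_labels=1 turns the classification head into a regression head.
model_name = "xlm-roberta-base"  # mBERT or multilingual BART plug in similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

src = "The cat sat on the mat."   # source sentence
mt = "고양이가 매트 위에 앉았다."  # machine translation output
# Encode the (source, hypothesis) pair jointly, as WMT QE systems do.
inputs = tokenizer(src, mt, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze()
print(float(score))  # predicted quality score (head is untrained here)
```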

GMLP for Korean natural language processing and its quantitative comparison with BERT (GMLP를 이용한 한국어 자연어처리 및 BERT와 정량적 비교)

  • Lee, Sung-Min;Na, Seung-Hoon
    • Annual Conference on Human and Language Technology / 2021.10a / pp.540-543 / 2021
  • In this paper, we construct a model that adds a small attention network to gMLP [1], which uses a Spatial Gating Unit (SGU) in place of multi-head attention, pre-train it on news and Wikipedia data, and apply it to Korean downstream tasks (sentiment analysis and named entity recognition). On sentiment analysis it achieved an accuracy of 87.70%, 0.27% higher than Multilingual BERT, and on named entity recognition an F1 score of 85.82%, 1.6% higher. This confirms that gMLP can match BERT [3] using only an SGU and a small attention module, without the multi-head attention [2] of the standard Transformer encoder. In a comparison of inference speed with BERT, the model was roughly 1 to 6 times faster when the batch size was below 20.
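For reference, the Spatial Gating Unit at the core of gMLP replaces attention with a learned linear map over the token axis. A minimal PyTorch sketch follows; the paper additionally adds a small attention branch to this unit (the aMLP variant), which is omitted here. Names and initialization follow the gMLP paper's description, not the authors' code.

```python
import torch
import torch.nn as nn

class SpatialGatingUnit(nn.Module):
    """Core of gMLP: token mixing via a learned linear map, not attention."""
    def __init__(self, dim: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim // 2)
        # Token-mixing projection over the sequence axis.
        self.spatial_proj = nn.Linear(seq_len, seq_len)
        # Near-identity init described in the paper: weights ~ 0, bias = 1.
        nn.init.zeros_(self.spatial_proj.weight)
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); split channels into content u and gate v.
        u, v = x.chunk(2, dim=-1)
        v = self.norm(v)
        v = self.spatial_proj(v.transpose(1, 2)).transpose(1, 2)
        return u * v  # elementwise gating replaces the attention map

sgu = SpatialGatingUnit(dim=512, seq_len=128)
out = sgu(torch.randn(4, 128, 512))
print(out.shape)  # (4, 128, 256); the enclosing gMLP block projects back up
```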

Analyzing Korean hate-speech detection using KcBERT (KcBERT를 활용한 한국어 악플 탐지 분석 및 개선방안 연구)

  • Seyoung Jeong;Byeongjin Kim;Daeshik Kim;Wooyoung Kim;Taeyong Kim;Hyunsoo Yoon;Wooju Kim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.577-580 / 2023
  • Malicious comments have long been recognized as a source of emotional and psychological harm on the internet. This study compares the performance of KcBERT and various other models for Korean malicious comment detection. To mitigate the shortage of publicly available Korean malicious comment data, machine translation is used for data augmentation, together with the multilingual language model mBERT. Across a range of experiments, the fine-tuned KcBERT model achieved meaningfully better accuracy and F1-score than the other models.
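A minimal sketch of the fine-tuning step this entry describes, using the HuggingFace transformers API: the `beomi/kcbert-base` checkpoint and the toy two-example batch are our assumptions standing in for the paper's actual data pipeline.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("beomi/kcbert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "beomi/kcbert-base", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Toy batch; a real run iterates over a labeled hate-speech corpus.
texts = ["정말 좋은 글이네요", "이런 쓰레기 같은 글"]  # illustrative comments
labels = torch.tensor([0, 1])                         # 0 = clean, 1 = hateful
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

model.train()
loss = model(**batch, labels=labels).loss  # cross-entropy from the HF head
loss.backward()
optimizer.step()
```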

A study on the aspect-based sentiment analysis of multilingual customer reviews (다국어 사용자 후기에 대한 속성기반 감성분석 연구)

  • Sungyoung Ji;Siyoon Lee;Daewoo Choi;Kee-Hoon Kang
    • The Korean Journal of Applied Statistics / v.36 no.6 / pp.515-528 / 2023
  • With the growth of the e-commerce market, consumers increasingly rely on user reviews to make purchasing decisions, and researchers are actively studying how to analyze these reviews effectively. Among sentiment analysis methods, aspect-based sentiment analysis, which examines user reviews from multiple angles rather than assigning a single positive or negative label, is gaining widespread attention. One line of work applies transformer-based models, the latest natural language processing technology, to this task. In this paper, we conduct aspect-based sentiment analysis on multilingual user reviews using transformer-based models on two real datasets: restaurant data from the SemEval 2016 public dataset and multilingual user review data from the cosmetics domain. We compare the performance of transformer-based models for aspect-based sentiment analysis and apply various methods to improve it. Models trained on multilingual data are expected to be highly useful because they can analyze multiple languages in one model without building a separate model for each language.
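Aspect-based sentiment analysis with a transformer is commonly framed as sentence-pair classification: the review and the aspect term are encoded together and the model predicts the polarity toward that aspect. The sketch below illustrates this framing under our own assumptions (`bert-base-multilingual-cased`, three polarity labels, untrained head); the paper does not specify its exact formulation here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"  # one model for all languages
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

review = "The food was great but the service was slow."
aspect = "service"
# The review and the aspect term are encoded as a sentence pair.
inputs = tokenizer(review, aspect, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# 0/1/2 = negative/neutral/positive is our label convention; head is untrained.
print(logits.argmax(-1).item())
```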

A Study on Fine-Tuning and Transfer Learning to Construct Binary Sentiment Classification Model in Korean Text (한글 텍스트 감정 이진 분류 모델 생성을 위한 미세 조정과 전이학습에 관한 연구)

  • JongSoo Kim
    • Journal of Korea Society of Industrial Information Systems / v.28 no.5 / pp.15-30 / 2023
  • Recently, generative models based on the Transformer architecture, such as ChatGPT, have attracted significant attention. The Transformer architecture has been applied to various neural network models, including Google's BERT (Bidirectional Encoder Representations from Transformers) language representation model. In this paper, a method is proposed for building a binary text classifier that determines whether a Korean movie review comment is positive or negative. To accomplish this, a pre-trained multilingual BERT model is fine-tuned and transfer-learned on a new Korean training dataset. The pre-trained BERT-Base model used covers 104 languages and has 12 layers, 768 hidden units, 12 attention heads, and 110M parameters. To convert it into a text classification model, the input and output layers were fine-tuned, resulting in a new model with 178 million parameters. Using this fine-tuned model, with a maximum sequence length of 128, a batch size of 16, and 5 epochs, transfer learning was conducted with 10,000 training and 5,000 test examples. The resulting binary sentiment classifier for Korean movie reviews achieved an accuracy of 0.9582, a loss of 0.1177, and an F1 score of 0.81. Repeating the transfer learning with a dataset five times larger produced a model with an accuracy of 0.9562, a loss of 0.1202, and an F1 score of 0.86.
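A hedged sketch of the described setup (multilingual BERT-Base, max length 128, batch size 16, 5 epochs, 10,000/5,000 split) using the HuggingFace Trainer. The NSMC dataset ID is our assumption for the unnamed Korean movie-review corpus.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)  # positive / negative

# NSMC (Korean movie reviews) is our assumption for the unnamed dataset.
data = load_dataset("nsmc")
train_ds = data["train"].shuffle(seed=42).select(range(10_000))
test_ds = data["test"].select(range(5_000))

def encode(batch):
    # Maximum sequence length of 128, as described above.
    return tokenizer(batch["document"], truncation=True,
                     padding="max_length", max_length=128)

args = TrainingArguments(output_dir="mbert-sentiment",
                         per_device_train_batch_size=16,
                         num_train_epochs=5)
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds.map(encode, batched=True),
                  eval_dataset=test_ds.map(encode, batched=True))
trainer.train()
```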

Explaining the Translation Error Factors of Machine Translation Services Using Self-Attention Visualization (Self-Attention 시각화를 사용한 기계번역 서비스의 번역 오류 요인 설명)

  • Zhang, Chenglong;Ahn, Hyunchul
    • Journal of Information Technology Services / v.21 no.2 / pp.85-95 / 2022
  • This study analyzes the translation error factors of machine translation services such as Naver Papago and Google Translate through self-attention path visualization. Self-attention is a key component of the Transformer and BERT NLP models and is now widely used in machine translation. We propose a method to explain the error factors of machine translation algorithms by comparing the self-attention paths of a source text (ST) and a transformed source text (ST') whose meaning is unchanged but whose translation output is more accurate. This method provides explainability for the inner workings of a machine translation algorithm, which is otherwise a black box. In our experiments, we could explore the factors that caused translation errors by analyzing differences in the attention paths of key words. The study used the XLM-RoBERTa multilingual NLP model provided by exBERT for self-attention visualization and applied it to two examples of Korean-Chinese and Korean-English translation.
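The raw material for this kind of analysis is the attention tensor itself. The sketch below shows how the per-layer, per-head self-attention weights can be pulled out of XLM-RoBERTa with `output_attentions=True` and summarized for one keyword; exBERT wraps the same tensors in an interactive viewer. The aggregation (averaging over layers and heads) is our simplification, not the paper's method.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base", output_attentions=True)

def attention_to_token(sentence: str, keyword: str) -> float:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # One (1, heads, T, T) tensor per layer.
        attentions = model(**inputs).attentions
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    idx = next(i for i, t in enumerate(tokens) if keyword in t)
    stacked = torch.stack(attentions)  # (layers, 1, heads, T, T)
    # Average attention flowing into the keyword, over layers and heads.
    return stacked.mean(dim=(0, 1, 2))[:, idx].mean().item()

# Comparing this value between ST and ST' hints at why outputs diverge.
print(attention_to_token("나는 학교에 간다", "학교"))
```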

ColBERT with Adversarial Language Adaptation for Multilingual Information Retrieval (다국어 정보 검색을 위한 적대적 언어 적응을 활용한 ColBERT)

  • Jonghwi Kim;Yunsu Kim;Gary Geunbae Lee
    • Annual Conference on Human and Language Technology / 2023.10a / pp.239-244 / 2023
  • Neural multilingual and cross-lingual information retrieval models require training data in the target language, but such data is concentrated in high-resource languages. To address this, we propose an effective training method for a multilingual information retrieval model that uses only English training data and a Korean-English parallel corpus. Using a language prediction task and a gradient reversal layer, the encoder is trained to produce language-agnostic vector representations, and the method is evaluated on a multilingual information retrieval benchmark that includes Korean. The experiments show that the proposed method outperforms a baseline that uses only a multilingual pre-trained model and English data. Cross-lingual retrieval experiments further show that current retrieval models carry a language bias that directly affects performance.
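The gradient reversal layer mentioned above is small enough to sketch in full: it is the identity on the forward pass and negates the gradient on the backward pass, so that minimizing the language classifier's loss pushes the encoder toward language-agnostic representations. A minimal PyTorch version, with illustrative shapes of our own choosing:

```python
import torch

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)  # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing back into the encoder.
        return -ctx.lambd * grad_output, None

# Encoder states -> GRL -> language classifier. Minimizing the classifier's
# loss then maximizes it w.r.t. the encoder, removing language cues.
hidden = torch.randn(8, 768, requires_grad=True)  # stand-in encoder states
lang_logits = torch.nn.Linear(768, 2)(GradientReversal.apply(hidden, 1.0))
loss = torch.nn.functional.cross_entropy(lang_logits, torch.randint(0, 2, (8,)))
loss.backward()  # hidden.grad now carries the reversed gradient
```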

Sentiment analysis of Korean movie reviews using XLM-R

  • Shin, Noo Ri;Kim, TaeHyeon;Yun, Dai Yeol;Moon, Seok-Jae;Hwang, Chi-gon
    • International Journal of Advanced Culture Technology / v.9 no.2 / pp.86-90 / 2021
  • Sentiment refers to a person's thoughts, opinions, and feelings toward an object. Sentiment analysis is the process of collecting opinions about a specific target and classifying them by emotion, and it underlies opinion mining of product reviews and other reviews on the web. Companies can use it to grasp public opinion and respond accordingly. Recently, natural language processing models based on the Transformer architecture have appeared, with Google's BERT as a representative example, and various models have since been derived from BERT. Among them, the Facebook AI team released XLM-R (XLM-RoBERTa), an upgraded XLM model. XLM-R addressed data limitations and the curse of multilinguality by training on more than 2TB of filtered CommonCrawl (CC) data rather than Wikipedia data, showing that a multilingual model can perform similarly to a monolingual model when the model size and training data are scaled appropriately. In this paper, we therefore study improving Korean sentiment analysis using the pre-trained XLM-R model.

Korean Contextual Information Extraction System using BERT and Knowledge Graph (BERT와 지식 그래프를 이용한 한국어 문맥 정보 추출 시스템)

  • Yoo, SoYeop;Jeong, OkRan
    • Journal of Internet Computing and Services / v.21 no.3 / pp.123-131 / 2020
  • Along with the rapid development of artificial intelligence technology, natural language processing, which deals with human language, is also actively studied. In particular, BERT, a language model recently proposed by Google, has performed well across many areas of natural language processing by providing models pre-trained on large corpora. Although BERT offers a multilingual model, it has limitations when applied directly to Korean, so a model pre-trained on a large Korean corpus should be used. Moreover, text carries not only vocabulary and grammar but also contextual meaning, such as the relations between preceding and following passages and the surrounding situation. Existing natural language processing research has focused mainly on lexical or grammatical meaning, yet accurately identifying the contextual information embedded in text plays an important role in understanding context. Knowledge graphs, which link words through their relationships, have the advantage that computers can learn context from them easily. In this paper, we propose a system that extracts Korean contextual information using a BERT model pre-trained on a Korean corpus together with a knowledge graph. We build models that extract the person, relationship, emotion, space, and time information that matters in a text and validate the proposed system through experiments.
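On the knowledge-graph side, the extracted person, relationship, emotion, space, and time information can be stored as labeled triples and queried for context. A small illustration with networkx follows; the triples are hand-written stand-ins for the tagger's output, not the paper's data.

```python
import networkx as nx

G = nx.DiGraph()
# (subject, relation, object) triples a tagger might produce for the
# illustrative sentence "철수는 어제 공원에서 영희를 만나 기뻤다".
triples = [
    ("철수", "met", "영희"),         # relationship
    ("철수", "felt", "기쁨"),        # emotion
    ("철수", "located_in", "공원"),  # space
    ("철수", "at_time", "어제"),     # time
]
for subj, rel, obj in triples:
    G.add_edge(subj, obj, relation=rel)

# Contextual lookup: everything the graph knows about 철수.
for _, obj, data in G.out_edges("철수", data=True):
    print(f"철수 --{data['relation']}--> {obj}")
```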