• Title/Abstract/Keywords: machine translations

21 search results (processing time: 0.042 s)

Assessment of performance of machine learning based similarities calculated for different English translations of Holy Quran

  • Al Ghamdi, Norah Mohammad;Khan, Muhammad Badruddin
    • International Journal of Computer Science & Network Security / Vol. 22, No. 4 / pp. 111-118 / 2022
  • This research article presents work on applying different machine learning based similarity techniques to religious text in order to identify similarities and differences among its various translations. The dataset includes 10 different English translations of the verses (Arabic: Ayah) of two Surahs (chapters), Al-Humazah and An-Nasr. Quantitative similarity values between different translations of the same verse were calculated using cosine similarity and semantic similarity. The corpus went through two series of experiments: before pre-processing and after pre-processing. In order to assess the performance of the machine learning based similarities, human-annotated similarities between the translations of the two Surahs were recorded to construct the ground truth. The average difference between the human-annotated similarity and the cosine similarity for Surah Al-Humazah was found to be 1.38 per verse per pair of translations; after pre-processing, it increased to 2.24. The average difference between the human-annotated similarity and the semantic similarity for Surah Al-Humazah was found to be 0.09 per verse per pair of translations; after pre-processing, it increased to 0.78. For Surah An-Nasr, before pre-processing, the average difference between the human-annotated similarity and the cosine similarity was 1.93 per verse per pair of translations; after pre-processing, it further increased to 2.47. The average difference between the human-annotated similarity and the semantic similarity for Surah An-Nasr was 0.93 before pre-processing and 0.87 after pre-processing. As expected, the results showed that semantic similarity is the better measurement indicator for capturing word meaning.
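
The lexical half of this comparison is easy to reproduce. Below is a minimal sketch, not the authors' code, of cosine similarity between two English renderings of the same verse using plain bag-of-words vectors; the verse texts and the unweighted term counts are illustrative assumptions, and the paper's semantic similarity would instead compare meaning-aware (e.g. embedding-based) representations.

```python
# Minimal sketch: cosine similarity between bag-of-words vectors of two
# English translations of the same verse (example texts are illustrative).
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    vec_a, vec_b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(vec_a[w] * vec_b[w] for w in vec_a)
    norm_a = math.sqrt(sum(c * c for c in vec_a.values()))
    norm_b = math.sqrt(sum(c * c for c in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

t1 = "Woe to every slanderer and backbiter"
t2 = "Woe unto every slandering traducer"
print(round(cosine_similarity(t1, t2), 3))
```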

BLEU 를 활용한 단기 서술형 답안의 자동 채점 (An Autonomous Assessment of a Short Essay Answer by Using the BLEU)

  • 조정현;정현기;박찬영;김유섭
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2009년도 학술대회 / pp. 606-610 / 2009
  • In this paper, we propose a method for the automatic scoring of short essay answers that makes use of BLEU (BiLingual Evaluation Understudy), a metric widely used for the automatic evaluation of machine translation. BLEU rests on the assumption that the more closely a machine translation resembles human translations, the higher its quality. Concretely, it scores a machine-translated sentence by comparing its n-grams against those of several human translations of the same sentence. In a similar way, this study scores a student's answer by comparing it, in BLEU fashion, against multiple reference answer sentences. In the experiments, the accuracy of this scoring method was evaluated by computing its correlation with scores assigned by human graders.
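
As a rough illustration of the grading scheme just described, the following sketch scores a student answer against several reference answers with NLTK's BLEU implementation. It is not the authors' code: the example sentences, the bigram-only weights, and the smoothing choice are assumptions made to keep the snippet runnable on short answers.

```python
# Hedged sketch: BLEU-style grading of a short answer against several
# reference answers, using NLTK's sentence_bleu as a stand-in scorer.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [                        # several model (correct) answers
    "photosynthesis converts light energy into chemical energy".split(),
    "plants convert light energy to chemical energy by photosynthesis".split(),
]
student_answer = "photosynthesis changes light energy into chemical energy".split()

# Short answers rarely contain long n-grams, so use bigram weights + smoothing.
score = sentence_bleu(references, student_answer,
                      weights=(0.5, 0.5),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-based score: {score:.3f}")
```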

SciBabel: a system for crowd-sourced validation of automatic translations of scientific texts

  • Soares, Felipe;Rebechi, Rozane;Stevenson, Mark
    • Genomics & Informatics / Vol. 18, No. 2 / pp. 21.1-21.7 / 2020
  • Scientific research is mostly published in English, regardless of the researcher's nationality. However, this practice impairs or hinders the comprehension of professionals who depend on the results of these studies to provide adequate care for their patients. We suggest that machine translation (MT) can be used as a way of providing useful translations of biomedical articles, even though the translations themselves may not be fluent. To tackle possible mistranslations that could harm a patient, we resort to crowd-sourced validation of translations. We developed a prototype for MT validation and editing, in which users can vote for a translation as valid or suggest modifications (i.e., post-edit the MT). A glossary match system is also included, aiming at terminology consistency.
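
The abstract only outlines the validation workflow, so the sketch below is speculative: a small record type that accumulates "valid" votes and post-edit suggestions per machine translation and flags glossary mismatches. All field names, the glossary entry, and the English-Portuguese pairing are hypothetical, not the actual SciBabel schema.

```python
# Speculative sketch of a crowd-validation record: votes, post-edits, and a
# simple glossary check. Names and data are illustrative placeholders.
from dataclasses import dataclass, field

GLOSSARY = {"myocardial infarction": "infarto do miocárdio"}  # EN -> PT example

@dataclass
class TranslationRecord:
    source: str
    machine_translation: str
    valid_votes: int = 0
    post_edits: list[str] = field(default_factory=list)

    def vote_valid(self) -> None:
        self.valid_votes += 1

    def suggest_edit(self, edited_text: str) -> None:
        self.post_edits.append(edited_text)

    def glossary_violations(self) -> list[str]:
        """Return glossary terms whose preferred translation is missing."""
        return [term for term, target in GLOSSARY.items()
                if term in self.source.lower()
                and target not in self.machine_translation.lower()]
```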

기계번역 사후교정(Automatic Post Editing) 연구 (Automatic Post Editing Research)

  • 박찬준;임희석
    • 한국융합학회논문지 / Vol. 11, No. 5 / pp. 1-8 / 2020
  • Machine translation is the task of having a computer translate a source sentence into a target sentence. Among its various subfields, APE (Automatic Post-Editing) corrects the output of a machine translation system to produce a better translation: it takes the sentences generated by an MT system and fixes the errors they contain to produce corrected versions. In other words, it improves translation quality not by changing the machine translation model itself but by editing the system's output sentences. APE has been a shared task of the WMT campaign since 2015, and its performance is evaluated with TER (Translation Error Rate). A variety of studies on APE models have been published recently, and this paper surveys the latest trends in the field.
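
Since APE systems are evaluated with TER, a rough sense of the metric is useful. The sketch below computes a simplified word-level TER (edit distance divided by reference length); the real WMT metric additionally counts block shifts, and the example sentences are invented.

```python
# Simplified TER sketch: word-level edit distance (insertions, deletions,
# substitutions) divided by reference length. Real TER also allows block
# shifts, so this is an approximation, not the official WMT metric.
def simple_ter(hypothesis: str, reference: str) -> float:
    hyp, ref = hypothesis.split(), reference.split()
    # standard dynamic-programming edit distance
    dist = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        dist[i][0] = i
    for j in range(len(ref) + 1):
        dist[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost) # substitution
    return dist[len(hyp)][len(ref)] / max(len(ref), 1)

mt_output   = "he have gone to school yesterday"
post_edited = "he went to school yesterday"
print(f"TER ~ {simple_ter(mt_output, post_edited):.2f}")
```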

통계 정보를 이용한 전치사 최적 번역어 결정 모델 (A Statistical Model for Choosing the Best Translation of Prepositions.)

  • 심광섭
    • 한국언어정보학회지:언어와정보 / Vol. 8, No. 1 / pp. 101-116 / 2004
  • This paper proposes a statistical model for the translation of prepositions in English-Korean machine translation. In the proposed model, statistical information acquired from unlabeled Korean corpora is used to choose the best translation from several possible translations. Such information includes functional word-verb co-occurrence information, functional word-verb distance information, and noun-postposition co-occurrence information. The model was evaluated with 443 sentences, each of which has a prepositional phrase, and we attained 71.3% accuracy.
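
The abstract does not give the exact scoring formula, so the following sketch only illustrates the general idea: candidate Korean postpositions for an English preposition are ranked with co-occurrence counts gathered from a Korean corpus. The toy counts, the candidate set, and the additive log-count score are assumptions, not the paper's statistical model.

```python
# Hedged sketch: rank candidate Korean postpositions for an English
# preposition using toy corpus co-occurrence counts. All numbers and the
# scoring function are illustrative assumptions.
import math

# toy corpus statistics: (postposition, verb) and (noun, postposition) counts
POSTPOSITION_VERB = {("로", "자르다"): 42, ("와", "자르다"): 3}
NOUN_POSTPOSITION = {("칼", "로"): 120, ("칼", "와"): 15}

def score(noun: str, postposition: str, verb: str) -> float:
    """Additive log-count score; +1 smoothing avoids log(0)."""
    return (math.log(POSTPOSITION_VERB.get((postposition, verb), 0) + 1)
            + math.log(NOUN_POSTPOSITION.get((noun, postposition), 0) + 1))

candidates = ["로", "와"]             # possible translations of "with"
best = max(candidates, key=lambda p: score("칼", p, "자르다"))
print("chosen postposition:", best)   # "로", as in "cut with a knife" -> "칼로 자르다"
```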

Self-Attention 시각화를 사용한 기계번역 서비스의 번역 오류 요인 설명 (Explaining the Translation Error Factors of Machine Translation Services Using Self-Attention Visualization)

  • 장청롱;안현철
    • 한국IT서비스학회지 / Vol. 21, No. 2 / pp. 85-95 / 2022
  • This study analyzed the translation error factors of machine translation services such as Naver Papago and Google Translate through Self-Attention path visualization. Self-Attention is a key mechanism of the Transformer and BERT NLP models and has recently been widely used in machine translation. We propose a method to explain the translation error factors of machine translation algorithms by comparing the Self-Attention paths of a source text ST and a transformed source text ST' whose meaning is unchanged but whose translation output is more accurate. Through this method, it is possible to gain explainability into the internal workings of a machine translation algorithm, which are otherwise as invisible as a black box. In our experiments, we were able to explore the factors that caused translation errors by analyzing differences in the attention paths of key words. The study used the XLM-RoBERTa multilingual NLP model provided by exBERT for Self-Attention visualization and applied the method to two examples: Korean-Chinese and Korean-English translation.
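
For readers who want to reproduce the raw material behind such a comparison, the sketch below extracts self-attention tensors from XLM-RoBERTa with the HuggingFace transformers library; exBERT's interactive visualization is not reproduced, and the two Korean sentences standing in for ST and ST' are hypothetical examples.

```python
# Minimal sketch (assuming the HuggingFace transformers library): extract the
# self-attention weights of XLM-RoBERTa for two source variants ST and ST'.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base", output_attentions=True)

def attention_maps(sentence: str):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    # outputs.attentions: one [batch, heads, seq, seq] tensor per layer
    return tokens, torch.stack(outputs.attentions).squeeze(1)

tokens_st, attn_st = attention_maps("나는 학교에 갔다")          # ST (hypothetical)
tokens_st2, attn_st2 = attention_maps("나는 학교에 다녀왔다")    # ST' (hypothetical)
print(attn_st.shape)   # [layers, heads, seq_len, seq_len]
```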

한-베 기계번역에서 한국어 분석기 (UTagger)의 영향 (Effect of Korean Analysis Tool (UTagger) on Korean-Vietnamese Machine Translations)

  • 원광복;옥철영
    • 한국정보과학회 언어공학연구회:학술대회논문집(한글 및 한국어 정보처리) / 한국정보과학회언어공학연구회 2017년도 제29회 한글 및 한국어 정보처리 학술대회 / pp. 184-189 / 2017
  • With the advent of robust deep learning methods, neural machine translation (NMT) has recently become the dominant paradigm and has achieved adequate results for translation between well-resourced languages such as English, German, and Spanish. However, its results for the under-resourced pair Korean-Vietnamese are still limited. This paper reports an attempt at constructing a bidirectional Korean-Vietnamese NMT system with the support of the Korean analysis tool UTagger, which performs morphological analysis, POS tagging, and word sense disambiguation (WSD). Experimental results demonstrate that UTagger can significantly improve the translation quality of the Korean-Vietnamese NMT system in both translation directions: it improves the BLEU score by approximately 15 points for the Korean-to-Vietnamese direction and by 3.12 points for the reverse direction.
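
The abstract does not expose UTagger's API, so the sketch below only illustrates the preprocessing idea: the raw Korean source is replaced by morpheme-segmented, sense-tagged tokens before being fed to the NMT system. The utagger_analyze function is a hypothetical placeholder, and its output tags are invented for illustration.

```python
# Hedged sketch of the preprocessing idea only. `utagger_analyze` is a
# hypothetical placeholder for UTagger (whose real API is not shown in the
# abstract); the morphemes and tags below are illustrative.
def utagger_analyze(sentence: str) -> list[tuple[str, str]]:
    """Placeholder for UTagger: returns (morpheme, tag) pairs."""
    return [("나", "NP"), ("는", "JX"), ("학교", "NNG_01"), ("에", "JKB"),
            ("가", "VV_01"), ("았", "EP"), ("다", "EF")]

def to_nmt_tokens(sentence: str) -> list[str]:
    # Morpheme-segmented, sense-tagged tokens replace raw eojeol tokens;
    # this richer source representation is what reportedly helps the NMT system.
    return [f"{morph}/{tag}" for morph, tag in utagger_analyze(sentence)]

print(to_nmt_tokens("나는 학교에 갔다"))
# ['나/NP', '는/JX', '학교/NNG_01', '에/JKB', '가/VV_01', '았/EP', '다/EF']
```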

구절 변환을 위한 한영 동사 사전 구성 (The Construction of Korean-to-English Verb Dictionary for Phrase-to-Phrase Translations)

  • 옥철영;김영택
    • 한국정보과학회 언어공학연구회:학술대회논문집(한글 및 한국어 정보처리) / 한국정보과학회언어공학연구회 1991년도 제3회 한글 및 한국어정보처리 학술대회 / pp. 44-57 / 1991
  • In transfer-based machine translation, the complexity of the transfer phase and the quality of the translation are determined by the kinds of information the transfer dictionary provides and by how precise that information is. Human translation is accurate and natural because it exploits the phrase-oriented translation information found in bilingual dictionaries. This paper proposes a Korean-to-English verb transfer dictionary that makes the various kinds of phrase-oriented translation information of bilingual dictionaries usable by a Korean-to-English machine translation system. First, the proposed dictionary provides the criteria by which the English equivalent of a verb is selected in phrase-oriented translation, so that the target verb can be chosen effectively during transfer without additional semantic analysis. Second, it provides the concrete syntactic structure that each English equivalent takes, which resolves differences in expression between the two languages while reducing the complexity of multi-step structural transfer.
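
The dictionary design described above can also be sketched as data. The speculative example below shows what one entry of such a Korean-to-English verb transfer dictionary might look like: each Korean verb maps to English equivalents guarded by a selection condition (here, the semantic class of the object noun) and paired with the syntactic pattern the English equivalent takes. All field names, classes, and patterns are hypothetical, not taken from the paper.

```python
# Speculative sketch of one verb transfer dictionary entry and its lookup.
VERB_TRANSFER_DICT = {
    "먹다": [
        {"condition": {"object_class": "food"},
         "target_verb": "eat",
         "pattern": "SUBJ eat OBJ"},
        {"condition": {"object_class": "medicine"},
         "target_verb": "take",
         "pattern": "SUBJ take OBJ"},
    ],
}

def select_target_verb(korean_verb: str, object_class: str) -> dict:
    """Pick the English equivalent whose condition matches the object class."""
    for entry in VERB_TRANSFER_DICT.get(korean_verb, []):
        if entry["condition"].get("object_class") == object_class:
            return entry
    raise KeyError(f"no transfer entry for {korean_verb}/{object_class}")

print(select_target_verb("먹다", "medicine")["target_verb"])  # take
```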

Simultaneous neural machine translation with a reinforced attention mechanism

  • Lee, YoHan;Shin, JongHun;Kim, YoungKil
    • ETRI Journal / Vol. 43, No. 5 / pp. 775-786 / 2021
  • To translate in real time, a simultaneous translation system should determine when to stop reading source tokens and generate target tokens corresponding to the partial source sentence read up to that point. However, conventional attention-based neural machine translation (NMT) models cannot produce translations with adequate latency in online scenarios because they wait until a source sentence is completed before computing the alignment between source and target tokens. To address this issue, we propose a reinforcement learning (RL)-based attention mechanism, the reinforced attention mechanism, which allows a neural translation model to jointly train the stopping criterion and a partial translation model. The proposed attention mechanism comprises two modules, one to ensure translation quality and the other to address latency. Unlike previous RL-based simultaneous translation systems, which learn the stopping criterion from a fixed NMT model, these modules can be trained jointly with a novel reward function. In our experiments, the proposed model achieves better translation quality and comparable latency relative to previous models.
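
To make the read/write decision concrete, the toy sketch below runs a simultaneous-decoding loop in which a policy chooses, step by step, whether to READ another source token or WRITE a target token. A simple wait-k rule stands in for the paper's learned RL policy, and placeholder target tokens take the place of a real partial NMT decoder.

```python
# Toy simultaneous-decoding loop: a policy decides whether to READ another
# source token or WRITE a target token. The wait-k rule stands in for the
# paper's learned RL policy; target tokens are placeholders.
READ, WRITE = "READ", "WRITE"

def wait_k_policy(num_read: int, num_written: int, k: int = 3) -> str:
    """Stand-in policy: write only once k source tokens are ahead of the output."""
    return WRITE if num_read - num_written >= k else READ

def simultaneous_decode(source_tokens: list[str], k: int = 3) -> list[str]:
    num_read, target = 0, []
    while True:
        action = wait_k_policy(num_read, len(target), k)
        if action == READ and num_read < len(source_tokens):
            num_read += 1                        # consume one more source token
        else:
            # a real system would generate this token with the partial NMT model
            target.append(f"tgt_{len(target)}")
            # toy stopping rule: finish once the whole source has been read and
            # the output is as long as the source
            if num_read == len(source_tokens) and len(target) >= len(source_tokens):
                return target

print(simultaneous_decode(["나는", "어제", "학교에", "갔다"]))
```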