• Title/Summary/Keyword: Translation-Based Language Model


Simultaneous neural machine translation with a reinforced attention mechanism

  • Lee, YoHan;Shin, JongHun;Kim, YoungKil
    • ETRI Journal / v.43 no.5 / pp.775-786 / 2021
  • To translate in real time, a simultaneous translation system should determine when to stop reading source tokens and generate target tokens corresponding to the partial source sentence read up to that point. However, conventional attention-based neural machine translation (NMT) models cannot produce translations with adequate latency in online scenarios because they wait until a source sentence is completed to compute alignment between the source and target tokens. To address this issue, we propose a reinforcement learning (RL)-based attention mechanism, the reinforced attention mechanism, which allows a neural translation model to jointly train the stopping criterion and a partial translation model. The proposed attention mechanism comprises two modules, one to ensure translation quality and the other to address latency. Unlike previous RL-based simultaneous translation systems, which learn the stopping criterion from a fixed NMT model, the modules can be trained jointly with a novel reward function. In our experiments, the proposed model achieves better translation quality than previous models with comparable latency.
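
As background to the read/write decision described above, the sketch below illustrates the generic simultaneous-decoding loop in which a policy decides, after each incoming source token, whether to keep reading or to emit target tokens from the current prefix. The callables `translate_prefix` and `should_write` are hypothetical placeholders; the paper trains its stopping criterion with reinforcement learning rather than the simple policy interface suggested here.

```python
# Minimal sketch of a simultaneous (READ/WRITE) decoding loop.
# `translate_prefix` and `should_write` are hypothetical stand-ins for the
# partial translation model and the learned stopping criterion.
from typing import Callable, List

def simultaneous_decode(
    source_stream: List[str],
    translate_prefix: Callable[[List[str], List[str]], str],
    should_write: Callable[[List[str], List[str]], bool],
) -> List[str]:
    read: List[str] = []      # source tokens consumed so far
    target: List[str] = []    # target tokens emitted so far
    for token in source_stream:
        read.append(token)                       # READ one more source token
        while should_write(read, target):        # policy says: stop reading
            target.append(translate_prefix(read, target))  # WRITE one token
    while target[-1:] != ["<eos>"]:              # flush once the source ends
        target.append(translate_prefix(read, target))
    return target
```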

Retrieval Model Based on Word Translation Probabilities and the Degree of Association of Query Concept (어휘 번역확률과 질의개념연관도를 반영한 검색 모델)

  • Kim, Jun-Gil;Lee, Kyung-Soon
    • The KIPS Transactions:PartB / v.19B no.3 / pp.183-188 / 2012
  • One of the major challenges for retrieval performance in information retrieval is the word mismatch between users' queries and documents. To address this problem, we propose a retrieval model that incorporates the degree of association of the query concept and word translation probabilities into a translation-based model. The word translation probabilities are calculated from pairs consisting of a sentence and its succeeding sentence. To validate the proposed method, we experimented on the TREC AP test collection. The experimental results show that the proposed model achieved a significant improvement over the language model and outperformed the translation-based language model.
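
For readers unfamiliar with translation-based retrieval models, the sketch below shows how a document is typically scored against a query when each query word may be "generated" by translating any document word. The Dirichlet smoothing and the parameter `mu` are common defaults assumed here, not details taken from the paper.

```python
import math
from collections import Counter
from typing import Dict, List, Tuple

def translation_lm_score(
    query: List[str],
    doc: List[str],
    trans_prob: Dict[Tuple[str, str], float],   # P(query_word | doc_word)
    collection_prob: Dict[str, float],          # background P(word) over the collection
    mu: float = 1000.0,                         # Dirichlet smoothing parameter (assumed)
) -> float:
    """Log P(query | document) under a translation-based language model:
    each query word may be 'translated' from any document word."""
    doc_tf = Counter(doc)
    doc_len = len(doc)
    score = 0.0
    for q in query:
        # probability of generating q by translating some document word t
        p_translate = sum(
            trans_prob.get((q, t), 0.0) * tf / doc_len
            for t, tf in doc_tf.items()
        )
        # Dirichlet smoothing with the collection model
        p = (doc_len * p_translate + mu * collection_prob.get(q, 1e-9)) / (doc_len + mu)
        score += math.log(max(p, 1e-12))
    return score
```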

Performance Analysis Using a DNN-Based Sign Language Translation Model (DNN 기반 수어 번역 모델을 통한 성능 분석)

  • Min-Jae Jeong;Soong-Hwan Ro;Jun-Ki Hong
    • The Journal of Bigdata / v.9 no.1 / pp.187-196 / 2024
  • In this study, we propose a DNN (Deep Neural Network)-based sign language translation model that can significantly reduce training time by compressing sign language coordinates. We compared and analyzed the accuracy and training time of the model with and without sign language coordinate compression. The results of using the proposed model for sign language translation showed that while the accuracy decreased by approximately 5.9% after compressing the sign language video, the training time was reduced by 56.57%, indicating a substantial gain in training efficiency compared to the loss in translation accuracy.
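
The abstract does not specify the compression scheme, so the sketch below only illustrates one plausible way to shrink a keypoint sequence before training (temporal subsampling plus coordinate normalization); the method, sizes, and sampling factor are assumptions, not the paper's procedure.

```python
import numpy as np

def compress_keypoints(
    keypoints: np.ndarray,      # shape: (frames, joints, 2) pixel coordinates
    keep_every: int = 4,        # temporal downsampling factor (assumed)
) -> np.ndarray:
    """Illustrative compression of a sign-language keypoint sequence:
    subsample frames and normalize coordinates to [0, 1].
    The actual compression scheme used in the paper may differ."""
    sampled = keypoints[::keep_every]
    mins = sampled.min(axis=(0, 1), keepdims=True)
    maxs = sampled.max(axis=(0, 1), keepdims=True)
    return (sampled - mins) / np.maximum(maxs - mins, 1e-8)

# Example: a 120-frame clip with 137 tracked joints shrinks to 30 frames.
clip = np.random.rand(120, 137, 2) * 1920
compressed = compress_keypoints(clip)
print(compressed.shape)  # (30, 137, 2)
```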

The Construction of a German-Korean Machine Translation System for Nominal Phrases (독-한 명사구 기계번역시스템의 구축)

  • Lee, Minhaeng;Choi, Sung-Kwon;Choi, Kyung-Eun
    • Language and Information / v.2 no.1 / pp.79-105 / 1998
  • This paper describes a German-Korean machine translation system for nominal phrases. We also pursue two subgoals. First, we reveal linguistic differences between the two languages and propose a language-informational method to overcome these differences. The method is based on an integrated model of translation knowledge, an efficient information structure, and concordance selection. Second, we present statistical results from a translation experiment and its evaluation as evidence for the adequacy of our linguistic method and of the translation system itself.

Translation: Mapping and Evaluation (번역: 대응과 평가)

  • 장석진
    • Language and Information / v.2 no.1 / pp.1-41 / 1998
  • Evaluation of multilingual translation fundamentally involves measuring meaning equivalences between the formally mapped discourses/texts of the SL (source language) and TL (target language), both represented by a metalanguage called the IL (interlingua). Unlike the usual uni-directional MT (machine translation) model (e.g., SL $\rightarrow$ analysis $\rightarrow$ transfer $\rightarrow$ generation $\rightarrow$ TL), a bi-directional (by 'negotiation') model (i.e., SL $\rightarrow$ IL/S $\leftrightarrow$ IL $\leftrightarrow$ IL/T $\leftarrow$ TL) is proposed here for the purpose of evaluating multilingual, not merely bilingual, translation. The IL, as conceived of in this study, is an English-based predicate logic represented in the framework of MRS (minimal recursion semantics), an MT-oriented offshoot of HPSG (Head-driven Phrase Structure Grammar). In addition, a list of semantic and pragmatic checkpoints is set up, some optional depending on the kind and use of the translation, so as to make the evaluation of translation fine-grained by computing the matching or mismatching of such checkpoints.

Sign2Gloss2Text-based Sign Language Translation with Enhanced Spatial-temporal Information Centered on Sign Language Movement Keypoints (수어 동작 키포인트 중심의 시공간적 정보를 강화한 Sign2Gloss2Text 기반의 수어 번역)

  • Kim, Minchae;Kim, Jungeun;Kim, Ha Young
    • Journal of Korea Multimedia Society / v.25 no.10 / pp.1535-1545 / 2022
  • Sign language can convey completely different meanings depending on hand direction or changes in facial expression, even for the same gesture. In this respect, it is crucial to capture the spatial-temporal structure information of each movement. However, sign language translation studies based on Sign2Gloss2Text convey only comprehensive spatial-temporal information about the entire sign language movement, so the detailed information (facial expressions, gestures, etc.) of each movement that is important for sign language translation is not emphasized. Accordingly, in this paper, we propose Spatial-temporal Keypoints Centered Sign2Gloss2Text Translation, named STKC-Sign2Gloss2Text, to supplement the sequential and semantic information of keypoints, which are the core of recognizing and translating sign language. STKC-Sign2Gloss2Text consists of two steps: Spatial Keypoints Embedding, which extracts 121 major keypoints from each image, and Temporal Keypoints Embedding, which emphasizes sequential information using a Bi-GRU over the extracted sign language keypoints. The proposed model outperformed the Sign2Gloss2Text baseline on all Bilingual Evaluation Understudy (BLEU) scores on the development (DEV) and test (TEST) sets; in particular, it achieved a TEST BLEU-4 of 23.19, an improvement of 1.87, demonstrating the effectiveness of the proposed methodology.
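
The two embedding steps can be pictured with the minimal PyTorch module below: a linear layer flattens the 121 per-frame keypoints (Spatial Keypoints Embedding) and a bidirectional GRU adds sequential context (Temporal Keypoints Embedding). Hidden sizes, coordinate dimensionality, and the projection layer are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TemporalKeypointEncoder(nn.Module):
    """Illustrative Bi-GRU encoder over per-frame keypoint features,
    loosely following the Temporal Keypoints Embedding idea.
    Sizes (121 keypoints, 2D coords, hidden 256) are assumptions."""
    def __init__(self, num_keypoints: int = 121, coord_dim: int = 2, hidden: int = 256):
        super().__init__()
        self.proj = nn.Linear(num_keypoints * coord_dim, hidden)    # spatial embedding
        self.bigru = nn.GRU(hidden, hidden, batch_first=True,
                            bidirectional=True)                     # temporal embedding

    def forward(self, keypoints: torch.Tensor) -> torch.Tensor:
        # keypoints: (batch, frames, num_keypoints, coord_dim)
        b, t = keypoints.shape[:2]
        x = self.proj(keypoints.view(b, t, -1))
        out, _ = self.bigru(x)          # (batch, frames, 2 * hidden)
        return out

# Example: a batch of 4 clips, 64 frames each, 121 keypoints in 2D.
features = TemporalKeypointEncoder()(torch.randn(4, 64, 121, 2))
print(features.shape)  # torch.Size([4, 64, 512])
```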

Korean Text to Gloss: Self-Supervised Learning approach

  • Thanh-Vu Dang;Gwang-hyun Yu;Ji-yong Kim;Young-hwan Park;Chil-woo Lee;Jin-Young Kim
    • Smart Media Journal / v.12 no.1 / pp.32-46 / 2023
  • Natural Language Processing (NLP) has grown tremendously in recent years. Bilingual and multilingual translation models have been deployed widely in machine translation and have gained vast attention from the research community. In contrast, few studies have focused on translating between spoken and sign languages, especially for non-English languages. Prior work on Sign Language Translation (SLT) has shown that a mid-level sign gloss representation enhances translation performance. Therefore, this study presents a new large-scale Korean sign language dataset, the Museum-Commentary Korean Sign Gloss (MCKSG) dataset, comprising 3828 pairs of Korean sentences and their corresponding sign glosses used in museum-commentary contexts. In addition, we propose a translation framework based on self-supervised learning, in which the pretext task is text-to-text translation from a Korean sentence to its back-translated versions; the pre-trained network is then fine-tuned on the MCKSG dataset. Using self-supervised learning helps overcome the shortage of sign language data. In our experiments, the proposed model outperforms a baseline BERT model by 6.22%.
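
A minimal sketch of the self-supervised setup described above: build pretext pairs from Korean sentences and their round-trip back-translations, pre-train on those pairs, then fine-tune on sentence-gloss pairs from MCKSG. The translation callables and the two-stage recipe shown here are illustrative assumptions, not the paper's exact pipeline.

```python
from typing import Callable, List, Tuple

def build_pretext_pairs(
    sentences: List[str],
    to_english: Callable[[str], str],
    to_korean: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Illustrative pretext data: pair each Korean sentence with its
    round-trip back-translation (Korean -> English -> Korean).
    The translation functions are placeholders for any MT system."""
    return [(s, to_korean(to_english(s))) for s in sentences]

# Stage 1 (assumed): pre-train a seq2seq model on (sentence, back-translation)
# pairs so it learns Korean sentence structure without gloss labels.
# Stage 2 (assumed): fine-tune the same model on (sentence, gloss) pairs
# from the MCKSG dataset for the text-to-gloss task.
```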

Question Classification Based on Word Association for Question and Answer Archives (질문대답 아카이브에서 어휘 연관성을 이용한 질문 분류)

  • Jin, Xueying;Lee, Kyung-Soon
    • The KIPS Transactions:PartB / v.17B no.4 / pp.327-332 / 2010
  • Word mismatch is the most significant problem causing low performance in question classification, because questions consist of only two or three words that can be expressed in many different ways. It is therefore necessary to incorporate word association into question classification. In this paper, we propose a question classification method using a translation-based language model, which uses word translation probabilities learned from question-question pairs within the same category. In our experiments, we show that translation probabilities learned from question-question pairs in the same category are more effective than those learned from question-answer pairs over the whole collection.
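
Word translation probabilities of this kind are commonly estimated with an IBM Model 1 style EM procedure over the paired texts (here, question-question pairs from the same category). The sketch below shows that standard estimation; the paper's exact procedure may differ.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def estimate_translation_probs(
    pairs: List[Tuple[List[str], List[str]]],   # (target_words, source_words) pairs
    iterations: int = 10,
) -> Dict[Tuple[str, str], float]:
    """IBM Model 1 style EM estimate of P(target_word | source_word) from
    parallel word sequences. A common way to obtain the word translation
    probabilities used by translation-based language models; the paper's
    exact estimation procedure may differ."""
    prob = defaultdict(lambda: 1e-3)            # near-uniform initialization
    for _ in range(iterations):
        count = defaultdict(float)              # expected co-occurrence counts
        total = defaultdict(float)              # normalizers per source word
        for tgt, src in pairs:
            for t in tgt:
                norm = sum(prob[(t, s)] for s in src)
                for s in src:
                    c = prob[(t, s)] / norm     # E-step: fractional count
                    count[(t, s)] += c
                    total[s] += c
        prob = defaultdict(lambda: 1e-3,        # M-step: renormalize
                           {(t, s): c / total[s] for (t, s), c in count.items()})
    return dict(prob)
```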

The Verification of the Transfer Learning-based Automatic Post Editing Model (전이학습 기반 기계번역 사후교정 모델 검증)

  • Moon, Hyeonseok;Park, Chanjun;Eo, Sugyeong;Seo, Jaehyung;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.10 / pp.27-35 / 2021
  • Automatic post editing (APE) is a research field that aims to automatically correct errors in machine translation results. Research in this area has mainly focused on high-resource language pairs such as English-German. Recent APE studies mainly adopt transfer learning, utilizing pre-trained language models or translation models generated through self-supervised learning methodologies. While translation-based APE models show superior performance in recent studies, those studies were conducted on high-resource languages, so their findings cannot be directly applied to low-resource languages. In this work, we apply two transfer learning strategies to Korean-English APE and show that transfer learning with a translation model can significantly improve APE performance.
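
As a rough illustration of translation-model-based APE with transfer learning, the sketch below formats (source, MT output, post-edit) triplets as seq2seq training pairs and fine-tunes a pre-trained translation model on them. The separator token, data class, and `finetune` callable are hypothetical; the paper's exact recipe is not reproduced here.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class ApeExample:
    source: str        # original source sentence
    mt_output: str     # raw machine translation to be corrected
    post_edit: str     # human-corrected reference translation

def to_seq2seq_pairs(examples: List[ApeExample]) -> List[Tuple[str, str]]:
    """Illustrative APE formatting: concatenate source and MT output as the
    encoder input, with the post-edited sentence as the decoder target.
    The separator token and overall recipe are assumptions, not the paper's."""
    return [(f"{ex.source} <sep> {ex.mt_output}", ex.post_edit) for ex in examples]

def transfer_learn(pretrained_translation_model: object,
                   finetune: Callable, examples: List[ApeExample]):
    """Start from a pre-trained translation model (transfer learning) and
    fine-tune it on the much smaller APE triplet data."""
    return finetune(pretrained_translation_model, to_seq2seq_pairs(examples))
```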

A Survey of Machine Translation and Parts of Speech Tagging for Indian Languages

  • Khedkar, Vijayshri;Shah, Pritesh
    • International Journal of Computer Science & Network Security / v.22 no.4 / pp.245-253 / 2022
  • Commenced by IBM in 1954, machine translation has expanded immensely, particularly in recent years. Machine translation can be broken into seven main steps: token generation, morphological analysis, lexeme identification, part-of-speech tagging, chunking, parsing, and word-sense disambiguation. Morphological analysis plays a major role in translating Indian languages, underpinning accurate part-of-speech taggers and word-sense resolution. The paper presents various machine translation methods used by different researchers for Indian languages, along with their performance and drawbacks. Further, the paper concentrates on part-of-speech (POS) tagging for Marathi using various methods such as rule-based tagging, unigram and bigram models, and more. After careful study, it is concluded that POS tagging is a major step in machine translation. For the Marathi language, the Hidden Markov Model gives the best results for POS tagging, with an accuracy of 93%, which can be further improved depending on the dataset.
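
For reference, HMM-based POS tagging of the kind reported for Marathi is usually decoded with the Viterbi algorithm; the sketch below shows that standard decoding step with placeholder probability tables (transition, emission, and start probabilities would be estimated from a tagged corpus).

```python
import math
from typing import Dict, List, Tuple

def viterbi_pos_tag(
    words: List[str],
    tags: List[str],
    trans: Dict[Tuple[str, str], float],   # P(tag_i | tag_{i-1})
    emit: Dict[Tuple[str, str], float],    # P(word | tag)
    start: Dict[str, float],               # P(tag at position 0)
) -> List[str]:
    """Viterbi decoding for an HMM part-of-speech tagger: the most likely
    tag sequence given transition and emission probabilities estimated
    from a tagged corpus. The probability tables here are placeholders."""
    def logp(p: float) -> float:
        return math.log(p) if p > 0 else float("-inf")

    # score[t] = best log-probability of any tag path ending in tag t
    score = {t: logp(start.get(t, 0)) + logp(emit.get((words[0], t), 0)) for t in tags}
    back: List[Dict[str, str]] = []
    for w in words[1:]:
        new_score, pointers = {}, {}
        for t in tags:
            best_prev = max(tags, key=lambda p: score[p] + logp(trans.get((p, t), 0)))
            new_score[t] = (score[best_prev] + logp(trans.get((best_prev, t), 0))
                            + logp(emit.get((w, t), 0)))
            pointers[t] = best_prev
        score, back = new_score, back + [pointers]
    # follow back-pointers from the best final tag
    path = [max(tags, key=score.get)]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))
```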