• Title/Summary/Keyword: language translation

559 search results

Development of Revised Korean Version of ICF (ICF 한글개정판 개발)

  • Lee, Haejung; Song, Jumin
    • The Journal of Korean Physical Therapy / v.26 no.5 / pp.344-350 / 2014
  • Purpose: The purpose of this study was to translate and culturally adapt the International Classification of Functioning, Disability and Health (ICF) into the Korean language. Methods: The translation and adaptation of the ICF followed the translation guidelines of the WHO. The procedure comprised four steps: forward translation, expert panel back-translation, pre-testing and cognitive interviewing, and final adaptation. The translators included health professionals with knowledge of the ICF and non-health professionals blinded to the ICF. Clinical academics with significant experience in the use of disability surveys, medical doctors, special educators, related policy makers, clinicians, architecture professionals, and international experts in the ICF were invited to integrate all versions of the ICF for testing; 151 clinicians volunteered from 19 medical institutes across the country. Four different core sets and a questionnaire were used to test its practical usability and adaptation. Results: All translations were reviewed and a consensus was reached on any discrepancy from the earlier versions. Over 90% of the newly translated K-ICF was found to differ in wording from the 2004 K-ICF version. Half of the participants rated the K-ICF language as difficult or very difficult to understand, whereas more than 50% rated its practical use as 'useful'. Conclusion: The new version of the K-ICF should be widely used for final adaptation in the relevant fields. Future studies will be required for the implementation of the K-ICF.

Question Classification Based on Word Association for Question and Answer Archives (질문대답 아카이브에서 어휘 연관성을 이용한 질문 분류)

  • Jin, Xueying; Lee, Kyung-Soon
    • The KIPS Transactions: Part B / v.17B no.4 / pp.327-332 / 2010
  • Word mismatch is the most significant cause of low performance in question classification, since questions consist of only two or three words that can be expressed in many different ways. It is therefore necessary to exploit word association in question classification. In this paper, we propose a question classification method using a translation-based language model, which uses word translation probabilities learned from question-question pairs in the same category. In our experiments, we show that translation probabilities learned from question-question pairs in the same category are more effective than those learned from question-answer pairs over the whole collection.
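
The core quantity in a translation-based language model is the probability that a word in one question "translates" to a word in a paired question from the same category. As a minimal sketch (the paper learns these probabilities with IBM-model-style training; the plain co-occurrence normalization and the sample questions below are simplifying assumptions):

```python
from collections import defaultdict

def translation_probs(question_pairs):
    """Estimate P(target word | source word) from paired questions by
    normalizing word co-occurrence counts -- a simplification of the
    IBM-model-style training used in translation-based language models."""
    counts = defaultdict(lambda: defaultdict(float))
    for q1, q2 in question_pairs:
        for s in q1.split():
            for t in q2.split():
                counts[s][t] += 1.0
    probs = {}
    for s, row in counts.items():
        total = sum(row.values())
        probs[s] = {t: c / total for t, c in row.items()}
    return probs

# Hypothetical question-question pairs drawn from the same category
pairs = [("laptop battery drains fast", "notebook battery life short"),
         ("laptop screen flickers", "notebook display flickers")]
p = translation_probs(pairs)
print(p["laptop"]["notebook"])  # association between the two synonyms
```

Because the pairs come from one category, near-synonyms such as "laptop" and "notebook" accumulate probability mass, which is exactly the word-mismatch bridge the abstract describes.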

A Bidirectional Korean-Japanese Statistical Machine Translation System by Using MOSES (MOSES를 이용한 한/일 양방향 통계기반 자동 번역 시스템)

  • Lee, Kong-Joo; Lee, Song-Wook; Kim, Jee-Eun
    • Journal of Advanced Marine Engineering and Technology / v.36 no.5 / pp.683-693 / 2012
  • Recently, statistical machine translation (SMT) has received much attention owing to its ease of implementation and maintenance. The goal of our work is to build a bidirectional Korean-Japanese SMT system using the MOSES [1] system. We use a sentence-aligned Korean-Japanese bilingual corpus to train the translation model, and a large raw corpus in each language to train the two language models. The proposed system shows results comparable to those of a rule-based machine translation system. Most errors are caused by noise introduced at each processing stage.
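
The language-model half of an SMT pipeline like the one above is an n-gram model estimated from raw monolingual text. A maximum-likelihood bigram model makes the idea concrete (a toy stand-in for the dedicated language-model toolkits normally used with MOSES; the sample sentences are invented):

```python
from collections import Counter

def train_bigram_lm(sentences):
    """Maximum-likelihood bigram model: P(w2 | w1) = c(w1 w2) / c(w1),
    with <s> and </s> marking sentence boundaries."""
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(tokens[:-1])          # history counts
        bigrams.update(zip(tokens, tokens[1:]))
    return lambda w1, w2: bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

lm = train_bigram_lm(["the cat sat", "the cat ran"])
print(lm("the", "cat"))  # 1.0: "the" is always followed by "cat" here
print(lm("cat", "sat"))  # 0.5: "cat" is followed by "sat" half the time
```

During decoding the SMT system combines such language-model scores with translation-model scores to rank candidate outputs.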

Understanding recurrent neural network for texts using English-Korean corpora

  • Lee, Hagyeong; Song, Jongwoo
    • Communications for Statistical Applications and Methods / v.27 no.3 / pp.313-326 / 2020
  • Deep learning is the most important key to the development of artificial intelligence (AI). There are several distinguishable neural network architectures, such as the MLP, CNN, and RNN. Among them, we try to understand the Recurrent Neural Network (RNN), which differs from the other networks in its handling of sequential data, including time series and text. As one of the main recent tasks in Natural Language Processing (NLP), we consider Neural Machine Translation (NMT) using RNNs. We also summarize the fundamental structures of recurrent networks and some approaches to representing natural words as reasonable numeric vectors, and we organize the topics needed to understand the estimation procedure, from representing input source sequences to predicting target translated sequences. In addition, we apply multiple translation models with Gated Recurrent Units (GRUs) in Keras to English-Korean sentences comprising about 26,000 pairwise sequences in total from two different corpora, colloquial speech and news. We verified some crucial factors that influence the quality of training: loss decreases with more recurrent dimensions and with a bidirectional RNN in the encoder when dealing with short sequences. We also computed BLEU scores, the main measure of translation performance, and compared them with the scores from Google Translate on the same test sentences. We sum up some difficulties in training a proper translation model and in dealing with the Korean language. Using Keras in Python for the overall tasks, from processing raw text to evaluating the translation model, also allows us to draw on useful functions and vocabulary libraries.
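
The gating that distinguishes a GRU from a plain recurrent unit can be written out directly. The scalar sketch below (toy hand-picked weights, not trained parameters, and one common gate convention) shows the update gate z interpolating between the previous hidden state and the candidate state, which is what lets the network carry information across a sequence:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One GRU step for scalar input x and hidden state h.
    w holds toy weights for the update gate z, reset gate r,
    and candidate state h_tilde."""
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])           # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])           # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h) + w["bh"])
    return (1.0 - z) * h + z * h_tilde                         # interpolate

w = dict(wz=0.5, uz=0.5, bz=0.0, wr=0.5, ur=0.5, br=0.0,
         wh=1.0, uh=1.0, bh=0.0)
h = 0.0
for x in [1.0, -1.0, 0.5]:   # a short input sequence
    h = gru_step(x, h, w)
print(h)                     # hidden state stays bounded in (-1, 1)
```

In the paper's setting, Keras stacks many such units per layer and learns the weights from the English-Korean pairs; a bidirectional encoder simply runs a second GRU over the reversed source sequence.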

Character-Level Neural Machine Translation (문자 단위의 Neural Machine Translation)

  • Lee, Changki; Kim, Junseok; Lee, Hyoung-Gyu; Lee, Jaesong
    • Annual Conference on Human and Language Technology / 2015.10a / pp.115-118 / 2015
  • The Neural Machine Translation (NMT) model is an end-to-end machine translation model that uses only a single neural network. Compared with conventional Statistical Machine Translation (SMT) models, it achieves higher performance, requires no feature engineering, and has a simple decoder, because the single network plays the roles of both the translation model and the language model. However, NMT models have the drawback that training and decoding slow down in proportion to the size of the target vocabulary, which limits how large that vocabulary can be. In this paper, to address this target-vocabulary size limitation, we propose a method that reads (encodes) the input language at the word level and generates (decodes) the output language at the character level. Generating the output at the character level allows the NMT model's target vocabulary to contain every character, so the out-of-vocabulary (OOV) problem on the output side disappears, and the smaller target vocabulary speeds up training and decoding. Experimental results show that the proposed method outperforms conventional word-level NMT models on English-Japanese and Korean-Japanese machine translation.
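
The OOV effect of character-level decoding is easy to demonstrate: any word in the target language can be spelled from the character inventory, so an unseen word is only OOV for the word-level model. A quick sketch (on a realistic corpus the character inventory is also far smaller than the word vocabulary; the tiny invented corpus below only illustrates the OOV point):

```python
corpus = ["the cat sat on the mat", "machine translation is fun"]

# Word-level vs character-level target vocabularies
word_vocab = {w for line in corpus for w in line.split()}
char_vocab = {c for line in corpus for c in line}

# A word never seen in training is OOV for the word-level decoder,
# but every character it needs is already in the character inventory.
unseen = "translations"
print(unseen in word_vocab)                  # False: word-level OOV
print(all(c in char_vocab for c in unseen))  # True: char model can spell it
```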

The Selection and Effects of Contract Language in International Contract (국제계약에 있어서 계약언어의 선택과 효과)

  • Song Yang-Ho
    • Journal of Arbitration Studies / v.15 no.1 / pp.207-228 / 2005
  • When closing an international contract, both parties endeavor to convey their intentions from the stage of negotiation to the moment of signing the contract. Of the many problems related to contract language, the first to consider is which party will bear the risk of language deficiencies arising from misunderstanding and misinterpretation between different languages. The second is whether interpretation and translation of the contract language are needed and, if so, which party will bear the expenses and assume responsibility for misinterpretations in the translation. The third concerns the obligation to explain to both parties the contents and details of an international contract written in different languages. The fourth is which party's language becomes the standard contract language in arbitration proceedings. The fifth, though not the last, is how to resolve language defects in interpreting and translating the contract languages. These five problems can be readily resolved if the contract parties scrutinize and agree on the contract languages. This research, however, mainly focuses on the effects of the contract language and on how to define and select it.

A Quality Comparison of English Translations of Korean Literature between Human Translation and Post-Editing

  • Lee, Il-Jae
    • International Journal of Advanced Culture Technology / v.6 no.4 / pp.165-171 / 2018
  • As artificial intelligence (AI) plays a crucial role in machine translation (MT), which has loomed large as a new translation paradigm, concerns have arisen over whether MT can produce products of the same quality as human translation (HT). In fact, several experimental MT studies report cases in which the post-edited MT product (PE) rates as highly as HT, or often higher ([1],[2],[6]). Motivated by those studies of translation quality in HT and PE, this study set up an experiment in which Korean literature was translated into English by 3 translators and, in parallel, by 3 post-editors. Afterwards, a group of 3 other Koreans checked the accuracy of the HT and PE output, and a group of 3 English native speakers scored its fluency. The findings are: (1) HT took at least twice as long as PE. (2) Both HT and PE produced similar error types; mistranslation and omission were the major errors for accuracy, and grammar for fluency. (3) HT turned out to be inferior to PE in both accuracy and fluency.

Research on Recent Quality Estimation (최신 기계번역 품질 예측 연구)

  • Eo, Sugyeong; Park, Chanjun; Moon, Hyeonseok; Seo, Jaehyung; Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.7 / pp.37-44 / 2021
  • Quality estimation (QE) can evaluate the quality of machine translation output even for those who do not know the target language, and its high utility highlights the need for QE. A QE shared task is held every year at the Conference on Machine Translation (WMT), and recent work mainly applies Pretrained Language Models (PLMs). In this paper, we survey the QE task and its research trends, and we summarize the features of PLMs. In addition, we apply a multilingual BART model, which has not yet been utilized for QE, and compare it with existing approaches such as XLM, multilingual BERT, and XLM-RoBERTa. Our experiments show which PLM is most effective when applied to QE, and demonstrate the potential of applying the multilingual BART model to the QE task.
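
QE takes a (source sentence, MT output) pair and returns a quality score with no reference translation available. The paper does this with fine-tuned PLMs; the heuristic below is only a naive reference-free baseline built from a length ratio and overlap against an invented two-entry lexicon, included purely to make the task's input/output shape concrete:

```python
def naive_qe_score(source, mt_output, lexicon):
    """Toy reference-free QE: combine a length-ratio penalty with the
    fraction of source words whose known translation appears in the MT
    output. Real QE systems replace this with a fine-tuned PLM regressor."""
    src, mt = source.split(), mt_output.split()
    length_ratio = min(len(src), len(mt)) / max(len(src), len(mt))
    covered = sum(1 for w in src if any(t in mt for t in lexicon.get(w, [])))
    coverage = covered / len(src)
    return 0.5 * length_ratio + 0.5 * coverage  # score in [0, 1]

# Invented German-English lexicon, for illustration only
lexicon = {"katze": ["cat"], "hund": ["dog"]}
good = naive_qe_score("die katze", "the cat", lexicon)
bad = naive_qe_score("die katze", "the dog barks loudly", lexicon)
print(good, bad)  # the faithful output scores higher than the unrelated one
```

A PLM-based system keeps the same interface (sentence pair in, score out) but learns the scoring function end to end from annotated WMT QE data.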