• Title/Summary/Keyword: Accuracy of Machine Translation


A study on the measurement of rotary table error with 5-axis CNC machine (5축CNC공작기계의 회전테이블 오차 측정에 관한 연구)

  • SUH, S.H.;JUNG, S.Y.;LEE, E.S.
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.14 no.11
    • /
    • pp.84-92
    • /
    • 1997
  • The purpose of this study is to develop a geometric error model and a path compensation algorithm for the rotary axes of 5-axis machine tools, using a method that calibrates a rotary table with one master ball and three LVDTs. A new methodology was developed to measure the three translation errors of the rotary table, together with a compensation procedure for setup errors of the master ball. The method was experimentally verified using a ball-table and an on-machine inspection method. The results showed that the geometric error model, combined with the path compensation strategy, can be used in practice to improve the accuracy of machine tools with a rotary table.
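The ball-and-LVDT calibration above can be sketched in a few lines: when three mutually orthogonal LVDTs probe a master ball, each reading maps to one component of the ball-centre translation, and the once-per-revolution sinusoid caused by ball set-up eccentricity can be removed by a least-squares fit. This is a minimal illustration of the compensation idea under a uniform-angle-spacing assumption, not the paper's algorithm; the function names are ours.

```python
import math

def fit_eccentricity(angles_deg, readings):
    """Least-squares fit of r(t) = a*cos(t) + b*sin(t) + c.

    For angles uniformly spaced over one full revolution, the fit
    reduces to discrete Fourier sums, so no matrix solve is needed.
    """
    n = len(readings)
    th = [math.radians(t) for t in angles_deg]
    a = 2.0 / n * sum(r * math.cos(t) for r, t in zip(readings, th))
    b = 2.0 / n * sum(r * math.sin(t) for r, t in zip(readings, th))
    c = sum(readings) / n
    return a, b, c

def compensated(angles_deg, readings):
    """Subtract the fitted set-up-eccentricity sinusoid, leaving only
    the rotary-table translation error in each LVDT reading."""
    a, b, c = fit_eccentricity(angles_deg, readings)
    return [r - (a * math.cos(math.radians(t))
                 + b * math.sin(math.radians(t)) + c)
            for t, r in zip(angles_deg, readings)]
```

A pure eccentricity signal is removed exactly under this assumption, so any residual reflects the table's geometric error.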


An Analysis of Semantic Errors in Machine-Translated English Compositions by Korean EFL College Students

  • Baek, Ji-Yeon
    • International Journal of Advanced Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.71-76
    • /
    • 2022
  • The purpose of this research is to investigate the types of semantic errors made by MT in translating EFL college students' original drafts written in Korean into English. Specifically, this study attempts to find out 1) what types of semantic errors are most frequently committed by MT and 2) how students feel about the quality of the MT-produced output. The findings indicated that MT most frequently produced errors related to accuracy (47%), followed by errors related to fluency and ambiguity (14.6% each). Students were well aware of the accuracy and fluency errors but had limited ability to detect the ambiguity errors. Based on the findings, this study suggests pedagogical implications that can be implemented in L2 writing classrooms.

The Perception of Pre-service English Teachers' use of AI Translation Tools in EFL Writing (영작문 도구로서의 인공지능번역 활용에 대한 초등예비교사의 인식연구)

  • Jaeseok Yang
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.121-128
    • /
    • 2024
  • With the recent rise in the use of AI-based online translation tools, interest in how they are used in education and what effects they have has grown. This study involved 30 prospective elementary school teachers who completed an English writing task using an AI-based online translation tool. The study focused on assessing the impact of these tools on English writing skills and their practical applications, examining the usability, educational value, and advantages and disadvantages of the AI translation tool. Data collected via writing tests, surveys, and interviews revealed that the use of translation tools positively affects English writing skills. From the learners' perspective, the tools were perceived as providing support and convenience for learning. However, there was also recognition of the need for educational strategies to use these tools effectively, alongside concerns about how to improve the completeness and accuracy of translations and about potential over-reliance on the tools. The study concluded that for effective utilization of translation tools, well-designed educational strategies and the role of the teacher are crucial.

Sentiment Analysis to Evaluate Different Deep Learning Approaches

  • Sheikh Muhammad Saqib ;Tariq Naeem
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.83-92
    • /
    • 2023
  • The majority of product users rely on the reviews posted on the relevant websites, and both users and manufacturers can benefit from these reviews. Thousands of reviews are submitted daily; how can anyone read them all? Sentiment analysis has therefore become a critical field of research as posting reviews becomes more and more common. Supervised, unsupervised, and semi-supervised machine learning techniques have all been applied to harvest this data, but feature engineering remains a complicated and technical area of machine learning; with deep learning, this tedious process can be completed automatically. Numerous studies have examined deep learning models such as LSTM, CNN, RNN, and GRU, each typically suited to a certain type of data (e.g., CNN for images, LSTM for language translation). In experiments on a publicly accessible dataset of positive and negative reviews, CNN was identified as the best model among those compared, with an accuracy of 81%.
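The core operation that makes a CNN effective on review text is a filter sliding over token embeddings followed by max-over-time pooling, which picks out the strongest local n-gram feature. Below is a toy sketch in plain Python, not the evaluated model; the embeddings and filter values are invented for illustration.

```python
def conv1d_maxpool(embeddings, kernel):
    """Slide a width-len(kernel) filter over a sequence of token
    embeddings and keep the strongest response (max-over-time pooling).

    embeddings: list of equal-length vectors, one per token.
    kernel: list of vectors, one per position in the filter window.
    """
    width = len(kernel)
    dim = len(kernel[0])
    scores = []
    for i in range(len(embeddings) - width + 1):
        # Dot product of the filter with the current window of tokens.
        s = sum(kernel[j][d] * embeddings[i + j][d]
                for j in range(width) for d in range(dim))
        scores.append(s)
    return max(scores)
```

In a full classifier, many such filters feed a dense layer with a softmax over sentiment classes; this sketch shows only the single-filter response.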

Survey on Nucleotide Encoding Techniques and SVM Kernel Design for Human Splice Site Prediction

  • Bari, A.T.M. Golam;Reaz, Mst. Rokeya;Choi, Ho-Jin;Jeong, Byeong-Soo
    • Interdisciplinary Bio Central
    • /
    • v.4 no.4
    • /
    • pp.14.1-14.6
    • /
    • 2012
  • Splice site prediction in a DNA sequence is a basic search problem for finding exon/intron and intron/exon boundaries. Removing the introns and joining the exons together forms the mRNA sequence, which is the input of the translation process, a necessary step in the central dogma of molecular biology. The main task of splice site prediction is to find the sequences ending in GT and AG and then to distinguish the true from the false candidates among them. In this paper, we survey research on splice site prediction based on the support vector machine (SVM). The basic differences between these works lie in the nucleotide encoding technique and the SVM kernel selection: some methods encode the DNA sequence in a sparse way, whereas others encode it in a probabilistic manner. The encoded sequences serve as the input of the SVM, whose task is to classify them using its learning model. The accuracy of classification depends largely on the proper selection of a kernel for sequence data as well as of the kernel parameters. We examine each encoding technique and classify the techniques according to their similarity, and then discuss kernels and their parameter selection. Our survey provides a basic understanding of encoding approaches and of proper SVM kernel selection for splice site prediction.
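The "sparse" encoding the survey refers to is typically a one-hot scheme in which each base becomes a 4-dimensional indicator vector, and the GT/AG candidate search is a simple scan over the sequence. A minimal sketch, assuming the standard one-hot table; the function names are ours.

```python
def one_hot_encode(seq):
    """Sparse (one-hot) nucleotide encoding: each base maps to a
    4-dimensional indicator vector, flattened into one feature
    vector suitable as SVM input."""
    table = {'A': [1, 0, 0, 0], 'C': [0, 1, 0, 0],
             'G': [0, 0, 1, 0], 'T': [0, 0, 0, 1]}
    return [v for base in seq for v in table[base]]

def candidate_donor_sites(seq):
    """Positions of GT dinucleotides, the candidate donor (5')
    splice sites the classifier must label true or false."""
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "GT"]
```

An analogous scan for AG yields the acceptor-site candidates; the one-hot vectors of a window around each candidate form the SVM's input.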

Towards a Methodology for Evaluating English-to-Korean Machine Translation Systems (영-한 기계번역 성능 평가 연구)

  • 시정곤;김원경;고창수
    • Language and Information
    • /
    • v.4 no.2
    • /
    • pp.1-26
    • /
    • 2000
  • The purpose of this paper is to establish a standard method for evaluating English-to-Korean MT systems. We focus on test suites, the evaluation procedure, and evaluation results. Four commercial programs are tested on a test suite consisting of 1,501 sentences. The quality of translation and the capacity of the MT system are the key points for evaluation. The sentences in the suite are classified according to the grammatical properties they reveal; the classificatory scheme has the structure of a directory, in which each sentence belongs to a subclass, which in turn belongs to a major class. We place the sentences in the test suite on a scale of difficulty (hard, ordinary, easy), and each output sentence is graded on a scale of four accuracy levels. We also test the programs with respect to their speed.
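Grading every output sentence on a four-level accuracy scale within each difficulty class naturally yields a per-class mean grade per system. The sketch below assumes a hypothetical 0-3 grade scale; the paper's actual scoring scheme is not reproduced here.

```python
def mean_grade(results):
    """Average the 4-level accuracy grades (hypothetical 0-3 scale)
    per difficulty class, so a system gets one score for 'hard',
    one for 'ordinary', and one for 'easy' sentences.

    results: iterable of (difficulty, grade) pairs.
    """
    by_class = {}
    for difficulty, grade in results:
        by_class.setdefault(difficulty, []).append(grade)
    return {d: sum(g) / len(g) for d, g in by_class.items()}
```

Keeping the classes separate makes it visible when a system does well on easy sentences but degrades sharply on hard ones.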


Domain Adaptation Method for LHMM-based English Part-of-Speech Tagger (LHMM기반 영어 형태소 품사 태거의 도메인 적응 방법)

  • Kwon, Oh-Woog;Kim, Young-Gil
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.10
    • /
    • pp.1000-1004
    • /
    • 2010
  • A large number of current language processing systems use a part-of-speech tagger for preprocessing, and most require a tagger with the highest possible accuracy. In particular, exploiting domain-specific advantages has become a hot issue in the machine translation community as a way to improve translation quality. This paper addresses a method for customizing an HMM- or LHMM-based English tagger from a general domain to a specific domain. The proposed method semi-automatically customizes the output and transition probabilities of the HMM or LHMM using a domain-specific raw corpus. In experiments customizing to the patent domain, the LHMM tagger adapted by the proposed method shows a word tagging accuracy of 98.87% and a sentence tagging accuracy of 78.5%. Compared with the general tagger, it improves word tagging accuracy by 2.24% (ERR: 66.4%) and sentence tagging accuracy by 41.0% (ERR: 65.6%).
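One common way to customize HMM probabilities with a domain-specific raw corpus is to interpolate the general-domain estimate with a relative frequency counted in the new domain. This is a hedged sketch of that idea; the mixing weight and function name are illustrative, not the paper's semi-automatic procedure.

```python
def adapt_probability(general_prob, domain_count, domain_total, lam=0.7):
    """Linear interpolation between a general-domain HMM probability
    and a relative frequency estimated from domain-specific raw text.

    lam is a hypothetical mixing weight: lam=1.0 trusts only the
    domain counts, lam=0.0 keeps the general-domain model unchanged.
    """
    domain_prob = domain_count / domain_total if domain_total else 0.0
    return lam * domain_prob + (1 - lam) * general_prob
```

Applying this to every output and transition probability (with renormalization) shifts the tagger toward domain usage, e.g. patent-specific senses, while falling back to the general model for events unseen in the raw corpus.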

Intra-Sentence Segmentation using Maximum Entropy Model for Efficient Parsing of English Sentences (효율적인 영어 구문 분석을 위한 최대 엔트로피 모델에 의한 문장 분할)

  • Kim Sung-Dong
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.5
    • /
    • pp.385-395
    • /
    • 2005
  • Long sentence analysis has been a critical problem in machine translation because of its high complexity, and intra-sentence segmentation methods have been proposed to reduce parsing complexity. This paper presents an intra-sentence segmentation method based on a maximum entropy probability model that increases the coverage and accuracy of the segmentation. We construct the rules for choosing candidate segmentation positions by a learning method that uses the lexical context of words tagged as segmentation positions, and we build a model that assigns a probability to each candidate position. The lexical contexts are extracted from a corpus tagged with segmentation positions and are incorporated into the probability model. We construct training data from Wall Street Journal sentences and evaluate intra-sentence segmentation on sentences from four different domains. The experiments show about 88% accuracy and about 98% coverage of the segmentation. The proposed method also improves parsing efficiency by a factor of 4.8 in speed and 3.6 in space.
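For a binary decision (segment here or not), a maximum entropy model over lexical-context features reduces to a logistic function of the summed feature weights. The feature names, weights, and threshold below are invented for illustration; they are not the trained model from the paper.

```python
import math

def maxent_score(features, weights):
    """Binary maximum-entropy (logistic) probability that a candidate
    position is a true segmentation point, given its active
    lexical-context features."""
    z = sum(weights.get(f, 0.0) for f in features)
    return 1.0 / (1.0 + math.exp(-z))

def segment_positions(candidates, weights, threshold=0.5):
    """Keep the candidate positions whose model probability clears
    the threshold; candidates is a list of (position, features)."""
    return [i for i, feats in candidates
            if maxent_score(feats, weights) >= threshold]
```

The parser then analyzes each resulting segment separately, which is where the reported speed and space savings come from.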

Ontology-based Machine Translation Mashup System for Public Information (온톨로지 기반 공공정보 번역 매쉬업 시스템)

  • Oh, Kyeong-Jin;Kwon, Kee-Young;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.8
    • /
    • pp.21-29
    • /
    • 2012
  • We propose an ontology-based translation mashup system that lets foreigners enjoy Korean cultural information without a language barrier. To utilize public information, we use the mobile public information open API of the Seoul metropolitan city, and the Google AJAX Language API is used to translate the public information. We apply an ontology to minimize translation errors. For ontology modeling, we analyze the public information domain and define the classes, relations, and properties of a cultural vocabulary ontology. We generate ontology instances for titles, places, and sponsors, which account for the most frequent translation errors. We compare translation accuracy experimentally, and the results verify the validity of the proposed ontology-based translation mashup system.
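The ontology's role can be sketched as a term-level override: proper nouns covered by an ontology instance bypass the generic translator, which is where most of the errors for titles, places, and sponsors arise. The entries and the mt_translate callback below are hypothetical, standing in for the cultural-vocabulary ontology and the translation API.

```python
def translate_term(term, ontology, mt_translate):
    """Return the curated translation for a term if an ontology
    instance covers it; otherwise fall back to the generic machine
    translator (a callback supplied by the caller)."""
    return ontology.get(term) or mt_translate(term)
```

Example: with `ontology = {"경복궁": "Gyeongbokgung Palace"}`, the place name is rendered from the ontology instance, while uncovered text still flows through the MT service.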

Translation of Korean Object Case Markers to Mongolian's Suffixes (한국어 목적격조사의 몽골어 격 어미 번역)

  • Setgelkhuu, Khulan;Shin, Joon Choul;Ock, Cheol Young
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.2
    • /
    • pp.79-88
    • /
    • 2019
  • Machine translation (MT) systems, especially Korean-Mongolian MT systems, have recently attracted much attention because of the demands of globalization. Korean and Mongolian share the same SOV sentence structure, and changing their word order does not change the meaning of a sentence, thanks to postpositional particles: particles are attached behind words to indicate their grammatical relationship to the clause or to make their meaning more specific. Hence, particles play an important role in translation between Korean and Mongolian. However, one Korean particle can be translated into several Mongolian particles, which is a major issue for Korean-Mongolian MT systems. To address this issue, we propose a method that combines UTagger with a Korean-Mongolian particle table. UTagger is a system that analyzes morphology, tags parts of speech, and disambiguates homographs in Korean text. The Korean-Mongolian particle table was manually constructed to match Korean particles with their Mongolian counterparts. An experiment on a test set extracted from the National Institute of Korean Language's Korean-Mongolian Learner's Dictionary shows that our method achieves an accuracy of 88.38%, improving the result of using UTagger alone by 41.48%.
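The particle-table lookup can be sketched as a dictionary keyed by a disambiguated stem (as a tagger like UTagger would supply) and its Korean object particle, with a default suffix when no row matches. The table rows and the default suffix below are hypothetical stand-ins for the manually built Korean-Mongolian particle table.

```python
def translate_particle(stem, particle, table, default="-ийг"):
    """Look up the Mongolian case suffix for a Korean object particle
    attached to a given (disambiguated) stem.

    table: dict mapping (stem, particle) -> Mongolian suffix,
    a hypothetical stand-in for the manually built particle table.
    The default accusative suffix is likewise illustrative.
    """
    return table.get((stem, particle), default)
```

Because one Korean particle maps to several Mongolian suffixes, the stem-sensitive key is what lets the table resolve the ambiguity that a particle-only mapping cannot.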