• Title/Summary/Keyword: bigram

Search results: 71

Morphological disambiguation using Local Context (국소 문맥을 이용한 형태적 중의성 해소)

  • Lee, Chung-Hee;Yoon, Jun-Tae;Song, Man-Suk
    • Annual Conference on Human and Language Technology
    • /
    • 2000.10d
    • /
    • pp.48-55
    • /
    • 2000
  • This paper describes a method for resolving the morphological ambiguity of words using a decision list built from local context. An initial decision list is constructed from seed collocations and applied to the experimental corpus, and its performance is improved through an iterative process of self-training on the tagged results. The method rests on the intuition that collocations at a fixed distance have the greatest influence on resolving a word's morphological ambiguity, and it is carried out in an unsupervised fashion (based on a large raw corpus) that requires no additional human correction. The decision list obtained through this learning is applied to the output of MORANY, the Yonsei University morphological analyzer, to improve tagging performance. Applying the algorithm to 12 ambiguous words in the experimental corpus yielded a positive result (90.61%). To compare against the bigram model of a hidden Markov model, an experiment restricted to the verb '들었다' achieved a better result (94.25%) than the bigram model's tagging accuracy (72.61%), confirming that the proposed model is useful for morphological disambiguation.

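For a concrete picture of the decision-list approach sketched in the abstract above, the following is a minimal Python illustration: collocation features within a fixed window are scored with a smoothed log-likelihood ratio, and the highest-ranked matching rule picks the analysis. The function names (build_decision_list, disambiguate) and the smoothing constants are illustrative assumptions, not taken from the paper, and the self-training loop over raw text is omitted.

    import math
    from collections import defaultdict

    def build_decision_list(labeled_examples, window=2):
        """Rank local-context collocations by a smoothed log-likelihood ratio.

        labeled_examples: (tokens, target_index, label) triples, where label is
        the chosen morphological analysis of tokens[target_index].
        """
        counts = defaultdict(lambda: defaultdict(int))    # feature -> label -> count
        for tokens, i, label in labeled_examples:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[(j - i, tokens[j])][label] += 1   # (offset, word) collocation
        rules = []
        for feat, by_label in counts.items():
            best = max(by_label, key=by_label.get)
            p = (by_label[best] + 0.1) / (sum(by_label.values()) + 0.2)   # smoothed
            rules.append((math.log(p / (1.0 - p)), feat, best))
        rules.sort(reverse=True)                          # strongest evidence first
        return rules

    def disambiguate(rules, tokens, i, default):
        """Apply the first (highest-scoring) rule whose collocation matches."""
        for _score, (offset, word), label in rules:
            j = i + offset
            if 0 <= j < len(tokens) and tokens[j] == word:
                return label
        return default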

Automatic Word-Spacing of Syllable Bi-gram Information for Korean OCR Postprocessing (음절 Bi-gram정보를 이용한 한국어 OCR 후처리용 자동 띄어쓰기)

  • Jeon, Nam-Youl;Park, Hyuk-Ro
    • Annual Conference on Human and Language Technology
    • /
    • 2000.10d
    • /
    • pp.95-100
    • /
    • 2000
  • Text recognized from scanned document images by an OCR system is run through morphological and eojeol (word-phrase) analysis so that large volumes of document information can be stored in a database and made available for full-text retrieval. However, data containing misrecognized characters or incorrect word spacing cannot be used directly for morphological or eojeol analysis. For Korean character recognition, the character-level recognition rate is about 90.5%, but the eojeol-level rate drops sharply once recognition and spacing errors are taken into account. To address this, we perform automatic word spacing that exploits the syllable characteristics of Korean, using a well-trained corpus and syllable-level bigram information rather than a dictionary; experiments show a spacing accuracy of about 86.2%, varying with the size of the training corpus and the availability of information about the positions of spacing errors. Combined with an OCR postprocessing stage that uses morphological analysis and linguistic evaluation, this result should contribute substantially to improving the recognition rate of the OCR system.

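As a rough illustration of the syllable-bigram spacing idea described above, the sketch below estimates, from correctly spaced example sentences, how often a space separates each adjacent syllable pair and then re-inserts spaces where that estimate exceeds a threshold. The function names and the 0.5 threshold are assumptions for illustration; the paper's actual model and its use of error-position information are not reproduced here.

    from collections import defaultdict

    def train_spacing_model(sentences):
        """Count, for each adjacent syllable pair, how often a space separates them."""
        space_count, pair_count = defaultdict(int), defaultdict(int)
        for sent in sentences:
            prev, seen_space = None, False
            for ch in sent:
                if ch == ' ':
                    seen_space = True
                    continue
                if prev is not None:
                    pair_count[(prev, ch)] += 1
                    if seen_space:
                        space_count[(prev, ch)] += 1
                prev, seen_space = ch, False
        return space_count, pair_count

    def respace(text, space_count, pair_count, threshold=0.5):
        """Insert a space between syllables whose bigram usually carries one."""
        syllables = list(text.replace(' ', ''))
        out = []
        for i, ch in enumerate(syllables):
            out.append(ch)
            if i + 1 < len(syllables):
                pair = (ch, syllables[i + 1])
                seen = pair_count.get(pair, 0)
                if seen and space_count.get(pair, 0) / seen > threshold:
                    out.append(' ')
        return ''.join(out)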

An Extended Bigram Segmentation Method for Chinese Information Retrieval (중국어 정보검색을 위한 확장된 바이그램 분할기법)

  • Jin, Yun;Kang, Ji-Hoon;Myaeng, Sung-Hyon
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10a
    • /
    • pp.490-492
    • /
    • 2003
  • Unlike English and Korean, Chinese sentences have no explicit word boundaries, so Chinese information retrieval systems index sentences either by splitting them into individual characters as the basic unit or by using a word dictionary that already supplies word-boundary information. Both approaches, however, have their own strengths and weaknesses. This paper proposes an extended bigram segmentation method that takes the advantages of both approaches while compensating for their drawbacks. The method is practical and aims to improve retrieval performance.

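The abstract does not spell out the extension, but the family of techniques it refers to combines overlapping character bigrams with dictionary words. The sketch below is one generic, assumed variant, not the paper's exact algorithm: keep a known dictionary word intact when one starts at the current position, otherwise emit overlapping character bigrams.

    def bigram_index_terms(text, lexicon=frozenset(), max_word=8):
        """Index terms for a Chinese string: dictionary words where known,
        overlapping character bigrams elsewhere."""
        terms, i, n = [], 0, len(text)
        while i < n:
            match = None
            for j in range(min(n, i + max_word), i + 1, -1):   # longest match first
                if text[i:j] in lexicon:
                    match = text[i:j]
                    break
            if match:
                terms.append(match)
                i += len(match)
            else:
                if i + 1 < n:
                    terms.append(text[i:i + 2])                # overlapping bigram
                i += 1
        return terms or list(text)   # fall back to single characters for very short input

Indexing the query with the same function keeps query terms and document terms comparable, which is the usual motivation for bigram-based indexing when no reliable segmenter is available.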

Efficient Language Model based on VCCV unit for Sentence Speech Recognition (문장음성인식을 위한 VCCV 기반의 효율적인 언어모델)

  • Park, Seon-Hui;No, Yong-Wan;Hong, Gwang-Seok
    • Proceedings of the KIEE Conference
    • /
    • 2003.11c
    • /
    • pp.836-839
    • /
    • 2003
  • In this paper, we implement a bigram language model and evaluate which smoothing technique is appropriate for a unit with low perplexity. Words, morphemes, and clauses are widely used as processing units for language models. We propose VCCV units, which have a smaller vocabulary than morpheme or clause units, and compare them with morpheme and clause units in terms of perplexity. The most common way to evaluate a language model is through the probability it assigns to test data and the derived measure of perplexity. Smoothing is used to estimate probabilities when there are insufficient data to estimate them accurately. We construct N-grams over the VCCV units, which have low perplexity, and test the language model with Katz, Witten-Bell, absolute, and modified Kneser-Ney smoothing, among others. The experimental results show that modified Kneser-Ney smoothing is the appropriate technique for VCCV units.

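To make the modelling setup concrete, here is a minimal bigram language model with add-k smoothing and a perplexity routine in Python. It is a deliberately simple stand-in: the paper evaluates Katz, Witten-Bell, absolute, and modified Kneser-Ney smoothing over VCCV units, none of which is implemented here, and the class and parameter names are assumptions.

    import math
    from collections import defaultdict

    class BigramLM:
        """Bigram language model with add-k smoothing and perplexity evaluation."""

        def __init__(self, k=0.1):
            self.k = k
            self.bigram = defaultdict(int)
            self.context = defaultdict(int)
            self.vocab = {'</s>'}

        def train(self, sequences):
            # each sequence is a list of modelling units (VCCV, morpheme, clause, ...)
            for seq in sequences:
                units = ['<s>'] + list(seq) + ['</s>']
                self.vocab.update(units[1:])
                for a, b in zip(units, units[1:]):
                    self.bigram[(a, b)] += 1
                    self.context[a] += 1

        def prob(self, a, b):
            v = len(self.vocab)
            return (self.bigram.get((a, b), 0) + self.k) / (self.context.get(a, 0) + self.k * v)

        def perplexity(self, sequences):
            logp, n = 0.0, 0
            for seq in sequences:
                units = ['<s>'] + list(seq) + ['</s>']
                for a, b in zip(units, units[1:]):
                    logp += math.log(self.prob(a, b))
                    n += 1
            return math.exp(-logp / n)

Comparing perplexity across unit inventories (VCCV versus morpheme or clause) then only requires changing how the training and test sequences are segmented.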

A Semi-supervised Learning of HMM to Build a POS Tagger for a Low Resourced Language

  • Pattnaik, Sagarika;Nayak, Ajit Kumar;Patnaik, Srikanta
    • Journal of information and communication convergence engineering
    • /
    • v.18 no.4
    • /
    • pp.207-215
    • /
    • 2020
  • Part-of-speech (POS) tagging is an indispensable part of major NLP models. Progress can be seen for a number of languages around the globe, especially European languages, but Indian languages have not had a comparable breakthrough owing to the lack of supporting tools and resources, and Odia in particular has received little attention so far. With the aim of making Odia usable in different NLP tasks, this paper attempts to develop a POS tagger for the language on a Hidden Markov Model (HMM) platform. The tagger uses a bigram HMM with the Viterbi dynamic-programming algorithm to produce annotated text with maximum accuracy. The model is evaluated on a tourism-domain corpus of approximately 0.2 million tokens. With a 3:1 split between training and test data, the proposed model shows satisfactory results despite the limited training size.
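
As a reference point for the bigram-HMM decoding the abstract mentions, the following is a generic Viterbi decoder in Python; it is not the authors' Odia tagger, and the trans/emit probability tables, the '<s>' start symbol, and the flat smooth floor for unseen events are simplifying assumptions.

    import math

    def viterbi(tokens, tags, trans, emit, smooth=1e-6):
        """Bigram-HMM decoding: most probable tag sequence for a token list.

        trans[(t1, t2)] and emit[(tag, word)] hold probabilities estimated from a
        tagged corpus; unseen events fall back to a flat floor instead of real smoothing.
        """
        # delta[i][t]: best log-probability of any tag sequence ending in tag t at position i
        delta = [{t: math.log(trans.get(('<s>', t), smooth))
                     + math.log(emit.get((t, tokens[0]), smooth)) for t in tags}]
        back = [{}]
        for i in range(1, len(tokens)):
            delta.append({})
            back.append({})
            for t in tags:
                prev = max(tags, key=lambda p: delta[i - 1][p]
                           + math.log(trans.get((p, t), smooth)))
                delta[i][t] = (delta[i - 1][prev]
                               + math.log(trans.get((prev, t), smooth))
                               + math.log(emit.get((t, tokens[i]), smooth)))
                back[i][t] = prev
        # trace back from the best final tag
        last = max(tags, key=lambda t: delta[-1][t])
        path = [last]
        for i in range(len(tokens) - 1, 0, -1):
            path.append(back[i][path[-1]])
        return list(reversed(path))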

A Survey of Machine Translation and Parts of Speech Tagging for Indian Languages

  • Khedkar, Vijayshri;Shah, Pritesh
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.4
    • /
    • pp.245-253
    • /
    • 2022
  • Since its beginnings at IBM in 1954, machine translation has expanded immensely, particularly in recent years. Machine translation can be broken into seven main steps: token generation, morphological analysis, lexeme analysis, part-of-speech tagging, chunking, parsing, and word-sense disambiguation. Morphological analysis plays a major role when translating Indian languages, since it underpins accurate part-of-speech taggers and word-sense resolution. The paper surveys the machine translation methods used by different researchers for Indian languages, along with their performance and drawbacks. It then concentrates on part-of-speech (POS) tagging for the Marathi language using methods such as rule-based tagging, unigram and bigram taggers, and others. After careful study, it is concluded that POS tagging is a major step in machine translation, and that for Marathi the Hidden Markov Model gives the best POS-tagging results, with an accuracy of 93% that can be further improved depending on the dataset.

Biomedical Terminology Extraction using Syllable Bigram and CRFs (음절 바이그램과 CRFs를 이용한 의학 전문 용어 추출)

  • Song, Soo-Min;Shin, Junsoo;Kim, Harksoo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2010.04a
    • /
    • pp.505-507
    • /
    • 2010
  • As the number of Web documents containing technical terminology grows, research on automatically extracting such terms continues. Most previous work relies on a morphological analyzer in the term-extraction step, but the characteristics of technical terms often lead to misanalysis at that stage. To address this problem, this paper proposes a method that extracts biomedical terminology using syllable bigrams and Conditional Random Fields (CRFs). Experiments were conducted with 5-fold cross validation on 2,000 doctor-answer documents from Naver Knowledge-iN. The results show an average precision of 68.91%, an average recall of 71.25%, and an F-measure of 70.06%.
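
The sketch below shows the kind of syllable-level unigram/bigram features that such an approach would feed to a CRF sequence labeller, with B/I/O labels marking term boundaries. The feature names and the one-syllable window are assumptions for illustration; training the CRF itself (for example, with an off-the-shelf CRF toolkit) is not shown.

    def syllable_features(syllables, i):
        """Unigram and bigram syllable features for position i."""
        left = syllables[i - 1] if i > 0 else '<s>'
        right = syllables[i + 1] if i + 1 < len(syllables) else '</s>'
        return {
            'bias': 1.0,
            'syl': syllables[i],
            'left_bigram': left + syllables[i],
            'right_bigram': syllables[i] + right,
        }

    def sentence_to_features(sentence):
        """Per-syllable feature dicts for one sentence (whitespace dropped)."""
        syllables = [ch for ch in sentence if not ch.isspace()]
        return [syllable_features(syllables, i) for i in range(len(syllables))]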

A Speech Translation System for Hotel Reservation (호텔예약을 위한 음성번역시스템)

  • 구명완;김재인;박상규;김우성;장두성;홍영국;장경애;김응인;강용범
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.4
    • /
    • pp.24-31
    • /
    • 1996
  • In this paper, we present KT-STS (Korea Telecom Speech Translation System), a speech translation system for hotel reservation. KT-STS is a speech-to-speech translation system that translates a spoken utterance in Korean into one in Japanese. The system is designed around the hotel reservation task (dialogues between a Korean customer and a hotel reservation desk in Japan). It consists of a Korean speech recognition system, a Korean-to-Japanese machine translation system, and a Korean speech synthesis system. The speech recognizer is an HMM (Hidden Markov Model)-based, speaker-independent, continuous speech recognizer with a vocabulary of about 300 words. A bigram language model is used as the forward language model, and a dependency grammar is used as the backward language model. For machine translation, we use a dependency grammar and a direct transfer method. The Korean speech synthesizer uses demiphones as the synthesis unit and a method of periodic waveform analysis and reallocation. KT-STS runs in nearly real time on a SPARC20 workstation with one TMS320C30 DSP board. In speech recognition tests, we achieved a word recognition rate of 94.68% and a sentence recognition rate of 82.42%; on Korean-to-Japanese translation tests, we achieved a translation success rate of 100%. We also carried out an international joint experiment in which our system was connected over a leased line with a system developed by KDD in Japan.

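The forward bigram language model mentioned above can be illustrated with the small rescoring sketch below. This is only an assumed, generic use of such a model over an n-best list, not how KT-STS actually integrates it during decoding, and all names (bigram_lm_score, the 'acoustic' and 'words' fields) are hypothetical.

    def bigram_lm_score(words, bigram_logprob, floor=-10.0):
        """Forward bigram language-model score of one recognition hypothesis."""
        score, prev = 0.0, '<s>'
        for w in words:
            score += bigram_logprob.get((prev, w), floor)   # unseen pairs hit the floor
            prev = w
        return score + bigram_logprob.get((prev, '</s>'), floor)

    def best_hypothesis(nbest, bigram_logprob, lm_weight=1.0):
        """Pick the n-best entry with the highest acoustic + weighted LM score."""
        return max(nbest, key=lambda h: h['acoustic']
                   + lm_weight * bigram_lm_score(h['words'], bigram_logprob))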

Quantifying L2ers' phraseological competence and text quality in L2 English writing (L2 영어 학습자들의 연어 사용 능숙도와 텍스트 질 사이의 수치화)

  • Kwon, Junhyeok;Kim, Jaejun;Kim, Yoolae;Park, Myung-Kwan;Song, Sanghoun
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.281-284
    • /
    • 2017
  • Building on work on multi-word combinations, i.e., the field of phraseology, this study examines the relationship between text quality and phraseological competence in L2 English writing, following Bestgen et al. (2014). Using two association scores, the t-score and Mutual Information (MI), which capture opposite sides of phraseological competence (highly frequent versus infrequent word combinations), bigrams from L2 writers' texts are scored against a reference corpus, GloWbE (Corpus of Global Web-based English). Taking a cross-sectional approach, we show that essay quality and the mean MI score of the bigrams extracted from YELC (Yonsei English Learner Corpus) are correlated. Negatively scored bigrams, i.e., bigrams absent from the reference corpus and therefore mostly ungrammatical, are also correlated with essay quality: as their proportion increases, essay quality decreases. We conclude with the essay-quality results obtained from MI and the t-score in this cross-sectional design and with implications for teaching and for the assessment of second-language writing proficiency.

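For concreteness, the two association measures used above can be computed from reference-corpus counts as follows. The formulas are the standard pointwise MI and t-score for bigrams; the function names and the handling of bigrams absent from the reference corpus are illustrative assumptions rather than the study's exact procedure.

    import math

    def mi_score(f_xy, f_x, f_y, n):
        """Pointwise mutual information: log2(N * f(xy) / (f(x) * f(y)))."""
        return math.log2(n * f_xy / (f_x * f_y))

    def t_score(f_xy, f_x, f_y, n):
        """t-score: (observed - expected) / sqrt(observed)."""
        return (f_xy - f_x * f_y / n) / math.sqrt(f_xy)

    def mean_association(bigrams, ref_bigrams, ref_unigrams, n, measure=mi_score):
        """Average association score of a learner's bigrams against a reference corpus.

        Bigrams missing from the reference corpus are skipped here; in the study
        their proportion is tracked separately as 'negatively scored' bigrams.
        """
        scores = [measure(ref_bigrams[b], ref_unigrams[b[0]], ref_unigrams[b[1]], n)
                  for b in bigrams if b in ref_bigrams]
        return sum(scores) / len(scores) if scores else 0.0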