• Title/Summary/Keyword: n-gram 언어모델 (n-gram language model)


Improving the Performance of Statistical Context-Sensitive Spelling Error Correction Techniques Using Default Operation Algorithm (Default 연산 알고리즘을 적용한 통계적 문맥의존 철자오류 교정 기법의 성능 향상)

  • Lee, Jung-Hun;Kim, Minho;Kwon, Hyuk-Chul
    • 한국어정보학회:학술대회논문집
    • /
    • 2016.10a
    • /
    • pp.165-170
    • /
    • 2016
  • The context-sensitive spelling error correction proposed in this paper is a statistical method based on the noisy channel model introduced by Shannon, the model most widely used in statistical language processing. To improve on the shortcomings of previous work, we propose a model that applies a Default operation by improving how errors are generated for correction target words and how the statistical data are stored. The previous model constrained the edit distance to 1 when generating errors for correction target words; under the same conditions, the proposed model achieves higher detection and correction accuracy, and it maintains reliable detection and correction even when the edit distance constraint on error words is relaxed.

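The entry above builds on the noisy channel formulation, in which a correction candidate is chosen to maximize P(candidate) weighted by a channel model over candidates within a bounded edit distance. Below is a minimal, language-agnostic sketch of that idea; the toy unigram counts, the Latin alphabet, and the uniform channel assumption are illustrative only and do not reflect the paper's Default-operation implementation.

```python
from collections import Counter

def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All candidate strings within edit distance 1 of `word`."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    replaces = [L + c + R[1:] for L, R in splits if R for c in alphabet]
    inserts = [L + c + R for L, R in splits for c in alphabet]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    return set(deletes + replaces + inserts + transposes)

# Toy language-model counts (illustrative only).
counts = Counter("the cat sat on the mat the cat ate".split())
total = sum(counts.values())

def correct(word):
    """Pick the candidate maximizing P(candidate); the channel probability
    P(word | candidate) is assumed uniform over edit-distance-1 candidates."""
    candidates = {word} | edits1(word)
    known = [c for c in candidates if c in counts] or [word]
    return max(known, key=lambda c: counts[c] / total)

print(correct("caat"))  # -> 'cat'
```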

A joint statistical model for word spacing and spelling error correction (띄어쓰기 및 철자 오류 동시교정을 위한 통계적 모델)

  • Noh, Hyung-Jong;Cha, Jeong-Won;Lee, Gary Geun-Bae
    • Annual Conference on Human and Language Technology
    • /
    • 2006.10e
    • /
    • pp.25-31
    • /
    • 2006
  • In this paper, we propose a preprocessor that can correct word spacing errors and spelling errors simultaneously. The proposed algorithm overcomes the limitation of existing preprocessing algorithms that handle each type of error separately, and extends the conventional noisy channel model so that spacing and spelling errors in conversational text can be corrected jointly and effectively. Using statistical methods such as N-grams and grapheme conversion probabilities together with an eojeol conversion pattern dictionary, correction candidates can be generated effectively while keeping dictionary usage to a minimum. Although the experiments did not yet yield satisfactory performance at the current stage, error analysis showed that the methodology is indeed useful, and with further improvements we expect it to serve as an effective preprocessor for everyday conversational sentences.

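The joint model above scores spacing and spelling hypotheses with n-gram statistics. As a rough illustration of the n-gram side only, the sketch below enumerates spacing candidates for a short string and keeps the one with the highest bigram score; the toy probabilities and the brute-force enumeration are assumptions made for readability, not the paper's extended noisy channel model.

```python
import math

# Toy bigram log-probabilities (illustrative); a real system would estimate
# these from a corpus and combine them with grapheme-conversion probabilities.
bigram_logp = {
    ("<s>", "나는"): math.log(0.4), ("나는", "학교에"): math.log(0.5),
    ("학교에", "간다"): math.log(0.6), ("간다", "</s>"): math.log(0.9),
}
UNK = math.log(1e-6)  # back-off score for unseen bigrams

def score(words):
    """Log-probability of a word sequence under the toy bigram model."""
    seq = ["<s>"] + words + ["</s>"]
    return sum(bigram_logp.get(bg, UNK) for bg in zip(seq, seq[1:]))

def segmentations(text, max_len=4):
    """Enumerate every way to insert spaces into `text` (exponential, fine for a sketch)."""
    if not text:
        yield []
        return
    for i in range(1, min(max_len, len(text)) + 1):
        for rest in segmentations(text[i:], max_len):
            yield [text[:i]] + rest

best = max(segmentations("나는학교에간다"), key=score)
print(best)  # -> ['나는', '학교에', '간다']
```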

Measuring Similarity of Korean Sentences based on BERT (BERT 기반 한국어 문장의 유사도 측정 방법)

  • Hyeon, Jonghwan;Choi, Ho-Jin
    • Annual Conference on Human and Language Technology
    • /
    • 2019.10a
    • /
    • pp.383-387
    • /
    • 2019
  • Automatic evaluation of natural language sentences is a technique that automatically compares a generated sentence with a reference sentence and measures the semantic similarity between the two. Such automatic evaluation can be used to assess the performance of natural language generation models in fields such as machine translation, summarization, and paraphrasing. Existing methods for measuring sentence similarity compute similarity through n-gram based string comparison. This approach is computationally very simple, but it cannot reflect the diverse characteristics of natural language. In this paper, we propose a method for measuring the similarity of Korean sentences using BERT, employing the eojeol-level KorBERT model that ETRI pre-trained on a Korean corpus and released. As a result, we observed a performance improvement of about 13% over existing sentence similarity evaluation methods.

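As a rough illustration of BERT-based sentence similarity, the sketch below mean-pools contextual token embeddings and compares sentences by cosine similarity. KorBERT is distributed by ETRI under a license agreement, so the publicly available bert-base-multilingual-cased checkpoint is used as a stand-in; the pooling and similarity choices are assumptions, not necessarily the paper's exact setup.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Stand-in checkpoint; the paper uses ETRI's eojeol-level KorBERT,
# which is distributed separately.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def embed(sentence):
    """Mean-pool the last hidden states into a single sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden)
    return hidden.mean(dim=1).squeeze(0)

def similarity(s1, s2):
    return torch.cosine_similarity(embed(s1), embed(s2), dim=0).item()

print(similarity("오늘 날씨가 좋다", "오늘은 날이 맑다"))
```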

Development of a Korean Speech Recognition Platform (ECHOS) (한국어 음성인식 플랫폼 (ECHOS) 개발)

  • Kwon Oh-Wook;Kwon Sukbong;Jang Gyucheol;Yun Sungrack;Kim Yong-Rae;Jang Kwang-Dong;Kim Hoi-Rin;Yoo Changdong;Kim Bong-Wan;Lee Yong-Ju
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.8
    • /
    • pp.498-504
    • /
    • 2005
  • We introduce a Korean speech recognition platform (ECHOS) developed for education and research purposes. ECHOS lowers the entry barrier to speech recognition research and can be used as a reference engine by providing elementary speech recognition modules. It has a simple object-oriented architecture, implemented in C++ with the standard template library. The input of ECHOS is digital speech data sampled at 8 or 16 kHz; its outputs are the 1-best recognition result, N-best recognition results, and a word graph. The recognition engine is composed of MFCC/PLP feature extraction, HMM-based acoustic modeling, n-gram language modeling, and finite state network (FSN)- and lexical-tree-based search algorithms. It can handle various tasks from isolated word recognition to large vocabulary continuous speech recognition. We compare the performance of ECHOS and the hidden Markov model toolkit (HTK) for validation. In an FSN-based task, ECHOS shows similar word accuracy, while the recognition time is doubled because of the object-oriented implementation. For an 8000-word continuous speech recognition task, using a lexical tree search algorithm different from the one used in HTK, it increases the word error rate by 40% relatively but reduces the recognition time by half.
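
One of the components listed above is an n-gram language model. The sketch below shows a minimal add-one-smoothed bigram model with a perplexity computation over a toy command corpus; it illustrates the modeling idea only and is unrelated to the actual ECHOS code.

```python
import math
from collections import Counter

# Toy corpus standing in for the language-model training text of a recognizer.
corpus = [["<s>", "turn", "the", "light", "on", "</s>"],
          ["<s>", "turn", "the", "radio", "off", "</s>"]]

vocab = {w for sent in corpus for w in sent}
unigrams = Counter(w for sent in corpus for w in sent)
bigrams = Counter(bg for sent in corpus for bg in zip(sent, sent[1:]))

def bigram_logp(w1, w2):
    """Add-one smoothed bigram log-probability log P(w2 | w1)."""
    return math.log((bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab)))

def perplexity(sentence):
    logp = sum(bigram_logp(w1, w2) for w1, w2 in zip(sentence, sentence[1:]))
    return math.exp(-logp / (len(sentence) - 1))

print(perplexity(["<s>", "turn", "the", "light", "off", "</s>"]))
```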

The Study of Korean Speech Recognition for Various Continue HMM (다양한 연속밀도 함수를 갖는 HMM에 대한 우리말 음성인식에 관한 연구)

  • Woo, In-Sung;Shin, Chwa-Cheul;Kang, Heung-Soon;Kim, Suk-Dong
    • Journal of IKEEE
    • /
    • v.11 no.2
    • /
    • pp.89-94
    • /
    • 2007
  • This paper is a study on continuous speech recognition of the Korean language using HMM-based models with continuous density functions. We propose the most efficient method of continuous Korean speech recognition under the condition of continuous HMM models with 2 to 44 density functions. Two acoustic models were used: a CI-Model with 36 uni-phones and a CD-Model with 3,000 tri-phones. The language model was N-gram based. Using these models, 500 sentences and 6,486 words were processed under speaker-independent conditions. For the CI-Model, the maximum word recognition rate was 94.4% and the sentence recognition rate was 64.6%. For the CD-Model, the word recognition rate was 98.2% and the sentence recognition rate was 73.6%. The recognition rate obtained with the CD-Model was stable.

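The acoustic models described above are HMMs with continuous (Gaussian) emission densities. The sketch below evaluates the likelihood of an observation sequence with the forward algorithm for a toy two-state HMM with one-dimensional Gaussian emissions; real acoustic models use mixtures of multivariate Gaussians over MFCC frames, so every parameter here is purely illustrative.

```python
import numpy as np
from scipy.stats import norm

# Toy 2-state HMM with 1-D Gaussian emission densities (illustrative only).
init = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
means, stds = np.array([0.0, 3.0]), np.array([1.0, 1.0])

def forward_loglik(observations):
    """Log-likelihood of an observation sequence via the forward algorithm."""
    alpha = init * norm.pdf(observations[0], means, stds)
    for obs in observations[1:]:
        alpha = (alpha @ trans) * norm.pdf(obs, means, stds)
    return np.log(alpha.sum())

print(forward_loglik([0.1, 0.3, 2.8, 3.2]))
```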

A Text Reuse Measuring Model Using Circumference Sentence Similarity (주변 문장 유사도를 이용한 문서 재사용 측정 모델)

  • Choi, Sung-Won;Kim, Sang-Bum;Rim, Hae-Chang
    • Annual Conference on Human and Language Technology
    • /
    • 2005.10a
    • /
    • pp.179-183
    • /
    • 2005
  • Existing text reuse detection models determine reuse by comparing words or n-grams within documents or sentences. Document-level reuse detection, however, is prone to error when only part of another document is reused, because the parts of the document where no reuse occurred lower the overall reuse score. Sentence-level detection, in contrast, compares the sentences of the documents under examination, so even when only part of a document is reused the sentences within the reused portion are still compared, making it more robust in such cases. However, because a sentence is much shorter than a document, sentence-level comparison suffers from reliability problems. In this paper, to compensate for this weakness, we first perform sentence-level reuse detection and then use the detection results of the surrounding sentences to reduce the errors that occur in sentence-level reuse detection.

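As a rough illustration of the idea above, the sketch below computes a per-sentence n-gram overlap score against a source document and then averages each score with those of its neighbouring sentences; the Jaccard overlap measure and the window size are assumptions, not the paper's exact similarity function.

```python
def ngrams(sentence, n=2):
    tokens = sentence.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap(s1, s2, n=2):
    """Jaccard overlap of word n-grams between two sentences."""
    a, b = ngrams(s1, n), ngrams(s2, n)
    return len(a & b) / len(a | b) if a | b else 0.0

def reuse_scores(doc, source, window=1):
    """Per-sentence reuse score: the best match against the source,
    smoothed by averaging with the scores of neighbouring sentences."""
    raw = [max(overlap(s, t) for t in source) for s in doc]
    smoothed = []
    for i in range(len(raw)):
        lo, hi = max(0, i - window), min(len(raw), i + window + 1)
        smoothed.append(sum(raw[lo:hi]) / (hi - lo))
    return smoothed

doc = ["the quick brown fox jumps", "it was a dark night", "over the lazy dog"]
source = ["the quick brown fox jumps over the lazy dog"]
print(reuse_scores(doc, source))
```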

Automatic Word Spacing of the Korean Sentences by Using End-to-End Deep Neural Network (종단 간 심층 신경망을 이용한 한국어 문장 자동 띄어쓰기)

  • Lee, Hyun Young;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.11
    • /
    • pp.441-448
    • /
    • 2019
  • Previous research on automatic word spacing of Korean sentences corrected spacing errors by using n-gram based statistical techniques or a morphological analyzer to insert blanks at word boundaries. In this paper, we propose an end-to-end automatic word spacing method using a deep neural network. The automatic word spacing problem can be defined as a tag classification problem at the syllable level rather than the word level. For contextual representation between syllables, a Bi-LSTM encodes the dependency relationship between syllables into a fixed-length vector in a continuous vector space using forward and backward LSTM cells. To perform automatic word spacing of Korean sentences, the fixed-length contextual vector from the Bi-LSTM is classified into a spacing tag (B or I), and a blank is inserted in front of each B tag. For tag classification, we compose three types of classification networks: a feedforward neural network, a neural network language model, and a linear-chain CRF. To compare the models, we measure automatic word spacing performance with each of the three classification networks; the linear-chain CRF shows better performance than the other models. We used the KCC150 corpus as training and testing data.
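
A minimal sketch of the syllable-tagging formulation is shown below: a Bi-LSTM tagger (here with a plain softmax head rather than the paper's best-performing linear-chain CRF) plus the rule that inserts a space in front of every B tag. The vocabulary and layer sizes are illustrative assumptions.

```python
import tensorflow as tf

# Illustrative hyperparameters; the paper's exact sizes are not given here.
NUM_SYLLABLES, EMB, HIDDEN, NUM_TAGS = 5000, 64, 128, 2  # tags: B=0, I=1

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(NUM_SYLLABLES, EMB, mask_zero=True),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(HIDDEN, return_sequences=True)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(NUM_TAGS, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

def apply_tags(syllables, tags):
    """Insert a space before every syllable tagged B (except the first)."""
    out = []
    for i, (syl, tag) in enumerate(zip(syllables, tags)):
        if tag == 0 and i > 0:  # B tag
            out.append(" ")
        out.append(syl)
    return "".join(out)

print(apply_tags(list("나는학교에간다"), [0, 1, 0, 1, 1, 0, 1]))  # -> 나는 학교에 간다
```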

Media-based Analysis of Gasoline Inventory with Korean Text Summarization (한국어 문서 요약 기법을 활용한 휘발유 재고량에 대한 미디어 분석)

  • Sungyeon Yoon;Minseo Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.5
    • /
    • pp.509-515
    • /
    • 2023
  • Despite the continued development of alternative energies, fuel consumption is increasing. In particular, the price of gasoline fluctuates greatly according to fluctuations in international oil prices. Gas stations adjust their gasoline inventory to respond to gasoline price fluctuations. In this study, news datasets are used to analyze gasoline consumption patterns through fluctuations in gasoline inventory. First, news datasets are collected by web crawling. Second, they are summarized using KoBART, a Korean text summarization model. Finally, the data are preprocessed and the fluctuation factors are derived through an N-gram language model and TF-IDF. Through this study, it is possible to analyze and predict gasoline consumption patterns.
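
As a rough illustration of the n-gram/TF-IDF step, the sketch below weights unigrams and bigrams from hypothetical summarized news snippets with scikit-learn's TfidfVectorizer and lists the highest-weighted terms; it is a sketch of the general technique, not the study's actual pipeline or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical, already-summarized news snippets; in the study these would be
# KoBART summaries of crawled articles.
summaries = [
    "국제 유가 상승으로 휘발유 가격이 올랐다",
    "주유소 휘발유 재고가 줄어들었다",
    "유가 하락으로 휘발유 소비가 늘었다",
]

# Unigrams and bigrams weighted by TF-IDF, as a simple way to surface
# candidate factors behind inventory fluctuations.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
matrix = vectorizer.fit_transform(summaries)

scores = matrix.sum(axis=0).A1  # total TF-IDF weight per n-gram
terms = vectorizer.get_feature_names_out()
top = sorted(zip(terms, scores), key=lambda x: -x[1])[:5]
print(top)
```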

A Statistical Prediction Model of Speakers' Intentions in a Goal-Oriented Dialogue (목적지향 대화에서 화자 의도의 통계적 예측 모델)

  • Kim, Dong-Hyun;Kim, Hark-Soo;Seo, Jung-Yun
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.9
    • /
    • pp.554-561
    • /
    • 2008
  • A technique for predicting the user's intention can be used as a post-processing method to reduce the search space of an automatic speech recognizer, and a technique for predicting the system's intention can be used as a pre-processing method for generating a flexible sentence. To satisfy these practical needs, we propose a statistical model to predict speakers' intentions, which are generalized into pairs of a speech act and a concept sequence. Unlike previous models that use simple n-gram statistics of speech acts, the proposed model represents the dialogue history of the current utterance as a feature set at various linguistic levels (i.e., n-grams of speech act and concept sequence pairs, clue words, and state information of a domain frame). The proposed model then predicts the intention of the next utterance by using this feature set as input to CRFs (Conditional Random Fields). In experiments in a schedule management domain, the proposed model showed a precision of 76.25% for predicting the user's speech act and 64.21% for predicting the user's concept sequence, and a precision of 88.11% for predicting the system's speech act and 87.19% for predicting the system's concept sequence. In addition, the proposed model showed 29.32% higher average precision than the previous model.
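
A minimal sketch of intention prediction with a CRF is shown below, using sklearn-crfsuite with a drastically simplified feature set (previous speech-act/concept pair and a clue word); the toy dialogue, feature names, and labels are assumptions for illustration only, not the paper's feature templates.

```python
import sklearn_crfsuite

def features(history, i):
    """Feature dict for the i-th utterance: previous speech-act/concept pair
    and a clue word (a simplified stand-in for the paper's feature set)."""
    prev = history[i - 1] if i > 0 else {"act": "<start>", "concept": "<start>"}
    return {
        "prev_act": prev["act"],
        "prev_concept": prev["concept"],
        "clue_word": history[i]["clue"],
    }

# One toy dialogue in a schedule-management domain; labels are the next intentions.
dialogue = [
    {"act": "request", "concept": "schedule-add", "clue": "일정"},
    {"act": "ask-ref", "concept": "date", "clue": "언제"},
    {"act": "response", "concept": "date", "clue": "내일"},
]
X = [[features(dialogue, i) for i in range(len(dialogue))]]
y = [["ask-ref,date", "response,date", "confirm,schedule-add"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```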

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used for neural language modeling (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). In order to learn a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the vocabulary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only vocabulary items contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morphological analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit of which Korean text is composed. We construct language models using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. The simulation study was done with Old Testament texts using the deep learning package Keras based on Theano. After pre-processing the texts, the dataset included 74 unique characters including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters and an output of the following 21st character. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest training time for both the 3- and 4-layer LSTM models. On average, the 4-layer model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly better and were even worse under some conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
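
A minimal sketch of a phoneme-level LSTM language model in Keras is shown below, using a 20-symbol sliding window to predict the 21st symbol and three stacked LSTM layers; the jamo toy text, layer sizes, single training epoch, and greedy generation are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import tensorflow as tf

# Hypothetical phoneme-decomposed text; real input would be Korean text split
# into individual consonants and vowels (jamo).
text = "ㄴㅏㄴㅡㄴ ㅎㅏㄱㄱㅛㅇㅔ ㄱㅏㄴㄷㅏ " * 200
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
WINDOW = 20

X = np.array([[idx[c] for c in text[i:i + WINDOW]] for i in range(len(text) - WINDOW)])
y = np.array([idx[text[i + WINDOW]] for i in range(len(text) - WINDOW)])

# Three stacked LSTM layers, mirroring the smaller of the paper's two depths.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 32),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=1, batch_size=128)

def generate(seed, n=40):
    """Greedily generate n phonemes from a 20-phoneme seed."""
    out = list(seed)
    for _ in range(n):
        window = np.array([[idx[c] for c in out[-WINDOW:]]])
        out.append(chars[int(model.predict(window, verbose=0).argmax())])
    return "".join(out)

print(generate(text[:WINDOW]))
```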