• Title/Summary/Keyword: Word segmentation


Discriminative Training of Stochastic Segment Model Based on HMM Segmentation for Continuous Speech Recognition

  • Chung, Yong-Joo;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.4E
    • /
    • pp.21-27
    • /
    • 1996
  • In this paper, we propose a discriminative training algorithm for the stochastic segment model (SSM) in continuous speech recognition. Because the SSM is usually trained by maximum likelihood estimation (MLE), a discriminative training algorithm is needed to improve recognition performance. Since the SSM does not assume conditional independence of the observation sequence, as hidden Markov models (HMMs) do, the search space for decoding an unknown input utterance grows considerably. To reduce the computational complexity and search space of the iterative training algorithm for discriminative SSMs, a hybrid architecture of SSMs and HMMs is used, in which segment boundaries are obtained from HMM segmentation. Given the segment boundaries, the parameters of the SSM are discriminatively trained under the minimum classification error criterion using the generalized probabilistic descent (GPD) method. With discriminative training, the SSM reduces the word error rate by 17% compared with the MLE-trained SSM in speaker-independent continuous speech recognition.

  • PDF
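The MCE/GPD update summarized in the abstract above can be illustrated with a toy sketch. This is not the paper's implementation: the single scalar parameter, the sigmoid-smoothed loss, and the externally supplied score gradient are simplifying assumptions; a real SSM would update full segment-model parameters.

```python
# Toy sketch of one GPD step for minimum-classification-error training.
# The misclassification measure d is the gap between the best competing
# score and the correct-class score; the sigmoid smooths the 0/1 loss.
import math

def mce_loss(correct_score, competing_score, gamma=1.0):
    d = competing_score - correct_score          # misclassification measure
    return 1.0 / (1.0 + math.exp(-gamma * d))    # smoothed 0/1 loss

def gpd_step(theta, dd_dtheta, correct_score, competing_score,
             lr=0.1, gamma=1.0):
    """One generalized probabilistic descent step on a single parameter.

    dd_dtheta is d(d)/d(theta), i.e. the gradient of the score gap with
    respect to the parameter, which the model would supply.
    """
    l = mce_loss(correct_score, competing_score, gamma)
    # chain rule: dl/dtheta = gamma * l * (1 - l) * d(d)/dtheta
    return theta - lr * gamma * l * (1.0 - l) * dd_dtheta

print(round(mce_loss(2.0, 1.0), 3))  # correct score ahead -> loss well below 0.5
```

When the scores are tied the loss sits at 0.5 and the update pushes the parameter in the direction that widens the correct-class margin.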

Corpus-Based Unrestricted Vocabulary Mandarin TTS (코퍼스 기반 무제한 단어 중국어 TTS)

  • Yu Zheng;Ha Ju-Hong;Kim Byeongchang;Lee Gary Geunbae
    • Proceedings of the KSPS conference
    • /
    • 2003.10a
    • /
    • pp.175-179
    • /
    • 2003
  • To produce synthesized speech of high quality (intelligibility and naturalness), accurate grapheme-to-phoneme conversion and an accurate prosody model are essential. In this paper, we analyze Chinese text using word segmentation, POS tagging, and unknown-word recognition. We present grapheme-to-phoneme conversion using dictionary-based and rule-based methods, and we construct a prosody model using a probabilistic method and a decision-tree-based error correction method. Based on the results of this analysis, we can successfully select and concatenate the exact synthesis units (syllables) from the Chinese synthesis DB.

  • PDF

Word Segmentation System Using Extended Syllable bigram (확장된 음절 bigram을 이용한 자동 띄어쓰기 시스템)

  • Lim, Dong-Hee;Chun, Young-Jin;Kim, Hyoung-Joon;Kang, Seung-Shik
    • Annual Conference on Human and Language Technology
    • /
    • 2005.10a
    • /
    • pp.189-193
    • /
    • 2005
  • This paper takes statistical word spacing based on syllable bigrams as its baseline method and confirms that spacing accuracy improves over the baseline when three additional techniques are applied: space probabilities from extended syllable bigrams that subdivide the possible cases, differentiated spacing thresholds based on spacing statistics, and an error dictionary. When no bigram exists for a given syllable pair, the probability is approximated with extended syllable unigrams, mitigating the data sparseness problem. Experiments on Korean and Chinese corpora confirm that the proposed method applies not only to automatic word spacing in Korean but also to word segmentation in Chinese.

  • PDF
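The space-probability idea behind the syllable-bigram baseline above can be sketched in a few lines. The bigram counts, smoothing constant, and threshold below are hypothetical stand-ins for statistics that would be estimated from a corpus; the paper's extended bigrams, differentiated thresholds, and error dictionary are omitted.

```python
# Minimal syllable-bigram word-spacing sketch: for each adjacent syllable
# pair, insert a space when the estimated space probability exceeds a
# threshold. Counts are toy values, not corpus statistics.

# (space_count, no_space_count) for each syllable bigram
BIGRAM_COUNTS = {
    ("나", "는"): (1, 9),
    ("는", "학"): (8, 2),
    ("학", "교"): (0, 10),
    ("교", "에"): (1, 9),
}

def space_probability(left, right, smoothing=0.5):
    """P(space | left syllable, right syllable) with add-k smoothing."""
    space, no_space = BIGRAM_COUNTS.get((left, right), (0, 0))
    return (space + smoothing) / (space + no_space + 2 * smoothing)

def segment(text, threshold=0.5):
    """Insert a space between adjacent syllables when P(space) > threshold."""
    out = [text[0]]
    for left, right in zip(text, text[1:]):
        if space_probability(left, right) > threshold:
            out.append(" ")
        out.append(right)
    return "".join(out)

print(segment("나는학교에"))  # → "나는 학교에"
```

The unigram back-off mentioned in the abstract would replace the `(0, 0)` default with counts conditioned on a single syllable.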

Word Segmentation and POS tagging using Seq2seq Attention Model (seq2seq 주의집중 모델을 이용한 형태소 분석 및 품사 태깅)

  • Chung, Euisok;Park, Jeon-Gue
    • Annual Conference on Human and Language Technology
    • /
    • 2016.10a
    • /
    • pp.217-219
    • /
    • 2016
  • This paper describes an approach that uses a seq2seq attention model for morphological analysis and POS tagging. A seq2seq model is divided into an encoder and a decoder and is generally based on recurrent neural networks (RNNs). During training for morphological analysis and POS tagging, the syllable sequence is fed to the encoder as input, and the POS-tag sequence corresponding to each syllable is used as the decoder output. The correspondence between the syllable sequence and the tag sequence is handled by the attention model. We present experimental results for an end-to-end approach that excludes additional resources such as dictionaries or hand-crafted features, and the approach likewise excludes additional decoding processes such as beam search.

  • PDF
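The attention step described above, aligning each decoder (tag) position with the encoder (syllable) states, reduces to a softmax over alignment scores. The dot-product scoring and the toy vectors below are illustrative assumptions; a real model would use learned RNN states and a trained scoring function.

```python
# Toy attention-weight computation: softmax over dot-product scores
# between one decoder query and the encoder states for each syllable.
import math

def attention_weights(query, encoder_states):
    scores = [sum(q * h for q, h in zip(query, state))
              for state in encoder_states]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

query = (1.0, 0.0)                        # hypothetical decoder state
states = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]  # hypothetical syllable encodings
print(attention_weights(query, states))   # first syllable gets the most weight
```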

Word Segmentation for Korean with Syllable-Level Combinatory Categorial Grammar (음절단위 결합범주문법을 이용한 한국어 문장의 자동 띄어쓰기)

  • Lee, Ho-Joon;Park, Jong-C.
    • Annual Conference on Human and Language Technology
    • /
    • 2002.10e
    • /
    • pp.47-54
    • /
    • 2002
  • Word spacing in Korean has developed in a distinctive way, unlike English, which has standardized word-by-word spacing, or Chinese and Japanese, where spacing is not used. Much earlier work focused on correcting partial spacing errors, but research is now actively pursuing methods that, in combination with character recognition or speech recognition, automatically restore spacing in sentences where it has been omitted entirely. This paper surveys the word-spacing phenomenon in Korean and previous approaches to spacing restoration, and uses a syllable-level combinatory categorial grammar to account for forms that were difficult to handle with earlier methods.

  • PDF

Machine Printed and Handwritten Text Discrimination in Korean Document Images

  • Trieu, Son Tung;Lee, Guee Sang
    • Smart Media Journal
    • /
    • v.5 no.3
    • /
    • pp.30-34
    • /
    • 2016
  • Many Korean documents need to be identified as containing either machine-printed or handwritten text. Early methods for this identification use structural features, which are simple and easy to apply to text in a specific font, but their performance depends on the font type and the characteristics of the text. More recently, the bag-of-words model has been used for the identification; it is invariant to changes in font size and to distortions or modifications of the text. The bag-of-words approach involves three steps: word segmentation using connected-component grouping, feature extraction, and finally classification using an SVM (support vector machine). In this paper, a bag-of-words method using SURF (Speeded Up Robust Features) is proposed for identifying machine-printed and handwritten text in Korean documents. Experiments show that the proposed method outperforms methods based on structural features.
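The quantization step at the heart of the bag-of-words model above can be sketched without any image library: local descriptors (SURF in the paper; toy 2-D vectors here) are assigned to their nearest codeword, and each segmented word image becomes a normalized codeword histogram that a classifier such as an SVM could consume. The codebook and descriptors are hypothetical.

```python
# Bag-of-visual-words sketch: quantize descriptors against a fixed codebook
# and build a normalized histogram per word image.

def nearest_codeword(descriptor, codebook):
    """Index of the codeword closest to the descriptor (squared Euclidean)."""
    def dist(c):
        return sum((d - x) ** 2 for d, x in zip(descriptor, c))
    return min(range(len(codebook)), key=lambda i: dist(codebook[i]))

def bow_histogram(descriptors, codebook):
    """Normalized bag-of-words histogram for one segmented word image."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[nearest_codeword(d, codebook)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

codebook = [(0.0, 0.0), (1.0, 1.0)]            # 2 codewords, toy 2-D descriptors
descriptors = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.8)]
print(bow_histogram(descriptors, codebook))    # two thirds map to codeword 1
```

In practice the codebook would come from clustering (e.g. k-means) over training descriptors, and the histograms would feed the SVM.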

A Rule-Based Analysis from Raw Korean Text to Morphologically Annotated Corpora

  • Lee, Ki-Yong;Markus Schulze
    • Language and Information
    • /
    • v.6 no.2
    • /
    • pp.105-128
    • /
    • 2002
  • Morphologically annotated corpora are the basis for many tasks in computational linguistics. Most current approaches use statistically driven methods of morphological analysis that provide only POS tags. While this is sufficient for some applications, many others need a rule-based full morphological analysis that also yields lemmatization and segmentation. This work thus aims at (1) introducing a rule-based Korean morphological analyzer called Kormoran, based on the principle of linearity, which prohibits backtracking and any combination of left-to-right and right-to-left analysis, and (2) showing how it can be used as a POS tagger by adopting an ordinary preprocessing technique and by filtering out irrelevant morpho-syntactic information from the analyzed feature structures. It is shown that, besides providing a basis for subsequent syntactic or semantic processing, full morphological analyzers like Kormoran have greater power to resolve ambiguities than simple POS taggers. The focus of our present analysis is on Korean text.

  • PDF

Learning-based Word Segmentation for Text Document Recognition (텍스트 문서 인식을 위한 학습 기반 단어 분할)

  • Lomaliza, Jean-Pierre;Moon, Kwang-Seok;Park, Hanhoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2018.06a
    • /
    • pp.41-42
    • /
    • 2018
  • Words can be detected in a text-document image, feature vectors describing the geometric relations between neighboring words can be computed with the LLAH (locally likely arrangement hashing) algorithm, and documents can then be recognized or retrieved effectively by comparing these feature vectors. However, this presupposes that each word in the document is detected accurately and robustly. This paper proposes a method that detects each text line and then uses a deep neural network to learn and classify inter-word and inter-letter gaps within each line, so that words are detected robustly even when the distance and orientation between the camera and the document change dynamically. The proposed method was implemented in a mobile environment; experiments show that inter-word and inter-letter gaps are distinguished with 92.5% accuracy, which greatly improves the robustness of word detection in dynamic environments.

  • PDF
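The gap-classification idea above can be illustrated with a toy stand-in: the paper uses a deep neural network, but the same decision, inter-word versus inter-letter gap, can be sketched as logistic regression on a single normalized gap-width feature. All gap widths and labels below are hypothetical.

```python
# Toy gap classifier: logistic regression on normalized gap width,
# trained by stochastic gradient descent. A stand-in for the paper's DNN.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(gaps, labels, lr=0.5, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(gaps, labels):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x      # gradient of log loss w.r.t. weight
            b -= lr * (p - y)          # gradient of log loss w.r.t. bias
    return w, b

# Gap widths normalized by line height; label 1 = inter-word, 0 = inter-letter
gaps = [0.05, 0.08, 0.10, 0.35, 0.40, 0.50]
labels = [0, 0, 0, 1, 1, 1]
w, b = train(gaps, labels)
print(sigmoid(w * 0.45 + b) > 0.5)  # → True (classified as a word gap)
```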

Manchu Script Letters Dataset Creation and Labeling

  • Aaron Daniel Snowberger;Choong Ho Lee
    • Journal of information and communication convergence engineering
    • /
    • v.22 no.1
    • /
    • pp.80-87
    • /
    • 2024
  • The Manchu language holds historical significance, but a complete dataset of Manchu script letters for training optical character recognition machine-learning models is currently unavailable. Therefore, this paper describes the process of creating a robust dataset of extracted Manchu script letters. Rather than performing automatic letter segmentation based on whitespace or the thickness of the central word stem, an image of the Manchu script was manually inspected, and one copy of the desired letter was selected as a region of interest. This selected region of interest was used as a template to match all other occurrences of the same letter within the Manchu script image. Although the dataset in this study contained only 4,000 images of five Manchu script letters, these letters were collected from twenty-eight writing styles. A full dataset of Manchu letters is expected to be obtained through this process. The collected dataset was normalized and trained using a simple convolutional neural network to verify its effectiveness.
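The template-matching step described above, using one manually selected letter image to find all other occurrences of that letter, can be sketched with a sliding-window comparison. The tiny integer "image" and "template" below are hypothetical; real work on script images would use a library matcher (e.g. normalized cross-correlation) over grayscale pixels.

```python
# Minimal template-matching sketch: slide the template over the image and
# return the position with the smallest sum of squared differences (SSD).

def match_template(image, template):
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = sum(
                (image[y + j][x + i] - template[j][i]) ** 2
                for j in range(th) for i in range(tw)
            )
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]
print(match_template(image, template))  # → (1, 1)
```

Collecting every window whose score falls under a threshold, rather than only the best match, would yield all occurrences of the letter, as the dataset-creation process requires.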

Topic Model Augmentation and Extension Method using LDA and BERTopic (LDA와 BERTopic을 이용한 토픽모델링의 증강과 확장 기법 연구)

  • Kim, SeonWook;Yang, Kiduk
    • Journal of the Korean Society for information Management
    • /
    • v.39 no.3
    • /
    • pp.99-132
    • /
    • 2022
  • The purpose of this study is to propose AET (Augmented and Extended Topics), a novel method for synthesizing LDA and BERTopic results, and to analyze recently published LIS articles as an experimental application. To this end, 55,442 abstracts from 85 LIS journals in the WoS database, spanning January 2001 to October 2021, were analyzed. AET first constructs a WORD2VEC-based cosine similarity matrix between the LDA and BERTopic results, extracts AT (Augmented Topics) by repeating the matrix reordering and segmentation procedures as long as their semantic relations remain valid, and finally determines ET (Extended Topics) by removing any LDA-related residual subtopics from the matrix and ordering the rest by F1 (BERTopic topic size rank, inverse cosine similarity rank). Compared with the baseline LDA result, AT effectively concretizes the original LDA topic model, and ET discovers new meaningful topics that LDA did not. In the qualitative performance evaluation, AT performs better than LDA, while ET shows similar performance except in a few cases.
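The first step of AET, a cosine similarity matrix between the two topic sets, can be sketched as follows. The topic names and 2-D vectors are hypothetical; in the paper each topic would be represented by WORD2VEC-derived embeddings (e.g. averaged vectors of its top terms) rather than toy coordinates.

```python
# Sketch of a cosine similarity matrix between LDA topics and BERTopic
# topics, each represented by a (toy) embedding vector.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical topic embeddings
lda_topics = {"retrieval": (1.0, 0.0), "bibliometrics": (0.0, 1.0)}
bert_topics = {"search": (0.9, 0.1), "citation": (0.2, 1.0)}

matrix = {
    (a, b): cosine(u, v)
    for a, u in lda_topics.items()
    for b, v in bert_topics.items()
}
print(round(matrix[("retrieval", "search")], 3))  # the most similar pair
```

Reordering and segmenting this matrix, as the abstract describes, then groups each LDA topic with its semantically closest BERTopic subtopics.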