• Title/Summary/Keyword: word segmentation

Search results: 135

Decomposition of a Text Block into Words Using Projection Profiles, Gaps and Special Symbols (투영 프로파일, GaP 및 특수 기호를 이용한 텍스트 영역의 어절 단위 분할)

  • Jeong Chang Bu;Kim Soo Hyung
    • Journal of KIISE: Software and Applications / v.31 no.9 / pp.1121-1130 / 2004
  • This paper proposes a method for line and word segmentation in machine-printed text blocks. To separate a text region into lines, it analyzes the horizontal projection profile and applies a recursive projection profile cut. For word segmentation, between-word gaps are identified by hierarchical clustering after gaps in each text line are found through connected component analysis. In addition, a special symbol detection technique based on morphological features is applied to find two types of special symbols that tie words together. An experiment with 84 text regions from English and Korean documents shows that the proposed method achieves 99.92% word segmentation accuracy, while the commercial OCR software Armi 6.0 Pro™ achieves 97.58%.
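
The projection-profile cut described in this abstract can be illustrated with a short sketch (a simplified, non-recursive version over a binarized NumPy array; the function name and toy data are mine, not the authors'):

    import numpy as np

    def segment_lines(binary_img):
        """Split a binarized text block (text pixels = 1) into line bands by
        cutting at empty rows of the horizontal projection profile.
        Illustrative only; the paper applies this cut recursively."""
        profile = binary_img.sum(axis=1)       # ink count per row
        lines, start = [], None
        for y, count in enumerate(profile):
            if count > 0 and start is None:
                start = y                      # a line band begins
            elif count == 0 and start is not None:
                lines.append((start, y))       # the line band ends
                start = None
        if start is not None:
            lines.append((start, len(profile)))
        return lines

    # Toy block: two "lines" of ink separated by blank rows
    block = np.zeros((10, 20), dtype=int)
    block[1:3, 2:18] = 1
    block[6:9, 2:18] = 1
    print(segment_lines(block))                # [(1, 3), (6, 9)]

The between-word gap analysis would then operate within each band, clustering the horizontal gaps between connected components into within-word and between-word groups.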

Intonational Pattern Frequency of Seoul Korean and Its Implication to Word Segmentation

  • Kim, Sa-Hyang
    • Speech Sciences / v.15 no.2 / pp.21-30 / 2008
  • The current study investigated distributional properties of the Korean Accentual Phrase (AP) and their implications for word segmentation. The properties examined were the frequency of various AP tonal patterns, the types of tonal patterns imposed upon content words, and the average number and temporal location of content words within the AP. A total of 414 sentences from the Read speech corpus and the Radio corpus were used for the analysis. The results showed that 84% of the APs contained one content word, and that almost 90% of the content words were located in AP-initial position. When the AP-initial onset was not an aspirated or tense consonant, the most common AP patterns were LH, LHH, and LHLH (78%), and 88% of the multisyllabic content words started with a rising tone in AP-initial position. When the AP-initial onset was an aspirated or tense consonant, the most common AP patterns were HH, HHLH, and HHL (72%), and 74% of the multisyllabic content words started with a level H tone in AP-initial position. The data further showed that 84.1% of APs ended with a final H tone. The findings provide valuable information about the prosodic pattern and structure of Korean APs, and account for the results of a previous study which showed that Korean listeners are sensitive to AP-initial rising and AP-final high tones (Kim, 2007). This is in line with other cross-linguistic research that has revealed a correlation between prosodic probability and speech processing strategy.

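As an illustration of the kind of distributional count reported above (not the study's own scripts), the sketch below tallies hypothetical AP tonal-pattern labels of the sort a prosodically annotated corpus would provide:

    from collections import Counter

    # Hypothetical AP tonal-pattern labels; the real study analyzed 414
    # sentences from the Read speech and Radio corpora.
    ap_patterns = ["LH", "LHH", "LHLH", "LH", "HH", "HHLH", "LHH", "LHL"]

    freq = Counter(ap_patterns)
    rising_initial = sum(n for p, n in freq.items() if p.startswith("LH"))
    final_h = sum(n for p, n in freq.items() if p.endswith("H"))

    print(freq.most_common(3))
    print(f"AP-initial rising (LH): {rising_initial / len(ap_patterns):.0%}")
    print(f"AP-final H tone:        {final_h / len(ap_patterns):.0%}")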

A Study on the Phonemic Analysis for Korean Speech Segmentation (한국어 음소분리에 관한 연구)

  • Lee, Sou-Kil;Song, Jeong-Young
    • The Journal of the Acoustical Society of Korea / v.23 no.4E / pp.134-139 / 2004
  • Accurate segmentation is essential in speech recognition, for both isolated words and continuous utterances. Techniques are being developed to distinguish voiced from unvoiced sounds and plosives from fricatives, but no method for accurate phoneme recognition has yet been firmly established. In this study we therefore analyze Korean using the classification of 'Hunminjeongeum' together with contemporary phonetics; from frequency bands, Mel bands, and Mel cepstra we extract salient features of the phonemes, segment speech into phoneme units, and normalize them. Through this analysis and verification, we aim to build a phonemic segmentation system that can be applied to both isolated words and continuous utterances.
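
The Mel-band and Mel-cepstrum features mentioned above are typically computed as MFCCs; the sketch below (assuming the librosa package, a 16 kHz mono WAV file, and a crude boundary heuristic, none of which come from the paper) shows how frame-level cepstral change can expose candidate phoneme boundaries:

    import numpy as np
    import librosa

    # Hypothetical input file; the paper's data and exact features differ.
    signal, sr = librosa.load("korean_utterance.wav", sr=16000)

    # 13 Mel-frequency cepstral coefficients per 10 ms frame
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13,
                                n_fft=400, hop_length=160)

    # Spectral change between adjacent frames; large jumps suggest
    # candidate phoneme boundaries (a stand-in for the paper's method).
    delta = np.linalg.norm(np.diff(mfcc, axis=1), axis=0)
    boundaries = np.where(delta > delta.mean() + 2 * delta.std())[0]
    print("candidate boundary frames:", boundaries)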

Segmentation of region strings using connection-characteristic function (연결특성함수를 이용한 문서화상에서의 영역 분리와 문자열 추출)

  • 김석태;이대원;박찬용;남궁재찬
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.11 / pp.2531-2542 / 1997
  • This paper describes a method for region segmentation and string extraction in documents that mix text, graphics, and picture images, based on the structural characteristics of connected components. Non-text regions are segmented using connection-characteristic functions built from those structural characteristics. For string extraction, we first form basic unit regions whose vertical and horizontal lengths are 1/4 of the average size of the connected components. Second, basic unit regions whose connection-intensity values are below a given threshold are merged with one another, forming word blocks. Third, word blocks with similar angles are linked to create initial strings. Finally, whole strings are generated by merging the remaining word blocks whose angles are undecided, provided their height and position are similar to those of the initial strings. The method can extract strings that are not horizontal as well as strings of various character sizes. Computer experiments with documents of different styles demonstrate the feasibility of the method.

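The connected-component pass that underlies the basic unit regions can be sketched with scipy.ndimage (a toy illustration, not the authors' code; the real method goes on to merge regions and estimate block angles):

    import numpy as np
    from scipy import ndimage

    # Toy binary document image: 1 = ink
    img = np.zeros((8, 12), dtype=int)
    img[1:3, 1:4] = 1         # component 1
    img[1:3, 6:10] = 1        # component 2
    img[5:7, 2:9] = 1         # component 3

    labels, n = ndimage.label(img)         # label connected components
    boxes = ndimage.find_objects(labels)   # one bounding slice pair each

    heights = [b[0].stop - b[0].start for b in boxes]
    widths = [b[1].stop - b[1].start for b in boxes]
    avg_size = (np.mean(heights) + np.mean(widths)) / 2
    print(f"{n} components, average component size {avg_size:.1f} px")
    print("basic-unit-region side (1/4 of average):", avg_size / 4)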

Ambiguity Resolution in Chinese Word Segmentation

  • Maosong, Sun;T'sou, Benjamin-K.
    • Proceedings of the Korean Society for Language and Information Conference / 1995.02a / pp.121-126 / 1995
  • This paper proposes a new method for Chinese word segmentation named Conditional F&BMM (Forward and Backward Maximal Matching), which incorporates both bigram statistics (i.e., mutual information and the t-test difference between Chinese characters) and linguistic rules for ambiguity resolution. The key characteristics of this model are the use of (i) statistics that can be derived automatically from any raw corpus and (ii) a rule base for disambiguation, built up in a systematic way with consistency and controlled size.

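A minimal sketch of the forward/backward maximal-matching idea (toy lexicon and sentence; not the paper's conditional model): where the two passes disagree, an ambiguity has been detected and would be resolved by the bigram statistics and rules described above.

    def fmm(text, lexicon, max_len=4):
        """Forward maximal matching: repeatedly take the longest lexicon
        word starting at the current position (single chars as fallback)."""
        out, i = [], 0
        while i < len(text):
            for k in range(min(max_len, len(text) - i), 0, -1):
                if text[i:i + k] in lexicon or k == 1:
                    out.append(text[i:i + k])
                    i += k
                    break
        return out

    def bmm(text, lexicon, max_len=4):
        """Backward maximal matching: the same idea, scanning from the right."""
        out, j = [], len(text)
        while j > 0:
            for k in range(min(max_len, j), 0, -1):
                if text[j - k:j] in lexicon or k == 1:
                    out.insert(0, text[j - k:j])
                    j -= k
                    break
        return out

    lexicon = {"研究", "研究生", "生命", "命", "起源"}     # toy lexicon
    sentence = "研究生命起源"
    f, b = fmm(sentence, lexicon), bmm(sentence, lexicon)
    print(f, b)                  # ['研究生', '命', '起源'] vs ['研究', '生命', '起源']
    print("ambiguous:", f != b)  # disagreement marks a segmentation ambiguity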

Typographical Analyses and Classes in Optical Character Recognition

  • Jung, Min-Chul
    • Journal of the Korea Academia-Industrial cooperation Society / v.5 no.1 / pp.21-25 / 2004
  • This paper presents typographical analyses and classes. Typographical analysis is an indispensable tool for machine-printed character recognition in English and a preliminary step for character segmentation in OCR. The paper is divided into two parts. In the first part, word typographical classes are defined for words by word typographical analysis. In the second part, character typographical classes are defined for connected components by character typographical analysis; these character typographical classes are then used in character segmentation.

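One common way such word typographical classes are defined (an assumption on my part; the abstract does not list the paper's exact classes) is by whether a word's ink rises above the x-line or drops below the baseline:

    def word_typo_class(top, bottom, xline, baseline, tol=1):
        """Classify a word bounding box (y grows downward) against the
        text line's x-line and baseline. Hypothetical class names."""
        has_ascender = top < xline - tol          # ink above the x-line, as in "the"
        has_descender = bottom > baseline + tol   # ink below the baseline, as in "pay"
        if has_ascender and has_descender:
            return "full"
        if has_ascender:
            return "ascender"
        if has_descender:
            return "descender"
        return "short"                            # stays between the two lines, as in "sun"

    # Hypothetical line geometry: x-line at y=10, baseline at y=20
    print(word_typo_class(top=4, bottom=20, xline=10, baseline=20))   # ascender
    print(word_typo_class(top=10, bottom=26, xline=10, baseline=20))  # descender
    print(word_typo_class(top=10, bottom=20, xline=10, baseline=20))  # short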

Isolated Words Recognition using K-means iteration without Initialization (초기화하지 않은 K-means iteration을 이용한 고립단어 인식)

  • Kim, Jin-Young;Sung, Keong-Mo
    • Proceedings of the KIEE Conference / 1988.07a / pp.7-9 / 1988
  • The K-means iteration method is generally used to create templates in speaker-independent isolated-word recognition systems. In this paper, a method for determining the initial centers is proposed, based on sorting and trace segmentation: all tokens are sorted and divided by trace segmentation, and the resulting segments decide the initial centers. The performance of the method is evaluated on isolated-word recognition of Korean digits; the highest recognition rate is 97.6%.

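A rough sketch of the idea as described above (sort the training tokens, split them into k equal segments, and average each segment to seed K-means); the sorting key and the exact trace-segmentation procedure are assumptions, since the abstract does not spell them out:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    tokens = rng.normal(size=(60, 12))      # hypothetical feature vectors

    k = 4
    order = np.argsort(np.linalg.norm(tokens, axis=1))   # assumed sorting key
    segments = np.array_split(tokens[order], k)          # k equal segments
    init_centers = np.stack([seg.mean(axis=0) for seg in segments])

    # Seed K-means with the derived centers instead of random initialization.
    km = KMeans(n_clusters=k, init=init_centers, n_init=1).fit(tokens)
    print(km.cluster_centers_.shape)        # (4, 12)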

Bi-directional Maximal Matching Algorithm to Segment Khmer Words in Sentence

  • Mao, Makara;Peng, Sony;Yang, Yixuan;Park, Doo-Soon
    • Journal of Information Processing Systems / v.18 no.4 / pp.549-561 / 2022
  • The Khmer script is the official script of Cambodia and is written from left to right without a space separator, which makes it complicated and calls for further analysis. Without clear standard guidelines, spaces are used inconsistently and informally to separate words in Khmer sentences, so a segmentation method should be worked out together with future Khmer natural language processing (NLP), whose capacity for large-scale language analysis suits this problem, to define appropriate rules for Khmer sentences. An essential component of Khmer language processing is how to split a sentence into a series of words and count the words used; currently, Microsoft Word cannot count Khmer words correctly. This study therefore presents a systematic library that segments Khmer phrases using the bi-directional maximal matching (BiMM) method to address these constraints. The BiMM algorithm combines forward maximal matching (FMM) and backward maximal matching (BMM) to improve word segmentation accuracy, and a trie (digital or prefix tree) data structure enhances the segmentation procedure by finding the children of each word's parent node. The accuracy of BiMM is higher than that of FMM or BMM used independently; moreover, the proposed approach improves the dictionary structure and reduces the number of errors. On 94,807 Khmer words, the method reduces errors by 8.57% compared with the FMM and BMM algorithms.
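
The trie lookup at the heart of this approach can be sketched as follows (a generic longest-prefix match over a toy Latin-script lexicon for readability, not the authors' library; the backward pass mirrors it from the right-hand end):

    class TrieNode:
        __slots__ = ("children", "is_word")
        def __init__(self):
            self.children, self.is_word = {}, False

    def build_trie(words):
        root = TrieNode()
        for w in words:
            node = root
            for ch in w:
                node = node.children.setdefault(ch, TrieNode())
            node.is_word = True
        return root

    def longest_match(text, start, root):
        """Length of the longest lexicon word starting at `start` (0 if none)."""
        node, best, k = root, 0, 0
        for ch in text[start:]:
            node = node.children.get(ch)
            if node is None:
                break
            k += 1
            if node.is_word:
                best = k
        return best

    def fmm_trie(text, root):
        """Forward maximal matching driven by the trie; characters not in
        the lexicon fall back to single-character tokens."""
        out, i = [], 0
        while i < len(text):
            k = longest_match(text, i, root) or 1
            out.append(text[i:i + k])
            i += k
        return out

    trie = build_trie(["word", "segment", "segmentation", "ation"])
    print(fmm_trie("wordsegmentation", trie))   # ['word', 'segmentation']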

Two-Level Clausal Segmentation using Sense Information (의미 정보를 이용한 이단계 단문분할)

  • Park, Hyun-Jae;Woo, Yo-Seop
    • The Transactions of the Korea Information Processing Society / v.7 no.9 / pp.2876-2884 / 2000
  • Clausal segmentation parses Korean sentences by splitting one long sentence into several clauses according to its predicates. Most research so far has been useful for literary sentences, but long sentences increase the complexity of syntactic analysis. This paper therefore proposes two-level clausal segmentation using sense information, designed and implemented to address this problem. Clausal segmentation combined with an understanding of word senses can reduce syntactic and semantic ambiguity, and sense information is needed because ordinary sentences are structurally ambiguous and frequently omit auxiliary words. The Two-Level Clausal Segmentation System (TLCSS) consists of a Complement Selection Process (CSP) and a Noncomplement Expansion Process (NEP). The CSP matches sentence elements against a subcategorization dictionary and a noun thesaurus, yielding the complements and the subcategorization pattern. The NEP then uses syntactic properties and other methods to expand the noncomplements, yielding the segmented sentences. We present a technique for estimating the precision of the system and report clausal segmentation results on a manually sense-tagged corpus of 25,000 entries constructed by the ETRI-KONAN group, on which the system achieves a clausal segmentation precision of 91.8%.

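A toy rendering of the core idea of cutting one long sentence at its predicates (made-up tokens and tags; the actual CSP/NEP stages rely on a subcategorization dictionary and noun thesaurus that cannot be reproduced from the abstract):

    # Hypothetical POS-tagged eojeol of one long Korean sentence;
    # 'V' marks a predicate, the cue used for a clausal cut.
    tagged = [("나는", "N"), ("밥을", "N"), ("먹고", "V"),
              ("학교에", "N"), ("갔다", "V")]

    def segment_clauses(tokens):
        """Group tokens into clauses, closing a clause at each predicate."""
        clauses, current = [], []
        for word, tag in tokens:
            current.append(word)
            if tag == "V":            # a predicate ends the current clause
                clauses.append(current)
                current = []
        if current:                   # trailing non-predicate material
            clauses.append(current)
        return clauses

    print(segment_clauses(tagged))
    # [['나는', '밥을', '먹고'], ['학교에', '갔다']]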

A Comparative study on the Effectiveness of Segmentation Strategies for Korean Word and Sentence Classification tasks (한국어 단어 및 문장 분류 태스크를 위한 분절 전략의 효과성 연구)

  • Kim, Jin-Sung;Kim, Gyeong-min;Son, Jun-young;Park, Jeongbae;Lim, Heui-seok
    • Journal of the Korea Convergence Society / v.12 no.12 / pp.39-47 / 2021
  • Constructing high-quality input features through effective segmentation is essential for improving a language model's sentence comprehension, and improving their quality directly affects downstream task performance. This paper comparatively studies segmentation strategies that reflect the linguistic characteristics of Korean for word and sentence classification. Four segmentation types are defined: eojeol, morpheme, syllable, and subcharacter (subchar), and pre-training is carried out using the RoBERTa model structure. By dividing tasks into a sentence group and a word group, we analyze the tendencies within each group and the differences between the groups. The model with subchar-level segmentation outperforms the other strategies on sentence classification by up to +0.62% on NSMC, +2.38% on KorNLI, and +2.41% on KorSTS, while the model with syllable-level segmentation performs best on word classification by up to +0.7% on NER and +0.61% on SRL, confirming the effectiveness of those schemes.
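
The eojeol, syllable, and subchar granularities can be illustrated directly from a Korean string (morpheme-level segmentation additionally requires a morphological analyzer and is omitted; this is only an illustration of the compared units, not the paper's preprocessing code):

    import unicodedata

    sentence = "한국어 분절 전략"

    eojeol = sentence.split()                          # whitespace-delimited units
    syllables = [ch for ch in sentence if ch != " "]   # Hangul syllable blocks
    # Subcharacter (subchar) level: NFD normalization decomposes each
    # syllable block into its jamo, e.g. 한 -> ㅎ + ㅏ + ㄴ.
    subchars = [j for ch in syllables for j in unicodedata.normalize("NFD", ch)]

    print(eojeol)      # ['한국어', '분절', '전략']
    print(syllables)   # ['한', '국', '어', '분', '절', '전', '략']
    print(len(subchars), "jamo units")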