• Title/Summary/Keyword: 음절체


Korean Word Segmentation and Compound-noun Decomposition Using Markov Chain and Syllable N-gram (마코프 체인 및 음절 N-그램을 이용한 한국어 띄어쓰기 및 복합명사 분리)

  • 권오욱
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.3
    • /
    • pp.274-284
    • /
    • 2002
  • Word segmentation errors occurring in text preprocessing often insert incorrect words into the recognition vocabulary and cause poor language models for Korean large vocabulary continuous speech recognition. We propose an automatic word segmentation algorithm using Markov chains and syllable-based n-gram language models in order to correct word segmentation errors in text corpora. We assume that a sentence is generated from a Markov chain: spaces and non-space characters are generated on self-transitions and other transitions of the Markov chain, respectively. Word segmentation of the sentence is then obtained by finding the maximum likelihood path using syllable n-gram scores. In experiments, the algorithm showed 91.58% word accuracy and 96.69% syllable accuracy for word segmentation of 254 sentences from newspaper columns with all spaces removed. The algorithm improved word accuracy from 91.00% to 96.27% for word segmentation correction at line breaks and yielded a decomposition accuracy of 96.22% for compound-noun decomposition.
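
As a rough illustration of the scoring idea above (not the paper's maximum-likelihood path search), the sketch below picks the spacing of a short syllable sequence that maximizes a syllable-bigram likelihood in which spaces are modeled as a '<sp>' token; the toy bigram table and the '<s>'/'<sp>' tokens are illustrative assumptions.

```python
from itertools import product
import math

def logprob(prev, cur, model):
    # Hypothetical bigram lookup; real scores would come from a syllable n-gram
    # model trained on a large corpus, with '<sp>' treated as the space token.
    return math.log(model.get((prev, cur), 1e-6))

def best_spacing(syllables, model):
    # Brute-force search over all spacings; the paper instead finds the maximum
    # likelihood path through a Markov chain, which scales to long sentences.
    best, best_score = None, float("-inf")
    for spaces in product([False, True], repeat=len(syllables) - 1):
        tokens = ["<s>"]
        for i, syl in enumerate(syllables):
            tokens.append(syl)
            if i < len(spaces) and spaces[i]:
                tokens.append("<sp>")
        score = sum(logprob(p, c, model) for p, c in zip(tokens, tokens[1:]))
        if score > best_score:
            best, best_score = spaces, score
    return "".join(s + (" " if i < len(best) and best[i] else "")
                   for i, s in enumerate(syllables))

# Toy bigram table (illustrative probabilities only).
toy = {("<s>", "밥"): 0.9, ("밥", "을"): 0.8, ("을", "<sp>"): 0.9,
       ("<sp>", "먹"): 0.8, ("먹", "다"): 0.9, ("을", "먹"): 0.1}
print(best_spacing(list("밥을먹다"), toy))   # -> "밥을 먹다"
```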

An Implementation of Hangul Handwriting Correction Application Based on Deep Learning (딥러닝에 의한 한글 필기체 교정 어플 구현)

  • Jae-Hyeong Lee;Min-Young Cho;Jin-soo Kim
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.3
    • /
    • pp.13-22
    • /
    • 2024
  • Currently, with the proliferation of digital devices, the significance of handwritten text in daily life is gradually diminishing. As the use of keyboards and touch screens increases, a decline in Korean handwriting quality is being observed across a broad spectrum of writers, from young students to adults. However, Korean handwriting still remains necessary for many documents, as it retains individual unique features while ensuring readability. To this end, this paper implements an application designed to improve and correct the quality of handwritten Korean script. The implemented application utilizes the CRAFT (Character-Region Awareness For Text Detection) model for handwriting area detection and employs VGG feature extraction as the deep learning model for learning features of the handwritten script. The application presents the reliability of the user's handwritten Korean script on a syllable-by-syllable basis as a recognition rate and also suggests the most similar fonts among candidate fonts. Furthermore, various experiments confirm that the proposed application provides an excellent recognition rate comparable to conventional commercial OCR systems.
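
A hedged sketch of the font-suggestion step only: extract VGG features from a detected syllable crop and from candidate-font renderings of the same syllable, then rank the fonts by cosine similarity. The CRAFT detection step and the app's syllable-level recognition rate are omitted, and the file names below are hypothetical.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained VGG16 convolutional features as a generic feature extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
prep = T.Compose([T.Grayscale(num_output_channels=3), T.Resize((224, 224)),
                  T.ToTensor(), T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def embed(path):
    with torch.no_grad():
        return vgg(prep(Image.open(path)).unsqueeze(0)).flatten(1)

handwriting = embed("syllable_crop.png")              # one detected syllable region (hypothetical file)
fonts = {"NanumMyeongjo": "myeongjo_ga.png",          # candidate-font renderings of the same syllable
         "NanumGothic": "gothic_ga.png"}              # (hypothetical files)
scores = {name: torch.cosine_similarity(handwriting, embed(img)).item()
          for name, img in fonts.items()}
print(max(scores, key=scores.get), scores)            # most similar candidate font
```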

A Study on a Generation of a Syllable Restoration Candidate Set and a Candidate Decrease (음절 복원 후보 집합의 생성과 후보 감소에 관한 연구)

  • 김규식;김경징;이상범
    • Journal of the Korea Computer Industry Society
    • /
    • v.3 no.12
    • /
    • pp.1679-1690
    • /
    • 2002
  • This paper describes the generation of syllable restoration rules for post-processing of speech recognition and the reduction of restoration candidates. To improve the performance of continuous speech recognition that recognizes syllable-unit phonetic values, syllable restoration rules were created that generate written-form restoration candidates from the recognized phonetic values. A method is also presented for reducing the number of candidates in the restoration set by removing rules that generate written forms not used in real text. A restoration candidate set generator was designed and implemented to show that the syllable restoration rules produce proper restoration candidate sets. Tests on standard pronunciation examples and on words randomly extracted from a pronunciation dictionary confirmed that the generated candidate sets included the original written form before pronunciation.
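
A minimal sketch of the two steps described above: expand each recognized phonetic syllable into possible written forms via restoration rules, then prune candidates whose written form is not used in real text. The rule table and lexicon are made-up toy entries, not the paper's rule set.

```python
from itertools import product

# Restoration rules: recognized phonetic syllable -> possible written syllables.
RULES = {"바": ["바", "받", "밧", "밭"], "치": ["치"]}
LEXICON = {"바치", "받치", "밭"}                # attested written forms (toy lexicon)

def candidates(phonetic_syllables):
    # Step 1: expand each recognized syllable by its restoration rules.
    sets = [RULES.get(s, [s]) for s in phonetic_syllables]
    return ["".join(c) for c in product(*sets)]

def prune(cands, lexicon):
    # Step 2: drop candidates whose written form never occurs in real text.
    return [c for c in cands if c in lexicon]

cands = candidates(["바", "치"])                # recognized pronunciation [바치]
print(cands)                                    # ['바치', '받치', '밧치', '밭치']
print(prune(cands, LEXICON))                    # ['바치', '받치']
```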

Control Rules of Synthetical Pauses and Syllable Duration depending on Pronunciation Speed in Korean Speech (발음속도에 따른 한국어의 휴지기 규칙 및 평균음절길이 조절규칙)

  • Kim, Jae-In;Kim, Jin-Young;Lee, Tae-Won
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.1
    • /
    • pp.56-64
    • /
    • 1995
  • In this paper we extracted control rules for synthetic pauses and syllable durations depending on pronunciation speed in Korean speech, from sentences recorded by 18 professional announcers. Pause rules were divided into three categories: pause between sentences (PBS), pause between clauses (PBC), and pause between intonational phrases (PBI). The analysis shows that, comparing slowly spoken sentences with fast spoken sentences, the duration difference between them is mainly due to increases in synthetic pauses, especially PBS and PBC. In addition, the increase in mean syllable duration is small. On the other hand, PBI was not pronounced in the fast spoken sentences; PBI was pronounced only beyond a certain pronunciation speed (PS).
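
A hedged sketch of how such rules could be applied in a synthesis front end: pauses shrink strongly with speaking rate, mean syllable duration shrinks only weakly, and PBI pauses disappear at fast rates. The numeric factors and thresholds below are illustrative assumptions, not the values measured in the paper.

```python
def apply_rate(base_syllable_ms, pauses_ms, rate):
    """rate > 1.0 means faster speech; pauses_ms maps pause type -> base duration (ms)."""
    out = {"syllable_ms": base_syllable_ms / (rate ** 0.3)}   # syllable duration changes weakly
    for kind, dur in pauses_ms.items():
        if kind == "PBI" and rate >= 1.3:
            out[kind] = 0.0             # fast speech: intonational-phrase pauses disappear
        else:
            out[kind] = dur / rate      # sentence/clause pauses scale strongly with rate
    return out

print(apply_rate(180, {"PBS": 600, "PBC": 350, "PBI": 150}, rate=1.5))
```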

IP generating factors and rules of read speech and dialogue in Korean (대화체와 낭독체의 억양구 형성에 관한 연구)

  • Park Jihye
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.285-288
    • /
    • 2002
  • In this paper, utterances were divided into two types, dialogue and read speech, and the characteristics of intonational phrase (IP) formation in each type were examined. The experimental results show that more IPs were formed in read speech when two or more IPs occurred within one sentence and in conjoined sentences. The cases in which more IPs were formed in dialogue were mainly those in which an IP was formed after the subject, and in dialogue utterances there was no case in which two or more IPs were formed within one sentence. Based on these results, we can identify the prosodic characteristic that IP formation is affected not only by the number of syllables but also by sentence structure, and that these two factors apply differently depending on the utterance type.

Generative Chatting Model based on Index-Term Encoding and Syllable Decoding (색인어 인코딩과 음절 디코딩에 기반한 생성 채팅 모델)

  • Kim, JinTae;Kim, Sihyung;Kim, HarkSoo;Lee, Yeonsoo;Choi, Maengsic
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.125-129
    • /
    • 2017
  • A chatting system is a system that converses with a computer using the natural language that people use. Owing to the characteristics of Korean, colloquial expressions often have the same meaning but different surface forms. In this paper, we propose input and output units that allow an attention-mechanism encoder-decoder model to serve as an effective generation model suited to the characteristics of Korean. In experiments with qualitative evaluation and ROUGE and BLEU scores, the proposed index-term input unit outperformed morpheme-unit input, and the system using syllable-unit output produced fewer grammatical errors and more appropriate responses than pseudo-morpheme-unit output.
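
A minimal sketch of the proposed input/output units only; the model itself is any standard attention-based encoder-decoder. Index-term extraction normally uses a Korean morphological analyzer, so the stopword rule below is a crude placeholder, and the '<sp>' boundary token is an assumption for detokenizing syllable output.

```python
def to_index_terms(utterance):
    # Placeholder for index-term extraction (content words); a real system would
    # use a morphological analyzer instead of this stopword filter.
    stopwords = {"은", "는", "이", "가", "을", "를", "요"}
    return [w for w in utterance.split() if w not in stopwords]

def to_syllables(response):
    # Decoder side: syllable-level tokens, with '<sp>' marking word boundaries so
    # generated syllables can be detokenized into a spaced Korean sentence.
    out = []
    for i, word in enumerate(response.split()):
        if i:
            out.append("<sp>")
        out.extend(list(word))
    return out

src = to_index_terms("오늘 날씨 가 어때 요")     # encoder input: index terms
tgt = to_syllables("오늘은 맑고 따뜻해요")       # decoder target: syllables + '<sp>'
print(src, tgt)
```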

Study on the Neural Network for Handwritten Hangul Syllabic Character Recognition (수정된 Neocognitron을 사용한 필기체 한글인식)

  • 김은진;백종현
    • Korean Journal of Cognitive Science
    • /
    • v.3 no.1
    • /
    • pp.61-78
    • /
    • 1991
  • This paper describes the application of a modified Neocognitron model with a backward path to the recognition of Hangul (Korean) syllabic characters. In his original report, Fukushima demonstrated that the Neocognitron can recognize handwritten numeric characters of size 19×19. This version accepts 61×61 images of handwritten Hangul syllabic characters, or parts thereof, entered with a mouse or a scanner. It consists of an input layer and three pairs of Us and Uc layers. The last Uc layer, the recognition layer, consists of 24 planes of 5×5 cells, which indicate the identity of the grapheme receiving attention at a given time and its relative position in the input layer. The network was trained on 10 simple vowel graphemes and 14 simple consonant graphemes and their spatial features; patterns that were not easily learned were trained more extensively. The trained network, which can classify individual graphemes under deformation, noise, size variance, transformation, or rotation, was then used to recognize Korean syllabic characters, using its selective attention mechanism for image segmentation within a syllabic character. In initial sample tests, the model correctly recognized up to 79% of the various test patterns of handwritten Korean syllabic characters. The results show the Neocognitron to be a powerful model for recognizing deformed handwritten characters from a large character set by segmenting its input images into recognizable parts. The same approach may be applied to the recognition of Chinese characters, which are much more complex in both structure and graphemes, but processing time appears to be the bottleneck before it can be implemented; special hardware such as neural chips appears to be an essential prerequisite for practical use of the model. Further work is required to enable the model to recognize Korean syllabic characters containing complex vowels and complex consonants, for which correct recognition of the neighboring area between two simple graphemes becomes more critical.
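
A toy NumPy sketch of the Neocognitron's S-cell/C-cell alternation (template matching followed by local max pooling for shift tolerance). The layer sizes, kernels, and thresholds are illustrative and do not reproduce the 61×61-input, 24-plane network described above.

```python
import numpy as np
from scipy.signal import correlate2d

def s_layer(image, templates, threshold=0.5):
    # S-cells: feature extraction by template matching with a selectivity threshold.
    maps = []
    for t in templates:
        resp = correlate2d(image, t, mode="same")
        resp = np.where(resp >= threshold * t.sum(), resp, 0.0)   # suppress weak matches
        maps.append(resp)
    return np.stack(maps)

def c_layer(maps, pool=2):
    # C-cells: local max pooling, giving tolerance to small shifts and deformations.
    n, h, w = maps.shape
    h2, w2 = h // pool, w // pool
    maps = maps[:, :h2 * pool, :w2 * pool]
    return maps.reshape(n, h2, pool, w2, pool).max(axis=(2, 4))

# One S/C pair over a toy 8x8 image with two small stroke detectors.
image = np.zeros((8, 8)); image[2, 1:6] = 1.0          # a horizontal stroke
templates = [np.eye(3), np.ones((1, 3))]               # diagonal and horizontal detectors
pooled = c_layer(s_layer(image, templates))
print(pooled.shape)                                    # (2, 4, 4)
```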

The Factors In Reading Hangul Text : font width-to-height ratio of a letter, line length (글자꼴, 글줄길이, 글줄모양과 한글의 가독성)

  • Lee, Soo-Jeong;Jung, Woo-Hyun;Chung, Chan-Sup
    • Annual Conference on Human and Language Technology
    • /
    • 1993.10a
    • /
    • pp.193-205
    • /
    • 1993
  • The effects of Hangul typeface, width-to-height ratio, line length, and line-ending treatment on readability were measured. The typefaces were Myeongjo, Gothic, and Saemmul, and three width-to-height ratios were used: 1:1, 1:2, and 2:1. Line lengths were 60 mm and 120 mm, and line endings were handled in three ways: breaking at syllable boundaries, breaking at word (eojeol) boundaries with adjusted inter-word spacing, and breaking at word boundaries without adjustment. The results showed that Myeongjo and Gothic were more readable than Saemmul, and that letters with width-to-height ratios of 1:1 or 1:2 were more readable than letters with a ratio of 2:1. These results suggest that in Hangul processing the syllable block, rather than the jamo, serves as the important visual unit, and that the number of letters sampled in a single fixation can affect readability. Readability was better at a line length of 120 mm, and the line-ending treatment did not affect readability.

Automatic Word Spacing of the Korean Sentences by Using End-to-End Deep Neural Network (종단 간 심층 신경망을 이용한 한국어 문장 자동 띄어쓰기)

  • Lee, Hyun Young;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.11
    • /
    • pp.441-448
    • /
    • 2019
  • Previous researches on automatic spacing of Korean sentences has been researched to correct spacing errors by using n-gram based statistical techniques or morpheme analyzer to insert blanks in the word boundary. In this paper, we propose an end-to-end automatic word spacing by using deep neural network. Automatic word spacing problem could be defined as a tag classification problem in unit of syllable other than word. For contextual representation between syllables, Bi-LSTM encodes the dependency relationship between syllables into a fixed-length vector of continuous vector space using forward and backward LSTM cell. In order to conduct automatic word spacing of Korean sentences, after a fixed-length contextual vector by Bi-LSTM is classified into auto-spacing tag(B or I), the blank is inserted in the front of B tag. For tag classification method, we compose three types of classification neural networks. One is feedforward neural network, another is neural network language model and the other is linear-chain CRF. To compare our models, we measure the performance of automatic word spacing depending on the three of classification networks. linear-chain CRF of them used as classification neural network shows better performance than other models. We used KCC150 corpus as a training and testing data.