• Title/Summary/Keyword: vowel recognition

137 search results

Korean Character Recognition with Tree Structure Using Representative Images (대표영상을 이용한 나무구조의 한글문자 인식)

  • 김정우;정수길;조웅호;김성용;김수중
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.4
    • /
    • pp.18-29
    • /
    • 1994
  • For efficient recognition of the Korean alphabet, we propose a tree-structure algorithm based on the K-tuple NRF-SDF, using representative images as training images. The representative images consist of ECP-SDF images of several consonants or vowels. To reduce sidelobe effects in the output correlation plane, we used the representative images as training images and obtained the elements of the vector inner-product matrix from the peak values of the AMPOF correlations of the training images with one another. The proposed algorithm consists of three main steps, each containing several substeps. In the filter synthesis of each step, representative images are used as training images in the first and second main steps, and individual consonants or vowels are used as training images in the third main step. The performance of the algorithm is demonstrated by computer simulation and optical experiments.

  • PDF
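The entry above walks a recognition tree whose upper levels compare the input against composite "representative" images and whose final level compares against individual consonants or vowels. The NRF-SDF and AMPOF filters themselves are not reproduced here; the following Python sketch only illustrates the general idea of hierarchical matching with averaged representative templates and normalized correlation peaks, with the two-level tree and the correlation measure chosen purely for illustration.

```python
import numpy as np

def corr_peak(image, template):
    """Zero-mean normalized correlation score; a crude stand-in for the
    SDF/AMPOF correlation peaks used in the paper (images assumed registered)."""
    a = (image - image.mean()) / (image.std() + 1e-9)
    b = (template - template.mean()) / (template.std() + 1e-9)
    return float((a * b).mean())

def representative(templates):
    """Composite 'representative' image of a group: pixel-wise mean of its members."""
    return np.mean(np.stack(templates), axis=0)

def tree_classify(image, groups):
    """groups: {group_name: {class_name: template}}.
    Step 1: pick the group whose representative image matches best.
    Step 2: pick the best individual template inside that group."""
    best_group = max(groups, key=lambda g: corr_peak(image, representative(list(groups[g].values()))))
    best_class = max(groups[best_group], key=lambda c: corr_peak(image, groups[best_group][c]))
    return best_group, best_class
```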

Recognition of Handprinted Hangul Line using Vowel Pre-Recognition Method (모음 우선 인식에 의한 줄단위 필기체 한글의 인식)

  • Ham, Kyung-Soo
    • Annual Conference on Human and Language Technology
    • /
    • 1994.11a
    • /
    • pp.195-200
    • /
    • 1994
  • This paper presents a method for recognizing handwritten Hangul written freely, without lines separating the characters. We present a new method for extracting character skeletons from a line-unit Hangul input image, together with a scheme that represents the contact points and end points between skeleton lines as graph nodes and the strokes as graph edges. Since Hangul characters are composed around a vowel, in the graph representation of a line of Hangul the vowel is derived and recognized first, starting from the nodes that carry the starting position and attributes of a vowel; then the touching between adjacent characters and between graphemes is separated, the initial and final consonants are recognized, and characters are recognized one by one from left to right. The recognition experiments on freely handwritten Hangul used image data of 50 Korean addresses written by 25 different writers, and the Hangul character recognition rate was 89%.

  • PDF
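The entry above extracts the skeleton of a handwritten line and turns the contact (junction) points and end points of the skeleton into graph nodes, with strokes as edges, so that vowels can be located first. A minimal sketch of that node-detection step, assuming scikit-image for skeletonization; the 8-neighbor counting rule below is a common heuristic, not necessarily the paper's exact method.

```python
import numpy as np
from skimage.morphology import skeletonize

def skeleton_nodes(binary_image):
    """Return the end points and junction points of the stroke skeleton.
    A skeleton pixel with exactly one 8-neighbor is treated as an end point;
    one with three or more 8-neighbors as a junction (contact) point."""
    skel = skeletonize(binary_image.astype(bool))
    padded = np.pad(skel, 1)
    ends, junctions = [], []
    for y, x in zip(*np.nonzero(skel)):
        neighbors = padded[y:y + 3, x:x + 3].sum() - 1   # 8-neighborhood minus the pixel itself
        if neighbors == 1:
            ends.append((y, x))
        elif neighbors >= 3:
            junctions.append((y, x))
    return ends, junctions
```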

Design and Manufacture of a Device for the Recognition of Long Vowels (장모음 인식장치 설계 제작)

  • 구용회
    • Journal of the Korean Institute of Telematics and Electronics T
    • /
    • v.35T no.3
    • /
    • pp.9-14
    • /
    • 1998
  • Speech recognition of long vowels is carried out by electric circuits. A level compressor transforms the speech waveform into serial pulses, and the obtained pulses carry the information needed to distinguish the vowels. The pulses are sampled by a register that picks up a series of serial signals, taking one pitch period of a vowel as the unit. The timing-control pulses, such as the sampling pulses, are generated using peak pulses in the speech wave. The parallel data in the register are assigned a phonetic symbol by a decision-making circuit that carries out IF-THEN rules.

  • PDF
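The device above is built from circuits: a level compressor turns the speech wave into serial pulses, a register samples one pitch period of those pulses, and a decision circuit applies IF-THEN rules to assign the phonetic symbol. The sketch below is only a loose software analogue of that signal path; the threshold, the pulse features, and the rules are invented placeholders rather than the paper's circuit design.

```python
import numpy as np

def to_pulses(frame, threshold=0.0):
    """Level-compressor analogue: binarize one pitch period of the waveform into serial pulses."""
    return (np.asarray(frame) > threshold).astype(int)

def pulse_features(pulses):
    """Count the pulse groups (runs of ones) in the pitch period and their mean width."""
    changes = np.diff(np.concatenate(([0], pulses, [0])))
    starts = np.nonzero(changes == 1)[0]
    stops = np.nonzero(changes == -1)[0]
    widths = stops - starts
    mean_width = float(widths.mean()) if widths.size else 0.0
    return len(widths), mean_width

def decide_vowel(n_groups, mean_width):
    """IF-THEN decision analogue; these rules are placeholders, not the paper's."""
    if n_groups <= 2 and mean_width > 20:
        return "a"        # few, wide pulse groups
    if n_groups >= 4:
        return "i"        # many narrow pulse groups
    return "u"
```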

Language Model based on VCCV and Test of Smoothing Techniques for Sentence Speech Recognition (문장음성인식을 위한 VCCV 기반의 언어모델과 Smoothing 기법 평가)

  • Park, Seon-Hee;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.11B no.2
    • /
    • pp.241-246
    • /
    • 2004
  • In this paper, we propose VCCV units as the processing unit of a language model and compare them with clauses and morphemes, the existing processing units. Clauses and morphemes have large vocabularies and high perplexity, whereas VCCV units have low perplexity because of their small lexicon and limited vocabulary. Constructing a language model also raises the issue of smoothing: smoothing techniques are used to estimate probabilities better when there is insufficient data to estimate them accurately. We built language models of morphemes, clauses, and VCCV units and calculated their perplexities; the perplexity of the VCCV units is lower than that of the morpheme and clause units. We then constructed N-grams of the low-perplexity VCCV units and tested the language model with Katz, absolute, and modified Kneser-Ney smoothing, among others. The experimental results show that modified Kneser-Ney smoothing is the most suitable smoothing technique for VCCV units.
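The entry above builds n-gram language models over different units and compares smoothing techniques by perplexity. A minimal sketch of a bigram model with absolute discounting and a perplexity computation follows; the discount value and the add-one floor on the unigram backoff are assumptions, and Katz or modified Kneser-Ney smoothing would replace the discounting step.

```python
import math
from collections import Counter

def bigram_model(tokens, discount=0.75):
    """Bigram probabilities with absolute discounting, backing off to unigrams."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens[:-1], tokens[1:]))
    vocab_size = len(unigrams)

    def prob(w1, w2):
        # Add-one on the unigram backoff avoids zero probabilities for unseen words.
        uni = (unigrams[w2] + 1) / (len(tokens) + vocab_size)
        if unigrams[w1] == 0:
            return uni
        count = bigrams[(w1, w2)]
        n_continuations = sum(1 for b in bigrams if b[0] == w1)   # distinct followers of w1
        backoff_mass = discount * n_continuations / unigrams[w1]
        return max(count - discount, 0) / unigrams[w1] + backoff_mass * uni

    return prob

def perplexity(prob, tokens):
    """Perplexity of a token sequence under a bigram probability function."""
    log_p = sum(math.log(prob(w1, w2)) for w1, w2 in zip(tokens[:-1], tokens[1:]))
    return math.exp(-log_p / (len(tokens) - 1))
```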

A Study on the Spectrum Variation of Korean Speech (한국어 음성의 스펙트럼 변화에 관한 연구)

  • Lee Sou-Kil;Song Jeong-Young
    • Journal of Internet Computing and Services
    • /
    • v.6 no.6
    • /
    • pp.179-186
    • /
    • 2005
  • We can extract and analyze the spectrum of speech by exploiting the frequency characteristics that speech carries. In the speech spectrum, monophthongs are considered stable, but when consonants meet vowels within a syllable or a word, many changes occur; this is the biggest obstacle to phoneme-level speech recognition. In this study, using the Mel cepstrum and Mel bands, which take frequency bands and auditory information into account, we analyze the spectrum of every consonant and vowel, along with the spectral changes that reflect auditory features, and organize the results into a system. Finally, we present a basis for segmenting speech into phoneme units.

  • PDF
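The entry above analyzes consonant-vowel spectral change with the Mel cepstrum and Mel bands. A brief sketch of extracting such features with librosa follows; the file name, frame parameters, and the frame-to-frame distance used as a rough "spectral change" measure are assumptions for illustration, not the paper's procedure.

```python
import numpy as np
import librosa

# Load speech and compute Mel-frequency cepstral coefficients frame by frame.
y, sr = librosa.load("speech.wav", sr=16000)   # hypothetical input file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=400, hop_length=160)

# A crude measure of spectral change: distance between adjacent MFCC frames.
# Peaks in this curve tend to fall near consonant-vowel transitions.
change = np.linalg.norm(np.diff(mfcc, axis=1), axis=0)
candidate_boundaries = np.nonzero(change > change.mean() + change.std())[0]
```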

Korean ESL Learners' Perception of English Segments: a Cochlear Implant Simulation Study (인공와우 시뮬레이션에서 나타난 건청인 영어학습자의 영어 말소리 지각)

  • Yim, Ae-Ri;Kim, Dahee;Rhee, Seok-Chae
    • Phonetics and Speech Sciences
    • /
    • v.6 no.3
    • /
    • pp.91-99
    • /
    • 2014
  • Although it is well documented that cochlear implant patients experience hearing difficulties when processing their first language, very little is known about whether, and to what extent, cochlear implant patients recognize segments in a second language. This preliminary study examines how Korean learners of English identify English segments under normal-hearing and cochlear-implant simulation conditions. Participants heard English vowels and consonants in three conditions: a normal hearing condition, 12-channel noise vocoding with 0 mm spectral shift, and 12-channel noise vocoding with 3 mm spectral shift. Results confirmed that nonnative listeners could also retrieve spectral information from the vocoded speech signal, as they recognized vowel features fairly accurately despite the vocoding. In contrast, the intelligibility of the manner and place features of consonants was significantly decreased by vocoding. In addition, we found that spectral shift affected listeners' vowel recognition, probably because information regarding F1 is diminished by spectral shifting. The results suggest that cochlear implant patients and normal-hearing second-language learners would experience different patterns of listening errors when processing their second language(s).
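The study above simulates cochlear implant hearing with 12-channel noise vocoding, optionally with a spectral shift. A hedged sketch of a basic noise vocoder with scipy follows; the band edges and envelope cutoff are assumptions, and the spectral shift used in the study is omitted for brevity.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, lo, hi, sr, order=4):
    b, a = butter(order, [lo / (sr / 2), hi / (sr / 2)], btype="band")
    return filtfilt(b, a, x)

def lowpass(x, cutoff, sr, order=4):
    b, a = butter(order, cutoff / (sr / 2), btype="low")
    return filtfilt(b, a, x)

def noise_vocode(x, sr, n_channels=12, f_lo=100.0, f_hi=7000.0, env_cutoff=160.0):
    """Basic noise vocoder: split speech into log-spaced bands, extract each band's
    amplitude envelope, and use it to modulate band-limited noise in the same band."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    noise = np.random.randn(len(x))
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        envelope = lowpass(np.abs(bandpass(x, lo, hi, sr)), env_cutoff, sr)
        out += envelope * bandpass(noise, lo, hi, sr)
    return out / (np.max(np.abs(out)) + 1e-9)
```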

A Recognition Algorithm of Hangeul Alphabet Using 2-D Digital filtering (2차원 디지털 필터링에 의한 한글 자모의 인식 알고리즘)

  • O, Gil-Nam;Sin, Seong-Ho;Jin, Yong-Ok
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.21 no.3
    • /
    • pp.55-59
    • /
    • 1984
  • This paper describes a method of Hangeul recognition using 2-D digital filtering. 170 patterns, classified by the positions of the initial sound (consonant), medial sound (vowel), and final sound (consonant) in the 1,659 characters, were established, and a model was formed for each pattern using 2-D digital filtering. Based on these models, we propose an algorithm that can recognize composed Korean characters by separating the patterns from them with the superposition principle. In simulation, a 100% recognition rate is obtained for printed letters.

  • PDF
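The entry above models each of the 170 positional patterns with 2-D digital filtering and recognizes composed characters by superposition of the matched patterns. A simplified sketch of the matching step follows, scoring each pattern model by the peak of its 2-D correlation with the character image; the use of scipy's correlate2d and zero-mean templates is an illustrative choice, not the paper's filter design.

```python
import numpy as np
from scipy.signal import correlate2d

def best_pattern(char_image, pattern_models):
    """pattern_models: {pattern_name: 2-D template array}.
    Score each template by the peak of its 2-D correlation with the character image
    and return the best-matching pattern name."""
    scores = {}
    for name, template in pattern_models.items():
        t = template - template.mean()                      # zero-mean 2-D filter
        scores[name] = correlate2d(char_image, t, mode="valid").max()
    return max(scores, key=scores.get)
```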

A study on the phoneme recognition using radial basis function network (RBFN을 이용한 음소인식에 관한 연구)

  • 김주성;김수훈;허강인
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.5
    • /
    • pp.1026-1035
    • /
    • 1997
  • In this paper, we study phoneme recognition using GPFN and PNN, which are kinds of RBFN. The structure of an RBFN is similar to that of a feedforward network, but differs in the choice of activation function, reference vectors, and learning algorithm in the hidden layer. In particular, the sigmoid function of the PNN is replaced by an exponential function associated with each category, and the overall computational performance is high because the PNN performs pattern classification without training. In phoneme recognition experiments with 5 vowels and 12 consonants, the recognition rates of GPFN and PNN, which as RBFNs reflect the statistical characteristics of speech, were higher than those of an MLP both for test data and for data quantized by VQ and LVQ.

  • PDF
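The entry above uses a probabilistic neural network (PNN), whose hidden layer stores the training vectors themselves and replaces the sigmoid with an exponential (Gaussian) kernel summed per category, so classification needs no iterative training. A compact sketch follows; the feature vectors and the smoothing parameter sigma are assumptions for illustration.

```python
import numpy as np

class PNN:
    """Probabilistic neural network: one Gaussian kernel per stored training vector,
    averaged per class (a Parzen-window density), with no iterative training."""

    def __init__(self, sigma=1.0):
        self.sigma = sigma
        self.patterns = {}          # class label -> array of training vectors

    def fit(self, X, y):
        for label in set(y):
            self.patterns[label] = np.asarray([x for x, l in zip(X, y) if l == label], dtype=float)
        return self

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        scores = {}
        for label, P in self.patterns.items():
            sq_dist = np.sum((P - x) ** 2, axis=1)          # squared distances to the exemplars
            scores[label] = np.mean(np.exp(-sq_dist / (2 * self.sigma ** 2)))
        return max(scores, key=scores.get)
```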

Speaker Adapted Real-time Dialogue Speech Recognition Considering Korean Vocal Sound System (한국어 음운체계를 고려한 화자적응 실시간 단모음인식에 관한 연구)

  • Hwang, Seon-Min;Yun, Han-Kyung;Song, Bok-Hee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.6 no.4
    • /
    • pp.201-207
    • /
    • 2013
  • Voice recognition technology has developed and has been actively applied to various information devices such as smartphones and car navigation systems, but the basic research on speech recognition is based on results for English. Producing lip sync generally requires tedious hand work by animators, and this seriously affects the production cost and development period needed to obtain high-quality lip animation. In this research, a real-time automatic lip sync algorithm for virtual characters in digital content is studied, taking the Korean vocal sound system into account. The suggested algorithm contributes to producing natural lip animation at lower production cost and with a shorter development period.
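The entry above drives lip animation from recognized Korean monophthongs in real time. The sketch below only illustrates the final mapping stage, from a recognized vowel to a set of mouth-shape (viseme) parameters; the vowel set, the F1/F2 targets, and the shape values are invented placeholders, not the paper's speaker-adaptation method.

```python
import numpy as np

# Rough F1/F2 targets (Hz) for a few Korean monophthongs; placeholder values only.
VOWEL_FORMANTS = {"ㅏ": (800, 1300), "ㅣ": (300, 2300), "ㅜ": (350, 800),
                  "ㅔ": (500, 1900), "ㅗ": (450, 900)}

# Mouth-shape parameters per vowel: (jaw openness, lip rounding), both in [0, 1].
VISEMES = {"ㅏ": (0.9, 0.1), "ㅣ": (0.2, 0.0), "ㅜ": (0.3, 0.9),
           "ㅔ": (0.5, 0.1), "ㅗ": (0.6, 0.8)}

def vowel_to_viseme(f1, f2):
    """Pick the nearest vowel in (F1, F2) space and return its mouth-shape parameters."""
    labels = list(VOWEL_FORMANTS)
    targets = np.array([VOWEL_FORMANTS[v] for v in labels], dtype=float)
    nearest = labels[int(np.argmin(np.sum((targets - np.array([f1, f2])) ** 2, axis=1)))]
    return nearest, VISEMES[nearest]
```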

Sentence design for speech recognition database

  • Zu Yiqing
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.472-472
    • /
    • 1996
  • The material of a database for speech recognition should include as many phonetic phenomena as possible. At the same time, such material should be phonetically compact with low redundancy [1, 2]. The phonetic phenomena of continuous speech are the key problem in speech recognition. This paper describes the processing of a set of sentences collected from the 1993 and 1994 database of the "People's Daily" (a Chinese newspaper), which consists of news, politics, economics, arts, sports, etc. Both phonetic phenomena and sentence patterns are included in those sentences. In continuous speech, phonemes always appear in the form of allophones, which results in co-articulatory effects. The task of designing a speech database should therefore be concerned with both intra-syllabic and inter-syllabic allophone structures. In our experiments, there are 404 syllables, 415 inter-syllabic diphones, 3050 merged inter-syllabic triphones, and 2161 merged final-initial structures in read speech. Statistics on the "People's Daily" database give an evaluation of all of the possible phonetic structures. In this sentence set, we first consider the phonetic balance among syllables, inter-syllabic diphones, inter-syllabic triphones, and semi-syllables with their junctures. The syllabic balance covers intra-syllabic phenomena such as phonemes, initial/final, and consonant/vowel; the rest describes the inter-syllabic juncture. The 1,560 sentences cover 96% of the syllables without tones (the absent syllables are used only in spoken language), 100% of the inter-syllabic diphones, and 67% of the inter-syllabic triphones (87% of which appear in the People's Daily). Roughly 17 kinds of sentence patterns appear in our sentence set. By taking the transitions between syllables into account, Chinese speech recognition systems have achieved significantly higher recognition rates [3, 4]. The sentence-collection process is: [People's Daily database] -> [sentence segmentation] -> [word-group segmentation] -> [conversion of the text into Pinyin] -> [statistics of phonetic phenomena & selection of useful paragraphs] -> [manual modification of the selected sentences] -> [phonetically compact sentence set].

  • PDF
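The sentence-design entry above selects a phonetically compact sentence set by tracking syllable, diphone, and triphone coverage. A small sketch of the greedy, set-cover style selection such designs typically use follows; the unit extraction is reduced to inter-syllabic diphones here, and the paper's fuller coverage statistics are not reproduced.

```python
def diphones(syllables):
    """Inter-syllabic diphones of a sentence given as a sequence of syllables."""
    return set(zip(syllables[:-1], syllables[1:]))

def greedy_select(sentences, target_units):
    """Repeatedly pick the sentence that covers the most not-yet-covered units."""
    remaining = set(target_units)
    selected = []
    while remaining:
        best = max(sentences, key=lambda s: len(diphones(s) & remaining), default=None)
        if best is None or not (diphones(best) & remaining):
            break                       # no remaining sentence improves coverage
        selected.append(best)
        remaining -= diphones(best)
    return selected
```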