• Title/Summary/Keyword: Korean consonants

Hierarchical Hidden Markov Model for Finger Language Recognition (지화 인식을 위한 계층적 은닉 마코프 모델)

  • Kwon, Jae-Hong;Kim, Tae-Yong
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.9
    • /
    • pp.77-85
    • /
    • 2015
  • Finger language (fingerspelling) is the part of sign language that expresses vowels and consonants with hand gestures. Korean fingerspelling has 31 gestures, and accurate recognition of each requires many learning models; with a large set of models, search becomes time-consuming, so a real-time recognition system must concentrate on reducing the search space. To address this problem, this paper suggests a hierarchical HMM structure that reduces the search space effectively without decreasing the recognition rate. The Korean fingerspelling gestures are divided into 3 categories according to the direction of the wrist, and a model is searched only within the matching category. This pre-classification distinguishes similar gestures and keeps the search space manageable, so the proposed method can be applied to a real-time recognition system. Experimental results demonstrate that the proposed method is about three times faster than the general HMM recognition method.
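The category-then-model search the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gesture names, category labels, and toy HMM parameters are all hypothetical, and scoring uses the plain forward algorithm over a discrete observation alphabet.

```python
import math

def forward_log_prob(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (initial probs pi, transition matrix A, emission matrix B)."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
    return math.log(sum(alpha))

def recognize(obs, wrist_direction, models_by_category):
    # Score only the models in the category matching the wrist direction
    # (3 categories in the paper) instead of the full model set.
    candidates = models_by_category[wrist_direction]
    return max(candidates,
               key=lambda name: forward_log_prob(obs, *candidates[name]))

# Two toy 2-state HMMs over a binary observation alphabet.
hmm_a = ([0.6, 0.4], [[0.7, 0.3], [0.4, 0.6]], [[0.9, 0.1], [0.2, 0.8]])
hmm_b = ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.1, 0.9], [0.8, 0.2]])
models = {"up": {"giyeok": hmm_a, "nieun": hmm_b}, "down": {}, "side": {}}

assert recognize([0, 0, 1, 0], "up", models) == "giyeok"
```

With C categories of roughly equal size, each query scores only about 1/C of the models, which is the source of the reported speed-up.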

A Study on Processing of Speech Recognition Korean Words (한글 단어의 음성 인식 처리에 관한 연구)

  • Nam, Kihun
    • The Journal of the Convergence on Culture Technology
    • /
    • v.5 no.4
    • /
    • pp.407-412
    • /
    • 2019
  • In this paper, we propose a technique for speech recognition of Korean words. Speech recognition is a technology that converts acoustic signals from sensors such as microphones into words or sentences. Most foreign languages present less difficulty for speech recognition; Korean, on the other hand, is composed of vowels and final consonants (batchim), so the letters obtained from a voice synthesis system are not directly usable, and the conventional recognition structure must be improved to recognize words correctly. To solve this problem, a new algorithm was added to the existing speech recognition structure to increase the recognition rate. A word first goes through a preprocessing step and the result is tokenized. The outputs of the Levenshtein distance algorithm and a hashing algorithm are combined, and the normalized word is produced by a consonant comparison algorithm. The final word is compared against a standardized table: it is output if it exists there, and registered in the table if it does not. The experimental environment was a smartphone application. The proposed structure improves the recognition rate by 2% for standard language and 7% for dialect.
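The Levenshtein-distance step of the pipeline above can be sketched like this. It is a simplified illustration, assuming a nearest-match lookup against the standardized table; the table entries and the surrounding hashing and consonant-comparison stages of the paper are omitted.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def normalize(token, table):
    """Return the standardized-table entry closest to the token."""
    return min(table, key=lambda w: levenshtein(token, w))

table = ["하늘", "바다", "사람"]       # toy standardized table
assert levenshtein("kitten", "sitting") == 3
assert normalize("하눌", table) == "하늘"  # one substitution away
```

For Korean, distance is often computed after decomposing syllables into jamo so that a single wrong consonant costs one edit rather than a whole-syllable substitution; the sketch above compares whole syllables for brevity.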

The Influence of Non-Linear Frequency Compression on the Perception of Speech and Music in Patients with High Frequency Hearing Loss

  • Ahn, Jungmin;Choi, Ji Eun;Kang, Ju Yong;Choi, Ik Joon;Lee, Myung-Chul;Lee, Byeong-Cheol;Hong, Sung Hwa;Moon, Il Joon
    • Journal of Audiology & Otology
    • /
    • v.25 no.2
    • /
    • pp.80-88
    • /
    • 2021
  • Background and Objectives: Non-linear frequency compression (NLFC) technology compresses and shifts higher frequencies into a lower-frequency region with better residual hearing. Because consonants are uttered in the high-frequency region, NLFC could provide better speech understanding. The aim of this study was to investigate the clinical effectiveness of NLFC technology on the perception of speech and music in patients with high-frequency hearing loss. Subjects and Methods: Twelve participants with high-frequency hearing loss were tested in a counter-balanced order, after two weeks of daily experience with NLFC set on or off prior to each test. Performance was repeatedly evaluated with consonant tests in quiet and noise, speech perception in noise, music perception, and sound-quality rating tasks. Additionally, two questionnaires (the Abbreviated Profile of Hearing Aid Benefit and the Korean version of the International Outcome Inventory-Hearing Aids) were administered. Results: Consonant and speech perception improved with hearing aids (both NLFC on and off), but there was no significant difference between the NLFC on and off states. Music perception performance revealed no notable difference among the unaided, NLFC-on, and NLFC-off states. Benefit and satisfaction ratings on the questionnaires also did not differ significantly between NLFC on and off, although great individual variability in preference was noted. Conclusions: Speech and music perception in both quiet and noise were similar between the NLFC on and off states, indicating that the real-world benefit of NLFC technology may be limited for Korean adult hearing aid users.
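The frequency mapping that NLFC applies can be sketched with a simple piecewise rule: frequencies below a cutoff pass through unchanged, and frequencies above it are divided by a compression ratio. This is an illustrative simplification, not the hearing aid's actual algorithm; the cutoff and ratio values are hypothetical.

```python
def nlfc(freq_hz, cutoff=2000.0, ratio=3.0):
    """Map an input frequency to its compressed output frequency:
    unchanged below the cutoff, compressed by `ratio` above it."""
    if freq_hz <= cutoff:
        return freq_hz
    return cutoff + (freq_hz - cutoff) / ratio

assert nlfc(1000.0) == 1000.0   # below cutoff: unchanged
assert nlfc(5000.0) == 3000.0   # 2000 + 3000/3
```

Under this mapping, high-frequency consonant energy (e.g. around 5 kHz) lands in a region where the listener retains more residual hearing, which is the rationale the abstract states.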

The influence of syllable frequency, syllable type and its position on naming two-syllable Korean words and pseudo-words (한글 두 글자 단어와 비단어의 명명에 글자 빈도, 글자 유형과 위치가 미치는 영향)

  • Myong Seok Shin;ChangHo Park
    • Korean Journal of Cognitive Science
    • /
    • v.35 no.2
    • /
    • pp.97-112
    • /
    • 2024
  • This study investigated how syllable-level variables such as syllable frequency, syllable type (i.e. vowel type), presence of a final consonant (batchim), and syllable position influence the naming of words and pseudo-words. A linear mixed-effects model analysis showed that, for words, naming time decreased as the frequency of the first syllable increased and when the first syllable had a final consonant; in addition, words with vertical vowels were named more accurately than words with horizontal vowels. For pseudo-words, naming time decreased and accuracy increased as the frequency of the first or second syllable increased, and pseudo-words with vertical vowels were likewise named more accurately. These results suggest that while the frequency of the second syllable had differential effects between words and pseudo-words, the frequency of the first syllable and the syllable type had consistent effects for both. The implications of these findings for visual word recognition are discussed.

A Study on On-line Recognition System of Korean Characters (온라인 한글자소 인식시스템의 구성에 관한 연구)

  • Choi, Seok;Kim, Gil-Jung;Huh, Man-Tak;Lee, Jong-Hyeok;Nam, Ki-Gon;Yoon, Tae-Hoon;Kim, Jae-Chang;Lee, Ryang-Seong
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.9
    • /
    • pp.94-105
    • /
    • 1993
  • In this paper, a Korean character recognition system using a neural network is proposed. The system is a multilayer neural network based on the masking field model, consisting of an input layer, four feature extraction layers that extract type, direction, stroke, and connection features, and an output layer that produces recognized character codes. First, 4x4 subpatterns of an NxN character pattern stored in the input buffer are applied to the feature extraction layers sequentially. Type features for direction and connection are extracted by the type feature extraction layer, direction features for stroke by the direction feature extraction layer, and stroke and connection features for character recognition by the stroke and connection feature extraction layers, respectively. The stroke and connection features are saved sequentially in a buffer layer, and from these features the characters are recognized in the output layer. Recognition tests with 8 single consonants and 6 single vowels show promising results.
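The scanning step described above (feeding 4x4 subpatterns of an NxN pattern to the network in sequence) can be sketched as follows. This assumes non-overlapping tiles, which the abstract does not specify, and the feature extraction layers themselves are omitted.

```python
def subpatterns(pattern, size=4):
    """Yield non-overlapping size x size tiles of a square binary
    pattern, row by row, as they would be fed to the network."""
    n = len(pattern)
    for r in range(0, n, size):
        for c in range(0, n, size):
            yield [row[c:c + size] for row in pattern[r:r + size]]

# Toy 8x8 checkerboard "character" pattern -> four 4x4 subpatterns.
grid = [[(r + c) % 2 for c in range(8)] for r in range(8)]
blocks = list(subpatterns(grid))
assert len(blocks) == 4
assert all(len(b) == 4 and len(b[0]) == 4 for b in blocks)
```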

Study on regional Distribution and Etymology according to the Type in the World's Tobacco Name (세계 담배이름의 유형에 따른 지역적 분포와 어원에 관한 연구)

  • Jeong, Kee-Taeg
    • Journal of the Korean Society of Tobacco Science
    • /
    • v.37 no.1
    • /
    • pp.8-17
    • /
    • 2015
  • The purpose of this study is to classify the tobacco names of the world, to investigate the regional distribution of each type, and to trace the origin of names by type. Fifty tobacco names were used. The types were classified by the presence or absence of a nasal sound (m or n) in the first syllable, the number of syllables, and the consonant-vowel structure of the name. Type I (Dambago), with a nasal sound in the first syllable, accounted for 28%; the remaining types (Type II~Type V; 72%) have no nasal sound. Type II (Tabaco) has three syllables, and its proportion was 20%. Type III (Tabac) has two syllables with the structure T+vowel+B+vowel (30%). Type IV (Tutun) has two syllables with the structure T+vowel+T+vowel (12%). Type V (Duhan) has two syllables with the structure D+vowel+H(V)+vowel (10%). Type I (Dambago) was the most widely distributed type, and the regional distribution of the world's tobacco names clustered by type: 72% of Type I names were found in Asia. Only 14% of Type I names could be traced to Tambaku, and the etymology of the rest is not yet known; Type I appears to derive from the Haitian Tambaku (meaning a tobacco pipe). 88% of Types II and III were distributed in Europe, and 84% of them derive from the Spanish "Tabaco". 100% of Types IV and V were distributed in Europe, deriving entirely from the Turkish tutun and duhan, respectively. These findings suggest that the etymology of Type I (Dambago) may well be the Haitian "Tambaku".

Automatic Phonetic Segmentation of Korean Speech Signal Using Phonetic-acoustic Transition Information (음소 음향학적 변화 정보를 이용한 한국어 음성신호의 자동 음소 분할)

  • 박창목;왕지남
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.8
    • /
    • pp.24-30
    • /
    • 2001
  • This article concerns automatic segmentation of Korean speech signals. All transition cases between phonetic units are classified into 3 types, and a different strategy is applied to each. Type 1 is the discrimination of silence, voiced speech, and unvoiced speech; it uses histogram analysis of indicators consisting of wavelet coefficients and the spectral variation function (SVF) of those coefficients. Type 2 is the discrimination of adjacent vowels. Vowel transitions can be characterized by the spectrogram: given a phonetic transcription and transition-pattern spectrograms, speech signals containing consecutive vowels are segmented automatically by template matching. Type 3 is the discrimination of vowels and voiced consonants; the smoothed short-time RMS energy of the wavelet low-pass component and the SVF of cepstral coefficients are adopted. The experiment was performed on a set of 342 word utterances gathered from 6 speakers, and the results show the validity of the method.
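A spectral variation function of the kind used above measures how much adjacent feature frames differ, with peaks suggesting phone boundaries. The sketch below uses cosine dissimilarity between consecutive frames as one common SVF formulation; the feature vectors are toy values, not the wavelet or cepstral features of the paper.

```python
import math

def svf(frames):
    """Cosine dissimilarity between consecutive feature frames:
    0 means identical spectra, larger values mean a sharper change."""
    out = []
    for f1, f2 in zip(frames, frames[1:]):
        dot = sum(x * y for x, y in zip(f1, f2))
        n1 = math.sqrt(sum(x * x for x in f1))
        n2 = math.sqrt(sum(x * x for x in f2))
        out.append(1.0 - dot / (n1 * n2))
    return out

# Toy frames: a stable spectrum, then an abrupt change, then stable again.
frames = [[1, 0], [1, 0.1], [0, 1], [0, 1]]
scores = svf(frames)
boundary = max(range(len(scores)), key=scores.__getitem__)
assert boundary == 1  # the peak sits at the abrupt spectral change
```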

Speech Stimuli on the Diagnostic Evaluation of Speech with Cleft Lip and Palate : Clinical Use and Literature Review (구개열 환자 말 평가 시 검사어에 대한 고찰 : 임상현장의 말 평가 어음자료와 문헌적 고찰을 중심으로)

  • Choi, Seong-Hee;Choi, Jae-Nam;Nam, Do-Hyun;Choi, Hong-Shik
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.16 no.1
    • /
    • pp.33-48
    • /
    • 2005
  • Differential diagnosis of articulation and resonance problems in cleft lip and palate speech is required to evaluate the various factors that contribute to speech problems, such as VPI, dental occlusion, palatal fistulae, and learning. However, the validity of the speech stimuli used to evaluate each problem accurately is a current issue. This study investigated the speech stimuli used in clinical settings and reviewed the literature published from 1990 to 2005 to help develop standardized speech samples. The resulting recommendations for properly evaluating velopharyngeal function during a diagnostic evaluation are as follows: 1) To identify hypernasality, the speech stimuli should include low-pressure consonants to eliminate the effects of nasal emission and compensatory articulation. 2) Speech stimuli should consist of visible, front sounds, to eliminate compensatory articulation and to be easy to stimulate. 3) For early diagnosis and treatment, speech stimuli need to be developed for infants and preschoolers. 4) Stimulus length for nasalance scores should be at least 6 syllables. 5) Regarding phonetic context for nasalance scores, the /i/ vowel should be taken into consideration, excluding the paragraph task. 6) Connected-speech stimuli should be developed for evaluating intelligibility and VP function.

Functional Assessment after Tongue Reconstruction using Free Flap (유리피판을 이용한 설재건 후의 기능평가)

  • Park, Sung-Ho;Chung, Chul-Hoon;Lee, Jong-Wook;Chang, Yong-Joon;Rho, Young-Soo
    • Korean Journal of Head & Neck Oncology
    • /
    • v.25 no.2
    • /
    • pp.119-122
    • /
    • 2009
  • Objectives: Ablation of carcinoma of the tongue leads to deficits in speech and swallowing, and no reconstruction to date has provided all of the mobility and sensation needed to simulate the complex function of the tongue. The authors evaluated postoperative swallowing and pronunciation in patients who underwent tongue reconstruction using a free flap. Materials and Methods: This is a retrospective review of 42 patients treated between January 1991 and August 2008. Patients were classified according to the extent of tongue resection: 7 partial glossectomy, 25 hemiglossectomy, 2 subtotal glossectomy, and 8 total glossectomy. Swallowing function was graded on a 4-point scale, and pronunciation was analyzed using the picture consonant articulation test. Aspiration was evaluated with a videofluoroscopic swallowing study. Results: The average swallowing scores were 3.43 for partial glossectomy, 3.52 for hemiglossectomy, 3 for subtotal glossectomy, and 2.63 for total glossectomy. The percentage of consonants correct was 76.5% for partial glossectomy, 72.29% for hemiglossectomy, 47.69% for subtotal glossectomy, and 29.94% for total glossectomy. Aspiration was noted in 3 patients (1 hemiglossectomy and 2 total glossectomy), and the 2 total glossectomy patients received a permanent feeding gastrostomy. Conclusion: The free flap provided proper volume in tongue reconstruction and showed good results in preserving swallowing function. The difference in swallowing function by defect size was not statistically significant, whereas articulation accuracy decreased as the size of the defect increased.

The Compensatory Articulation in the Patients with Cleft Palate having Velopharyngeal Insufficiency (구개열로 인한 연인두 폐쇄 부전 환자의 보상조음)

  • Lee Eun-Kyung;Park Mi-Kyong;Son Young-Ik
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.16 no.2
    • /
    • pp.118-122
    • /
    • 2005
  • Background and Objectives: Compensatory articulation not only influences overall speech intelligibility, but also prevents precise assessment of velopharyngeal function. This study investigated the frequently affected phonemes, prevalence, and characteristics of compensatory articulation in patients with cleft palate having velopharyngeal insufficiency. Materials and Methods: An archival review was performed on 103 cleft palate subjects, aged 2.6 to 63 years (mean 9.8 years), grouped into a preschool group (n=71) and an older patient group (n=32). The prevalence and patterns of compensatory articulation were examined on oral high-pressure consonants such as plosives, fricatives, and affricates. Results: Compensatory errors were observed in 49.5% of the subjects and were mostly glottal stops, with the exception of 4 cases who had pharyngeal fricatives in addition to glottal stops. The most frequently substituted phonemes were velar plosives and tense sounds. There was no significant difference in prevalence between the two groups; however, errors for bilabial and alveolar plosives were more frequently observed in the preschool group. Conclusion: The high prevalence of compensatory articulation in both the preschool and older age groups indicates that these articulation errors tend to remain unless appropriate speech therapy is provided. To improve the speech intelligibility of patients with cleft palate having velopharyngeal insufficiency, it is advisable to address and correct compensatory articulation errors at earlier ages.
