• Title/Summary/Keyword: Speech feature

Search results: 712

The Difference between Acoustic Characteristics of Acute Epiglottitis and Peritonsillar Abscess (급성 후두개염과 편도주위 농양 환자의 발화시 조음 및 음성의 차이)

  • Lee, Nam-Hoon;Lee, Jae-Yeon;Lee, Sang-Hyuck;Choi, Jung-Im;Song, Yun-Kyung;Jin, Sung-Min
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.21 no.1 / pp.48-53 / 2010
  • Background and Objectives: Voice changes can occur in acute epiglottitis and peritonsillar abscess, and both are commonly labeled a "muffled voice" or "hot potato voice". The aim of this study was to investigate how the acoustic features of the voice change before and after treatment in patients with acute epiglottitis or peritonsillar abscess. Subjects and Method: 13 patients with acute epiglottitis and 12 patients with peritonsillar abscess were enrolled in the study. Acoustic analysis of the sustained Korean vowels /a/, /u/ and /i/ was performed before and after treatment. Results: In patients with acute epiglottitis, the first formant frequency (F1) of /a/ increased and the second formant frequency (F2) of /i/ decreased. In patients with peritonsillar abscess, F1 and F2 of /a/ decreased, while F1 of /i/ and /u/ increased and F2 decreased. Conclusion: The anatomical and functional changes of the oropharynx and larynx caused by acute epiglottitis and peritonsillar abscess produce different changes in resonance and speech quality. We suggest that these changes could be the cause of the 'muffled voice' in patients with acute epiglottitis or peritonsillar abscess, but the distinct phonation characteristics of each disease should be distinguished.
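
    The abstract does not name its analysis tool or settings; as a rough, hypothetical illustration of the kind of formant measurement involved, F1 and F2 of a sustained vowel can be estimated from the roots of an LPC polynomial (the filename and all parameters below are made up):

    ```python
    # Minimal sketch (not the authors' protocol): estimate F1/F2 of a sustained
    # vowel from LPC pole angles. Filename and parameters are hypothetical.
    import numpy as np
    import librosa

    y, sr = librosa.load("sustained_a.wav", sr=16000)   # hypothetical recording
    y = y * np.hamming(len(y))                          # window the token

    order = 2 + sr // 1000                              # common rule of thumb
    a = librosa.lpc(y, order=order)                     # LPC coefficients

    roots = [r for r in np.roots(a) if np.imag(r) > 0]  # upper half-plane only
    freqs = sorted(np.angle(roots) * sr / (2 * np.pi))  # pole angles -> Hz
    formants = [f for f in freqs if f > 90]             # drop near-DC poles
    print("F1 ~ %.0f Hz, F2 ~ %.0f Hz" % (formants[0], formants[1]))
    ```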


Multi frequency band noise suppression system using signal-to-noise ratio estimation (신호 대 잡음비 추정 방법을 이용한 다중 주파수 밴드 잡음 억제 시스템)

  • Oh, In Kyu;Lee, In Sung
    • The Journal of the Acoustical Society of Korea / v.35 no.2 / pp.102-109 / 2016
  • This paper proposes a noise suppression method based on SNR (Signal-to-Noise Ratio) estimation in a closely spaced two-microphone array environment. The conventional method applies a single gain function obtained through full-band SNR estimation based on the coherence function. However, this approach degrades performance because the noise corrupts every component of the feature vector. We therefore propose a noise suppression method that divides the frequency-domain signal into N uniform frequency bands and derives a gain function for each band through coherence-based SNR estimation. The performance of the proposed method is evaluated by comparing PESQ (Perceptual Evaluation of Speech Quality) scores, an objective quality measure standardized by the ITU-T (International Telecommunication Union Telecommunication Standardization Sector).
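
    As a sketch of the band-splitting idea described above (not the authors' implementation; the band count, frame size, and signals are assumptions), one can estimate a coherence-based gain per uniform frequency band with SciPy:

    ```python
    # Rough sketch: per-band coherence-based gain instead of one full-band gain.
    import numpy as np
    from scipy.signal import coherence, stft, istft

    fs, N_BANDS = 16000, 8
    x1 = np.random.randn(fs)           # stand-ins for the two close microphones
    x2 = np.random.randn(fs)

    f, Cxy = coherence(x1, x2, fs=fs, nperseg=512)      # magnitude-squared coherence
    bands = np.array_split(np.arange(len(f)), N_BANDS)  # N uniform bands

    f_s, t_s, X = stft(x1, fs=fs, nperseg=512)
    gain = np.empty_like(f_s)
    for idx in bands:
        # Diffuse noise decorrelates across closely spaced mics, so low
        # coherence implies low band SNR; mean coherence acts as a crude gain.
        gain[idx] = np.mean(Cxy[idx])
    _, y = istft(X * gain[:, None], fs=fs, nperseg=512)
    ```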

Some Problems in Teaching Swedish Pronunciation: Focusing on Vowels (스웨덴어 발음 교육상의 몇 가지 문제점 - 모음을 중심으로 -)

  • Byeon Gwang-Su
    • MALSORI / no.4 / pp.20-30 / 1982
  • The aim of this paper is to analyse the difficulties Korean learners encounter in pronouncing Swedish vowels and to seek solutions for correcting the likely errors. In the course of the analysis, the Swedish and Korean vowels in question are compared with the purpose of describing the differences and similarities between the two systems. This contrastive description is largely based on the students' articulatory speech level and the writer's auditory judgement. The following points are discussed: 1) Vowel length as a distinctive feature in Swedish compared with that of Korean. 2) Special attention is paid to the Swedish vowel [ʉ:], which is characterized by its peculiar type of lip rounding. 3) The six pairs of Swedish vowels that are phonologically contrastive but difficult for Koreans to distinguish from each other: [y:] ~ [ʉ:], [i:] ~ [y:], [e:] ~ [ø:], [ʉ:] ~ [u:], [ʉ:] ~ [ɵ], [ɵ] ~ [u]. 4) The r-colored vowel produced with the postvocalic /r/, which is very common in American English, is not allowed in Swedish sound sequences; it has to be broken up and replaced by bi-segmental vowel-consonant sequences. Koreans accustomed to the American pronunciation are warned in this respect. For a more distinct articulation of the postvocalic /r/, the trill [r] is preferred to the fricative variant. 5) The front vowels [e, ɛ, ø] become opener variants ([æ, œ]) before /r/ or supradentals. The results of the analysis show that the pronunciation difficulties in the target language (Swedish) are mostly due to interference from the learner's source language (Korean). However, the learner sometimes also gets interference from another foreign language with which he or she is already familiar, when that language is more similar to the target language than the mother tongue. Hence this foreign language (American English, in this case) functions as a second source language for Koreans learning Swedish.


Simultaneous Speaker and Environment Adaptation by Environment Clustering in Various Noise Environments (다양한 잡음 환경하에서 환경 군집화를 통한 화자 및 환경 동시 적응)

  • Kim, Young-Kuk;Song, Hwa-Jeon;Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea / v.28 no.6 / pp.566-571 / 2009
  • This paper proposes a noise-robust fast speaker adaptation method based on the eigenvoice framework for various noisy environments. The proposed method focuses on de-noising and environment clustering. Since the de-noised adaptation DB still contains residual noise, environment clustering divides the noisy adaptation data into groups of similar environments, using the cepstral mean of the non-speech segments as the feature vector. The adaptation data in each cluster are then used to build an environment-clustered speaker-adapted (SA) model. After selecting the multiple environment-clustered SA models most similar to the test environment, speaker adaptation is conducted with an appropriate linear combination of the selected models. In our experiments, the proposed method reduces the error rate by 40~59 % relative to the baseline speaker-independent model.
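
    A minimal sketch of the environment-clustering step, assuming k-means over cepstral means of non-speech frames (the abstract does not state the clustering algorithm or feature dimensions, so both are assumptions here):

    ```python
    # Illustrative sketch (not the authors' code): cluster utterances by the
    # cepstral mean of their non-speech segments, then adapt per cluster.
    import numpy as np
    from sklearn.cluster import KMeans

    def environment_feature(cepstra, vad_labels):
        """Mean cepstrum over frames a VAD marks as non-speech (labels 0/1)."""
        return cepstra[vad_labels == 0].mean(axis=0)

    # One 13-dim cepstral-mean vector per adaptation utterance (random stand-ins).
    env_feats = np.random.randn(100, 13)

    clusters = KMeans(n_clusters=4, n_init=10).fit_predict(env_feats)
    for c in range(4):
        utt_ids = np.where(clusters == c)[0]
        # ...build an environment-clustered SA model from utterances in utt_ids...
    ```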

Bi-directional LSTM-CNN-CRF for Korean Named Entity Recognition System with Feature Augmentation (자질 보강과 양방향 LSTM-CNN-CRF 기반의 한국어 개체명 인식 모델)

  • Lee, DongYub;Yu, Wonhee;Lim, HeuiSeok
    • Journal of the Korea Convergence Society / v.8 no.12 / pp.55-62 / 2017
  • A named entity recognition system recognizes words or phrases in a document that denote entities such as person names (PS), place names (LC), and organization names (OG) and labels them accordingly. Traditional approaches to named entity recognition include statistical models learned from hand-crafted features. More recently, deep-learning models such as recurrent neural networks (RNN) and long short-term memory (LSTM) have been proposed to build sentence representations for this sequence-labeling problem. In this research, to improve the performance of a Korean named entity recognition system, we augment the sentence representation with hand-crafted features, part-of-speech tagging information, and pre-built lexicon information. Experimental results show that the proposed method improves the performance of the Korean named entity recognition system. The results of this study are released on GitHub for future collaborative research with researchers studying Korean natural language processing (NLP) and named entity recognition.
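
    A schematic sketch of the feature-augmentation idea, assuming PyTorch and hypothetical embedding sizes; the paper's CRF output layer is replaced by a plain linear layer for brevity:

    ```python
    # Sketch: concatenate word, POS-tag, and lexicon-flag embeddings before a
    # BiLSTM. A CRF layer, as in the paper, would replace the final softmax.
    import torch
    import torch.nn as nn

    class AugmentedBiLSTMTagger(nn.Module):
        def __init__(self, n_words, n_pos, n_tags):
            super().__init__()
            self.word_emb = nn.Embedding(n_words, 100)
            self.pos_emb = nn.Embedding(n_pos, 16)     # POS-tagging information
            self.lex_emb = nn.Embedding(2, 8)          # in-lexicon flag (0/1)
            self.lstm = nn.LSTM(124, 128, bidirectional=True, batch_first=True)
            self.out = nn.Linear(256, n_tags)

        def forward(self, words, pos, lex):
            x = torch.cat([self.word_emb(words),
                           self.pos_emb(pos),
                           self.lex_emb(lex)], dim=-1)  # augmented features
            h, _ = self.lstm(x)
            return self.out(h)                          # per-token tag scores

    model = AugmentedBiLSTMTagger(n_words=5000, n_pos=45, n_tags=7)
    scores = model(torch.zeros(2, 10, dtype=torch.long),
                   torch.zeros(2, 10, dtype=torch.long),
                   torch.zeros(2, 10, dtype=torch.long))
    ```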

A Study on the Spoken Korean Citynames Using Multi-Layered Perceptron of Back-Propagation Algorithm (오차 역전파 알고리즘을 갖는 MLP를 이용한 한국 지명 인식에 대한 연구)

  • Song, Do-Sun;Lee, Jae-Gheon;Kim, Seok-Dong;Lee, Haing-Sei
    • The Journal of the Acoustical Society of Korea / v.13 no.6 / pp.5-14 / 1994
  • This paper reports an experiment on speaker-independent automatic recognition of spoken Korean words using a multi-layer perceptron and the error back-propagation algorithm. The target words are the 50 city names used in D.D.D. area codes; 43 of them have two syllables and the remaining 7 have three. The words were not segmented into syllables or phonemes; instead, feature components extracted from the words at equal intervals were fed to the neural network, making the result independent of speech duration. PARCOR coefficients calculated from the frames by linear predictive analysis were employed as the feature components. The study sought the optimum conditions through four experiments: a comparison between total and pre-classified training, the dependency of the recognition rate on the number of frames and the PARCOR order, the change in recognition due to the number of neurons in the hidden layer, and a comparison of methods for composing the output-neuron patterns. As a result, a recognition rate of 89.6 % was obtained.
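
    A minimal sketch of this setup, assuming scikit-learn and made-up sizes: a fixed number of frames of PARCOR coefficients is flattened into one input vector per word and classified by an MLP trained with back-propagation:

    ```python
    # Sketch only; frame count, PARCOR order, and data are hypothetical.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    N_WORDS, N_FRAMES, ORDER = 50, 16, 12
    # Stand-in data: one flattened PARCOR pattern per utterance.
    X = np.random.randn(500, N_FRAMES * ORDER)
    y = np.random.randint(0, N_WORDS, size=500)

    mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
    mlp.fit(X, y)                       # back-propagation training
    print("training accuracy:", mlp.score(X, y))
    ```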


A Study on the Performance of Music Retrieval Based on the Emotion Recognition (감정 인식을 통한 음악 검색 성능 분석)

  • Seo, Jin Soo
    • The Journal of the Acoustical Society of Korea / v.34 no.3 / pp.247-255 / 2015
  • This paper presents a study on the performance of music search based on automatically recognized music-emotion labels. As with other media such as speech, images, and video, a song can evoke certain emotions in its listeners, and these emotions can be important considerations when people look for songs to listen to. However, very little work has been done on how well music-emotion labels serve music search. In this paper, we utilize the three axes of human music perception (valence, activity, tension) and five basic emotion labels (happiness, sadness, tenderness, anger, fear) to measure music similarity for search. Experiments were conducted on both genre and singer datasets. The search accuracy of the proposed emotion-based music search reached up to 75 % of that of the conventional feature-based search, and combining the two methods improved search accuracy by up to 14 %.
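
    A toy sketch of retrieval under the stated 3+5-dimensional emotion labeling (values are random stand-ins, and the paper's actual similarity measure may differ):

    ```python
    # Each song is an 8-dim vector: 3 perception axes plus 5 emotion scores;
    # retrieval ranks songs by distance to the query's emotion vector.
    import numpy as np

    # columns: valence, activity, tension, happiness, sadness, tenderness, anger, fear
    catalog = np.random.rand(1000, 8)   # recognized emotion labels per song
    query = np.random.rand(8)           # emotion vector of the query song

    dists = np.linalg.norm(catalog - query, axis=1)
    top10 = np.argsort(dists)[:10]      # most emotionally similar songs
    ```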

Audio Segmentation and Classification Using Support Vector Machine and Fuzzy C-Means Clustering Techniques (서포트 벡터 머신과 퍼지 클러스터링 기법을 이용한 오디오 분할 및 분류)

  • Nguyen, Ngoc;Kang, Myeong-Su;Kim, Cheol-Hong;Kim, Jong-Myon
    • The KIPS Transactions: Part B / v.19B no.1 / pp.19-26 / 2012
  • The rapid growth of information places new demands on content management, and automatic audio segmentation and classification help meet the rising need for efficient management of audio content. This paper proposes a high-accuracy algorithm that segments audio signals and classifies the segments into classes such as speech, music, silence, and environmental sounds. The proposed algorithm utilizes a support vector machine (SVM) on the parameter sequence to detect audio-cuts, the boundaries between different kinds of sounds. We then extract feature vectors composed of statistical data and use them as input to a fuzzy c-means (FCM) classifier that partitions the audio segments into the different classes. To evaluate the proposed SVM-FCM based algorithm, we consider precision and recall for segmentation and accuracy for classification, and we compare it with other methods, including binary and FCM classifiers, in terms of segmentation performance. Experimental results show that the proposed algorithm outperforms the other methods in both precision and recall.
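
    A compressed sketch of the two-stage pipeline, with random stand-ins for the real features; the boundary detector uses scikit-learn's SVM, and fuzzy c-means is written out in a bare-bones form:

    ```python
    # Stage 1: SVM flags audio-cut boundaries. Stage 2: FCM groups segments
    # into classes such as speech / music / silence / environmental sounds.
    import numpy as np
    from sklearn.svm import SVC

    frame_feats = np.random.randn(2000, 10)        # per-frame parameter sequence
    cut_labels = np.random.randint(0, 2, 2000)     # 1 = boundary frame
    svm = SVC(kernel="rbf").fit(frame_feats, cut_labels)

    seg_feats = np.random.randn(40, 6)             # statistics per segment

    def fcm(X, c=4, m=2.0, iters=50):
        """Bare-bones fuzzy c-means; returns the membership matrix U (n x c)."""
        U = np.random.dirichlet(np.ones(c), size=len(X))
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-9
            inv = d ** (-2 / (m - 1))
            U = inv / inv.sum(axis=1, keepdims=True)
        return U

    memberships = fcm(seg_feats)                   # fuzzy class assignment
    ```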

Classification of Consonants by SOM and LVQ (SOM과 LVQ에 의한 자음의 분류)

  • Lee, Chai-Bong;Lee, Chang-Young
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.6 no.1 / pp.34-42 / 2011
  • Toward the practical realization of a phonetic typewriter, this paper concentrates on the classification of consonants. Since many consonants show no periodic behavior in the time domain, the validity of Fourier analysis for them is not self-evident; vector quantization (VQ) via LBG clustering is therefore performed first, to check whether MFCC and LPCC feature vectors are meaningful for consonants at all. The VQ results show that it is not easy to draw a clear-cut conclusion about the validity of Fourier analysis for consonants. For classification, two kinds of neural network are employed: the self-organizing map (SOM) and learning vector quantization (LVQ). The SOM results reveal that some pairs of phonemes are not resolved. Although LVQ is inherently free from this difficulty, its classification accuracy was found to be low, which suggests that, for consonant classification by LVQ, feature vectors other than MFCC should be deployed in parallel. Even so, the MFCC/LVQ combination was not inferior to phoneme classification by a language-model based approach. Throughout our work, LPCC performed worse than MFCC.
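
    A bare-bones LVQ1 sketch over stand-in MFCC vectors, illustrating the supervised prototype update that distinguishes LVQ from an unsupervised SOM (not the paper's configuration; all sizes are assumptions):

    ```python
    # LVQ1: the winning prototype is nudged toward same-class inputs and
    # pushed away from different-class inputs.
    import numpy as np

    def train_lvq1(X, y, n_classes, lr=0.05, epochs=20):
        protos = np.array([X[y == c][0] for c in range(n_classes)])  # one per class
        labels = np.arange(n_classes)
        for _ in range(epochs):
            for x, t in zip(X, y):
                w = np.argmin(np.linalg.norm(protos - x, axis=1))    # winner
                step = lr if labels[w] == t else -lr                 # attract/repel
                protos[w] += step * (x - protos[w])
        return protos, labels

    X = np.random.randn(300, 13)             # stand-in MFCC feature vectors
    y = np.random.randint(0, 5, 300)         # consonant class labels
    protos, labels = train_lvq1(X, y, n_classes=5)
    pred = labels[np.argmin(np.linalg.norm(protos[None] - X[:, None], axis=2), axis=1)]
    ```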

A Study on Speech Recognition using DMS Model (DMS 모델을 이용한 음성인식에 관한 연구)

  • An, Tae-Ock;Byun, Yong-Kyu
    • The Journal of the Acoustical Society of Korea / v.13 no.2E / pp.41-50 / 1994
  • This paper proposes a DMS (Dynamic Multi-Section) model based on the similarity of features within a word pattern. The model represents each word as a time series of several sections, each of which carries duration information and a typical feature vector. To build a model from a word pattern, the typical feature vectors and duration information are reflected in the distance measure while matching between the word pattern and the model is repeated, so that the accumulated matching distance is minimized.
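
    A conceptual sketch of DMS-style modeling and matching, simplifying section boundaries to equal splits (the paper derives them by iterative matching, and all dimensions here are made up):

    ```python
    # A word model is a short series of sections, each holding a typical
    # (centroid) feature vector; matching accumulates section-wise distances.
    import numpy as np

    N_SECTIONS = 8

    def make_dms_model(pattern, n_sections=N_SECTIONS):
        """One centroid per section of the word pattern (frames x features)."""
        return np.array([s.mean(axis=0)
                         for s in np.array_split(pattern, n_sections)])

    def dms_distance(pattern, model):
        """Accumulated distance between a test pattern and a DMS word model."""
        sections = np.array_split(pattern, len(model))
        return sum(np.linalg.norm(s - m, axis=1).mean()
                   for s, m in zip(sections, model))

    model = make_dms_model(np.random.randn(60, 12))   # stand-in training pattern
    score = dms_distance(np.random.randn(55, 12), model)
    ```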
