• Title/Summary/Keyword: RASTA

A Method of Evaluating Korean Articulation Quality for Rehabilitation of Articulation Disorder in Children

  • Lee, Keonsoo; Nam, Yunyoung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.8 / pp.3257-3269 / 2020
  • Articulation disorders are characterized by an inability to achieve clear pronunciation due to misuse of the articulators. In this paper, a method of detecting such disorders by comparison with standard pronunciations is proposed. The method defines the standard pronunciations by clustering the speech of normal children with three features: the Linear Predictive Cepstral Coefficient (LPCC), the Mel-Frequency Cepstral Coefficient (MFCC), and Relative Spectral Analysis Perceptual Linear Prediction (RASTA-PLP). By calculating the distance between the centroid of the standard pronunciation and an input pronunciation, disordered speech whose features lie outside the cluster is detected. 89 children (58 normal children and 31 children with disorders) were recruited. 35 U-TAP test words were selected; each word's standard pronunciation was built from the normal children's speech and compared with each pronunciation of the children with disorders. In the experiments, disordered pronunciations were successfully distinguished from the standard pronunciations.
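
A minimal sketch of the centroid-distance idea described above, using only MFCC features (the paper also uses LPCC and RASTA-PLP). The single-cluster setup, the 95th-percentile radius rule, and all function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def utterance_features(y, sr, n_mfcc=13):
    """Average frame-level MFCCs into a single vector per utterance."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)

def fit_standard_pronunciation(feature_vectors):
    """Cluster normal children's utterances of one word; return centroid and radius."""
    X = np.vstack(feature_vectors)
    centroid = KMeans(n_clusters=1, n_init=10, random_state=0).fit(X).cluster_centers_[0]
    # Assumed decision rule: anything beyond the 95th-percentile distance is "outside".
    radius = np.percentile(np.linalg.norm(X - centroid, axis=1), 95)
    return centroid, radius

def is_disordered(feature_vector, centroid, radius):
    """Flag an utterance whose features fall outside the standard cluster."""
    return np.linalg.norm(feature_vector - centroid) > radius
```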

Emotion Recognition using Robust Speech Recognition System (강인한 음성 인식 시스템을 사용한 감정 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.5 / pp.586-591 / 2008
  • This paper studies an emotion recognition system combined with a robust speech recognition system in order to improve emotion recognition performance. For this purpose, the effect of emotional variation on the speech recognition system and robust feature parameters for speech recognition were studied using a speech database containing various emotions. Final emotion recognition is performed on the input utterance with the emotional model selected according to the speech recognition result. In the experiments, the robust speech recognition system is an HMM-based speaker-independent word recognizer using RASTA mel-cepstral coefficients, their derivatives, and cepstral mean subtraction (CMS) for signal bias removal. Experimental results showed that the emotion recognizer combined with the speech recognition system performed better than the emotion recognizer alone.
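
The CMS step in this front end is simple enough to show directly: a per-utterance mean is subtracted from each cepstral dimension, cancelling stationary channel bias. A minimal numpy sketch:

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """cepstra: (num_frames, num_coeffs) cepstral vectors for one utterance."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```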

Robust Feature Extraction Based on Image-based Approach for Visual Speech Recognition (시각 음성인식을 위한 영상 기반 접근방법에 기반한 강인한 시각 특징 파라미터의 추출 방법)

  • Gyu, Song-Min; Pham, Thanh Trung; Min, So-Hee; Kim, Jing-Young; Na, Seung-You; Hwang, Sung-Taek
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.3 / pp.348-355 / 2010
  • In spite of progress in speech recognition technology, speech recognition in noisy environments remains a difficult task. To address this problem, researchers have proposed methods that use visual information in addition to audio information. However, visual information carries its own noise, just as audio does, and this visual noise degrades visual speech recognition. How to extract visual feature parameters that enhance visual speech recognition performance is therefore an active area of interest. In this paper, we propose a visual feature extraction method based on an image-based approach for enhancing the recognition performance of an HMM-based visual speech recognizer. For the experiments, we constructed an audio-visual database of 105 speakers, each uttering 62 words. We applied histogram matching, lip folding, RASTA filtering, a linear mask, DCT, and PCA. The experimental results show that the recognition performance of the proposed method improved by about 21% over the baseline method.
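
The RASTA filtering step named above is, in its classic form (Hermansky and Morgan, 1994), a fixed band-pass filter applied along the time axis of each feature trajectory. The sketch below shows that standard filter; whether the authors used this exact variant on their visual features is an assumption:

```python
import numpy as np
from scipy.signal import lfilter

def rasta_filter(traj):
    """traj: (num_frames, num_dims); filters each dimension across time."""
    b = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])  # band-pass numerator
    a = np.array([1.0, -0.98])  # pole near DC (0.98 in the original; 0.94 is also common)
    return lfilter(b, a, traj, axis=0)
```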

Acoustic Channel Compensation at Mel-frequency Spectrum Domain

  • Jeong, So-Young; Oh, Sang-Hoon; Lee, Soo-Young
    • The Journal of the Acoustical Society of Korea / v.22 no.1E / pp.43-48 / 2003
  • The effects of linear acoustic channels have been analyzed and compensated in the mel-frequency feature domain. Unlike popular RASTA filtering, our approach incorporates a separate filter for each mel-frequency band, which results in better recognition performance for heavily reverberated speech.
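
A sketch of the stated idea: instead of one RASTA-style filter shared by all bands, a separate high-pass filter is designed per mel-frequency band. The first-order Butterworth design and the per-band cutoff schedule are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, lfilter

def per_band_highpass(log_mel, frame_rate, cutoffs_hz):
    """log_mel: (num_frames, num_bands); one modulation-frequency cutoff per band."""
    out = np.empty_like(log_mel)
    for band, fc in enumerate(cutoffs_hz):
        b, a = butter(1, fc / (frame_rate / 2.0), btype="highpass")
        out[:, band] = lfilter(b, a, log_mel[:, band])
    return out
```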

RECOGNITION SYSTEM USING VOCAL-CORD SIGNAL (성대 신호를 이용한 인식 시스템)

  • Cho, Kwan-Hyun; Han, Mun-Sung; Park, Jun-Seok; Jeong, Young-Gyu
    • Proceedings of the KIEE Conference / 2005.10b / pp.216-218 / 2005
  • This paper presents a new approach to a noise-robust recognizer for a WPS interface. In noisy environments, speech recognition performance degrades rapidly. To solve this problem, we propose a recognition system that uses the vocal-cord signal instead of speech. The vocal-cord signal has low quality, but it is more robust to environmental noise than the speech signal. As a result, we obtained 75.21% accuracy using MFCC with CMS and 83.72% accuracy using ZCPA with RASTA.
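
ZCPA (zero-crossings with peak amplitudes), the front end that performed best here, estimates frequency from zero-crossing intervals in each band and weights each estimate by the log peak amplitude. A heavily simplified, whole-signal sketch; the real front end is frame-based, and the filter design and bin edges are assumptions:

```python
import numpy as np
from scipy.signal import butter, lfilter

def zcpa_histogram(y, sr, band_edges, bin_edges):
    """y: waveform; band_edges: list of (low_hz, high_hz); bin_edges: output frequency bins."""
    hist = np.zeros(len(bin_edges) - 1)
    for lo, hi in band_edges:
        b, a = butter(2, [lo / (sr / 2.0), hi / (sr / 2.0)], btype="bandpass")
        x = lfilter(b, a, y)
        up = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]  # upward zero crossings
        for i in range(len(up) - 1):
            freq = sr / (up[i + 1] - up[i])            # inverse-interval frequency estimate
            peak = np.abs(x[up[i]:up[i + 1]]).max()    # peak amplitude between crossings
            k = np.searchsorted(bin_edges, freq) - 1
            if 0 <= k < len(hist):
                hist[k] += np.log1p(peak)              # log peak-amplitude weighting
    return hist
```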

Real Time Lip Reading System Implementation in Embedded Environment (임베디드 환경에서의 실시간 립리딩 시스템 구현)

  • Kim, Young-Un; Kang, Sun-Kyung; Jung, Sung-Tae
    • The KIPS Transactions: Part B / v.17B no.3 / pp.227-232 / 2010
  • This paper proposes a real-time lip reading method for the embedded environment. The embedded environment has limited resources compared to the conventional PC environment, so a lip reading system built for the PC cannot run on it in real time. To solve this problem, this paper presents lip-region detection, lip feature extraction, and word recognition methods suited to the embedded environment. First, the face region is detected using skin color information so that the lip region can be located accurately; the exact lip region is then found from the positions of both eyes in the detected face and their geometric relations. To obtain features robust to lighting variation in changing surroundings, histogram matching, lip folding, and a RASTA filter were applied, and the features extracted by principal component analysis (PCA) were used for recognition. Tests in an embedded environment with an 806 MHz CPU and 128 MB RAM showed a processing speed between 1.15 and 2.35 seconds depending on the utterance, and a recognition rate of 77%, with 139 of 180 words recognized.
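
The PCA feature step shared by this and the previous visual papers is easy to sketch: lip-region frames are flattened to vectors and projected onto a small number of principal components. The ROI handling and the choice of 20 components are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_lip_pca(roi_frames, n_components=20):
    """roi_frames: (num_frames, height, width) grayscale lip regions."""
    X = roi_frames.reshape(len(roi_frames), -1).astype(np.float64)
    return PCA(n_components=n_components).fit(X)

def lip_features(pca, roi_frames):
    """Project frames into the PCA space; one feature vector per frame."""
    X = roi_frames.reshape(len(roi_frames), -1).astype(np.float64)
    return pca.transform(X)
```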

Adaptive Channel Normalization Based on Infomax Algorithm for Robust Speech Recognition

  • Jung, Ho-Young
    • ETRI Journal / v.29 no.3 / pp.300-304 / 2007
  • This paper proposes a new data-driven method for high-pass approaches that suppress slowly varying noise components. Conventional high-pass approaches are based on the idea of decorrelating the feature vector sequence and strive for adaptability to various conditions. The proposed method is based on temporal local decorrelation using information-maximization theory for each utterance. It operates on an utterance-by-utterance basis, which provides an adaptive channel normalization filter for each condition. The performance of the proposed method is evaluated in isolated-word recognition experiments with channel distortion. Experimental results show that the proposed method yields an outstanding improvement for channel-distorted speech recognition.
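
The paper learns its per-utterance normalization filter with an infomax rule. As a stand-in with the same "temporal local decorrelation" goal, the sketch below fits a least-squares linear predictor across frames for each cepstral coefficient and keeps the residual; this is plain linear prediction, not the paper's infomax estimator:

```python
import numpy as np

def decorrelate_trajectory(c, order=2):
    """c: (num_frames,) one cepstral coefficient over time for one utterance."""
    T = len(c)
    # Predict c[t] from the previous `order` frames, estimated per utterance.
    X = np.column_stack([c[order - k - 1:T - k - 1] for k in range(order)])
    y = c[order:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ w  # slowly varying (channel-like) part removed
    return np.concatenate([c[:order], residual])
```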

Comparison of Feature Extraction Methods for the Telephone Speech Recognition (전화 음성 인식을 위한 특징 추출 방법 비교)

  • 전원석; 신원호; 김원구; 이충용; 윤대희
    • The Journal of the Acoustical Society of Korea / v.17 no.7 / pp.42-49 / 1998
  • This paper studies processing methods in the feature extraction stage for improving speech recognition performance over the telephone network. First, channel distortion compensation methods were evaluated in an isolated word recognition system, for both word models and context-independent phoneme models. Cepstral mean subtraction, RASTA processing, and the cepstrum-time matrix were tested, and the performance of each algorithm was compared across recognition models. Second, to improve the performance of the recognition system based on context-independent phoneme models, linear transformations such as principal component analysis (PCA) and linear discriminant analysis (LDA) were applied to the static feature vectors, mapping them into a more discriminative vector space and improving recognition performance. Combining the linear transformations with cepstral mean subtraction yielded even better performance.
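
The second experiment's linear transforms are standard and can be sketched directly with scikit-learn; the output dimensionality and the per-frame phoneme labels are illustrative assumptions (LDA additionally requires n_dims to be at most one less than the number of classes):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def transform_features(X_train, y_train, X_test, n_dims=12):
    """X_*: (num_frames, num_coeffs) static features; y_train: per-frame labels."""
    pca = PCA(n_components=n_dims).fit(X_train)
    lda = LinearDiscriminantAnalysis(n_components=n_dims).fit(X_train, y_train)
    return pca.transform(X_test), lda.transform(X_test)
```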

Speech Parameters for the Robust Emotional Speech Recognition (감정에 강인한 음성 인식을 위한 음성 파라메터)

  • Kim, Weon-Goo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.12 / pp.1137-1142 / 2010
  • This paper studies speech parameters less affected by human emotion for the development of a robust speech recognition system. For this purpose, the effect of emotion on the speech recognition system and robust speech parameters were studied using a speech database containing various emotions. Mel-cepstral coefficients, delta-cepstral coefficients, RASTA mel-cepstral coefficients, and frequency-warped mel-cepstral coefficients were used as feature parameters, and CMS (cepstral mean subtraction) was used as a signal bias removal technique. Experimental results showed that an HMM-based speaker-independent word recognizer using vocal-tract-length-normalized mel-cepstral coefficients, their derivatives, and CMS achieved the best performance, a word error rate of 0.78%. This corresponds to about a 50% reduction in word errors compared to the baseline system using mel-cepstral coefficients, their derivatives, and CMS.
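
The delta (derivative) coefficients that the best front end combines with CMS follow a standard regression formula over a short window; the half-width of 2 is the common choice, assumed here:

```python
import numpy as np

def delta(cepstra, width=2):
    """cepstra: (num_frames, num_coeffs); returns same-shape delta coefficients."""
    T = len(cepstra)
    padded = np.pad(cepstra, ((width, width), (0, 0)), mode="edge")
    # d[t] = sum_k k * (c[t+k] - c[t-k]) / (2 * sum_k k^2)
    num = sum(k * (padded[width + k:T + width + k] - padded[width - k:T + width - k])
              for k in range(1, width + 1))
    return num / (2.0 * sum(k * k for k in range(1, width + 1)))
```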

Time domain Filtering of Image for Lip-reading Enhancement (시간영역 이미지 필터링에 의한 립리딩 성능 향상)

  • Lee, Jeeeun; Kim, Jinyoung; Lee, Joohun
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.45-48 / 2001
  • Lip reading has been studied as bimodal speech recognition that uses image information to improve speech recognition performance in noisy environments [1][2]. Lip reading systems using image information have already been implemented, but systems to date are not robust to environmental changes. In this paper, we apply an image-based lip reading method that locates the lip region more reliably and thereby improves performance. Because this method must process a large amount of data, a preprocessing stage is required: the input image is converted to gray level, the lips are folded in half, and principal component analysis (PCA) is applied. In addition, to improve recognition performance, a RASTA (Relative Spectral) filter, known to be effective for noise removal and analysis/synthesis in speech, is applied to remove components that vary too slowly or too abruptly in the time domain, along with other noise. As a result, a high recognition rate of 72.7% was obtained.
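
Of the preprocessing steps listed, lip folding is the least standard: it halves the data by exploiting the left-right symmetry of the mouth, mirroring the right half onto the left and averaging. A minimal sketch under that assumption (the RASTA step then filters the folded pixels along the time axis):

```python
import numpy as np

def fold_lips(roi):
    """roi: (height, width) grayscale lip image; returns (height, width // 2)."""
    w = roi.shape[1] // 2
    left = roi[:, :w].astype(np.float64)
    right = roi[:, -w:][:, ::-1].astype(np.float64)  # mirrored right half
    return (left + right) / 2.0
```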
