• Title/Abstract/Keyword: Speech analysis

청각 모델에 기초한 음성 특징 추출에 관한 연구 (A study on the speech feature extraction based on the hearing model)

  • 김바울; 윤석현; 홍광석; 박병철
    • 전자공학회논문지B / Vol. 33B, No. 4 / pp. 131-140 / 1996
  • In this paper, we propose a method that extracts speech features based on a hearing model through signal processing techniques. The proposed method consists of the following steps: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transform and re-synthesis using the discrete inverse wavelet transform, differentiation after analysis and synthesis, full-wave rectification, and integration. To verify the performance of the proposed speech feature in speech recognition tasks, Korean digit recognition experiments were carried out using both DTW and VQ-HMM. The results showed that, with DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks, respectively, and, with VQ-HMM, the rates were 96.5% and 81.5%, respectively. This indicates that the proposed speech feature has the potential to serve as a simple and efficient feature for recognition tasks.
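The processing chain named in this abstract (per-block normalization, wavelet analysis and per-band re-synthesis, differentiation, full-wave rectification, integration) can be sketched roughly as below. This is a reconstruction under assumptions, using PyWavelets for the DWT; the wavelet family, the number of levels, and summing each rectified band as the "integration" step are illustrative choices, not the paper's exact settings.

```python
# Hedged sketch of the hearing-model feature extraction; 'db4' and level=4 are guesses.
import numpy as np
import pywt

def hearing_model_features(frame, wavelet="db4", level=4):
    # 1) normalize the short-time block by its maximum absolute value
    frame = frame / (np.max(np.abs(frame)) + 1e-12)
    # 2) multi-resolution analysis: DWT, then re-synthesize each band alone
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    features = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        band = pywt.waverec(kept, wavelet)       # inverse DWT of one band
        band = np.diff(band, prepend=band[0])    # 3) differentiation
        band = np.abs(band)                      # 4) full-wave rectification
        features.append(np.sum(band))            # 5) integration over the block
    return np.array(features)
```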

Speech Denoising via Low-Rank and Sparse Matrix Decomposition

  • Huang, Jianjun; Zhang, Xiongwei; Zhang, Yafei; Zou, Xia; Zeng, Li
    • ETRI Journal / Vol. 36, No. 1 / pp. 167-170 / 2014
  • In this letter, we propose an unsupervised framework for speech noise reduction based on recent developments in low-rank and sparse matrix decomposition. The proposed framework directly separates the speech signal from noisy speech by decomposing the noisy speech spectrogram into three submatrices: the noise structure matrix, the clean speech structure matrix, and the residual noise matrix. Evaluations on the Noisex-92 dataset show that the proposed method achieves a signal-to-distortion ratio approximately 2.48 dB and 3.23 dB higher than those of the robust principal component analysis method and the non-negative matrix factorization method, respectively, when the input SNR is -5 dB.
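For orientation, the sketch below shows the generic robust-PCA style separation of a magnitude spectrogram into a low-rank part (repetitive noise structure) and a sparse part (speech activity) by alternating singular-value and soft thresholding. It is the baseline the letter compares against, not the authors' three-matrix model, and the regularization weight and iteration count are arbitrary choices.

```python
# Simplified inexact-ALM robust PCA on a magnitude spectrogram M (freq x time).
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def lowrank_sparse(M, lam=None, n_iter=100, mu=None):
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt
        # sparse update: elementwise soft thresholding
        S = soft(M - L + Y / mu, lam / mu)
        # dual update
        Y = Y + mu * (M - L - S)
    return L, S
```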

훈련데이터 기반의 temporal filter를 적용한 4연숫자 전화음성 인식 (Recognition of Korean Connected Digit Telephone Speech Using the Training Data Based Temporal Filter)

  • 정성윤; 배건성
    • 대한음성학회지:말소리 / No. 53 / pp. 93-102 / 2005
  • The performance of a speech recognition system is generally degraded in telephone environments because of distortions caused by background noise and varying channel characteristics. In this paper, data-driven temporal filters are investigated to improve performance on a specific recognition task, namely telephone speech. Three different temporal filtering methods are presented, with recognition results for Korean connected-digit telephone speech. Filter coefficients are derived from the cepstral-domain feature vectors using principal component analysis. According to the experimental results, the proposed temporal filtering method shows slightly better performance than the previous ones.
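One common way to derive such a data-driven temporal filter, sketched below under assumptions, is to collect fixed-length temporal windows of each cepstral trajectory from training data and take the leading principal component as an FIR filter applied along time. The context length of 11 frames and the use of only the first component are illustrative, not the paper's configuration.

```python
# PCA-derived temporal filter over cepstral trajectories (sketch).
import numpy as np

def derive_temporal_filter(train_cepstra, context=11):
    # train_cepstra: list of (n_frames, n_ceps) matrices from training utterances
    windows = []
    for C in train_cepstra:
        for k in range(C.shape[1]):
            traj = C[:, k]
            for t in range(len(traj) - context + 1):
                windows.append(traj[t:t + context])
    X = np.array(windows)
    X -= X.mean(axis=0)
    # leading right singular vector = leading eigenvector of the covariance matrix
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[0]

def apply_temporal_filter(C, h):
    # filter every cepstral trajectory along the time axis
    return np.stack([np.convolve(C[:, k], h, mode="same")
                     for k in range(C.shape[1])], axis=1)
```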

인간의 청각 척도에 관한 고찰 (A Study on the Human Auditory Scaling)

  • 양병곤
    • 음성과학 / Vol. 2 / pp. 125-134 / 1997
  • Human beings can perceive various aspects of sound, including loudness, pitch, length, and timbre. Recently, many studies have been conducted to clarify the complex auditory scales of the human ear. This study critically reviews some of these scales (decibel, sone, and phon for loudness perception; mel and bark for pitch) and proposes applying them to normalize acoustic correlates of human speech. One of the most important aspects of human auditory perception is its nonlinearity, which should be incorporated into linear speech analysis and synthesis systems. Further studies using more sophisticated equipment are desirable to refine these scales through the analysis of human auditory perception of complex tones or speech. This will help scientists develop better speech recognition and synthesis devices.
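The scales reviewed here have widely cited closed-form approximations; the snippet below collects the common ones (O'Shaughnessy's mel formula, the Zwicker-Terhardt bark approximation, and Stevens' phon-to-sone rule). Other published variants exist, so these are representative rather than definitive.

```python
# Common approximations of the auditory scales discussed above.
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def hz_to_bark(f):
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

def phon_to_sone(phon):
    # above 40 phon, perceived loudness roughly doubles every 10 phon
    return 2.0 ** ((phon - 40.0) / 10.0)

print(hz_to_mel(1000.0))   # about 1000 mel
print(hz_to_bark(1000.0))  # about 8.5 Bark
print(phon_to_sone(60.0))  # 4 sone
```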

Analysis of the Timing of Spoken Korean Using a Classification and Regression Tree (CART) Model

  • Chung, Hyun-Song; Huckvale, Mark
    • 음성과학 / Vol. 8, No. 1 / pp. 77-91 / 2001
  • This paper investigates the timing of Korean spoken in a news-reading style in order to improve the naturalness of durations used in Korean speech synthesis. Each segment in a corpus of 671 read sentences was annotated with 69 segmental and prosodic features so that the measured duration could be correlated with the context in which it occurred. A CART model based on these features showed a correlation coefficient of 0.79, with an RMSE (root mean squared prediction error) of 23 ms between actual and predicted durations on held-out test data. These results are comparable with recently published results for Korean and similar to results found for other languages. An analysis of the classification tree shows that phrasal structure has the greatest effect on segment duration, followed by syllable structure and the manner features of surrounding segments. The place features of surrounding segments have only small effects. The model has applications in Korean speech synthesis systems.
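A toy version of this kind of CART duration model is sketched below with scikit-learn. The placeholder features and synthetic durations stand in for the paper's 69 segmental/prosodic features and measured segment durations; only the fit-then-evaluate pattern (RMSE and correlation on held-out data) mirrors the abstract.

```python
# Decision-tree (CART) regression of segment durations on context features (toy data).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(2000, 10)).astype(float)            # placeholder context features
y = 60 + 8 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 10, 2000)     # placeholder durations (ms)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
tree = DecisionTreeRegressor(min_samples_leaf=20).fit(X_tr, y_tr)

pred = tree.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
corr = np.corrcoef(pred, y_te)[0, 1]
print(f"RMSE = {rmse:.1f} ms, r = {corr:.2f}")
```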

음향신호의 분석에 의한 후두질환의 진단에 관한 연구 (A Study on the Diagnosis of Laryngeal Diseases by Acoustic Signal Analysis)

  • 조철우; 양병곤; 왕수건
    • 음성과학 / Vol. 5, No. 1 / pp. 151-165 / 1999
  • This paper describes a series of studies on diagnosing vocal diseases using statistical methods and acoustic signal analysis. Speech materials were collected at a hospital. Using this pathological database, the basic parameters for diagnosis are obtained. Based on the statistical characteristics of these parameters, valid parameters are chosen and used to diagnose the pathological speech signal. The cepstrum is used to extract parameters that represent the characteristics of pathological speech. A three-layer neural network is trained to classify pathological speech into normal, benign, and malignant cases.
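A schematic pipeline matching this description is sketched below: frame-wise real-cepstrum parameters feed a small neural network that labels a recording as normal, benign, or malignant. The frame length, cepstral order, hidden-layer size, and the use of the plain real cepstrum averaged over frames are assumptions, not the authors' exact parameters.

```python
# Cepstral parameters + small MLP classifier (normal / benign / malignant) - sketch.
import numpy as np
from sklearn.neural_network import MLPClassifier

def cepstral_params(signal, n_ceps=20, frame_len=1024):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, frame_len // 2)]
    ceps = []
    for fr in frames:
        spec = np.abs(np.fft.rfft(fr * np.hamming(frame_len))) + 1e-12
        c = np.fft.irfft(np.log(spec))       # real cepstrum of the frame
        ceps.append(c[:n_ceps])
    return np.mean(ceps, axis=0)             # average over frames

# X: one cepstral vector per recording; y: 0 = normal, 1 = benign, 2 = malignant
# clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
```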

한국어 자동 발음열 생성을 위한 예외발음사전 구축 (Building an Exceptional Pronunciation Dictionary For Korean Automatic Pronunciation Generator)

  • 김선희
    • 음성과학 / Vol. 10, No. 4 / pp. 167-177 / 2003
  • This paper presents a method of building an exceptional pronunciation dictionary for a Korean automatic pronunciation generator. An automatic pronunciation generator is an essential element of a speech recognition system and a TTS (Text-To-Speech) system. It is composed of a set of regular rules and an exceptional pronunciation dictionary. The exceptional pronunciation dictionary is created by extracting words with exceptional pronunciations from a text corpus, based on the characteristics of such words identified through phonological research and text analysis. Thus, the method contributes to improving the performance of the Korean automatic pronunciation generator as well as that of speech recognition and TTS systems.
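The architecture described here, dictionary lookup first and regular rules as the fallback, can be illustrated with the minimal sketch below. The dictionary entry and the rule stub are placeholders; the paper's actual entries and rule ordering are not reproduced.

```python
# Exception-dictionary-first pronunciation generator (illustrative skeleton).
EXCEPTIONS = {
    # word (orthography): pronunciation; placeholder entry only
    "example_word": "example_pronunciation",
}

def regular_rules(word):
    # placeholder for the regular phonological rules (e.g. nasalization,
    # tensification, liaison) applied in order to the orthographic form
    return word

def pronounce(word):
    if word in EXCEPTIONS:           # exceptional pronunciation dictionary
        return EXCEPTIONS[word]
    return regular_rules(word)       # fall back to the regular rules
```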

언어 수행에서의 호흡과 기억 -호흡 단위와 휴지 단위의 양적 분석 결과를 바탕으로- (Breath and Memory in Speech based on Quantitative Analysis of Breath Groups and Pause Units in Korean)

  • 신지영
    • 한국어학 / Vol. 79 / pp. 91-116 / 2018
  • This paper raises issues of breath and memory in speech based on a quantitative analysis of breath groups and pause units in Korean. As human beings, we face two kinds of limitations on continuing speech: breath and memory. The prosodic structure and temporal structure of spontaneous speech data from six speakers were closely examined. One of the main findings of the present study is that the prosodic and temporal structure of Korean appears to reflect these breath and memory constraints on speech.
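As a rough illustration of the kind of quantitative segmentation implied here, the sketch below groups time-aligned word intervals into pause units and breath groups by silence duration. The 0.1 s and 0.5 s thresholds are arbitrary stand-ins, not the paper's criteria.

```python
# Group word intervals into units separated by pauses of at least `threshold` seconds.
def group_by_pause(intervals, threshold):
    groups, current = [], [intervals[0]]
    for prev, cur in zip(intervals, intervals[1:]):
        if cur[0] - prev[1] >= threshold:   # silence long enough: start a new group
            groups.append(current)
            current = []
        current.append(cur)
    groups.append(current)
    return groups

words = [(0.0, 0.4), (0.45, 0.9), (1.2, 1.6), (1.65, 2.1), (3.0, 3.5)]
pause_units   = group_by_pause(words, threshold=0.1)
breath_groups = group_by_pause(words, threshold=0.5)
print(len(pause_units), len(breath_groups))   # 3 pause units, 2 breath groups
```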

Progress, challenges, and future perspectives in genetic researches of stuttering

  • Kang, Changsoo
    • Journal of Genetic Medicine / Vol. 18, No. 2 / pp. 75-82 / 2021
  • Speech and language functions are highly cognitive and human-specific features. The underlying causes of normal speech and language function are believed to reside in the human brain. Developmental persistent stuttering, a speech and language disorder, has been regarded as the most challenging disorder in which to determine genetic causes because of the high percentage of spontaneous recovery among stutterers. This mysterious characteristic hinders speech pathologists from discriminating recovered stutterers from completely normal individuals. Over the last several decades, several genetic approaches have been used to identify the genetic causes of stuttering, and remarkable progress has been made in genome-wide linkage analysis followed by gene sequencing. So far, four genes, namely GNPTAB, GNPTG, NAGPA, and AP4E1, are known to cause stuttering. Furthermore, the generation of mouse models of stuttering and morphometry analysis have created new ways for researchers to identify brain regions that participate in human speech function and to understand the neuropathology of stuttering. In this review, we aimed to investigate previous progress, challenges, and future perspectives in understanding the genetics and neuropathology underlying persistent developmental stuttering.

한국어 자유 발화 음성의 억양 패턴 (Intonation Patterns of Korean Spontaneous Speech)

  • 김선희
    • 말소리와 음성과학 / Vol. 1, No. 4 / pp. 85-94 / 2009
  • This paper investigates the intonation patterns of Korean spontaneous speech through an analysis of four dialogues in the domain of travel planning. The speech corpus, a subset of a spontaneous speech database recorded and distributed by ETRI, is labeled with APs and IPs based on the K-ToBI system using Momel, an intonation stylization algorithm. It was found that, unlike in English, a significant number of APs and IPs include hesitation lengthening, which is known to be a disfluency phenomenon due to speech planning. This paper also claims that hesitation lengthening is different from IP-final lengthening and should be treated as a separate category, as it greatly affects the intonation patterns of the language. Apart from the fact that 19.09% of APs show hesitation lengthening, the spontaneous speech shows the same AP patterns as read speech, but with a higher frequency of falling patterns such as LHL, whereas read speech shows more LH and LHLH patterns. The IP boundary tones of spontaneous speech show the same five patterns as read speech (L%, HL%, LHL%, H%, and LH%), but with a higher frequency of rising patterns (H% and LH%) and contour tones (HL%, LH%, LHL%), whereas read speech shows a higher frequency of falling patterns and simple tones at the end of IPs.
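The percentages reported here come from tabulating labeled boundary tones; a small sketch of that tabulation is shown below. The label sequence is invented for illustration and does not reproduce the corpus counts.

```python
# Relative frequencies of IP boundary-tone labels (illustrative data).
from collections import Counter

ip_tones = ["L%", "HL%", "H%", "LH%", "L%", "H%", "LHL%", "H%", "LH%", "L%"]
counts = Counter(ip_tones)
total = sum(counts.values())
for tone, n in counts.most_common():
    print(f"{tone:5s} {100.0 * n / total:5.1f}%")
```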
