• Title/Abstract/Keyword: Speech signal analysis

Pitch Detection Using Wavelet Transform

  • 석종원;손영호;배건성
    • 음성과학 (Speech Sciences) / Vol. 5, No. 1 / pp. 23-33 / 1999
  • Mallat has shown that, with a proper choice of wavelet function, the local maxima of the wavelet-transformed signal indicate sharp variations in the signal. Since glottal closure causes sharp discontinuities in the speech signal, the dyadic wavelet transform can be useful for detecting abrupt changes, i.e., epochs, in voiced sounds. In this paper, we investigate the glottal closure instants obtained from wavelet analysis of the speech signal and compare them with those obtained from the EGG signal. We then detect the pitch period of the speech signal on the basis of these results. Experimental results demonstrate that the local maxima of the wavelet-transformed signal give an accurate estimate of the epochs, and that the pitch periods of voiced sounds obtained by the proposed algorithm correspond well with those from the EGG signal.

  • PDF
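
The epoch-and-pitch idea in the abstract above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's algorithm: a Haar-like edge detector stands in for Mallat's dyadic wavelet, and the scale, threshold, and synthetic glottal-pulse signal are assumptions.

```python
import numpy as np

def wavelet_epochs(x, scale=3):
    """Dyadic-scale edge detector: convolve with a Haar-like wavelet of
    width 2**(scale+1) samples; local maxima of the response mark sharp
    rises in the signal, such as glottal closure instants."""
    w = 2 ** scale
    kernel = np.concatenate([np.ones(w), -np.ones(w)]) / (2.0 * w)
    wt = np.convolve(x, kernel, mode="same")
    thr = 0.5 * wt.max()
    return np.array([i for i in range(1, len(wt) - 1)
                     if wt[i] > wt[i - 1] and wt[i] >= wt[i + 1] and wt[i] > thr])

def pitch_from_epochs(epochs, fs):
    """Median spacing between consecutive epochs gives the pitch period."""
    return fs / float(np.median(np.diff(epochs)))

# Synthetic voiced-like signal: a 100 Hz glottal pulse train at 8 kHz,
# smoothed with a short decaying filter.
fs = 8000
x = np.zeros(fs)
x[::80] = 1.0                                    # pulse every 80 samples
x = np.convolve(x, np.exp(-np.arange(40) / 8.0), mode="same")

epochs = wavelet_epochs(x, scale=3)
f0 = pitch_from_epochs(epochs, fs)               # close to 100 Hz
```

Mallat's construction uses smooth wavelets and modulus maxima tracked across several dyadic scales; the Haar kernel here is only the simplest member of that family.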

Analysis of Speech Signals Depending on the Microphone and Microphone Distance

  • Son, Jong-Mok
    • The Journal of the Acoustical Society of Korea / Vol. 17, No. 4E / pp. 41-47 / 1998
  • The microphone is the first link in the speech recognition chain. Depending on its type and mounting position, the microphone can significantly distort the spectrum and affect the performance of a speech recognition system. In this paper, characteristics of the speech signal for different microphones and microphone distances are investigated in both the time and frequency domains. In the time-domain analysis, the average signal-to-noise ratio is measured for the database we collected for each microphone and microphone distance. Mel-frequency spectral coefficients and the mel-frequency cepstrum are computed to examine the spectral characteristics. Analysis results are discussed together with our findings, and the results of recognition experiments are given.

  • PDF
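
As a toy version of the time-domain SNR measurement described above, the sketch below estimates an average SNR by taking the quietest frames as the noise floor and the loudest frames as speech. The frame length, percentile fraction, and synthetic signal are illustrative assumptions, not the paper's recording setup.

```python
import numpy as np

def frame_energies(x, frame_len=160):
    """Mean-square energy of consecutive 20 ms frames (at 8 kHz)."""
    n = len(x) // frame_len
    return np.mean(x[:n * frame_len].reshape(n, frame_len) ** 2, axis=1)

def average_snr_db(x, frame_len=160, fraction=0.2):
    """Average SNR: ratio of the mean energy of the loudest frames
    (speech) to that of the quietest frames (noise floor)."""
    e = np.sort(frame_energies(x, frame_len))
    k = max(1, int(fraction * len(e)))
    return 10.0 * np.log10(np.mean(e[-k:]) / np.mean(e[:k]))

# Check on synthetic data: a tone "utterance" embedded in weak noise.
rng = np.random.default_rng(0)
fs = 8000
x = 0.01 * rng.standard_normal(fs)
t = np.arange(2000, 6000)
x[2000:6000] += np.sin(2 * np.pi * 440 * t / fs)
snr = average_snr_db(x)
```

A distant or low-sensitivity microphone shows up directly as a lower value of this measure, which is why the paper tabulates it per microphone and distance.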

Spectrum Flattening Techniques for Pitch Detection

  • 김종국;조왕래;배명진
    • Proceedings of the IEEK Conference / 2002 Summer Conference (4) / pp. 381-384 / 2002
  • In speech signal processing, exact pitch detection is very important for speech recognition, synthesis, and analysis. However, pitch detection from the speech signal is difficult because of the effects of the formants and transition amplitudes. In this paper, we therefore propose a pitch detection method using spectrum flattening techniques. Spectrum flattening eliminates the effects of the formants and transition amplitudes. In the time domain, positive center clipping is performed to emphasize the pitch period using the glottal component with the vocal tract characteristics removed. In the frequency domain, a rough formant envelope is computed by peak-fitting the spectrum of the original speech signal. The algebraic difference between the spectrum of the original speech signal and the smoothed formant envelope then yields a flattened harmonic waveform, i.e., a residual signal with the vocal tract component removed. The performance was compared with LPC, cepstrum, and ACF methods. With this algorithm, we obtained improved pitch detection accuracy, and the gross error rate is reduced in voiced speech regions and in transition regions between phonemes.

  • PDF
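
The positive center-clipping step described in this abstract is easy to sketch; the full method additionally flattens the spectrum by subtracting a peak-fitted formant envelope, which is omitted here. The threshold ratio, lag range, and test signal are illustrative assumptions.

```python
import numpy as np

def positive_center_clip(x, ratio=0.3):
    """Positive center clipping: keep only the part of the waveform above
    a threshold set relative to the frame maximum, which suppresses the
    formant ripple and emphasizes the glottal (pitch) peaks."""
    thr = ratio * np.max(x)
    return np.where(x > thr, x - thr, 0.0)

def pitch_autocorr(x, fs, fmin=60.0, fmax=400.0):
    """Pitch as the autocorrelation peak within a plausible lag range."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(r[lo:hi]))
    return fs / lag

fs = 8000
t = np.arange(0, 0.04, 1.0 / fs)       # one 40 ms analysis frame
# 125 Hz fundamental plus strong harmonics standing in for formant energy.
x = (np.sin(2 * np.pi * 125 * t)
     + 0.8 * np.sin(2 * np.pi * 625 * t)
     + 0.6 * np.sin(2 * np.pi * 1250 * t))
f0 = pitch_autocorr(positive_center_clip(x), fs)
```

Without the clipping (or the paper's spectral flattening), strong formant harmonics can pull the autocorrelation peak away from the true pitch lag.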

Performance Analysis of Noisy Speech Recognition Depending on Parameters for Noise and Signal Power Estimation in MMSE-STSA Based Speech Enhancement

  • 박철호;배건성
    • 대한음성학회지: 말소리 (MALSORI) / No. 57 / pp. 153-164 / 2006
  • The MMSE-STSA based speech enhancement algorithm is widely used as a preprocessing step for noise-robust speech recognition. It weights each spectral bin of the noisy speech by a gain computed from estimates of the noise and signal power spectra. In this paper, we investigate how the parameters used in MMSE-STSA to estimate the speech signal and noise power influence the recognition performance for noisy speech. For the experiments, we use the Aurora2 DB, which contains noisy speech with subway, babble, car, and exhibition noises. An HTK-based continuous HMM system is constructed for the recognition experiments. Experimental results are presented and discussed together with our findings.

  • PDF
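
The two estimates whose parameters the paper varies can be sketched as follows: a recursively smoothed noise power and the decision-directed a priori SNR. A simplified Wiener-type gain stands in for the full MMSE-STSA gain (which also involves the a posteriori SNR and Bessel functions); all parameter values below are illustrative.

```python
import numpy as np

def update_noise_power(noise_pow, frame_pow, speech_present, lam=0.98):
    """Recursive noise power estimate: smoothed during noise-only frames,
    frozen while speech is present; lam is the smoothing parameter."""
    if speech_present:
        return noise_pow
    return lam * noise_pow + (1.0 - lam) * frame_pow

def decision_directed_xi(prev_clean_amp2, noise_pow, gamma, alpha=0.98):
    """Decision-directed a priori SNR estimate; alpha is the smoothing
    parameter whose setting experiments of this kind vary."""
    return alpha * prev_clean_amp2 / noise_pow + (1.0 - alpha) * max(gamma - 1.0, 0.0)

def spectral_gain(xi):
    """Simplified Wiener-type gain xi / (1 + xi); the full MMSE-STSA gain
    additionally involves the a posteriori SNR and Bessel functions."""
    return xi / (1.0 + xi)

# One spectral bin over a few frames: the noise power settles toward the
# true value, and the gain suppresses a low-SNR bin while passing a
# high-SNR one.
npow = 1.0
for _ in range(200):
    npow = update_noise_power(npow, 0.1, speech_present=False)
xi_low = decision_directed_xi(0.01, npow, gamma=1.1)
xi_high = decision_directed_xi(10.0, npow, gamma=30.0)
```

Larger alpha (or lam) gives smoother estimates with less musical noise but slower tracking; this trade-off is exactly what shows up as recognition-rate differences in the paper's experiments.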

Matlab Implementation of Real-time Speech Analysis Tool

  • 박일서;김대현;조철우
    • 대한음성학회지: 말소리 (MALSORI) / No. 44 / pp. 93-104 / 2002
  • There are many speech analysis tools available. Among them, a real-time analysis tool is very useful for interactive experiments. In this work, a real-time speech analysis tool was implemented using Matlab. Matlab is a widely used general-purpose signal processing tool, but its computational speed is generally lower than that of code written in conventional compiled languages. In particular, real-time analysis involving both signal input and output of results was not possible in the past. However, owing to the improved computing power of PCs and the inclusion of real-time I/O toolboxes in Matlab, real-time analysis is now possible to some extent using Matlab alone. Pitch and spectral information are computed in real time. The results show that such real-time applications can be implemented easily in Matlab.

  • PDF
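
The kind of frame-by-frame loop such a tool runs can be sketched as follows (the paper's implementation is in Matlab with a real-time I/O toolbox; this Python version, with its block size, search range, and voicing threshold, is only an illustrative assumption).

```python
import numpy as np

def analyze_frame(frame, fs):
    """Per-block analysis: log-magnitude spectrum plus an autocorrelation
    pitch estimate, i.e., the quantities displayed in real time."""
    spec = 20 * np.log10(np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) + 1e-12)
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / 400), int(fs / 60)             # 60-400 Hz pitch search
    lag = lo + int(np.argmax(r[lo:hi]))
    f0 = fs / lag if r[lag] > 0.3 * r[0] else 0.0    # crude voicing check
    return f0, spec

# Simulated streaming input: a 200 Hz tone processed in 32 ms blocks, as
# an audio callback would deliver them from the sound card.
fs, hop = 8000, 256
x = np.sin(2 * np.pi * 200 * np.arange(fs) / fs)
pitches = [analyze_frame(x[i:i + hop], fs)[0]
           for i in range(0, len(x) - hop, hop)]
```

In a real-time setting, the key constraint is that `analyze_frame` finishes well within one block duration, which is the limitation the paper reports Matlab can now meet for analyses of this size.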

Split Model Speech Analysis Techniques for Wideband Speech Signal

  • Park YoungHo;Ham MyungKyu;You KwangBock;Bae MyungJin
    • Proceedings of the Acoustical Society of Korea Conference / 1999 Conference, Vol. 18, No. 1 / pp. 20-23 / 1999
  • In this paper, the split model analysis algorithm, which can generate a wideband speech signal from the spectral information of a narrowband signal, is developed. The algorithm separates the $10^{th}$-order LPC model into five cascade-connected $2^{nd}$-order models. Using the less complex $2^{nd}$-order models avoids the complicated nonlinear relationships between the model parameters and the poles of the full LPC model. The relationship between the model parameters and the corresponding analog poles is derived and applied to each $2^{nd}$-order model. The wideband speech signal is then obtained by changing only the sampling rate.

  • PDF
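
The core decomposition, splitting a 10th-order LPC denominator into five cascaded 2nd-order sections, can be sketched by factoring the polynomial and pairing complex-conjugate roots. The formant frequencies and radii below are hypothetical values used only to build a test model.

```python
import numpy as np

def split_lpc(a):
    """Split an LPC denominator A(z) = 1 + a1 z^-1 + ... + a10 z^-10 into
    cascaded second-order sections by pairing complex-conjugate roots."""
    roots = np.roots(a)
    # Keep one root of each conjugate pair (positive imaginary part),
    # ordered by pole angle (i.e., resonance frequency).
    upper = sorted([r for r in roots if r.imag > 0], key=lambda r: abs(np.angle(r)))
    # (1 - r z^-1)(1 - conj(r) z^-1) = 1 - 2 Re(r) z^-1 + |r|^2 z^-2
    return [np.array([1.0, -2.0 * r.real, abs(r) ** 2]) for r in upper]

# Build a 10th-order all-pole model from five known resonances, split it,
# and confirm the cascade reproduces the original polynomial.
fs = 8000
freqs = [500, 1500, 2500, 3200, 3700]     # hypothetical formant-like poles
radii = [0.97, 0.95, 0.94, 0.90, 0.90]
a = np.array([1.0])
for f, rr in zip(freqs, radii):
    th = 2 * np.pi * f / fs
    a = np.convolve(a, [1.0, -2 * rr * np.cos(th), rr ** 2])

sections = split_lpc(a)
recon = np.array([1.0])
for s in sections:
    recon = np.convolve(recon, s)         # cascade back to 10th order
```

Each 2nd-order section exposes one pole pair directly through its two coefficients, which is what makes the per-resonance manipulation in the paper tractable.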

Split Model Speech Analysis Techniques for Speech Signal Enhancement

  • Park, Young-Ho;You, Kwang-Bock;Bae, Myung-Jin
    • Proceedings of the IEEK Conference / 1999 Fall Conference / pp. 1135-1138 / 1999
  • In this paper, the split model analysis algorithm, which can generate a wideband speech signal from the spectral information of a narrowband signal, is developed. The algorithm separates the $10^{th}$-order LPC model into five cascade-connected $2^{nd}$-order models. Using the less complex $2^{nd}$-order models avoids the complicated nonlinear relationships between the model parameters and the poles of the full LPC model. The relationship between the model parameters and the corresponding analog poles is derived and applied to each $2^{nd}$-order model. The wideband speech signal is then obtained by changing only the sampling rate.

  • PDF

Speech Signal Processing for Analysis of Chaos Pattern

  • 김태식
    • 음성과학 (Speech Sciences) / Vol. 8, No. 3 / pp. 149-157 / 2001
  • Based on chaos theory, a new method of representing the speech signal is presented in this paper. The method can be used for pattern matching tasks such as speaker recognition. Attractors are well represented by logistic maps, which exhibit chaotic phenomena. In speaker recognition, a speaker's vocal habits can be an important matching parameter. The attractor configuration built from successive values of the speech signal can be used to analyze how voice undulations at one point on the vocal loudness scale influence the next point. Attractors arranged by this method can also be used in speech recognition research, because they contain information unique to each speaker.

  • PDF
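
The attractor idea can be illustrated with the logistic map the abstract mentions: a time-delay embedding of the scalar series makes the deterministic structure visible as a geometric pattern (for the logistic map, a parabola). The parameter r, the initial value, and the embedding settings below are illustrative assumptions.

```python
import numpy as np

def logistic_map(r, x0, n):
    """Iterate the logistic map x_{k+1} = r x_k (1 - x_k), the canonical
    chaotic system used to motivate attractor representations."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        xs[k] = x
        x = r * x * (1.0 - x)
    return xs

def delay_embed(x, dim=2, tau=1):
    """Time-delay embedding: map a scalar series to points
    (x[t], x[t+tau], ...) whose geometry forms the attractor pattern."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

xs = logistic_map(3.9, 0.4, 2000)
pts = delay_embed(xs, dim=2, tau=1)
# For the logistic map, the 2-D embedded points lie exactly on the
# parabola y = r x (1 - x), even though the series itself looks random.
err = np.max(np.abs(pts[:, 1] - 3.9 * pts[:, 0] * (1 - pts[:, 0])))
```

Applied to a speech waveform instead of `xs`, the same embedding produces the speaker-dependent attractor configurations the paper proposes as matching features.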

Analysis of Transient Features in Speech Signals by Estimating Short-term Energy and Inflection Points

  • 최일홍;장승관;차태호;최웅세;김창석
    • 음성과학 (Speech Sciences) / Vol. 3 / pp. 156-166 / 1998
  • In this paper, we propose a segmentation method based on estimating inflection points and the average magnitude energy of speech signals. The proposed method not only gives a satisfactory solution to the problems of zero-crossing-rate based segmentation, but also allows the features of the transient period to be estimated after the starting point and the transient period preceding the steady state are located. In experiments with monosyllabic speech, the starting and ending points of the speech signals were divided exactly by the method even when the speech samples contained a DC level. In addition, features such as the length of the transient period, the short-term energy, and the frequency characteristics of each speech signal could be compared.

  • PDF
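
A minimal sketch of the two measurements named in the title: endpoint detection from short-term average-magnitude energy (with DC removal, the robustness property the abstract highlights) and an inflection-point count from second-difference sign changes. Frame length, threshold ratio, and the synthetic monosyllable are illustrative assumptions.

```python
import numpy as np

def short_term_energy(x, frame_len=80):
    """Average-magnitude energy of consecutive 10 ms frames (at 8 kHz)."""
    n = len(x) // frame_len
    return np.mean(np.abs(x[:n * frame_len].reshape(n, frame_len)), axis=1)

def count_inflections(x):
    """Inflection points counted as sign changes of the second difference."""
    d2 = np.diff(x, n=2)
    return int(np.sum(np.signbit(d2[:-1]) != np.signbit(d2[1:])))

def find_endpoints(x, frame_len=80, ratio=0.1):
    """Starting/ending frames where the energy exceeds a threshold set
    relative to the peak; removing the mean first makes the decision
    robust to a constant DC level."""
    x = x - np.mean(x)
    e = short_term_energy(x, frame_len)
    active = np.flatnonzero(e > ratio * e.max())
    return int(active[0]), int(active[-1])

# Synthetic monosyllable: silence, a 200 Hz voiced burst, silence, all
# riding on a DC offset as in the paper's test condition.
fs = 8000
x = np.full(fs, 0.3)                             # constant DC level
t = np.arange(2000, 6000)
x[2000:6000] += np.sin(2 * np.pi * 200 * t / fs)
start, end = find_endpoints(x)                   # active frame indices
n_infl = count_inflections(x[2000:2400])         # inflections after onset
```

Because the second difference ignores any constant offset, the inflection count stays valid on DC-shifted samples, complementing the energy-based endpoint decision.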

Feature Parameter Extraction and Analysis in the Wavelet Domain for Discrimination of Music and Speech

  • 김정민;배건성
    • 대한음성학회지: 말소리 (MALSORI) / No. 61 / pp. 63-74 / 2007
  • Discrimination of music and speech in a multimedia signal is an important task in audio coding and broadcast monitoring systems. This paper deals with feature parameter extraction for discriminating music from speech. The wavelet transform is a multi-resolution analysis method that is useful for analyzing the temporal and spectral properties of non-stationary signals such as speech and audio. We propose new feature parameters, extracted from the wavelet-transformed signal, for discriminating music from speech. First, wavelet coefficients are obtained on a frame-by-frame basis, with the analysis frame size set to 20 ms. A parameter $E_{sum}$ is then defined by summing the magnitude differences between adjacent wavelet coefficients in each scale. The maximum and minimum values of $E_{sum}$ over a 2-second period, which corresponds to the discrimination duration, are used as the feature parameters. To evaluate the proposed feature parameters, discrimination accuracy is measured for various types of music and speech signals. In the experiments, every 2-second segment is classified as music or speech, and about 93% of the music and speech segments were detected correctly.

  • PDF
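
The $E_{sum}$ feature can be sketched with a plain Haar decomposition (the abstract does not name the mother wavelet, so Haar is an assumption), contrasting a rapidly varying noise-like frame with a steady tonal one.

```python
import numpy as np

def haar_dwt(x, levels=3):
    """Plain Haar wavelet decomposition returning the detail
    coefficients at each scale."""
    details = []
    a = x.astype(float)
    for _ in range(levels):
        if len(a) % 2:
            a = a[:-1]
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        details.append(d)
    return details

def e_sum(frame, levels=3):
    """E_sum feature: sum over scales of the absolute differences
    between adjacent wavelet coefficients within each scale."""
    return sum(np.sum(np.abs(np.diff(d))) for d in haar_dwt(frame, levels))

# One 20 ms frame at 16 kHz of each kind: a noise-like ("speech-ish",
# rapidly varying) frame versus a pure tone ("music-ish", steady) frame.
rng = np.random.default_rng(1)
fs, frame_len = 16000, 320
noisy = rng.standard_normal(frame_len)
tone = np.sin(2 * np.pi * 440 * np.arange(frame_len) / fs)
e_speechish = e_sum(noisy)
e_musicish = e_sum(tone)
```

Over a real 2 s decision window, one would collect the per-frame $E_{sum}$ values and feed their minimum and maximum to the classifier, as the abstract describes.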