• Title/Summary/Keyword: Speech signal


Binary Mask Criteria Based on Distortion Constraints Induced by a Gain Function for Speech Enhancement

  • Kim, Gibak
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.2 no.4
    • /
    • pp.197-202
    • /
    • 2013
  • Large gains in speech intelligibility can be obtained using the SNR-based binary mask approach. This approach retains the time-frequency (T-F) units of the mixture signal in which the target signal is stronger than the interfering noise (masker) (e.g., SNR > 0 dB) and removes the T-F units in which the interfering noise is dominant. This paper introduces two alternative binary masks based on distortion constraints to improve speech intelligibility. The distortion constraints are induced by a gain function for estimating the short-time spectral amplitude. One binary mask is designed to retain speech-underestimated T-F units while removing speech-overestimated T-F units; the other is designed to retain noise-overestimated T-F units while removing noise-underestimated T-F units. Listening tests with oracle binary masks were conducted to assess the potential of the two binary masks for improving intelligibility. The results suggest that the two binary masks based on distortion constraints can provide large gains in intelligibility when applied to noise-corrupted speech.
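The SNR-based criterion described in the abstract can be sketched as follows. This is a minimal illustration assuming oracle access to the separate target and masker spectrograms; the function names and the 0 dB local criterion are illustrative, not taken from the paper:

```python
import numpy as np

def ideal_binary_mask(target_tf, noise_tf, lc_db=0.0):
    """Retain T-F units whose local SNR exceeds the criterion (e.g. 0 dB)."""
    eps = 1e-12  # avoid log of zero
    snr_db = 10.0 * np.log10((np.abs(target_tf) ** 2 + eps) /
                             (np.abs(noise_tf) ** 2 + eps))
    return (snr_db > lc_db).astype(float)

def apply_mask(mixture_tf, mask):
    """Zero out the masker-dominated units of the mixture spectrogram."""
    return mixture_tf * mask
```

In practice the mask would be applied to STFT magnitudes before resynthesis; with oracle signals, as in the listening tests described above, the mask is computed directly from the known target and masker.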


Two-Microphone Binary Mask Speech Enhancement in Diffuse and Directional Noise Fields

  • Abdipour, Roohollah;Akbari, Ahmad;Rahmani, Mohsen
    • ETRI Journal
    • /
    • v.36 no.5
    • /
    • pp.772-782
    • /
    • 2014
  • Two-microphone binary mask speech enhancement (2mBMSE) has been of particular interest in recent literature and has shown promising results. Current 2mBMSE systems rely on spatial cues of speech and noise sources. Although these cues are helpful for directional noise sources, they lose their efficiency in diffuse noise fields. We propose a new system that is effective in both directional and diffuse noise conditions. The system exploits two features. The first determines whether a given time-frequency (T-F) unit of the input spectrum is dominated by a diffuse or directional source. A diffuse signal is certainly a noise signal, but a directional signal could correspond to a noise or speech source. The second feature discriminates between T-F units dominated by speech or directional noise signals. Speech enhancement is performed using a binary mask, calculated based on the proposed features. In both directional and diffuse noise fields, the proposed system segregates speech T-F units with hit rates above 85%. It outperforms previous solutions in terms of signal-to-noise ratio and perceptual evaluation of speech quality improvement, especially in diffuse noise conditions.

Implementation of a Speaker-independent Speech Recognizer Using the TMS320F28335 DSP (TMS320F28335 DSP를 이용한 화자독립 음성인식기 구현)

  • Chung, Ik-Joo
    • Journal of Industrial Technology
    • /
    • v.29 no.A
    • /
    • pp.95-100
    • /
    • 2009
  • In this paper, we implemented a speaker-independent speech recognizer using the TMS320F28335 DSP, which is optimized for control applications. For this implementation, we used a small commercial DSP module and developed a peripheral board including a codec, signal-conditioning circuits, and I/O interfaces. The speech signal digitized by the TLV320AIC23 codec is analyzed using the MFCC feature extraction method and recognized using a continuous-density HMM. Thanks to the internal SRAM and flash memory of the TMS320F28335, no external memory devices were needed; the internal flash memory holds ADPCM data for voice responses as well as the HMM data. Since the TMS320F28335 is optimized for control applications, the recognizer may play a useful role in voice-activated control areas in that it can integrate speech recognition capability and the inherent control functions into a single DSP.


Lexical Status and the Degree of /l/-darkening

  • Ahn, Miyeon
    • Phonetics and Speech Sciences
    • /
    • v.7 no.3
    • /
    • pp.73-78
    • /
    • 2015
  • This study explores the degree of velarization of English word-final /l/ (i.e., /l/-darkness) according to lexical status. Lexical status is defined as whether a speech stimulus is considered a word or a non-word. We examined the temporal and spectral properties of word-final /l/ in terms of duration and the F2-F1 frequency difference, varying the immediately preceding vowels. The results showed that both temporal and spectral properties were contrastive across all vowel contexts, with real words having shorter [l] durations and lower F2-F1 values than non-words. That is, /l/ is more heavily velarized in words than in non-words, which suggests that lexical status, i.e., whether language users encode the speech signal as a word or not, is deeply involved in speech production.

A Study of Peak Finding Algorithms for the Autocorrelation Function of Speech Signal

  • So, Shin-Ae;Lee, Kang-Hee;You, Kwang-Bock;Lim, Ha-Young;Park, Ji Su
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.12
    • /
    • pp.131-137
    • /
    • 2016
  • In this paper, peak-finding algorithms for the autocorrelation function (ACF), which is widely exploited for detecting the pitch of voiced signals, are proposed. It is a well-known fact that estimating the fundamental frequency (F0) of a speech signal is not only a very important task but also quite a difficult one. The proposed algorithms exploit several characteristics of the ACF, such as its monotonically increasing segments. Thus, the proposed algorithms can estimate the fundamental frequency reliably and correctly as long as the autocorrelation function of the speech signal is accurate. Since the proposed algorithms reduce computational complexity, they can be applied to real-time processing. Speech data composed of Korean emotion-expression words are used to evaluate performance, and the measured pitches are compared across the proposed algorithms.
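A minimal ACF-based pitch estimator in the spirit of this abstract can be sketched as follows. This is a baseline illustration, not the paper's algorithm; the 60-400 Hz search range is an assumption:

```python
import numpy as np

def acf(x):
    """Normalized autocorrelation of a signal frame."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    return r / r[0]

def pitch_from_acf(x, fs, fmin=60.0, fmax=400.0):
    """Estimate F0 as the lag of the highest ACF peak in a plausible lag range."""
    r = acf(x)
    lo, hi = int(fs / fmax), int(fs / fmin)  # short lags = high F0
    lag = lo + int(np.argmax(r[lo:hi + 1]))
    return fs / lag
```

Restricting the peak search to the lag range of plausible pitch periods is what keeps the estimate away from the trivial maximum at lag 0.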

Investigation on Dynamic Behavior of Formant Information (포만트 정보의 동적 변화특성 조사에 관한 연구)

  • Jo, Cheolwoo
    • Phonetics and Speech Sciences
    • /
    • v.7 no.2
    • /
    • pp.157-162
    • /
    • 2015
  • This study reports an effective way of displaying dynamic formant information in the F1-F2 space. Conventional F1-F2 spaces (also known as the vowel triangle or vowel rectangle) have been used to investigate the vowel characteristics of a speaker or a language based on statistics of the F1 and F2 values computed by the spectral envelope search method. Those methods deal mainly with the static information of the formants, not with changes in the formant values (i.e., dynamic information). A better way of investigating dynamic information from the formant values of a speech signal is therefore suggested, so that more convenient and detailed investigation of dynamic changes can be achieved in the F1-F2 space. The suggested method visualizes static and dynamic information in an overlaid manner so that changes in the formant information can be observed easily. Finally, examples of the implemented display for several continuous vowels are shown to demonstrate the usefulness of the suggested method.

F-ratio of Speaker Variability in Emotional Speech

  • Yi, So-Pae
    • Speech Sciences
    • /
    • v.15 no.1
    • /
    • pp.63-72
    • /
    • 2008
  • Various acoustic features were extracted and analyzed to estimate the inter- and intra-speaker variability of emotional speech. Tokens of the vowel /a/ from sentences spoken in different emotional modes (sadness, neutral, happiness, fear, and anger) were analyzed. All of the acoustic features (fundamental frequency, spectral slope, HNR, H1-A1, and formant frequency) contributed more to inter- than to intra-speaker variability across all emotions. Each acoustic feature of the speech signal showed a different degree of contribution to speaker discrimination in different emotional modes. Sadness and neutral showed greater speaker discrimination than the other emotional modes (happiness, fear, and anger, in descending order of F-ratio). In other words, speaker specificity was better represented in sadness and neutral than in happiness, fear, and anger for any of the acoustic features.
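The F-ratio used here compares between-speaker to within-speaker variability of an acoustic feature: a larger value means the feature separates speakers better. The following is a simplified formulation of that idea for illustration, not necessarily the paper's exact statistic:

```python
import numpy as np

def f_ratio(groups):
    """Ratio of between-speaker variance (variance of per-speaker means)
    to within-speaker variance (mean of per-speaker variances).
    groups: list of 1-D arrays, one array of feature values per speaker."""
    means = np.array([np.mean(g) for g in groups])
    between = np.var(means)
    within = np.mean([np.var(g) for g in groups])
    return between / within
```

Applied per emotional mode, a feature/mode combination with a higher ratio (e.g., F0 in sadness) carries more speaker-specific information.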


A Study of the Pitch Estimation Algorithms of Speech Signal by Using Average Magnitude Difference Function (AMDF) (AMDF 함수를 이용한 음성 신호의 피치 추정 Algorithm들에 관한 연구)

  • So, Shinae;Lee, Kang Hee;You, Kwang-Bock;Lim, Ha-Young;Park, Jisu
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.4
    • /
    • pp.235-242
    • /
    • 2017
  • Peak- (or null-) finding algorithms for the average magnitude difference function (AMDF) of speech signals are proposed in this paper. Both the AMDF and the autocorrelation function (ACF) are widely used to estimate the pitch of a speech signal. It is well known that estimating the fundamental frequency (F0) of a speech signal is not only important but also very difficult. In this paper, two algorithms that exploit the characteristics of the AMDF are proposed. The first applies a threshold to the local minima of the AMDF to detect the pitch period. The second estimates the pitch period by utilizing the relationship between the AMDF and the ACF. The data, recorded with a general commercial device, consist of Korean emotion-expression words. The recorded speech data are applied to the two proposed algorithms to test their performance.
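The threshold-on-local-minima idea can be sketched as below. This is a minimal interpretation: the relative threshold of 0.3 and the 60-400 Hz search range are illustrative assumptions, not values from the paper:

```python
import numpy as np

def amdf(x, max_lag):
    """Average magnitude difference function for lags 1..max_lag."""
    n = len(x)
    return np.array([np.mean(np.abs(x[: n - k] - x[k:]))
                     for k in range(1, max_lag + 1)])

def pitch_from_amdf(x, fs, fmin=60.0, fmax=400.0, rel_threshold=0.3):
    """Pick the first local null of the AMDF that falls below a
    relative threshold; its lag is taken as the pitch period."""
    max_lag = int(fs / fmin)
    d = amdf(x, max_lag)
    lo = int(fs / fmax)
    thr = rel_threshold * np.mean(d)
    for k in range(lo, max_lag):          # lag k is stored at index k - 1
        i = k - 1
        if 0 < i < len(d) - 1 and d[i] <= thr \
                and d[i] <= d[i - 1] and d[i] <= d[i + 1]:
            return fs / k
    # fallback: global null in the search range
    return fs / (lo + int(np.argmin(d[lo - 1:])))
```

Unlike the ACF, which peaks at the pitch period, the AMDF dips toward zero there, so the search is for nulls rather than peaks.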

A Comparison Study on the Speech Signal Parameters for Chinese Leaners' Korean Pronunciation Errors - Focused on Korean /ㄹ/ Sound (중국인 학습자의 한국어 발음 오류에 대한 음성 신호 파라미터들의 비교 연구 - 한국어의 /ㄹ/ 발음을 중심으로)

  • Lee, Kang-Hee;You, Kwang-Bock;Lim, Ha-Young
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.6
    • /
    • pp.239-246
    • /
    • 2017
  • This paper compares speech signal parameters between Korean and Chinese speakers for the Korean pronunciation /ㄹ/, which causes many errors for Chinese learners. The allophones of Korean /ㄹ/ are divided into a lateral group and a tap group. The reasons for these errors are investigated by studying the similarities and differences between the Korean /ㄹ/ pronunciation and its corresponding Chinese pronunciations. For the comparison, speech signal parameters such as the signal energy, the time-domain waveform, the frequency-domain spectrogram, the pitch (F0) based on the autocorrelation function (ACF), and the formant frequencies (F1, F2, F3, and F4) are measured and compared. The data, composed of a group of Korean words selected through a philological investigation, are used in the simulations. According to the simulation results for the energy and spectrogram, there are meaningful differences between native Korean speakers and Chinese learners for the Korean /ㄹ/ pronunciation, and the other parameters also show some differences. It can be expected that Chinese learners will be able to reduce their errors considerably by exploiting the parameters used in this paper.

Noise Reduction Using the Standard Deviation of the Time-Frequency Bin and Modified Gain Function for Speech Enhancement in Stationary and Nonstationary Noisy Environments

  • Lee, Soo-Jeong;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.3E
    • /
    • pp.87-96
    • /
    • 2007
  • In this paper, we propose a new noise reduction algorithm for stationary and nonstationary noisy environments. Our algorithm classifies the speech and noise contributions in time-frequency bins, and it is not based on a spectral algorithm or a minimum-statistics approach. Instead, it relies on calculating the ratio of the standard deviation of the noisy power spectrum in time-frequency bins to its normalized time-frequency average. We show that good quality can be achieved for the enhanced speech signal by choosing appropriate values for ${\delta}_t$ and ${\delta}_f$. The proposed method greatly reduces the noise while providing enhanced speech with lower residual noise and somewhat higher mean opinion score (MOS), background intrusiveness (BAK), and signal distortion (SIG) scores than conventional methods.
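One way to read the bin-classification step is sketched below. This is a loose interpretation of my own, not the authors' exact formulation: the thresholds and the per-axis normalization are assumptions, standing in for the paper's ${\delta}_t$ and ${\delta}_f$:

```python
import numpy as np

def classify_tf_bins(power, delta_t=1.2, delta_f=1.2):
    """Flag T-F bins as speech-dominated when the noisy power spectrum
    deviates from its average along both time and frequency by more than
    delta_t / delta_f standard deviations. power: array [freq, frames]."""
    eps = 1e-12  # guard against zero variance
    # deviation of each bin from the per-frequency time average,
    # normalized by the per-frequency standard deviation
    t_mean = power.mean(axis=1, keepdims=True)
    t_std = power.std(axis=1, keepdims=True) + eps
    ratio_t = (power - t_mean) / t_std
    # same measure along the frequency axis within each frame
    f_mean = power.mean(axis=0, keepdims=True)
    f_std = power.std(axis=0, keepdims=True) + eps
    ratio_f = (power - f_mean) / f_std
    return (ratio_t > delta_t) & (ratio_f > delta_f)
```

Bins that stand out from the background along both axes are treated as speech and protected from attenuation, which is what lets such a scheme track nonstationary noise without minimum statistics.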