• Title/Summary/Keyword: speech process


Acoustic Masking Effect That Can Be Occurred by Speech Contrast Enhancement in Hearing Aids (보청기에서 음성 대비 강조에 의해 발생할 수 있는 마스킹 현상)

  • Jeon, Y.Y.;Yang, D.G.;Bang, D.H.;Kil, S.K.;Lee, S.M.
    • Journal of rehabilitation welfare engineering & assistive technology / v.1 no.1 / pp.21-28 / 2007
  • In most hearing aids, amplification algorithms compensate for hearing loss, noise and feedback reduction algorithms suppress interference, and contrast enhancement algorithms improve speech perception. However, if the contrast is enhanced excessively, an acoustic masking effect can occur between formants. To confirm this masking effect in speech, an experiment comprising six tests was conducted: a pure tone test, a speech reception test, a word recognition test, a pure tone masking test, a formant pure tone masking test, and a speech masking test; the LLR (log-likelihood ratio) was introduced for objective evaluation. For both normal-hearing and hearing-impaired subjects, more masking occurred in the hearing-impaired subjects when pure tones were used, and in the speech masking test, speech reception was also lower in the hearing-impaired subjects. This indicates that the acoustic masking effect, rather than distortion, influences speech perception. It is therefore necessary to measure the masking characteristics before fitting a hearing aid and to apply those characteristics to the fitting curve.
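
The LLR measure cited above is, in its standard form, computed from LPC models of the clean and processed frames. Below is a minimal sketch of that textbook formulation; the frame handling, LPC order, and use of NumPy/SciPy are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(frame, order):
    """Estimate LPC polynomial [1, -a1, ..., -ap] via the autocorrelation method."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])   # normal equations, Levinson-type solve
    return np.concatenate(([1.0], -a))

def llr(clean_frame, processed_frame, order=10):
    """Log-likelihood ratio between LPC models of two time-aligned frames."""
    a_c = lpc(clean_frame, order)
    a_p = lpc(processed_frame, order)
    n = len(clean_frame)
    r = np.correlate(clean_frame, clean_frame, mode="full")[n - 1:n + order]
    # Toeplitz autocorrelation matrix of the clean frame
    R = np.array([[r[abs(i - j)] for j in range(order + 1)] for i in range(order + 1)])
    return np.log((a_p @ R @ a_p) / (a_c @ R @ a_c))
```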


A Study on Compensation of Amplitude in Multi Pulse (멀티펄스의 진폭보정에 관한 연구)

  • Lee, See-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.9 / pp.4119-4124 / 2011
  • In MPC coding, which uses voiced and unvoiced excitation sources, the speech waveform can be distorted when the signal amplitude increases or decreases within a frame. This is caused by the normalization of the synthesized speech signal in the process of restoring the multi-pulses of the representative section. To reduce this distortion, this paper presents an amplitude compensation method (AC-MPC) that adjusts the multi-pulses in each pitch interval. It was confirmed that the method can synthesize speech close to the original waveform. The MPC and AC-MPC methods were then evaluated: the segmental SNR (SNRseg) of AC-MPC improved by 0.7 dB for the female voice and 0.7 dB for the male voice compared with MPC, showing that the distortion of the speech waveform can finally be controlled. This method is therefore expected to be applicable to cellular phones and smartphones that use low-bit-rate excitation sources.
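
As a rough illustration of per-pitch-interval amplitude compensation and the SNRseg measure used above, here is a minimal sketch; the scaling rule, interval boundaries, and frame length are assumptions, and the paper's exact AC-MPC procedure may differ.

```python
import numpy as np

def compensate_amplitudes(pulses, original, synthetic, pitch_marks):
    """Scale multi-pulse amplitudes so each pitch interval of the
    synthetic signal matches the RMS energy of the original."""
    pulses = pulses.copy()
    bounds = list(pitch_marks) + [len(original)]
    for start, end in zip(bounds[:-1], bounds[1:]):
        rms_orig = np.sqrt(np.mean(original[start:end] ** 2))
        rms_syn = np.sqrt(np.mean(synthetic[start:end] ** 2)) + 1e-12
        pulses[start:end] *= rms_orig / rms_syn   # per-interval gain
    return pulses

def snrseg(original, processed, frame=160):
    """Segmental SNR in dB, averaged over frames (e.g. 20 ms at 8 kHz)."""
    n = (len(original) // frame) * frame
    o = original[:n].reshape(-1, frame)
    p = processed[:n].reshape(-1, frame)
    err = np.sum((o - p) ** 2, axis=1) + 1e-12
    sig = np.sum(o ** 2, axis=1) + 1e-12
    return float(np.mean(10 * np.log10(sig / err)))
```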

Speech enhancement system using the multi-band coherence function and spectral subtraction method (다중 주파수 밴드 간섭함수와 스펙트럼 차감법을 이용한 음성 향상 시스템)

  • Oh, Inkyu;Lee, Insung
    • The Journal of the Acoustical Society of Korea / v.38 no.4 / pp.406-413 / 2019
  • This paper proposes a speech enhancement method that combines a coherence-based gain function with spectral subtraction for a closely spaced two-microphone array. A speech enhancement method that uses a gain function estimated from the SNR (signal-to-noise ratio) based on the multi-band coherence function degrades when the input noises of the two channels are highly correlated. A new method is proposed in which a weighted gain function combines this gain with the gain derived from spectral subtraction. Performance was evaluated with PESQ (Perceptual Evaluation of Speech Quality), an objective quality measure standardized by the ITU-T (International Telecommunication Union Telecommunication Standardization Sector). In the PESQ tests, the score improved by up to 0.217 across various background noise environments.
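
A minimal per-frame sketch of how a coherence-based gain might be weighted against a spectral subtraction gain, in the spirit of the method above; the smoothing, weighting rule, and spectral floor are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def combined_gain(X1, X2, noise_psd, w=0.5, floor=0.05, alpha=0.8):
    """Combine a two-channel coherence-based gain with a spectral
    subtraction gain for one STFT frame (vectors over frequency bins).
    X1, X2: complex spectra of the two microphones.
    noise_psd: running estimate of the noise power spectrum."""
    # Magnitude-squared coherence between the two channels (no time
    # smoothing here for brevity; real systems smooth across frames)
    s11, s22 = np.abs(X1) ** 2, np.abs(X2) ** 2
    s12 = X1 * np.conj(X2)
    msc = np.abs(s12) ** 2 / (s11 * s22 + 1e-12)
    g_coh = msc                                # coherent speech -> gain near 1
    # Power spectral subtraction gain on channel 1, with a floor
    g_ss = np.sqrt(np.maximum(1.0 - alpha * noise_psd / (s11 + 1e-12), floor))
    return w * g_coh + (1.0 - w) * g_ss        # weighted combination
```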

Phonological processes of consonants from orthographic to pronounced words in the Buckeye Corpus

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.11 no.4 / pp.55-62 / 2019
  • This paper investigates the phonological processes of consonants in pronounced words in the Buckeye Corpus and compares the frequency distribution of these processes to provide a clearer understanding of conversational English for linguists and teachers. Both orthographic and pronounced words were extracted from the transcribed label scripts of the Buckeye Corpus. Next, the phonological processes of consonants in the orthographic and pronounced labels were tabulated separately by onsets and codas, and a frequency distribution by consonant process types was examined. The results showed that the majority of the onset clusters were pronounced as the same sounds in the Buckeye Corpus. The participants in the corpus were presumed to speak semiformally. In addition, the onsets have fewer deletions than the codas, which might be related to the information weight of the syllable components. Moreover, there is a significant association and strong positive correlation between the phonological processes of the onsets and codas in men and women. This paper concludes that an analysis of phonological processes in spontaneous speech corpora can contribute to a practical understanding of spoken English. Further studies comparing the current phonological process data with those of other languages would be desirable to establish universal patterns in phonological processes.
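
A minimal sketch of the kind of onset/coda tabulation described above; the label format and example pairs are invented for illustration, since the Buckeye label files have their own conventions.

```python
from collections import Counter

def classify(orthographic, pronounced):
    """Classify one onset or coda comparison between forms."""
    if orthographic == pronounced:
        return "same"
    if pronounced == "":
        return "deletion"
    if orthographic == "":
        return "insertion"
    return "variant"

# (orthographic onset, pronounced onset) pairs; hypothetical examples
onset_pairs = [("s t r", "s t r"), ("h", ""), ("dh", "d")]
counts = Counter(classify(o, p) for o, p in onset_pairs)
print(counts)   # e.g. Counter({'same': 1, 'deletion': 1, 'variant': 1})
```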

Phonological processes of consonants from orthographic to pronounced words in the Seoul Corpus

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.12 no.2 / pp.1-7 / 2020
  • This paper investigates the phonological processes of consonants in pronounced words in the Seoul Corpus, and compares the frequency distribution of these processes to provide a clearer understanding of conversational Korean to linguists and teachers. To this end, both orthographic and pronounced words were extracted from the transcribed label scripts of the Seoul Corpus. Next, the phonological processes of consonants in the orthographic and pronounced forms were tabulated separately after syllabifying the onsets and codas, and the major consonantal processes were examined. First, the results showed that the majority of orthographic consonants were realized unchanged in their pronounced forms. Second, more than three quarters of the onsets kept the same form, while approximately half of the codas were pronounced as variants. Third, the differing onset and coda symbols were caused primarily by deletions and insertions. Finally, the five phonological process types accounted for only 12.4% of the total possible procedures. Based on these results, this paper concludes that an analysis of phonological processes in spontaneous speech corpora can improve the practical understanding of spoken Korean. Future studies ought to compare the current phonological process data with those of other languages to establish universal patterns in phonological processes.

Implementation of Speech Recognizer using DSP(Digital Signal Processor) (DSP를 이용한 음성인식기 구현)

  • 임창환;문철홍;전경남
    • Proceedings of the IEEK Conference / 2000.11d / pp.187-190 / 2000
  • In this paper, a speech recognizer system separated from the personal computer is implemented. By using a DSP, it extends voice recognition, which has been limited to the PC because of the amount of data and computation required. The system uses a real-time endpoint detector and places no additional device between the user and the system. Endpoints and voice activity are detected from absolute energy and the zero-crossing rate (ZCR), 12 LPC-derived cepstrum coefficients are used as the feature vector, and a compensation method is applied to the pattern-separation process against pre-calculated reference patterns.
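
A minimal sketch of endpoint detection from absolute energy and zero-crossing rate, the two features named above; the thresholds and frame size are illustrative assumptions.

```python
import numpy as np

def endpoint_detect(x, frame=160, e_thresh=0.01, z_thresh=0.25):
    """Return (start_frame, end_frame) of speech using per-frame
    absolute energy and zero-crossing rate, or None if no speech."""
    n = (len(x) // frame) * frame
    frames = x[:n].reshape(-1, frame)
    energy = np.mean(np.abs(frames), axis=1)             # absolute energy
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    # Loud frames are speech; quieter frames still count if ZCR is high
    # (helps catch weak fricatives at word edges)
    voiced = (energy > e_thresh) | ((zcr > z_thresh) & (energy > e_thresh / 4))
    idx = np.flatnonzero(voiced)
    if idx.size == 0:
        return None
    return int(idx[0]), int(idx[-1])
```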


The Pitch Beginning Point Extraction Using Property of G-peak (G-Peak의 특성에 의한 피치시점검출)

  • 이해군
    • Proceedings of the Acoustical Society of Korea Conference / 1993.06a / pp.259-262 / 1993
  • In this paper, a new pitch beginning point detection method based on extracting the G-peak is proposed. According to the speech production model, the area of the first peak in each pitch interval of the speech signal is emphasized. Using this characteristic, the method has advantages over other methods for pitch beginning point detection: because the signal is integrated in the process, erroneous decisions caused by impulsive noise are minimized and no pre-filtering is necessary.
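
The abstract does not define the G-peak precisely, so the sketch below only illustrates the general integrate-then-peak-pick idea it alludes to; the window length and peak criterion are assumptions.

```python
import numpy as np

def pitch_onsets(x, win=40, min_period=60):
    """Illustrative only: integrate (smooth) the rectified signal,
    then pick the first dominant peak of each assumed pitch interval."""
    integ = np.convolve(np.abs(x), np.ones(win), mode="same")  # smoothing integral
    onsets, i = [], 0
    while i < len(integ) - min_period:
        seg = integ[i:i + min_period]
        peak = i + int(np.argmax(seg))       # first dominant peak in interval
        onsets.append(peak)
        i = peak + min_period                # skip ahead one minimum period
    return onsets
```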


Decision-Tree-Based Markov Model for Phrase Break Prediction

  • Kim, Sang-Hun;Oh, Seung-Shin
    • ETRI Journal / v.29 no.4 / pp.527-529 / 2007
  • In this paper, a decision-tree-based Markov model for phrase break prediction is proposed. The model combines the ability of decision trees to classify non-homogeneous features with temporal break-sequence modeling based on the Markov process. For the experiment, a text corpus tagged with parts of speech and three break strength levels was prepared and evaluated. A complex feature set, textual conditions, and prior knowledge are utilized, and chunking rules are applied to the search results. The proposed model shows an error reduction rate of about 11.6% compared to the conventional classification model.
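
A minimal sketch of the general decision-tree-plus-Markov idea: a tree scores each word juncture, and a Viterbi pass applies Markov transitions over break labels. The features, label set, and transition matrix are invented for illustration and are not the paper's.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

LABELS = ["no_break", "minor", "major"]          # three break strengths

# Hypothetical POS-window features per juncture and gold break labels
X_train = np.array([[0, 1, 2], [1, 2, 0], [2, 0, 1], [1, 1, 1]])
y_train = np.array([0, 1, 0, 2])
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

def viterbi_breaks(X, trans):
    """Decode the best break sequence from tree posteriors and a
    Markov transition matrix trans[i][j] = P(label_j | label_i)."""
    emis = tree.predict_proba(X)                 # per-juncture label scores
    delta = np.log(emis[0] + 1e-12)
    back = []
    for t in range(1, len(X)):
        scores = delta[:, None] + np.log(trans + 1e-12)
        back.append(np.argmax(scores, axis=0))   # best previous label
        delta = scores.max(axis=0) + np.log(emis[t] + 1e-12)
    path = [int(np.argmax(delta))]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return [LABELS[i] for i in reversed(path)]

trans = np.full((3, 3), 1.0 / 3.0)               # uniform transitions for demo
print(viterbi_breaks(np.array([[0, 1, 2], [2, 0, 1]]), trans))
```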


Research about auto-segmentation via SVM (SVM을 이용한 자동 음소분할에 관한 연구)

  • 권호민;한학용;김창근;허강인
    • Proceedings of the IEEK Conference / 2003.07e / pp.2220-2223 / 2003
  • In this paper we used Support Vector Machines (SVMs), a recently proposed learning method in the artificial neural network family, to divide continuous speech into phonemes (initial, medial, and final sounds), and then performed continuous speech recognition on the result. Phoneme decision boundaries are determined by an algorithm that takes the most frequent decision within a short interval. Recognition is performed by a continuous hidden Markov model (CHMM), and the segmentation is compared with phoneme boundaries determined by eye measurement. The experiments confirmed that the proposed SVM method is more effective for initial sounds than Gaussian mixture models (GMMs).
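
A minimal sketch of frame-level SVM classification followed by a most-frequent-decision vote over a short interval, as in the boundary rule described above; the features, window size, and class labels are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-frame feature vectors and phoneme-class labels
X_train = np.random.rand(200, 12)                # e.g. 12 cepstral features
y_train = np.random.randint(0, 3, 200)           # 0=initial, 1=medial, 2=final
svm = SVC(kernel="rbf").fit(X_train, y_train)

def segment(frames, win=5):
    """Classify each frame, smooth with a majority vote over a short
    window, and return indices where the smoothed label changes."""
    raw = svm.predict(frames)
    smooth = [np.bincount(raw[max(0, i - win):i + win + 1]).argmax()
              for i in range(len(raw))]
    return [i for i in range(1, len(smooth)) if smooth[i] != smooth[i - 1]]
```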


Stereo Vision Neural Networks with Competition and Cooperation for Phoneme Recognition

  • Kim, Sung-Ill;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.22 no.1E / pp.3-10 / 2003
  • This paper describes two kinds of neural networks for stereoscopic vision, which have been applied to the identification of human speech. In speech recognition based on the stereoscopic vision neural networks (SVNN), similarities are first obtained by comparing input vocal signals with standard models. They are then fed into a dynamic process in which both competitive and cooperative processes are conducted among neighboring similarities. Through this dynamic process, only one winner neuron is finally detected. In a comparative study, the average phoneme recognition accuracy of the two-layered SVNN was 7.7% higher than that of a hidden Markov model (HMM) recognizer with a single mixture and three states, and that of the three-layered SVNN was 6.6% higher. SVNN therefore outperformed the existing HMM recognizer in phoneme recognition.
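
A minimal sketch of a competition-and-cooperation update over a similarity vector, iterated until one winner unit remains; the update rule and constants are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def winner(similarities, excite=0.2, inhibit=0.1, steps=100):
    """Iterate lateral excitation from neighbors (cooperation) and
    global inhibition (competition) until one unit dominates."""
    s = np.asarray(similarities, dtype=float).copy()
    for _ in range(steps):
        neighbors = np.roll(s, 1) + np.roll(s, -1)       # circular neighborhood
        s = s + excite * neighbors - inhibit * s.sum()   # cooperate vs. compete
        s = np.maximum(s, 0.0)                           # non-negative activations
        if np.count_nonzero(s) <= 1:
            break                                        # a single winner remains
        s /= s.max() + 1e-12                             # normalize for stability
    return int(np.argmax(s))

print(winner([0.2, 0.8, 0.5, 0.3]))   # index of the winning standard model
```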